The Climate Models

…are getting worse than we thought:

The author hypothesizes the reasons for this are that attempts in the latest generation of models to reproduce observed changes in Arctic sea ice are causing “significant and widening discrepancy between the modeled and observed warming rates outside of the Arctic,” i.e. they have improved Arctic simulation at the expense of poorly simulating the rest of the globe.

It continues to amaze me that so many supposedly smart people take this junk science seriously. You know what this stuff looks more and more like to me? Epicycles.

21 thoughts on “The Climate Models”

  1. I’ve been thinking that for some time, and was meaning to write a blog post about it. Every time the temperature fails to increase when CO2 levels do, rather than ask whether the models make any sense, they add another epicycle until the model fits the measurements (or ‘adjust’ the measurements until they fit the model).

  2. It’s almost as though, instead of climate models being robust from-first-principles descriptions of the rules of climate behavior, they are just messy piles of kludges with enough free parameters to overfit almost any data.

    Well, on the plus side at least the climate science community is very accepting of scientific criticism so that they aren’t burdened with non-physical models or bad data for very long.
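The “enough free parameters to overfit almost any data” jibe is easy to demonstrate. A toy sketch (nothing to do with any actual climate model): give a “model” one tunable amplitude per basis function, least-squares it onto pure noise, and it fits the noise exactly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Arbitrary "data": 30 points of pure noise, standing in for any record at all.
x = np.linspace(0.0, 1.0, 30)
y = rng.normal(size=x.size)

# A "model" that is nothing but free parameters: 30 cosine basis functions,
# one tunable amplitude each, fitted by least squares.
basis = np.cos(np.outer(x, np.arange(30) * np.pi))
amps, *_ = np.linalg.lstsq(basis, y, rcond=None)

fit = basis @ amps
print("max fit error:", np.max(np.abs(fit - y)))  # essentially zero
```

With as many free parameters as data points, the “fit” is perfect by construction, and it tells you nothing about the physics.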

  3. I wouldn’t mind half as much if they could manage to do the “simple” parts right. And focus attention on locking that down before screaming about the falling sky. Starting from “Measure the current ‘Global Mean Surface Temperature'” it all goes downhill.

  4. I have spent a couple of hours searching in vain, but I recall reading a couple of years ago that no climate model was able to “hindcast” the observed temps in the 20th century; i.e., you could not start running a model in, say, 1930 and get the observed temps of 1940, 1950, etc. Is this in fact the case?

    1. John, I remember that issue as well, though from back around 2000. The models, when run backwards, did not accurately predict temps in the past.

      I’ve always been a skeptic when it comes to the modeling of any chaotic system. Look at the problems engineers have with modeling chaotic flow in aerodynamics. Or, for me, the real kicker: the most advanced and complex computer models mankind uses are for modeling the financial system and the stock markets for risk management, and look how disastrously that failed in 2008.

      There’s also the GIGO problem: garbage in, garbage out. The base datasets the climate models use are not accurate; they have been “adjusted” in ways that almost always bias towards a warming trend. There’s also the heat island effect: in a great many cases, cities and towns have grown up around the weather stations, creating a strictly local warming. There are also many cases of weather stations sited at sewage plants, and even co-located with air conditioners and barbecues. (Anthony Watts’s surfacestations.org has a great collection of such station reports, complete with pictures.)

      The climate models also can’t account for the post-1998 cessation in warming. So, we have climate models that can predict neither the past nor the future.

      I call that useless.

      1. What John was describing isn’t running the climate models backwards. Instead, he’s describing starting at a known point in the past and seeing how well the models reflect what actually happened up to the present. From what I’ve read, this either isn’t done or hasn’t worked but we’re supposed to believe the models can reliably predict what will happen in the future.

        Your point about GIGO is the same as what I wrote in an earlier thread. The Earth’s climate is very complex and has many inputs. The attempts to derive those inputs are error prone and may be based on erroneous assumptions (e.g. tree ring thickness as an indicator of temperature). How well do we know solar output from centuries ago? The Chinese have recorded sunspots for centuries but how well does that correlate with solar intensity? What about rainfall rates and cloud cover? What about volcanic activity or any of the factors that can drive global climate? If the input data are poor, there’s no way to derive a high quality output.

        Any computer model is only as good as the fundamental understanding of what’s being modeled. Our knowledge of these climatic factors is slim so even the best models are going to be imprecise. The ClimateGate “Harry Read Me” files show what one person went through trying to recreate earlier results. The code, data and processes were all a mess. And we’re supposed to use that mess to drive economic policy decisions with trillions of dollars of impact? Sorry, but no.

        1. This was done with some of the models. But they ended up wildly off for the period 1945-1970. There was a ‘dip’ in all of the temperature reconstructions, and all of the models just kept climbing through it.

          So it was decided that this must be because of the influence of aerosols (which were not part of the model at that time), and that they produce a strong cooling effect (tweak, tweak, tweak). -Still- not enough.

          So then the temperature records were reimagined. New pseudo-calibration baselines, new adjustments for site conditions, and a willingness to add non-physical adjustments. Voilà, now the two are barely “failing to disagree”. So it’s “working”.

          Except … aerosol production has dropped off a cliff, so the 2000-to-now temperatures should look like a rocket ascent. But the (erroneous) use of ‘ensembles’ covers the convenient -dropping- of the inconvenient models, and the adding of new models without re-testing the hindcasting.

          1. The key takeaway from all this is that when your model doesn’t reflect reality, it isn’t reality that’s wrong. Tweaking and kludging your model to force-fit it closer to reality isn’t science; it only proves that the underlying assumptions and calculations were wrong.

          2. The language of Monte Carlo simulators is practically designed to confuse modellers into thinking that their “one real experiment” can be wrong, though.

          3. Getting a fit is actually pretty easy. Just take past temperature records and run a polynomial curve fit, and you’ll get an equation for temperature over time. Add in a bizarro future projection to get an equation that both matches past data and predicts catastrophe. ^_^

            The trick is hiding the underlying equation (which has no connection to anything but an arbitrary function) in the nearly impenetrable guts of a FORTRAN climate model, so the simulation crunches through trillions of calculations in the world’s most inefficiently written simple equation solver.

            Not surprisingly, someone has already replicated climate model results with a trivial Excel spreadsheet, taking a more direct approach to generating the same BS.
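The polynomial-fit trick described above is easy to reproduce. A toy sketch with made-up numbers (a synthetic “temperature record”, not any real dataset): a cubic hindcasts the record to within the noise, while the same arbitrary equation, extrapolated to 2100, projects whatever its leading term happens to do.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "temperature record": a mild warming trend plus noise (made up).
years = np.arange(1900, 2001)
temps = 13.8 + 0.007 * (years - 1900) + rng.normal(0.0, 0.15, size=years.size)

# A cubic hindcasts the record closely...
t = (years - 1950) / 50.0  # scaled time, for numerical conditioning
coeffs = np.polyfit(t, temps, 3)
hindcast_rms = np.sqrt(np.mean((np.polyval(coeffs, t) - temps) ** 2))

# ...while the same arbitrary equation says whatever its leading term
# happens to do once extrapolated far outside the fitted range.
proj_2100 = np.polyval(coeffs, (2100 - 1950) / 50.0)
print(f"hindcast RMS: {hindcast_rms:.3f} deg C; 'projection' for 2100: {proj_2100:.2f} deg C")
```

The hindcast error is down at the noise level, yet the “projection” carries no physical content at all: it is just the cubic’s tail.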

  6. It amazed me how poor NASA’s models were in predicting the heating profile on the Space Shuttle during entry. Poor here being relative, because they were quite good, but whenever you started discussing inches of remaining insulation, you could find excessive margin in the modelling. This is a small system compared to the Earth’s climate, and we had many flights of empirical data (can’t say over a hundred, because not all Orbiters had sufficient instrumentation).

    If NASA’s models for space vehicles can be that off, then yeah I’m skeptical of how good the models are for “climate science”. Especially when the models supposedly predict temperature down to the 1/10th Celsius degree 30+ years into the future. That’s impressive precision for any model.

    1. [i]Especially when the models supposedly predict temperature down to the 1/10th Celsius degree 30+ years into the future. That’s impressive precision for any model.[/i]

      Actually, it’s impossible precision for any model. Every high school science student is (or was) taught the importance of significant digits. If your inputs are only accurate to a single digit of precision, then that’s the most accurate you can have for an output. Just because a calculator or computer spits out a number with a bunch of digits behind the decimal point, it doesn’t mean you have that degree of accuracy.
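The significant-digits point can be put in numbers. A toy sketch with a deliberately trivial made-up “model” (output = sensitivity × forcing + offset), each input known only to about ±0.5: propagating that input uncertainty by sampling gives an output spread on the order of 1 °C, an order of magnitude larger than the tenth-of-a-degree precision being claimed.

```python
import numpy as np

rng = np.random.default_rng(7)

# A deliberately trivial toy "model": output = a * forcing + b.
# Suppose each input is only known to about +/- 0.5 (roughly one useful
# digit of precision). Propagate that uncertainty by sampling.
n = 100_000
a = rng.uniform(2.5, 3.5, n)        # "sensitivity", nominally 3
forcing = rng.uniform(0.5, 1.5, n)  # "forcing", nominally 1
b = rng.uniform(-0.5, 0.5, n)       # "offset", nominally 0

out = a * forcing + b
print(f"mean output: {out.mean():.2f}, spread (std dev): {out.std():.2f}")
```

Any digits the computer prints past that spread are noise, no matter how many decimal places the printout carries.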

    2. Once significant turbulence kicks in, the results can become problematic, with estimates based on rules of thumb backed up by wind-tunnel data, depending on how sensitive the flow is going to be (pockets, bubbles, regions of locally reversed flow, etc). Equations for aerodynamics are generally great if you stay out of such flight regimes (blah blah stuff we all know here).

      Part of the utter nonsense of climate modeling is that they’re pretending the equations (that are already falling apart toward the upper trailing edge at high angles of attack, as chaotic behavior starts to manifest) are actually valid to a high degree of precision for [i]days[/i], not just milliseconds. A different approach is needed for describing the behavior of large regions of a poorly mixed atmosphere (massive variations in water vapor, condensation, etc.) driven by tiny self-induced differences in pressure.

      I’d assert that our knowledge in this area is still pretty tentative and provisional. For example, prior to the hard data that came back from probes to the outer planets, it was assumed that their wind speeds would be lower because there was hardly any heat input from the sun to drive any weather phenomenon. Instead it was found that wind speeds increased the further out the planets were, with winds going [i]supersonic[/i] on Triton (as I recall). Lower atmospheric temperatures apparently reduce “friction” via eddy formation, or some similar type of effect, and I don’t think we have any validated equations for that.

      Climatology back on Earth is still stuck on the idea that more heat equals more wind, because there’s more energy in the system. That seems to be not just wrong, but utterly wrong.

      1. Yes, it is the heat distribution, the temperature gradients, which drive wind. GHG warming actually ought to reduce the gradients.

  7. In defense of epicycles, they work for providing a description of orbiting bodies. Sure, you need more epicycles as your observations improve, but they could be, and were, refined to provide better descriptions. But here, do we have a process that can be refined to provide a better description?

    1. And we still use epicycles to compute planetary positions to higher accuracy with fewer machine cycles than other methods.

      The VSOP87 routines calculate positions to about an arc second over a two-thousand year span, and for each planet they’re just

      rad(t)  = sum (for i = 0 to m): r(i) * cos(p(i) + s(i)*t)
      lat(t)  = sum (for i = 0 to m): a(i) * cos(q(i) + b(i)*t)
      long(t) = sum (for i = 0 to m): c(i) * cos(u(i) + d(i)*t)

      Given periodic waveforms with multiple components, just use the Fourier spectrum. You could run the VSOP87 functions with thousands of gears like a glorified Antikythera device.

      Climate science is an Antikythera device with only two gears, with the output dial a trivial function of the input CO2 crank. But instead of debating whether a particular wheel deep down in an assembly should have 253 teeth or 254 teeth, they’re arguing over whether the input crank should be geared up or geared down to drive the output dial.

      * Insert Far Side picture of three cavemen trying to draw an engineering diagram of how a rock kills an antelope *
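Evaluating a VSOP87-style series really is just summing cosine terms, coordinate(t) = sum_i A_i * cos(B_i + C_i*t). The coefficients below are made up for illustration; the real VSOP87 tables run to thousands of (A, B, C) triples per planet.

```python
import math

def series(t, terms):
    """Evaluate a VSOP87-style cosine series.

    terms: iterable of (amplitude, phase, frequency) tuples,
    summed as amplitude * cos(phase + frequency * t).
    """
    return sum(a * math.cos(b + c * t) for a, b, c in terms)

# Made-up toy terms, NOT real VSOP87 coefficients: a dominant term plus
# two small corrections, mimicking the shape of a real series.
toy_longitude_terms = [
    (1.75, 0.0, 6283.07585),
    (0.03, 1.1, 12566.1517),
    (0.001, 2.6, 77713.771),
]

t = 0.12  # time in the series' own units (VSOP87 uses millennia from J2000)
print("toy longitude:", series(t, toy_longitude_terms))
```

The whole “model” is three lines of code plus a table of constants, which is rather the commenter’s point about gears.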
