More Climate Scam Commentary

Instapundit has a roundup this morning, including this — the fix is in:

The picture that emerges is simple. In any discussion of global warming, either in the scientific literature or in the mainstream media, the outcome is always predetermined. Just as the temperature graphs produced by the CRU are always tricked out to show an upward-sloping “hockey stick,” every discussion of global warming has to show that it is occurring and that humans are responsible. And any data or any scientific paper that tends to disprove that conclusion is smeared as “unscientific” precisely because it threatens the established dogma.

For more than a decade, we’ve been told that there is a scientific “consensus” that humans are causing global warming, that “the debate is over” and all “legitimate” scientists acknowledge the truth of global warming. Now we know what this “consensus” really means. What it means is: the fix is in.

It also makes one wonder — what else have these people and their enablers in the media been lying to us about?

[Update a few minutes later]

Three things you absolutely must know about the scandal.

[Update a few minutes later]

For those who want to get their geek on, here’s a preliminary code review:

I’ve examined two files in some depth and found (OK, so Harry found some of this):

* Inappropriate programming language usage
* Totally nuts shell tricks
* Hard-coded constant files
* Incoherent file naming conventions
* Use of program library subroutines that appear to:

  o be far from ideal in how they do things even when they work
  o produce answers inconsistent with other ways of calculating the same thing
  o fail at undefined times
  o and, when they fail, let the program silently continue without reporting the error

Yes, let’s completely upend the world’s economy over results like this.

[Update a few minutes later]

Here’s more:

We have here a stellar example of it in real life, in the example above, where a “squared” value (one that theoretically can never become negative) goes negative due to poor programming practice.

There are ways around this. If a simple “REAL” (often called a FLOAT) variable is too small, you can make it a “DOUBLE”, and some compilers support a “DOUBLE DOUBLE” to get lots more bits. But even those can overflow (or underflow the other way!) if the “normal” value can be very, very large. So ideally, you ought to ‘instrument’ the code with “bounds checks” that catch this sort of thing and holler if you have that problem. There are sometimes compiler flags you can set to get “run time” checking for overflow and abort if it happens (though there are also times when overflow is used as a ‘feature’, so you can’t just trap it all the time; it is often used to get “random” numbers, for example).

But yes, from a programmer’s point of view, to watch someone frantic over this “newbie” issue is quite a “howler”…
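
For the non-geeks, here is a minimal sketch of the two failure modes he is describing, using Python/NumPy stand-ins for Fortran’s REAL and DOUBLE. This is my own illustration with made-up numbers, not code from the CRU archive, and the excerpt doesn’t show exactly how their “squared” value went negative; overflow of a too-small type and rounding in the shortcut variance formula are the usual suspects.

    import numpy as np

    # Failure mode 1: single precision ("REAL"/float32) silently overflows to inf.
    x = np.float32(3.0e19)
    print(x * x)                # inf -- 9e38 exceeds float32's ~3.4e38 ceiling
    print(np.float64(x) * x)    # fine once promoted to a "DOUBLE" (float64)

    # Failure mode 2: a "squared" quantity that comes out negative.
    # The shortcut var = mean(x^2) - mean(x)^2 loses everything to rounding when
    # the mean dwarfs the spread; the result is garbage, and often negative.
    rng = np.random.default_rng(0)
    data = (1.0e4 + rng.normal(0.0, 0.01, 10_000)).astype(np.float32)
    naive_var = np.mean(data * data) - np.mean(data) ** 2
    two_pass = np.var(data.astype(np.float64))
    print(naive_var, two_pass)  # rounding garbage vs. roughly 1e-4

    # The "bounds check" he recommends: holler instead of carrying on silently.
    if not np.isfinite(naive_var) or naive_var < 0.0:
        raise ValueError(f"variance sanity check failed: {naive_var}")

In Fortran terms, the compiler-flag fix he mentions would be something along the lines of gfortran’s -ffpe-trap=overflow,invalid and -fcheck=bounds, which abort at run time rather than letting the program sail on with garbage.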

And this:

So we have a tacit confirmation that they start with GHCN data. That means that ALL the issues with the GHCN data (migration to the equator, migration from the mountains to the beaches…) apply to Hadley / CRU just as they do to GIStemp.

Both are broken in the same way, so that is why they agree. They use biased input data and see the same result.

And this:

BTW, IMHO it would be easy to make an alternative Global Temperature Series. “Mc” is quite right that it is easy. I could have one in about a day (less if I didn’t want to think about the details too much) and it would be more accurate than GIStemp. How? Simply by “un-cherry-picking” some of the GIStemp parameters and then running the code.

I find the dig at “real science” vs. “procedures” interesting. How can you have reliable science if your procedures are broken? I learned about “lab procedures” and the importance of them very early, in chem lab. Anyone who disses the merit of sound procedures is an accident waiting to happen, IMHO, and will produce errors from unsound procedures.

But the overall thing that I pick up from this is just the tone of True Believers. These folks really do think they have it all worked out. And that is a very dangerous thing. It leads to very closed minds and it leads to very strong “selection bias”. Often with no ability to self-detect that broken behaviour.

You know, I think there will be a great deal of insight coming from this “leak”…

The climate gods have feet of clay. What’s sad is that people like commenter and defender Chris Gerrib (who had never even heard of induction until this discussion) and many politicians (e.g., Al Gore) are incapable of understanding the degree to which this invalidates the entire enterprise, because of their ignorance of epistemology.

35 thoughts on “More Climate Scam Commentary”

  1. Adjusting data to fit a preordained conclusion is good science. If we waited for AGW to be proven by real data, it might be too late. The holy writ of post-modernism says that the ends justify the means. I should also point out that just because a person cheats on their taxes doesn’t mean they’re dishonest! It just doesn’t! So stop saying that!

  2. No, adjusting data to fit a conclusion is not good science. Good thing that the tree-ring data was included as a separate line in the IPCC graph.

    Besides the fact that it would be helpful to prove that the individual in question actually did cheat on their taxes, do you really want to argue that one researcher’s personal issues invalidate the work of thousands of others?

  3. Chris Gerrib writes:


    do you really want to argue that one researcher’s personal issues invalidate the work of thousands of others?

    I do argue that this conspiracy invalidates the “work” of thousands.

    How about we, very reasonably, no longer trust any scientific paper that has cited the work of the climategate cabal exposed by this leak?

    If peer reviewers of the faulty works were not anonymous, we could also now discount the work of the reviewers, and of the folks that cited their work. Since they are anonymous, we are forced to also doubt the reputation of the journals involved in this as well.

    The global warming conspiracy is unraveling.

    I would not characterize a conspiracy to falsify scientific data and obstruct other scientists’ publications as a “personal issue”.

  4. “Good thing that the tree-ring data was included as a separate line in the IPCC graph.”

    You know they cherry-picked Yamal too, don’t you?

  5. Chris: We’ve got as much evidence of a conspiracy as AGW pimps have uncorrupted evidence of AGW.
    We’re good if you are.

    Besides, if faking evidence and stifling contrary opinions are acceptable tactics to wreck the world’s economy over… then they’re good enough tactics to throw AGW boosters into Turkish prisons with. Sauce for the goose…

  6. Chris – Haven’t proved a conspiracy? The emails couldn’t be any clearer on this point. Means, motive, and opportunity, as well as intent, are documented with dates and identities.

    Your case for denying this conspiracy is even worse than the case for AGW.

  7. The irony here is that because the CRU stonewalled on this stuff for so long, and then the data was obtained via illicit means, the CRU models and data are getting more review than they ever would have if they had just been released openly.

    I don’t know about the talk of “conspiracy”. There does seem to be evidence that a group of researchers (including people from outside the CRU) have been coordinating some degree of deceit and questionable data manipulation. Further, when you consider their close rapport with the IPCC and their apparent coercion of various journals, this does seem to imply the existence of a conspiracy of some sort.

  8. Chris, did you read the entire article Rand linked to: “Three things you absolutely must know about the scandal”?

  9. If you want to throw out inductive reasoning, half of modern science goes out the window with it.

    What mental defect would cause you to believe that anyone here was proposing to throw out inductive reasoning?

  10. Wow. At one point in the “preliminary code review” link above, the author comes across code that calculates the overlap of circles using a graphics program. It draws the two circles (virtually, that is) and the overlap is colored something unique (white, I gather). It then figures out how much of the graph is that special color. (The closed-form way to do this is sketched at the end of this comment.)

    There’s also a case where a previous programmer somehow did a data conversion (“sun percentage” to “cloud percentage”) even though no chain of conversion programs existed to do the conversion.

    And the “totally nuts shell tricks”? Definitely. The code does things like call unix commands to count the lines of a file, then it uses Fortran (“inappropriate programming language usage”) to parse the text output from the line counting program.

    I knew in my bones that these climate models would be rat’s nests of bad code, but man, this is a lot worse than I was expecting.

    Yes, let’s completely upend the world’s economy over results like this.

    Right on, Rand. I think even if the other activities of these particular climate scientists were innocuous (and they aren’t, IMHO), the code discussed there is incredibly bad, well past the point where it destroys the viability of the model. For example, if we made a plain-English description of what the code is supposed to do, used the same initial data sets, and developed new code from scratch, the new program would (again IMHO) produce a different output. It wouldn’t process and/or ignore errors in the same way, use the same combination of utterly kluged code, or the same mysterious data conversions.
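
    For contrast, the sane version is one small function. Here’s a sketch (mine, in Python, not anything from the archive) of the closed-form “lens” area for two overlapping circles; no virtual graph paper, no counting white pixels:

        import math

        def circle_overlap_area(r1, r2, d):
            """Area of intersection of two circles of radii r1, r2 whose centers are d apart."""
            if d >= r1 + r2:                       # disjoint circles
                return 0.0
            if d <= abs(r1 - r2):                  # one circle entirely inside the other
                return math.pi * min(r1, r2) ** 2
            # Otherwise the overlap is two circular segments (the "lens" formula).
            a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
            a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
            a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
            return a1 + a2 - a3

        print(circle_overlap_area(1.0, 1.0, 0.0))  # pi: identical circles
        print(circle_overlap_area(1.0, 1.0, 2.0))  # 0.0: circles just touching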

  11. This reminds me of when I read an article about the history of the biographies of Horatio Alger. It seems Alger, before his career as an author of rags-to-riches tales in the late 1800s, was some kind of minister for a couple years till the congregation became aware he was, ahem, corrupting the morals of some of their boys. He left and afterward led a humdrum life, and in the 1920s an author decided to write his biography. But discovering that the only interesting parts of Alger’s life were unmentionable in polite society, he changed strategy and simply made up stuff–and an interesting yarn did he concoct. Indeed it became the basis of scholarship on Alger for more than 30 years. Finally in 1972, after some questions had been raised, the hoaxer came clean. But the fake info had spread so far that it continues to be taken for real by many who never got the word, even today.

    But (and this is the poignant part) I related all this to a retired English teacher, and her immediate response was, “That can’t be true–I taught stories from Horatio Alger and researched his life. There were no scandals like that, and he had a lot of interesting experiences.”

    I think of this when I see people responding to these revelations with, “so what if they screwed around a little, there’s lots of evidence for AGW.”

  12. The new program would (again IMHO) produce a different output. It wouldn’t process and/or ignore errors in the same way, use the same combination of utterly kluged code, or the same mysterious data conversions.

    I saw someone claim in comments at PJM that when McIntyre fed random data into Mann’s model a couple years ago, it outputted a hockey stick. Can anyone verify that? After seeing this, I could certainly believe it.

  13. I’ll take that challenge, Rand. The Scientific Method is specifically designed to be skeptical of induction. The importance of designing an experiment to test an hypothesis is that empiricism, not induction, is the only valid means of knowing anything.

    Remember that F = Gm1m2/r^2 is an inductive conclusion and was accepted by “consensus” for hundreds of years. However, it doesn’t fit the empirical evidence of Mercury’s orbit and is no longer the accepted Theory of Gravity. Only empirical evidence should be accepted at face value; ALL induction should be treated with suspicion. Better and better induction may be the result of applying the Scientific Method, but it is the hostility of the Scientific Method to induction that allows induction the little credibility it has. (A quick numerical check of the Mercury discrepancy is sketched at the end of this comment.)

    I recommend The Black Swan by Nassim Taleb; he summarizes centuries of philosophy on this subject very well.
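
    A quick back-of-the-envelope check of the Mercury point, for anyone who wants numbers (my own sketch in Python, using the standard first-order GR correction and textbook orbital constants):

        import math

        # Perihelion advance per orbit, first-order GR: dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
        GM_sun = 1.327e20      # m^3/s^2
        c = 2.998e8            # m/s
        a = 5.79e10            # Mercury's semi-major axis, m
        e = 0.2056             # Mercury's orbital eccentricity
        period_days = 87.97

        dphi = 6 * math.pi * GM_sun / (c ** 2 * a * (1 - e ** 2))   # radians per orbit
        orbits_per_century = 100 * 365.25 / period_days
        arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
        print(round(arcsec_per_century))   # ~43: the residual the Newtonian formula can't explain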

  14. “Fred K. – no motive is shown.”

    This goes beyond obtuse to disingenuous. The motives are money, power and prestige. Did I say money?

    For you to link to a guy who writes the quote below is the epitome of disingenuousness.
    “As has been pointed out numerous times, nothing in the stolen emails and other documents that found their way onto the Internet last week in any way challenges the science behind anthropogenic global warming.”

    I don’t know how many hay fields he had to rob to construct that straw man, but he should have stuck to one discipline or the other, as his journalism skills obviously suffered.

  15. Remember that F = Gm1m2/r^2 is an inductive conclusion and was accepted by “consensus” for hundreds of years. However, it doesn’t fit the empirical evidence of Mercury’s orbit and is no longer the accepted Theory of Gravity.

    It’s actually quite well accepted, as a special case of GR for small velocities and not near large masses. Any theory of gravity, including GR, remains a theory, and an inductive one.

    That induction should be treated with suspicion is not equivalent to saying it should be thrown out. It has its uses. Are you really proposing that it does not?

  16. Yeah, Rand, that’s what McIntyre did, and the random noise produced a hockey stick.

    As a used-to-be professional meteorologist, with even a couple of years at a university working in atmospheric research, I guess I have a lingering professional interest in the whole climate change thing.

    Those of us who have been following the issue and reading the scientific papers, McIntyre’s critiques, etc., pretty much knew that the field was corrupt, particularly after the Wegman report in 2006. Wegman confirmed the hockey stick was broken, spanked Mann and co. for inept use of statistics, admitted the late 20th century was probably the warmest in the last 400 years (just about entirely uncontroversial amongst the skeptic community – see Little Ice Age, recovery from), and did an analysis of the relationships between the main players producing AGW work. There were around 43 of them, all collaborating and reviewing each other’s work. Not very independent, and prone to groupthink.

    The nice thing about the CRU hack is that what we thought has been confirmed, and now it is all over the web and even the MSM.

    My wife suggests the appropriate punishment for these miscreants is to spend the rest of their lives running solo weather observation stations gathering REAL raw data – in Siberia. (Hey, they reckon it’s getting much warmer there!)

  17. Rand,
    Yes, implying that I’d make a case for induction to be “thrown out” was too strong. I suppose I was reacting to the quote, “If you want to throw out inductive reasoning, half of modern science goes out the window with it.” It isn’t “science” that would be thrown out; Science is a process for invalidating inductive reasoning, not a body of knowledge. None of the evidence that is properly recorded and archived would be thrown out, and it isn’t that uncommon for pre-existing models and theories to be thrown out.

    Even what you said demonstrates the supremacy of empirical evidence over induction; as I understand it, at high masses and low velocities the additional terms don’t fall out of the equation; they become so small that they don’t affect the outcome more than measurement error, so incorporating the “true” model isn’t practical. The most important consideration is the applicability of the result, not its “accuracy”. (Included to save me a little face.)

  18. J. Card. You can’t be serious. Hard to save face with egg all over it.

    Accuracy is important. If you know anything about science – not engineering – it’s that we’re after the truth. If the result isn’t applicable, chances are it’s inaccurate, and therefore not true.

    It’s almost like Logic 101.

  19. If programmers working in industry pulled any of this sh!t, we’d be out on our ears. I’ve rejected better-written code for not being up to standards.

    Interesting, isn’t it, that when a US corporation writes software that MIGHT produce numbers that end up in a financial statement, there are regulatory hurdles that have to be met, thanks to SOX, but when the numbers produced by the code MIGHT FOOL US INTO A REORDERING OF SOCIETY, it takes a “hack” to reveal the code as poorly written.

  20. Rand – considering I’m arguing in another thread with people about how round the Earth is, it certainly seems that you want to throw out any inductive reasoning that doesn’t agree with your preconceived ideas.

  21. I am surprised that people here are surprised to learn that physicists can’t code. In all my life I have known maybe one physicist who could code well (he was an astrophysicist and eventually changed his line of work to computer programming).

    Unfortunately the press seems to be generally glossing over this leak and treating it as if it doesn’t affect the status quo. I saw 20-second sound bites dismissing it on different TV news channels. These aren’t the droids you’re looking for. Move along…

    This whole deal is obviously more than just incompetence, however: corrupted GW scientists cherry-picking data or plainly distorting it, personal attacks on critics, etc. I hope these people are kicked out of the establishment, as they deserve.

  22. Normally, I don’t try to argue from authority, but I do happen to have some experience in this area.

    Godzilla, it’s weirder than you think. Learning to code decently is not that hard or rare. For example, I’m a mathematician (specialty in mathematical physics) with a couple of years of real-world programming experience (thanks to Hewlett Packard) in addition to many (too many) years of academic side programming. There are supposed to be some good coders in places like Lawrence Livermore. High-energy physics has a lot of effective code. It might be scary on the inside, but it works pretty well.

    Some of the kludges in the code mentioned above are so bad, I’m boggled as to who would have the combination of skill and ignorance to accomplish such a feat. It’s got to be someone who learned programming in Tyme Before Reckoning, say in the 60s or 70s, and never brushed up on modern techniques or hired a good coder to handle the nasty bits.

    I was expecting climate modeling code to be hairy. After all, you are right about most physicists not having a lot of programming skill. And complex code with decades of revisions is a recipe for a big mess. But they surely can hire skilled undergrads, grads, or recent graduates to polish some of the turds. You can just about use some of this code as proof of the existence of a deity of madness.

  23. Karl, I second that notion very strongly. I’ve made a great living and a highly successful career as a guy who goes back and forth between the worlds of developers (where I wrote commercial code ten years ago that is still being used by some trading firms despite its never having been upgraded since I left the company nine years ago) and the world of traders and accountants (where I was for seven years a highly successful futures trader). My college education? Bachelor of Arts, in classical languages and literature. I’m a completely self-taught developer…and my reason for emphasizing this is not to brag. It’s simply to observe that, if you have an I.Q. higher than room temperature and you also have some basic intellectual integrity and pride in your work, then there are a thousand perfectly good books that will teach you how to write clean, stable, maintainable code. If you write sh***y code, then either you’re really really young and inexperienced, or dumb as a post, or too damned lazy for any right-thinking employer to hire you. So, um, are these Deities Of Science On Whose Unquestionable Authority We Are To Cripple The World’s Economy And Hence Stave Off The End Of The World…are they young and inexperienced, dumb as a post, or damned lazy? It pretty much has to be one of the three…unless you want to go for “deliberately trying to work a scam on the tax-paying, huge-government-grant-supplying public”.

    And, Godzilla, your point would be valid except for one thing. Their supporters can’t say, “Hey, they’re good scientists even though they’re hopelessly incompetent programmers,” because their computer models are precisely the product they wish to invest with scientific authority.

    To another point…I like how Chris finds it difficult to tell the difference between “thousands” and, what was it, forty-three?

  24. Rand,

    About the “Mann Method + Random Noise -> Hockeysticks”

    Jeff Id at the Air Vent has the core issues outlined in his hockey stick posts, I think in “CPS Revisited”.

    The key piece is this: they’re running an algorithm over a pile of complete crap, “anything that might be considered” a temperature proxy. The method finds whatever matches the available temperature records most closely, then assumes that because it matches reasonably well across the available temperature records, it must also do a decent job prior to them.

    The fact that you’re going to end up with a “blade” that matches current instrumental records is hardly shocking. But the method of extending into a historical reconstruction essentially forces a flat shaft. You haven’t “trained” your system to find “excellent proxies.” You’ve only trained it to find “things that match 1850-to-nowish-with-some-creative-cropping.”

    There are issues with Jeff’s approach also, but the key bit is: this is absolutely not a good method of discerning signal from noise – particularly when you know you have errors in the ‘test pattern’ anyway. (The trees didn’t have thermometers strapped to them, after all; the nearest relevant thermometer is usually a couple hundred miles away at the nearest airport.)

  25. The code being poor could be a sign of too many people having worked on it, and in particular could represent students having worked on it.

    In my first full-time job, and in the part-time job I did just before it that pretty much led to me getting hired, I was tasked with translating formulas written in Matlab into a C++ environment, called by Java, where the C++ calculations all used a math library written mostly by grad students. The problem is grad students are busy people and even when they have a lot of time to put in, they’re prone to make mistakes career programmers wouldn’t, and they also pass on their work to the next student when they leave or their workload changes. Because it’s far easier to write code than read it (particularly if it’s commented poorly), whoever picks up the baton next may not catch the errors and may make them worse.

    Early on in that job I fixed some major memory allocation errors in the library, and some time later I discovered that a seemingly random crash in our application stemmed from negative numbers creeping into a calculation (due to unavoidable rounding error) that then converted the dataset to complex numbers when doing a square root. The complex numbers were a bad thing, but the conversion routine was also subtly broken and causing more memory problems. (A minimal sketch of that failure, and the usual guard against it, is at the end of this comment.)

    All things considered, it’s probably worth cutting the programmers a little slack. Even the ones applying artificial corrections and tricks probably did so at the explicit behest of their professors. But it should send up a red flag that computer models written by a university research group are probably less significant or trustworthy than computer models in industry. I guarantee the problems in that code (except the manipulation) are endemic to all university code. The moral of the story is that any computer models and data that are going to end up driving policy should be available to the public.
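
    A minimal sketch of that failure and the usual guard against it (my illustration, not the library in question):

        import numpy as np

        # Quantities that should be non-negative, except rounding nudged one just below zero.
        variances = np.array([4.0e-8, 1.2e-9, -3.0e-17])

        print(np.sqrt(variances))   # RuntimeWarning; the negative entry becomes nan
                                    # (or the whole dataset goes complex if someone "fixes"
                                    # it by casting to a complex type, as that library did)

        # Clamp negatives small enough to be rounding noise; fail loudly on anything else.
        tol = 1.0e-12
        if (variances < -tol).any():
            raise ValueError("genuinely negative variance: upstream bug")
        print(np.sqrt(np.clip(variances, 0.0, None)))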

  26. Lummox, this isn’t even “the models.”

    This is the rather straight-forward “Determine a gridded historical temperature reconstruction from the available instrumental measurements.”

    The entire thing should be focused on averaging and interpolating, possibly with some weighting if you have concrete error estimates for a set of measurements. (A toy version of that averaging step is sketched at the end of this comment.)

    The key issue, as one reads through, is the state of the records. There are literally thousands of paper records that were entered into computer databases – and the entering appears to have been quite shoddy. There have been verified instances of a station in France being used to determine the temperature in Maine – because of an erroneous entry in Lat/Long.

    Most of the really egregious comments by the programmer aren’t about the programming, but about how the database needs to be reconstructed from first principles.
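
    For anyone wondering what that averaging-with-weighting step looks like, here is a toy version (my own sketch in Python with made-up station data; the real products are far more involved): bin station anomalies into 5-degree grid boxes, then take a cosine-of-latitude weighted mean of the occupied boxes.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 500
        lat = rng.uniform(-90.0, 90.0, n)      # made-up station locations
        lon = rng.uniform(-180.0, 180.0, n)
        anom = rng.normal(0.3, 1.0, n)         # made-up station anomalies, deg C

        # Bin stations into 5-degree boxes and average within each box.
        lat_edges = np.arange(-90, 91, 5)
        lon_edges = np.arange(-180, 181, 5)
        i = np.digitize(lat, lat_edges) - 1
        j = np.digitize(lon, lon_edges) - 1
        grid_sum = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
        grid_cnt = np.zeros_like(grid_sum)
        np.add.at(grid_sum, (i, j), anom)
        np.add.at(grid_cnt, (i, j), 1.0)
        with np.errstate(invalid="ignore"):
            grid = grid_sum / grid_cnt         # nan where a box has no stations

        # Global mean: weight occupied boxes by cos(latitude) so polar boxes don't dominate.
        box_lat = (lat_edges[:-1] + lat_edges[1:]) / 2.0
        weights = np.cos(np.deg2rad(box_lat))[:, None] * np.ones_like(grid)
        occupied = np.isfinite(grid)
        print(round(float(np.average(grid[occupied], weights=weights[occupied])), 3))
        # Should land near the 0.3 deg C baked into the fake stations.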

  27. Chris Gerrib, these leaked emails and files on programming destroy the global warming “science” not only because of what they reveal, but because these are just the emails we know about. There is no reason to believe there are not a lot more like this at CRU and other warmie sites. Hadley/CRU is a kingpin of the IPCC reports, from the 20th-century claim of a 0.6°C temperature rise to the disgraced 1990s “hockey stick”. By the way, the CRU may be in East Anglia, but it gets money from the U.S. Department of Energy and the EPA. So when you get press releases on the crisis of global warming, you’re hearing one voice, not three. I wonder if we can add to that number. Peer reviewed indeed.

  28. I’ve probably missed the boat on this, because of holiday travel, but I’ll put this up, anyway.

    Joe Sixpack –
    “Accuracy is important. If you know anything about science – not engineering – is that we’re after the truth.”

    You’re agreeing with my original point. The Newtonian gravity equation is asymptotic to the relativistic gravity equation, and I was pointing out that, in a purely accurate sense, the relativistic equation should be used, and that this replacement of models was an example of why induction needs to be treated with such skepticism that we apply the Scientific Method to inductive reasoning. It was Rand’s point that the Newtonian equation was good enough, implicitly because the difference between the equations was less than measurement error. The terms do not cancel out in the realm of large masses at low speeds; they become immaterially small. (See the section “Deviation from Newton’s Law of Gravity” here: http://en.wikipedia.org/wiki/Einstein_field_equations, noting the areas where values are “approximately zero” or “negligible”, but never “zero” or “cancelled”.)

    The reason I backed off from my previously harsh stand is that his approach is more scientific; the “truth” you mention in science is found in data, not in the models that survive experimentation, so differences between one model and another model that are less than the measurement error are irrelevant as long as the relevant domain is properly understood.
