A New Science Movement

Did Climaquiddick set one off? If so, it’s not just a new science but a real one (as opposed to one driven by political ideology), returning the field to free inquiry.

Remember these names: Steven Mosher, Steve McIntyre, Ross McKitrick, Jeff “Id” Condon, Lucia Liljegren, and Anthony Watts. These, and their community of blog commenters, are the global warming contrarians who formed the peer-to-peer review network and helped bring chaos to Copenhagen – critically wounding the prospects of cap-and-trade legislation in the process. One may even have played the instrumental role of first placing the leaked files on the Internet.

This group can be thought of as first cousins to Andrew Breitbart’s collective of BIG websites – obsessively curious, grassroots investigators who provide vision to the establishment’s blind eye. Peer-to-peer review is the scientific version of the undernews.

Call it Big Science.

[Update a few minutes later]

I liked this comment, which puts it all in perspective for those who remain willfully blind to the implications of the data dump:

Imagine for a moment that a high school student submitted a project for competition in which he offered up the hypothesis that tree rings gave a historical blueprint of climate change.

Competition Judge: “Ok, Johnny, this is a very interesting theory. May I see your data?”

Johnny McFibber: “I lost it.”

Competition Judge: “Hmmm. That will make it nearly impossible to win, Johnny. Can you duplicate it, or give us a detailed description of what it showed?”

Johnny M: “Actually, I hid the parts that didn’t comport with my theory (in fact, they showed the exact opposite of my theory), and I emailed all my friends to do the same.”

Competition Judge: “Johnny, that’s not the way we conduct ourselves in the sciences; you must be thinking of your humanities classes. Over here, we strictly scrutinize the facts.”

Johnny M: “There’s a reporter here I’d like to introduce you to… he wants to ask you some questions about your first marriage.”

Competition Judge: “Great work on this project, Johnny. The science is settled. You win.”

Moral? Research softly and carry a big hockey stick.

Fortunately, the hockey stick is broken, probably for good.

9 thoughts on “A New Science Movement”

  1. I have been wondering for some years now which institutions in our society, if any, allow true open inquiry into the questions of the day.

    Corporate America was never such an institution. Corporate America has always been about “the sell,” whether it is about a consumer product or about an internal process. For example, if Procter and Gamble is all about Total Quality Management, you don’t set out gathering data about the weaknesses or shortcomings of the corporate TQM initiative, even if pointing out those weaknesses, shortcomings, or limitations could lead to improvements in what the corporation is doing. Once P&G is doing TQ, it is TQ All the Way, and one doesn’t raise those sorts of questions.

    And at a major public university, in either the Business School or the Engineering School, one doesn’t ask too many questions either, because it is TQ there too if one wants to build some kind of alliance with P&G, for all manner of legitimate reasons: job placement of grads, adopting the management methodology of the corporate world to improve what the university does, getting funding for applied research, and so on.

    The only open-inquiry institution on what P&G is doing with TQ, in my opinion, is the Wall Street Journal, which will run a front-page article asking all of the questions no one at P&G or any of its partners will ask, namely: is this good for business, good for shareholder value, or is this just another fad?

    Likewise, on questions of science, there are many reasons to question the objectivity of the public university: the funding comes with too many strings attached. If the Wall Street Journal is a more objective source regarding the merits of a business process under the TQM rubric, Popular Mechanics these days is the go-to place on science questions with social implications.

    For example, when 9-11 Trutherism was the rage, the university was mincing around that one so as not to offend social sensibilities; the only place to get objective consideration of the issue was . . . Popular Mechanics.

    Popular Mechanics hasn’t yet dived into Climategate, but they have been pretty forthcoming about alternative energy and alternative fuels. In one sense, Popular Mechanics is “all for” these things, because they involve engineering and they are cool and neat and everything. In another sense, Popular Mechanics ran a “hey, wait a minute” article in the past year critiquing some of the schemes that are supposed to dispense with conventional fuels.

    There you have it. The last voices of objective truth are the Wall Street Journal, Popular Mechanics, and parts of the Blogosphere.

  2. Did Climaquiddick set off a new movement in science?

    Maybe. But I doubt it. I think the problems that contributed to Climaquiddick are present in more fields than climate science.

    Specifically, computers make it too easy to do half-assed science.

    One of the defenses I saw repeated when I used to read RealClimate was that it is simply too onerous to put up all the data and code and make things perfectly repeatable. That may be true, but who cares? This is a fundamentally unscientific attitude, and it is not an attitude unique to climate science. Science is reproducible experiment, and many scientists are using computers as an excuse to ignore this.

    Many published articles in signal processing or image processing or any field that requires huge data sets are not perfectly reproducible. This is not to say that the authors are being dishonest – they’re not. In 12 pages there’s no way to convey all the parameters, visualization choices, and programming tweaks that lead to a particular result. Most of the time you read the article to see if there’s a “big idea,” and you take the result (“Hundred-fold improvement!”) with a grain of salt.

    Computers make it too easy to grind through gigabytes of data. Organizing and formatting that data for someone else to reproduce your experiment is too boring to assign even to a grad student, and making data usable for everyone can be too time-consuming. Getting outside help to do it is ludicrously expensive under university rules that make the kickbacks taken by Tony Soprano look like charity. The attitude seems to be that it takes time and money away from doing the “important stuff.”

    Rutherford is somewhere frowning in disgust.

    Apart from making data and code available, the approaches used in practical fields are often so silly because they are invented using ad hoc analysis enabled by the fact that you can try silly ideas inexpensively on a computer. So, for example, the standard approach to measuring the earth’s temperature is to interpolate a temperature field over the surface of the earth and then numerically integrate it (see the sketch at the end of this comment). That is an intuitive approach enabled by computers, but not a very good one for spectral estimation of a DC term from randomly sampled data.

    Don’t get me wrong. I think that politics injected itself into climate science to an extent so extreme that at least some of these guys knew they were generating results that wouldn’t withstand scrutiny. But they were using arguments and methods that had congealed in broader science as computers became more popular.

    I think that what we’re seeing with Climaquiddick is an expression of a bigger problem in science, particularly science that relies heavily on big computer runs. I don’t have a solution, but it is certainly a problem.
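
    To make the point concrete, here is a minimal Python sketch of that interpolate-and-integrate approach. Everything in it is invented for illustration: the station locations and readings are random, and scipy’s griddata stands in for whatever interpolation scheme a real analysis would use.

    ```python
    # Minimal sketch: interpolate scattered "station" temperatures onto a
    # lon/lat grid, then numerically integrate with area weights.
    import numpy as np
    from scipy.interpolate import griddata

    rng = np.random.default_rng(0)

    # Hypothetical stations: irregular (lon, lat) sites with readings in C.
    n_stations = 500
    lons = rng.uniform(-180.0, 180.0, n_stations)
    lats = rng.uniform(-90.0, 90.0, n_stations)
    temps = 30.0 * np.cos(np.radians(lats)) + rng.normal(0.0, 2.0, n_stations)

    # Interpolate onto a regular 1-degree lon/lat grid.
    grid_lon, grid_lat = np.meshgrid(np.linspace(-180.0, 180.0, 361),
                                     np.linspace(-90.0, 90.0, 181))
    field = griddata((lons, lats), temps, (grid_lon, grid_lat), method="linear")
    # Linear interpolation leaves gaps outside the stations' convex hull;
    # patch them with nearest-neighbor values so the integral is defined.
    nearest = griddata((lons, lats), temps, (grid_lon, grid_lat), method="nearest")
    field = np.where(np.isnan(field), nearest, field)

    # "Numerically integrate": a cos(latitude)-weighted mean over the grid.
    weights = np.cos(np.radians(grid_lat))
    global_mean = np.sum(field * weights) / np.sum(weights)
    print(f"global mean temperature: {global_mean:.2f} C")
    ```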

  3. I’ve got to agree with you, Joe. Back in the late ’90s, I was a graduate student in the math department at Chapel Hill, NC. At the time, the department was hiring a bunch of people for a new applied math subdepartment. They got some solid people to hold the department down and were then looking for tenure-track professors to fill out the remainder of the permanent positions.

    Of this group, there seemed to be two types. One group tended to work with particular systems or equations; the other worked with computer models. The group working with computer models seemed a lot less competent mathematically than the former, but they had the glossy PowerPoint presentations. Mind you, the audience would be about two-thirds or more pure mathematicians, so it was a very tough crowd for someone presenting computer work.

    For example, I remember one guy who used level sets (sets over which a real function is constant) to model various types of tricky singularities (like the kind of thing that happens when a drop separates from a leaky faucet). The problem was that nobody could figure out whether the math was good. Sure, his sexy graphics looked like the still pictures he had of water-droplet separation, but there was no concrete math model to compare them against.

    A lot of the audience would ask questions about whether his methods could apply to their niche (e.g., Morse theory, which is a way of modeling surfaces and higher-dimensional shapes by chopping the shape along some direction finely enough that only a little bit of “weird stuff,” like shape changes or singularities, happens between each cut). He didn’t know enough to say one way or the other whether his research would be useful.

    Another guy attempted to model an “Euler fluid,” a completely viscosity-free fluid (the Navier-Stokes equations with the viscosity term set to zero). He got some great swirling action (which is what you expect when there’s no viscous resistance to swirling) and demonstrated a hard limit where the fluid hit a singularity. His computer models, while pretty, were less than useful, since round-off error in the code vastly changed how the fluid swirled (he was using a seriously tricky and unstable initial condition). Basically, he showed that the computer model broke just from round-off error and characterized how tough the problem was; a toy illustration of that round-off sensitivity appears at the end of this comment. That made him the strongest (IMHO) of the computer-wielding group.

    In defense of the computer guys, they all worked on problems that were, for one reason or another, hard to do on computers. So they weren’t slouching or doing lazy work. The problem was that, for the most part, they didn’t understand what most of the department was doing.

    Anyway, the end result is that, as far as I know, virtually everyone hired came from the first group, which wasn’t dependent on computers. I think there were one or two good computer-research people, but I don’t know if we were able to get them.
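
    A footnote on the round-off point: the fluid code itself is long gone, but the effect is easy to reproduce with any chaotic recurrence. Here is a toy Python sketch (the logistic map standing in for the fluid model, which I obviously don’t have) of the same computation run in 32-bit and 64-bit precision from the identical starting point, drifting completely apart:

    ```python
    # Toy illustration of round-off sensitivity: iterate the chaotic
    # logistic map x -> 4x(1 - x) at two precisions and watch the
    # orbits diverge. A stand-in, not the actual fluid model.
    import numpy as np

    def logistic_orbit(x0, n_steps, dtype):
        """Iterate the logistic map at a given floating-point precision."""
        x = dtype(x0)
        four, one = dtype(4.0), dtype(1.0)
        orbit = []
        for _ in range(n_steps):
            x = four * x * (one - x)
            orbit.append(float(x))
        return orbit

    x64 = logistic_orbit(0.1, 60, np.float64)
    x32 = logistic_orbit(0.1, 60, np.float32)
    for step in (10, 30, 50):
        print(f"step {step:2d}: float64={x64[step]:.6f}  "
              f"float32={x32[step]:.6f}  "
              f"diff={abs(x64[step] - x32[step]):.6f}")
    ```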

  4. Apart from making data and code available, the approaches used in practical fields are often so silly because they are invented using ad hoc analysis enabled by the fact that you can try silly ideas inexpensively on a computer.

    It’s worth noting that this is a strength as well as a weakness. A lot of research is about trying silly ideas. Sometimes it works, in which case there is usually a not-so-silly reason why.

    For a personal example, a partner and I came up with a data imputation method (basically, a method to fill in the blanks of a collection of data with missing entries) back in 2003 or so that could work on a table or matrix of numerical data where most of the entries were missing. You got out an approximation to the matrix, including all missing values, with the entries decomposed for principal component analysis (a sketch of the generic technique appears at the end of this comment).

    The problem with it was that we couldn’t get a unique “best” answer (I could only get a range of possible matrices), nor could we justify why anyone should bother with the method. It was voodoo that gave a range of possible answers.

    I’ve been tempted to apply it to some of this climate data; I just don’t know whether it’d be worth the effort. For what it’s worth, the method seems to work well on very incomplete data sets that are thrown together and share a common axis (time, say), with the existing data partially overlapping along that axis. That’s exactly what the paleoclimate records are, both the raw data and the processed data. It won’t puke out a magic “global average temperature,” but it does provide a way to extend records virtually beyond their current extent (for example, satellite records for the past two decades could be extended back against glacial data for the past 400k years).

    That may be a good idea or a lousy one, possibly both. But if it weren’t for the computer, I could never fall into such peril.
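
    For the curious, here is a rough Python sketch of the generic technique in this family: iterative low-rank SVD completion. To be clear, this is not our 2003 method, just a standard illustration of the same idea: fill the gaps, fit a low-rank factorization, re-impose the observed entries, and repeat.

    ```python
    # Generic low-rank imputation sketch (not the 2003 method described
    # above): alternate between a rank-k SVD fit and re-imposing the
    # observed entries, starting from column means.
    import numpy as np

    def svd_impute(X, rank=2, n_iters=100):
        """Fill NaN entries of X using a rank-`rank` SVD approximation."""
        mask = np.isnan(X)
        filled = np.where(mask, np.nanmean(X, axis=0, keepdims=True), X)
        for _ in range(n_iters):
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            approx = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
            filled = np.where(mask, approx, X)  # keep observed entries fixed
        return filled  # the SVD factors double as PCA-style components

    # Demo on a synthetic low-rank table with most entries knocked out.
    rng = np.random.default_rng(1)
    truth = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 20))
    X = truth.copy()
    X[rng.random(X.shape) < 0.6] = np.nan  # delete roughly 60% of entries
    completed = svd_impute(X, rank=2)
    rms = np.sqrt(np.mean((completed - truth) ** 2))
    print(f"RMS reconstruction error: {rms:.3f}")
    ```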

  5. Yikes: from the Boulder Daily Comrade

    Letters to the Editor – Jan. 9, Camera staff
    Posted: 01/09/2010 01:00:00 AM MST

    Climate science: “Intellectual self-criticism”

    Michael Glantz offers an interesting challenge to climate scientists whose opinions differ from the party line (“Skeptics, show us your e-mails”). I am one of those scientists. This is, however, an impractical challenge, because there is no way to remove all personal information involving oneself and countless others.

    However, because I teach and do climate research at the University of Colorado, I do assume all my e-mails are the property of the Colorado State Government and if necessary could be examined in depth by the appropriate officials. I am confident that there would be no e-mails, even if “taken out of context,” which would indicate (as the Climatic Research Unit e-mails do) that I was trying to rig the peer review process or trying to keep contrary information out of international summary documents. But these are relatively minor issues.

    The real challenge to all scientists is to actively challenge the validity of their conclusions by seeking and supporting independent reproduction of their results. This is the foundation of science: intellectual self-criticism. The single biggest scandal revealed in the e-mails from the Climatic Research Unit is the lengths they went to, over the course of years, to refuse outside requests to make data and methodology available, including discussions about resisting Freedom of Information Act requests. Something like this would never show up in my e-mails. I have always enthusiastically aided anyone trying to reproduce or refute my results.

    That the work produced by the Climatic Research Unit is not completely and independently reproducible because the data and methods were actively hidden from public scrutiny indicates that whatever was occurring over time at the Climatic Research Unit, it was never related to science.

    THOMAS N. CHASE

    Boulder

  6. I think that John Daly should be an honorary member of the list above. He was the Australian climate skeptic whose death received positive comment in the CRU emails.

    His pioneering blog, “Still Waiting for Greenhouse,” is still available here:

    http://www.john-daly.com/

  7. I remember people saying, a long time ago, “It came from a computer, so it must be right.” That was long ago, and most people know better today. Could it be that this outdated idea is still accepted by some scientists?

    Related cartoon.

  8. Joe and Karl are so right. Computers sometimes allow one to take shortcuts. But, if one takes too many shortcuts, one tends to lose track of where one is.

    I remember, years ago, one of our guys related how he used to work in nuclear stuff at a famous national lab. One year, they had an exchange with Russian scientists working in the same area. Everyone was amazed at the depth and breadth of the Russians’ knowledge, and it became apparent that it was because they did not have the computer resources of the West, and had to actually think through the theory. So, when a US scientist would say “we have a computer model which says this but we don’t know why,” the Russian would invariably supply the reason.

  9. Paul Milenkovic, I’m not sure that’s right. A lot of research, like TQM, was pioneered at corporate institutions willing to try new things. Most fall prey to the fallacies you cite, but the strength of the system is that it takes only one company to be honest in its self-criticism to arrive at a verifiably better way of doing things, which is then rewarded with higher profitability, putting everyone else out of business. The trick isn’t to find a social sphere (like journalism or academia or business) that encourages honest people, because that will never exist; the trick is to set up a system that rewards the one correct person along metrics that don’t rely on human judgement. The failure comes from systems, like academia’s tenure tracks or government funding, that rely on majority rule or consensus for acceptance of new ideas.

    It doesn’t matter that most companies don’t conduct new research; it matters that any one company that does will be rewarded.
