Ferguson’s Imperial Model

A code review.

Good lord.

This reminds me an awful lot of the code that was leaked from CRU. S**t climate coding has done a lot of economic damage, but nowhere near as rapidly as this has, with tens of thousands of deaths to boot.

[Update a few minutes later]

A devastating conclusion:

All papers based on this code should be retracted immediately. Imperial’s modelling efforts should be reset with a new team that isn’t under Professor Ferguson, and which has a commitment to replicable results with published code from day one.

On a personal level, I’d go further and suggest that all academic epidemiology be defunded. This sort of work is best done by the insurance sector. Insurers employ modellers and data scientists, but also employ managers whose job is to decide whether a model is accurate enough for real world usage and professional software engineers to ensure model software is properly tested, understandable and so on. Academic efforts don’t have these people, and the results speak for themselves.

Same with climate modeling. Get it out of the universities. Particularly Penn State.

[Update a while later]

What Ferguson’s booty call tells us about our “elites.”

[Friday-afternoon update]

The model that panicked the world was junk.

15 thoughts on “Ferguson’s Imperial Model”

  1. The old saying is “all models are wrong but some of them are useful.” When a model is being used for public policy decisions, the question is “useful to whom?”

    Mix a drop of sewage in a glass of wine and you get sewage. Mix a drop of water in a glass of sewage and you get sewage. The same applies to politics. Add a drop of politics to science and you get politics (and political science is an oxymoron).

  2. Not even GIGO…

    To think that pseudo-random processes that do not produce repeatable results can be combined into some kind of mathematical amalgam that provides any form of predictive accuracy is innumerate. To paraphrase: “It’s a bold strategy you’ve got there, Cotton.”

    I have tremendous experience with the proper use of pseudo-random sequences to model digital hardware in the design phase. As documented, this is improper technique, guaranteed to generate irreproducible results; enough to render the modeling useless, because you cannot distinguish cause and effect. I, like the author, base this statement on over 30 years of experience with computer design and simulation for computer and digital telecoms gear developed for a diverse set of employers.

    Assuming the author is being honest, their conclusions are warranted and, in my professional opinion, beyond question.

    This is why computer models that claim to predict any physical phenomena must be validated via independent experimental verification before their results can be relied upon.
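    For the non-engineers reading along, here is a minimal sketch in C++ of the reproducibility point. To be clear, this is invented illustration code, not anything from the Imperial repository:

    #include <cstdio>
    #include <random>

    // Toy "simulation": sum one million draws from a PRNG. With a fixed
    // seed the answer is bit-for-bit repeatable across runs, which is
    // what makes validation and debugging possible. Seed from an entropy
    // source instead and two runs of the "same" model disagree for no
    // scientific reason at all.
    double toy_simulation(unsigned seed, int draws) {
        std::mt19937 rng(seed);                        // deterministic generator
        std::uniform_real_distribution<double> u(0.0, 1.0);
        double total = 0.0;
        for (int i = 0; i < draws; ++i) total += u(rng);
        return total;
    }

    int main() {
        std::printf("run A: %.10f\n", toy_simulation(42u, 1000000)); // identical
        std::printf("run B: %.10f\n", toy_simulation(42u, 1000000)); // identical
        std::printf("run C: %.10f\n",
                    toy_simulation(std::random_device{}(), 1000000)); // different every time
        return 0;
    }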

  3. Wanna bet that any of the other “models” are any better?

    The first comment was priceless: “Devastating. Heads must roll for this, and fundamental changes be made to the way government relates to academics and the standards expected of researchers. Imperial College should be ashamed of themselves.”

    As if this was somehow anything but SOP for government programs. In both senses of the word.

    The parallel with the warmist hoax is undeniable, which means it will be denied to the dying breath of the “established” media.

    So we’ve locked down the country, squandered trillions of dollars, and killed untold numbers by delaying critical care, all based on averaging random numbers.

  4. Wow. What crap.

    The first article contains a link to the GitHub repository. I took a look at CovidSim.cpp:

    /*
    (c) 2004-20 Neil Ferguson, Imperial College London (neil.ferguson@imperial.ac.uk)
    All rights reserved. Copying and distribution prohibited without prior permission.
    */

    And there’s this little comment on line #1380:
    //Intervention delays and durations by admin unit: ggilani 16/03/20

    It’s like all the other bad amateur open-source software out there. Lots of one-character variable names, with as little whitespace as possible. Commented-out code. The input relies on scanf() to put everything in place, and then argv[] is picked apart character by character in a pages-long nested if statement, with everything shoved into a single struct called, simply, P. Lots of (void*)& being passed to functions. Other files have indentation reaching more than halfway across overly wide pages. The only good thing I can say is that they didn’t try to rewrite it using C++ templates.

    It’s write-and-forget code, unmaintainable even by the people who wrote it. I don’t care how valid the underlying algorithms may be; there’s no way for anyone to understand what’s really going on in this mess.
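    For anyone who hasn’t clicked through, here’s an invented miniature in the same style; my own caricature, to be clear, not a quote from CovidSim.cpp:

    #include <cstdio>
    #include <cstdlib>

    struct param { double r, d; int n; } P;        // everything lives in "P"

    static void f(void *o, const void *i)          // types erased for no reason
    { *(double *)o = *(const double *)i * P.r; }

    int main(int c, char **v) {
        std::sscanf("2.4 6.5 100", "%lf %lf %d", &P.r, &P.d, &P.n); // scanf-family input
        if (c > 2 && v[1][0] == '/' && v[1][1] == 'R')              // a flag picked apart
            P.r = std::atof(v[2]);                                  // one character at a time
        double x = 1.0, y = 0.0;
        f(&y, &x);                                 // good luck guessing what f does
        std::printf("r=%g d=%g n=%d y=%g\n", P.r, P.d, P.n, y);
        return 0;
    }

    Every line of that compiles and runs, and every line of it fights the next person who has to maintain it.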

    It’s not just academics and open-source amateurs who produce crap like this, unfortunately. Decades ago I used to do contracts at places like Microsoft, where I’d get code like this (DRM, video codecs, etc.) and have a few months to produce something that could finally run on a Mac. There was little help from the original team, as they were all busy on the next great product. Getting the compile errors down below the development system’s upper limit of 10,000 errors was a cause for celebration.

    1. They probably don’t understand templates well enough to try; that’s why they’re using (void*).

    2. I found a bug! No, not in the Ferguson code (not that finding a bug there would be particularly challenging). I found one in this comment box’s code. I was entering a funny comment about structure P (which is obviously the name for a pointer, as opposed to the ubiquitous and useful name “my_struct”, which tells you who owns it).

      Anyway, while entering my “repeat-until-panic” loop, the less-than sign killed the rest of the comment in the preview window. Using Fortran’s ‘.LT.’ in C doesn’t look right, so I thought, “I wonder if double less-than signs would get displayed as a single one?” Nope. It just locks up the comment window and makes the page unresponsive.

      1. Give it a rest, George. I long ago discovered it is nigh impossible to push C code syntax into the comment box of WordPress. You might have better luck with FORTRAN or BASIC.

    3. My hourly consulting fee doubles if C++ templates are employed anywhere in the code.

  5. I don’t understand the basic presumption that by running the simulation multiple times and averaging the results, the resulting average is somehow useful as a forecast.

    You average the results of ten dice tosses and bet your life on the results of the next toss?

    1. I suspect the assumption stems from Monte Carlo simulations in physics, where you’re often modeling a small sample of trillions of random particle interactions to solve a problem in thermodynamics or nuclear reactions; there, our reality really is the average of all the runs.

      But in other cases where reality is a result of one real-world run (like climate or a disease), they assume the average of a bunch of runs is more likely than one of the outliers, even though there may not be any reason to assume one outcome is likelier than another.

      The average dice roll is 3.5, but there’s no reason to think a three or four is more common than a one or six. (The toy simulation below puts numbers on this.)

      1. The problem here is that the source of randomness isn’t *just* the mathematically constrained pseudo-random generators, but coding bugs as well. What do you get by averaging the effects of all the bugs?
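      To put numbers on the dice point above, here is a toy C++ sketch (nothing to do with the Imperial code): the mean converges to 3.5 while every face remains equally likely, so the average forecasts nothing about any single roll.

      #include <cstdio>
      #include <random>

      // Roll a fair die six million times: the running mean converges to
      // 3.5, yet each face stays equally likely (~1/6). The average tells
      // you the mean of the distribution, not which outcome the one
      // real-world "run" will actually produce.
      int main() {
          std::mt19937 rng(12345);                      // fixed seed: repeatable
          std::uniform_int_distribution<int> die(1, 6);
          const int N = 6000000;
          long long sum = 0, count[7] = {0};
          for (int i = 0; i < N; ++i) { int f = die(rng); sum += f; ++count[f]; }
          std::printf("mean = %.4f (expect 3.5)\n", (double)sum / N);
          for (int f = 1; f <= 6; ++f)
              std::printf("P(%d) = %.4f (expect 0.1667)\n", f, (double)count[f] / N);
          return 0;
      }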

  6. This is just wonderful. The software used to make predictions that have cost trillions of dollars is a piece of poorly coded crap that’s little more than a random output generator. Sounds a lot like the climate models, which, if their proponents had their way, would also have multi-trillion-dollar impacts. Perhaps it’s time to set some quality standards on computer models before they’re used to make public policy decisions.
