Transterrestrial Musings  


Biting Commentary about Infinity, and Beyond!


How Many Of Them Are About Climate Change?

Ian Murray (who seems to be having spam problems) should find this one fascinating. There's a brief but interesting article in this week's Economist about the disturbing prevalence of shaky statistics in peer-reviewed scientific papers.

...If a set of data are “unedited”, the last digits in the numbers recorded will tend to have the values 0-9 at random, since these digits represent small values, and are thus the ones that are hardest to measure. If those numbers are rounded carelessly, however, 4s and 9s (which tend to get rounded up to the nearest half or whole number) will be rarer than they should be. The two researchers duly discovered that 4s and 9s were, indeed, rarer than chance would predict in many of the papers under scrutiny.

False data, false results. Though it was difficult to show whether, in any given case, this falsity led to a result being proclaimed statistically significant when it was not, it was possible to estimate how much error there was likely to be. In one case, however, there was no doubt. A number supposed to be statistically significant was explicitly mis-stated, and a false inference drawn in the paper's conclusion.

Peer review is highly overrated, in my opinion. I hope that specialty blogs can start to address some of the deficiencies of that process. It's not just media types that need fact checking. Sometimes scientists (and NASA officials) need it as well, and we'd be better off if more people realized this rather than relying on arguments from false authority.
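
Here's a rough sketch, in Python, of the sort of terminal-digit test the article describes--my own illustration with made-up numbers, not the researchers' actual method or data. Tally the last printed digit of each reported value and compare the tallies against a uniform distribution with a chi-square goodness-of-fit test:

    from collections import Counter

    def terminal_digit_chi2(reported):
        """reported: the numbers exactly as printed in the paper, as strings,
        so that trailing zeros survive (e.g. "3.40", not the float 3.4)."""
        digits = [s[-1] for s in reported]
        counts = Counter(digits)
        expected = len(digits) / 10.0
        chi2 = sum((counts.get(str(d), 0) - expected) ** 2 / expected
                   for d in range(10))
        return counts, chi2

    # Hypothetical sample: note that no values end in 4 or 9.
    sample = ["12.31", "8.72", "0.55", "3.10", "7.68", "2.25",
              "5.03", "9.17", "4.86", "1.50", "6.32", "3.95"]
    counts, chi2 = terminal_digit_chi2(sample)
    print(counts)
    # Compare chi2 to 16.92, the 5% critical value for 9 degrees of freedom.
    print("chi2 =", chi2)

With a real paper's data tables, a chi2 well above the critical value would flag the digit distribution as suspiciously non-uniform.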

Posted by Rand Simberg at June 03, 2004 04:02 PM
Comments

If peer review is done CORRECTLY, then these problems should've been caught. There are proper ways to deal with the fuzziness of results, particularly if the numbers are experimental data (where one presumably knows the number of bits of resolution in the A/D conversion, for example). However, my impression has always been that such malstatistics are rampant in the 'softer' sciences (psych and some medical), likely due to the weaker emphasis on math in these folks' training.

Just one question, though... Upon examining piles of numbers that have been rounded to '5' or '0', is it any surprise that '4' & '9' are missing? How did they miss that '1', '2', '3', '6', '7', and '8' are ALSO missing??? Upon looking at the abstract, it would appear they mix some rounded and some non-rounded numbers, in which case they should've concluded that '0' and '5' were OVERrepresented. They may, in fact, be drawing a correct conclusion from flawed data -- I hope they practice what they preach and publish the raw data in their paper!

- Eric.

Posted by Eric Strobel at June 3, 2004 05:04 PM
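
A quick simulation makes Eric's question concrete. The mechanism the researchers seem to have in mind is that a careless author bumps only the values ending in 4 or 9 up to the nearest half or whole (0.04 -> 0.05, 0.09 -> 0.10); the other terminal digits are never touched, so only the 4s and 9s go missing while the 5s and 0s get inflated. A sketch (mine, with made-up parameters, not code or data from the paper):

    import random
    from collections import Counter

    random.seed(0)
    # Uniform measurements reported to two decimals: last digits ~ uniform 0-9.
    raw = [round(random.uniform(0, 10), 2) for _ in range(10000)]

    def careless_round(x):
        """Bump values ending in 4 or 9 up by 0.01; leave everything else alone."""
        cents = int(round(x * 100))
        if cents % 10 in (4, 9):
            cents += 1
        return cents / 100.0

    before = Counter(int(round(x * 100)) % 10 for x in raw)
    after = Counter(int(round(careless_round(x) * 100)) % 10 for x in raw)
    print(sorted(before.items()))  # roughly 1000 of each digit
    print(sorted(after.items()))   # 4s and 9s gone; 5s and 0s roughly doubled;
                                   # 1, 2, 3, 6, 7, and 8 untouched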

Mr. Simberg wrote:
"Peer review is highly overrated, in my opinion. I hope that specialty blogs can start to address some of the deficiencies of that process."

This implies that you think it should be done away with entirely. I would agree that peer review has problems and that it is not a guarantee of accuracy. But I've also seen it work well at improving the quality of a manuscript. Let's keep in mind that the Internet is not peer reviewed and it isn't a paragon of accuracy...

However, I would also argue that the most important peer reviewer is the editor. I have seen cases where an editor let things slip through that he clearly should not have in the stupid belief that the peer review process would catch them. The peer review process in those cases also failed, but the editor was clearly not doing what he needed to do.

Posted by Dwayne A. Day at June 3, 2004 06:18 PM

I'm not arguing that it should be done away with--I'm just pointing out that it's not infallible, and that we shouldn't place as much stock in it as we do, at least until we come up with ways to dramatically improve it. I do think that one way of supplementing/complementing it is via a healthy debate online.

Posted by Rand Simberg at June 3, 2004 06:31 PM

Isn't Scientific American "peer-reviewed"?

'Nuff said.

Posted by Barbara Skolaut at June 3, 2004 08:38 PM

Uh Barbara,
Scientific American is not peer-reviewed...

Posted by Duncan Young at June 3, 2004 09:07 PM

Part of the problem is that people take scientific publication too seriously. Papers in scientific journals are first drafts, intermediate results, and the like. Only after they've been worked over, experiments duplicated, implications checked, does the level of confidence in a given result become high. Publication is about communication between people working in the field, it's not a record of objective scientific fact. Every now and then a single paper is published which completely nails a particular problem, but that's the exception, not the rule.

Posted by Andrew Case at June 4, 2004 07:10 AM

That was sort of my point, Andrew. It's not clear to me that scientific journals serve that much purpose any more, now that we have the web, in which things can be quickly thrashed out in almost real time, rather than waiting months to get something reviewed and published.

Posted by Rand Simberg at June 4, 2004 07:27 AM

Agreed. When I search for scientific knowledge, I start with the web, read the PDF papers people have on their websites, and read the PDFs of the online journals Caltech has access to (and probably pays through the nose for).

Finally, if there is a particular paper I need that's not online, I'll go to the library and dig it out the old-fashioned way, but that will be about 1 paper in 10 because it's s--o s--l--o--w.

Years ago I suggested in a document sent to a UN conference that a worldwide search engine be created dealing specifically with scientific publications and knowledge, accurately classifying things according to subject and degree of complexity.

Ad-hoc peer review could be integrated with such a thing. The closest thing to this so far seems to be Wikipedia, but I think the eventual appearance of such a free system is inevitable. I would rather have it sooner than later, so I'd like to see the universities club together and do it (which should save them money by freeing them from journals and the bad copyright deals that go with them).

Posted by Kevin Parkin at June 4, 2004 08:09 AM

Thanks, Duncan. That certainly explains a lot.

Posted by Barbara Skolaut at June 4, 2004 02:50 PM

The peer review process is really not meant to guard against data cooking, fraud, etc. As far as I can tell, it's useful for keeping the obviously loony papers out (well, unless the whole field is loony) and for helping authors to know where their paper lacks readability, references, etc. The only way to fully test for faked data in experimental papers is to do the experiment again, which is obviously beyond the scope of most peer reviews for most experiments.

Posted by Dave at June 4, 2004 06:03 PM

Mr. Simberg wrote:
"It's not clear to me that scientific journals serve that much purpose any more, now that we have the web, in which things can be quickly thrashed out in almost real time, rather than waiting months to get something reviewed and published."

You are presuming that speed is important, but it is not. There is much value to a slower and more deliberative process. And I'm not sure what you have against scientific journals or formal publishing.

The most important thing is the process. It has to be designed well. This means things like carefully selecting reviewers (and eliminating ones who don't respond or do poorly) and establishing clear deadlines.

It is never going to be possible to "thrash things out almost in real time" when the issues are complex. Some reviews are extremely complicated processes, requiring responses and corrections and much interaction between submitter and reviewer.

Would you really want a doctor to crack open your chest and perform a procedure on you that had not gone through a formal review process?

Posted by Dwayne A. Day at June 4, 2004 10:22 PM

