From me, in a podcast with Anthony Colangelo.
The cost would be larger than the current state budget.
Because, you know, taxes in California aren’t high enough.
Thoughts from Judith Curry on the current state of knowledge in climate. The warm mongers never consider the possibility that their proposed cures may in fact be worse than the disease. I personally think it’s nuts to consider climate a greater threat to humanity than poverty, and particularly energy poverty. But then, many of them don’t really care about humanity, or consider humanity a problem in and of itself.
[Update a few minutes later]
A new paper on the epistemological status of general circulation models.
He attempts to discredit Judith Curry, and you’ll never guess what happens next!
There is one wonderful thing about Gavin’s argument, and one even more wonderful thing.
The wonderful thing is that he is arguing that Dr. Curry is wrong about the models being tuned to the actual data during the period because the models are so wrong (!).
The models were not tuned to consistency with the period of interest, as shown by the fact that – the models are not consistent with the period of interest. Gavin points out that the models range all over the map when you look at the 5%–95% range of trends. He’s right: the models do not cluster tightly around the observations, and they should if they were modeling the climate well.
Here’s the even more wonderful thing. If you read the relevant portions of the IPCC reports, looking for the comparison of observations to model projections, each is a masterpiece of obfuscation on this same point. You never see a clean, clear, understandable presentation of the models-to-actuals comparison. But look at those histograms above, direct from the hand of Gavin. It’s the clearest presentation I’ve ever run across that the models run hot. Thank you, Gavin.
Yes, thank you.
[Update a while later]
HubbellClinton tweets about science, and you’ll never guess what happens next!
Problems with p-hacking are by no means exclusive to Wansink. Many scientists receive only cursory training in statistics, and even that training is sometimes dubious. This is disconcerting, because statistics provide the backbone of pretty much any research looking at humans, as well as a lot of research that doesn’t. If a researcher is trying to tell whether changing something (like the story someone reads in a psychology experiment, or the drug someone takes in a pharmaceutical trial) causes different outcomes, they need statistics. If they want to detect a difference between groups, they need statistics. And if they want to tease out whether one thing could cause another, they need statistics.
The replication crisis in psychology has been drawing attention to this and other problems in the field. But problems with statistics extend far beyond just psychology, and the conversation about open science hasn’t reached everyone yet. Nicholas Brown, one of the researchers scrutinizing Wansink’s research output, told Ars that “people who work in fields that are kind of on the periphery of social psychology, like sports psychology, business studies, consumer psychology… have told me that most of their colleagues aren’t even aware there’s a problem yet.”
I think the hockey stick episode shows that this is a problem with climate research as well.
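The multiple-comparisons mechanism behind p-hacking is easy to see in a quick simulation. This is a minimal sketch with made-up numbers: it assumes each measured outcome yields an independent test, so under the null hypothesis each p-value is uniform on [0, 1].

```python
import random

random.seed(0)

ALPHA = 0.05          # nominal false-positive rate for a single test
N_EXPERIMENTS = 10_000
N_OUTCOMES = 10       # a researcher measures 10 outcomes, reports any "hit"

false_positives = 0
for _ in range(N_EXPERIMENTS):
    # Under the null hypothesis, each test's p-value is uniform on [0, 1].
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    # p-hacking: declare a "finding" if ANY outcome crosses the threshold.
    if min(p_values) < ALPHA:
        false_positives += 1

rate = false_positives / N_EXPERIMENTS
# Expected: 1 - 0.95**10, i.e. roughly 40%, not the nominal 5%.
print(f"False-positive rate with {N_OUTCOMES} outcomes: {rate:.1%}")
```

With ten outcome measures and no correction, the chance of a spurious “significant” result balloons from the nominal 5% to about 40% — which is exactly why reviewers who never see the full analysis path can’t catch it.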
The point of peer review has always been for fellow scientists to judge whether a paper is of reasonable quality; reviewers aren’t expected to perform an independent analysis of the data.
“Historically, we have not asked peer reviewers to check the statistics,” Brown says. “Perhaps if they were [expected to], they’d be asking for the data set more often.” In fact, without open data—something that’s historically been hit-or-miss—it would be impossible for peer reviewers to validate any numbers.
Peer review is often taken to be a seal of approval on research, but it’s actually more like a small or large quality boost, depending on the reviewers and scientific journal in question. “In general, it still has a good influence on the quality of the literature,” van der Zee said to Ars. But “it’s a wildly human process, and it is extremely capricious,” Heathers points out.
There’s also the question of what’s actually feasible for people. Peer review is unpaid work, Kirschner emphasizes, usually done by researchers on top of their existing heavy workloads, often outside of work hours. That often makes devoting the time and effort needed to catch dodgy statistics impossible. But Heathers and van der Zee both point to a possible generational difference: with better tools and a new wave of scientists who aren’t being asked to change long-held habits, better peer reviews could conceivably start to emerge. Although if change is going to happen, it’s going to be slow; as Heathers points out, “academia can be glacial.”
“Peer review” is worse than useless at this point, I think. And it’s often wielded as a cudgel against dissidents of the climate religion.
I see “science fans” applauding and promoting Bill Nye’s call for 100% renewable generation by 2050. One might think that if one endorsed Mr. Nye’s plan, it would also be prudent to encourage studies such as the one advocated by the Secretary of Energy. Certainly Mr. Nye is not a power systems expert, nor have I seen him reference any when he explains how such a transition can be accomplished. We should all be at least somewhat skeptical about the potential consequences of such a significant endeavor.
What I may be missing is the role of “optimism,” which Mr. Nye assures us is a necessary ingredient for this transition. I’d seen hints of this before, and perhaps what is happening is that far too many people obstinately reject any criticism regarding renewables because they believe that optimism is crucial if the planet is to be saved. Consequently, no one should utter a disparaging word about any of the potential “preferred” renewable solutions. The view seems to be that we must get started now and will work out the distracting details as we go along.
Perhaps this explains why those who view climate with extreme alarm often show no tolerance for criticism of renewable energy. Otherwise, why are grid experts not trusted? Grid experts have academic credentials, share a common body of knowledge, and continually build and revise their understanding based upon empirical evidence. Individually and collectively they work to be innovative, develop new approaches, and challenge older perspectives. Grid experts have a proven track record of success. As I’ve argued before, grid experts do not, for the most part, have a strong vested personal interest in the status quo. An ambitious, aggressive transition to greater renewables would increase the demand for, and likely the compensation of, nearly all existing grid experts.
It’s almost as though it’s religious, not scientific.
Nice to see things like this at Slate. Everyone who “marched” yesterday should read it. Didn’t like the “science deniers” reference in last graf, though.
The “March For Science” failed, as demonstrated by its own signs:
Time to brush up on your social science, Science Guy. You too, Astrophysicist Dr. DeGrasse Tyson. You too, all ye faithful March for Science marchers, all ye believers in Truth, Science, and the Objective Way. Beware your own version of science denial. The idea has not developed “somehow”, “along the way”, that belief is informed by more than just what science says. Modern humans have always interpreted the facts based on deep values and meanings, affective filters imbuing the facts with an emotional valence that plays a huge part in determining what ultimately arises as our view of THE TRUTH.
Tyson and others are profoundly (and willfully) ignorant of philosophy. Belief in an objective reality is a critical element of the scientific method, but it’s just a belief, not the “truth.”
Self-taught systems beat MDs at predicting heart attacks:
All four AI methods performed significantly better than the ACC/AHA guidelines. Using a statistic called AUC (in which a score of 1.0 signifies 100% accuracy), the ACC/AHA guidelines hit 0.728. The four new methods ranged from 0.745 to 0.764, Weng’s team reports this month in PLOS ONE. The best one—neural networks—correctly predicted 7.6% more events than the ACC/AHA method, and it raised 1.6% fewer false alarms. In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved. That’s because prediction often leads to prevention, Weng says, through cholesterol-lowering medication or changes in diet.
To be honest, while it’s statistically significant, I’d have expected a bigger improvement than that. And it’s not clear how useful it is if the recommendations aren’t science-based, as cholesterol-reduction and dietary-change prescriptions generally aren’t.
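For readers unfamiliar with the AUC statistic the excerpt leans on: it has a simple probabilistic reading — the chance that a randomly chosen patient who had an event was scored as higher-risk than a randomly chosen patient who didn’t. A minimal stdlib sketch (the function name and toy scores are illustrative, not from the study):

```python
# AUC = probability that a randomly chosen positive case is scored
# higher than a randomly chosen negative case (ties count half).
def auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores
        for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: a model that mostly ranks events above non-events.
events = [0.9, 0.8, 0.6]      # risk scores for patients who had events
non_events = [0.7, 0.4, 0.3]  # risk scores for patients who did not
print(auc(events, non_events))  # 8 of 9 pairs ranked correctly ≈ 0.889
```

On this reading, the jump from 0.728 to 0.764 means the neural network correctly ranks a random event/non-event pair about 3.6 percentage points more often than the ACC/AHA guidelines do — real, but modest, which is the point above.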
The good, the bad, and the null hypothesis.