Category Archives: Mathematics

Democrats And Climate

They’ve lost the argument, and it’s their own fault:

…many voters don’t see Democrats acting like people who believe we’re facing an extinction level event. For instance, why aren’t we talking about adding hundreds of new nuclear power plants to our energy portfolio? Such an effort would do far more to mitigate carbon emissions than any unreliable solar or windmill boondoggle – certainly more than any non-binding international agreement. Maybe there are tradeoffs, who knows.

Or take prospective presidential hopeful Andrew Cuomo. Setting intentions aside, in all practical ways, he’s been worse for the environment than Trump. Cuomo claims he “is committed to meeting the standards set forth in the Paris Accord regardless of Washington’s irresponsible actions.” Yet as governor, he’s blocked natural gas pipelines and banned fracking, which has proven to be one of the most effective ways to mitigate carbon emissions. U.S. energy-related carbon emissions have fallen almost 14 percent since they peaked in 2007, according to the OECD – this, without any fabricated carbon market schemes. The driving reason is the shift to natural gas. Why do liberals hate science? Why do they condemn our grandchildren to a fiery end?

Fact is, Obama—as was his wont—tried to shift American policy with his pen rather than by building consensus (which was also an assault on proper norms of American governance, but the “Trump is destroying the Constitution!” crowd is conveniently flexible on this issue). It’s not a feasible or lasting way to govern, unless the system collapses. It is also transparently ideological.

It’s impossible for any intelligent person to take them seriously.

Dilbert One

…“scientists” zero:

…in a sense, the video doesn’t even refute the straw man it set up. It’s not that climate science consists only of models: obviously there are observations too. But all the attribution claims about the climatic effects of greenhouse gases are based on models. If the scientists being interviewed had any evidence otherwise, they didn’t present any.

When you can’t even knock down your own straw man, you don’t have much of an argument.

So how did the video do refuting Scott Adams’ cartoon? He joked that scientists warning of catastrophe invoke the authority of observational data when they are really making claims based on models. Check. He joked that they ignore on a post hoc basis the models that don’t look right to them. Check. He joked that their views presuppose the validity of models that reasonable people could doubt. Check. And he joked that to question any of this will lead to derision and the accusation of being a science denier. Check. In other words, the Yale video sought to rebut Adams’ cartoon and ended up being a documentary version of it.

They would appear to lack self-awareness.

The Uncertainty Monster

Thoughts from Judith Curry on the current state of knowledge in climate. The warm mongers never consider the possibility that their proposed cures may in fact be worse than the disease. I personally think it’s nuts to consider climate a greater threat to humanity than poverty, and particularly energy poverty. But then, many of them don’t really care about humanity, or consider humanity a problem in and of itself.

[Update a few minutes later]

A new paper on the epistemological status of general circulation models.

Gavin Schmidt

He attempts to discredit Judith Curry, and you’ll never guess what happens next!

There is one wonderful thing about Gavin’s argument, and one even more wonderful thing.

The wonderful thing is that he is arguing that Dr. Curry is wrong about the models being tuned to the actual data during the period because the models are so wrong (!).

The models were not tuned to consistency with the period of interest, as shown by the fact that the models are not consistent with the period of interest. Gavin points out that the models range all over the map, when you look at the 5%–95% range of trends. He’s right, the models do not cluster tightly around the observations, and they should, if they were modeling the climate well.

Here’s the even more wonderful thing. If you read the relevant portions of the IPCC reports, looking for the comparison of observations to model projections, each is a masterpiece of obfuscation on this same point. You never see a clean, clear, understandable presentation of the models-to-actuals comparison. But look at those histograms above, direct from the hand of Gavin. It’s the clearest presentation I’ve ever run across that the models run hot. Thank you, Gavin.

Yes, thank you.
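The comparison behind those histograms can be sketched generically: take an ensemble of modeled trends, find its 5%–95% range, and ask where the observed trend falls. Every number below is an illustrative placeholder, not actual model output or observational data:

```python
import random

random.seed(1)

# Hypothetical ensemble of modeled decadal warming trends (deg C / decade).
# These are illustrative placeholders, NOT real model results.
model_trends = [random.gauss(0.25, 0.08) for _ in range(100)]
observed_trend = 0.13  # placeholder "observed" value, also made up

sorted_trends = sorted(model_trends)
lo = sorted_trends[int(0.05 * len(sorted_trends))]   # ~5th percentile
hi = sorted_trends[int(0.95 * len(sorted_trends))]   # ~95th percentile

print(f"Ensemble 5-95% range: {lo:.2f} to {hi:.2f}")
print(f"Observed trend inside range: {lo <= observed_trend <= hi}")

# If most of the ensemble sits above the observation, the models "run hot"
# in exactly the sense the quoted passage describes.
hot_fraction = sum(t > observed_trend for t in model_trends) / len(model_trends)
print(f"Fraction of models above observation: {hot_fraction:.2f}")
```

Note that an observation landing inside a very wide 5%–95% band is weak validation; a tight cluster around the observation would be the stronger result, which is the point being made above.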

[Update a while later]

Semi-related: Chelsea Clinton tweets about science, and you’ll never guess what happened next!

Mindless Eating

and mindless research:

Problems with p-hacking are by no means exclusive to Wansink. Many scientists receive only cursory training in statistics, and even that training is sometimes dubious. This is disconcerting, because statistics provide the backbone of pretty much any research looking at humans, as well as a lot of research that doesn’t. If a researcher is trying to tell whether changing something (like the story someone reads in a psychology experiment, or the drug someone takes in a pharmaceutical trial) causes different outcomes, they need statistics. If they want to detect a difference between groups, they need statistics. And if they want to tease out whether one thing could cause another, they need statistics.

The replication crisis in psychology has been drawing attention to this and other problems in the field. But problems with statistics extend far beyond just psychology, and the conversation about open science hasn’t reached everyone yet. Nicholas Brown, one of the researchers scrutinizing Wansink’s research output, told Ars that “people who work in fields that are kind of on the periphery of social psychology, like sports psychology, business studies, consumer psychology… have told me that most of their colleagues aren’t even aware there’s a problem yet.”

I think the hockey stick episode shows that this is a problem with climate research as well.
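The mechanism behind p-hacking is easy to demonstrate numerically: run enough analyses on pure noise and something will clear the significance bar. A minimal sketch, using the fact that p-values are uniformly distributed when the null hypothesis is true (the 20 subgroup analyses and 0.05 threshold are illustrative assumptions, not figures from the article):

```python
import random

random.seed(0)

ALPHA = 0.05       # conventional significance threshold
N_TESTS = 20       # hypothetical number of subgroup analyses tried
N_TRIALS = 100_000 # Monte Carlo repetitions

# Under a true null hypothesis, a p-value is uniform on [0, 1].
# Simulate a researcher who runs N_TESTS independent analyses on pure
# noise and reports a "finding" if any one p-value clears the threshold.
false_positive_runs = 0
for _ in range(N_TRIALS):
    p_values = [random.random() for _ in range(N_TESTS)]
    if min(p_values) < ALPHA:
        false_positive_runs += 1

rate = false_positive_runs / N_TRIALS
print(f"Chance of at least one 'significant' result: {rate:.2f}")
# Analytically: 1 - (1 - 0.05)**20 ≈ 0.64
```

So a researcher who slices one null dataset twenty different ways has roughly a two-in-three chance of finding something “significant” purely by chance, which is why undisclosed multiple comparisons corrupt a literature.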

The point of peer review has always been for fellow scientists to judge whether a paper is of reasonable quality; reviewers aren’t expected to perform an independent analysis of the data.

“Historically, we have not asked peer reviewers to check the statistics,” Brown says. “Perhaps if they were [expected to], they’d be asking for the data set more often.” In fact, without open data—something that’s historically been hit-or-miss—it would be impossible for peer reviewers to validate any numbers.

Peer review is often taken to be a seal of approval on research, but it’s actually more like a small or large quality boost, depending on the reviewers and scientific journal in question. “In general, it still has a good influence on the quality of the literature,” van der Zee said to Ars. But “it’s a wildly human process, and it is extremely capricious,” Heathers points out.

There’s also the question of what’s actually feasible for people. Peer review is unpaid work, Kirschner emphasizes, usually done by researchers on top of their existing heavy workloads, often outside of work hours. That often makes devoting the time and effort needed to catch dodgy statistics impossible. But Heathers and van der Zee both point to a possible generational difference: with better tools and a new wave of scientists who aren’t being asked to change long-held habits, better peer reviews could conceivably start to emerge. Although if change is going to happen, it’s going to be slow; as Heathers points out, “academia can be glacial.”

“Peer review” is worse than useless at this point, I think. And it’s often wielded as a cudgel against dissidents of the climate religion.