Climate-Change Communications

Moving beyond certainty:

The strategy of hyping certainty and a scientific consensus, while dismissing decadal variability, is a bad move for communicating a very complex, wicked problem such as climate change. Apart from the ‘meaningful’ issue, it’s an issue of trust – hyping certainty and a premature consensus does not help public trust in the science.

This new paper is especially interesting in the context of the Karl et al. paper, which ‘disappears’ the hiatus. I suspect that the main take-home message for the public (those paying attention, anyway) is that the data are really, really uncertain and there is plenty of opportunity for scientists to ‘cherry-pick’ methods to get desired results.

Apart from the issue of how IPCC leaders communicate the science to the public, this paper also has important implications for journalists. It offers a vindication of sorts for David Rose, who asked hard-hitting questions about the pause at the Stockholm press conference.

It’s a good and necessary first step.

8 thoughts on “Climate-Change Communications”

  1. If they were the slightest bit serious about the -science-, the first step would be figuring out how, -precisely-, to combine the satellite measurements and the ground measurements into a single combined temperature record – with correctly assessed error bars.

    The camera-aimed-down and the Stevenson-screened actual thermometer clearly aren’t measuring exactly the same things – so -declare- the surface stations to be -proxies- for “the gridcell temperature”, and start cross-calibrating to assess “OK, so how bad -is- it?”
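
    A minimal sketch of what that cross-calibration might look like, on made-up synthetic data standing in for real station and satellite series (every number and name here is illustrative, not an actual dataset):

    ```python
    import numpy as np

    # Synthetic stand-ins for the 1979-present overlap; real inputs would be
    # monthly anomalies from a surface network and a satellite TLT product.
    rng = np.random.default_rng(0)
    n_months = 480
    tlt = rng.normal(0.0, 0.3, n_months)                   # satellite lower troposphere
    station = 0.9 * tlt + rng.normal(0.0, 0.15, n_months)  # station as a noisy proxy

    # Fit the station-to-gridcell "transfer function" by ordinary least squares.
    A = np.vstack([station, np.ones_like(station)]).T
    (slope, intercept), *_ = np.linalg.lstsq(A, tlt, rcond=None)

    residual_sd = np.std(tlt - (slope * station + intercept), ddof=2)
    print(f"slope={slope:.2f}, intercept={intercept:+.2f}")
    print(f"'how bad is it' 1-sigma error bar: {residual_sd:.2f} C")
    ```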

    1. For climate scientists, error bars only exist for Type II errors, as there are no Type I errors. For some, the Type II bar is 2 or 3%.

  2. You can’t combine the land and satellite measurements, as they are measuring different places. The satellites measure things like the troposphere [and (currently) only look at the poles from a steep angle]. There are lots of different things to measure on the earth, and they often say different things, too. I’d think that combining them would lose a lot of information.

    1. Yes, I do recognize that.

      Don’t think of the surface stations as “absolute temperature measurements”; think of them as “This is a potential rough proxy measurement for the lowest available satellite-measured tropospheric temperature directly over this spot.” Now evaluate them in that vein, starting with a spot-by-spot evaluation.

      That is: pretend it’s a tree. You don’t “just” look up ring-width-to-temperature correspondences on a chart; you first have to construct the darn chart. It is a really nice tree – it has nice concrete numbers in it – but it’s (in the very best case) still a proxy. Start with a -deep wilderness- set of sites and the best micrositing. The error bars will be a lot wider than the current daft 0.1 C estimates. But the direct overlap of both measurements over the same time period would eventually allow (a) direct evaluation of microsite issues, (b) direct evaluation of urban heat islands, (c) temporal correspondences for both, and (d) coherency in the evaluation and judgement of “site moves” during the overlap period.

      That is a fair piece of work. But the payoff is extending the lower-tropospheric temperature estimates back in time, prior to 1978.
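
      A rough sketch of that payoff step, reusing placeholder calibration numbers of the kind an overlap fit would produce (none of these values are real):

      ```python
      import numpy as np

      # Placeholder calibration standing in for a real overlap fit.
      slope, intercept, residual_sd = 0.9, 0.02, 0.15

      # Hypothetical pre-1978 monthly station anomalies (20 years' worth).
      rng = np.random.default_rng(1)
      station_pre1978 = rng.normal(0.0, 0.25, 240)

      # Hindcast the lower-troposphere anomaly with an honest (wider) error
      # bar: calibration scatter plus an assumed microsite/UHI uncertainty.
      tlt_hindcast = slope * station_pre1978 + intercept
      microsite_sd = 0.10                  # assumed, pending site-by-site work
      total_sd = np.hypot(residual_sd, microsite_sd)
      print(f"hindcast 1-sigma uncertainty: {total_sd:.2f} C")
      ```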

      The current ‘homogeneity’ approach does very odd things to unmoving, well-sited thermometers precisely because it -assumes- that ‘disconnects’ are caused by site issues, and not by precisely the thing they’re supposed to be searching for: weather and climate changes. Being able to concretely say “Um, no. That site didn’t get moved by somebody on day X; instead, the weather really did just drop 10 degrees far earlier than the normal pattern” would be quite useful.
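
      A toy illustration of that cross-check, again on synthetic data: measure the step in the station series at a candidate breakpoint, then ask whether the satellite saw the same step (the tolerance threshold here is an assumption, not an established test):

      ```python
      import numpy as np

      def step_size(series, k):
          """Mean shift across a candidate breakpoint at index k."""
          return series[k:].mean() - series[:k].mean()

      # Synthetic example: a real 0.5-degree shift both instruments should see.
      rng = np.random.default_rng(2)
      n, k = 240, 120
      satellite = rng.normal(0.0, 0.2, n)
      satellite[k:] += 0.5                  # a genuine weather/climate jump
      station = satellite + rng.normal(0.0, 0.1, n)

      s_stn, s_sat = step_size(station, k), step_size(satellite, k)

      # If the satellite sees (roughly) the same jump, don't homogenize it away.
      if abs(s_stn - s_sat) < 0.15:         # tolerance is an assumption here
          print(f"real jump (~{s_stn:.2f} C): leave the thermometer alone")
      else:
          print(f"station-only jump (~{s_stn:.2f} C): suspect a site move")
      ```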

    1. I’ll note that his ‘point one’ and mine are on precisely the same topic.

      The focus on internal homogeneity is a key part of -why- the temperature record gets re-evaluated. The tree rings show the same sorts of jumps; why would one expect a thermometer not to experience the same jumps? Combine with the satellite people to figure out “That’s a real jump” versus “That’s a -fake- jump caused by reasons unknown!”

      1. Well, I have no idea if that Karl et al. paper is any good, or if the people doing it are twisting science for their politics – but the suggestion is a reasonable one. Look at ship and buoy temperatures when they are both taken at (about) the same place and time. Note that there is a bias: the ship temperatures average 0.12 degrees warmer (with a massive standard deviation). Now you have a choice: lower the ship temperatures, raise the buoy temperatures, or accept that your data are going to be increasingly biased as buoy measurements replace ships’. A toy version of that choice is sketched below.
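
        A toy version of that choice, with synthetic collocated pairs standing in for the real ship/buoy comparison (the 0.12 offset comes from the comment above; everything else is made up):

        ```python
        import numpy as np

        # Synthetic collocated ship/buoy pairs.
        rng = np.random.default_rng(3)
        n = 5000
        sst = rng.normal(18.0, 5.0, n)               # "true" sea-surface temps
        buoy = sst + rng.normal(0.0, 0.2, n)         # buoys: precise
        ship = sst + 0.12 + rng.normal(0.0, 1.0, n)  # ships: warm bias, noisy

        diff = ship - buoy
        print(f"ship - buoy: mean {diff.mean():+.2f} C, sd {diff.std(ddof=1):.2f} C")

        # The three choices made explicit:
        ship_lowered = ship - diff.mean()   # 1: lower the ship temperatures
        buoy_raised = buoy + diff.mean()    # 2: raise the buoy temperatures
        # 3: do nothing, and the record drifts as buoys replace ships
        ```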

        1. Of course they are twisting it for politics. Don’t be ridiculous.

          Now, all they have to do is get rid of those pesky satellite measurements that stubbornly refuse to show warming. When they do, you’ll know we are warming, because our goose is being cooked.
