There’s a lesson to be learned from an editor’s resignation over his journal’s publication of a research report he judged to have been inadequately reviewed: extraordinary claims must be supported by extraordinary evidence. Headline writers and media … take note.
The paper, posted in late July, was written by climate scientists Roy Spencer and William Braswell. The story was subsequently picked up by the Drudge Report and Fox News, with headlines warning that “new findings throw the entire global warming theory into question.”
What is interesting is that the actual paper by Spencer and Braswell, while judged by a number of critics both sympathetic and unsympathetic to have some flaws, makes relatively modest claims that bear little relation to the subsequent headlines on conservative-leaning media outlets. Now that the editor-in-chief of the journal that published the paper has resigned in protest, and a rebuttal is being published in a major climate science journal, it is worth moving past the hype and drama to take an in-depth look at the actual science being discussed.
The issue at hand involves quantifying climate sensitivity, commonly defined as how much Earth’s surface would warm in response to increasing CO2. Climate scientists (even outspoken “skeptics” like Spencer and MIT’s Richard Lindzen) tend to agree that simple radiative transfer models predict about 1 degree C (1.8 degrees F) of average global warming for a doubling of atmospheric carbon dioxide. The big question is how much natural processes, referred to as feedbacks, will amplify or dampen that response.
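That roughly 1 degree C baseline can be recovered with a back-of-the-envelope calculation. The numbers below are standard textbook values (a simplified logarithmic forcing formula and the Planck response parameter), not figures taken from either paper:

```latex
% No-feedback warming from doubled CO2, using standard approximations.
% Radiative forcing from a doubling of CO2:
\Delta F \approx 5.35 \ln(2) \approx 3.7~\mathrm{W\,m^{-2}}
% Dividing by the Planck (no-feedback) response parameter
% \lambda_0 \approx 3.2~\mathrm{W\,m^{-2}\,K^{-1}} gives:
\Delta T_0 \approx \frac{\Delta F}{\lambda_0}
          \approx \frac{3.7}{3.2}~\mathrm{K}
          \approx 1.2~\mathrm{K} \quad (\text{about 1 degree C})
```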
There are a number of important feedbacks (at least over a century timescale) that will affect the amount of warming we end up with, including water vapor, clouds, the lapse rate, and albedo. Of these, water vapor is almost universally considered to be a positive feedback (increasing warming), while the effects of clouds remain largely uncertain. Clouds can both trap additional heat and reflect incoming radiation, and scientists are still unsure which of these effects is larger, though most climate models predict that the positive feedback will slightly win out.
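In the standard framework, these feedbacks modify the no-feedback response through a combined feedback factor. The equation below is the conventional textbook form; the illustrative value of f is chosen for the example and is not taken from either paper:

```latex
% Feedbacks scale the no-feedback warming \Delta T_0:
\Delta T = \frac{\Delta T_0}{1 - f}, \qquad
f = f_{\text{water vapor}} + f_{\text{lapse rate}}
    + f_{\text{albedo}} + f_{\text{clouds}}
% Net f > 0 amplifies the response: e.g., f = 0.5 turns a 1 degree C
% baseline into 2 degrees C. Net f < 0 damps it. The sign and size of
% f_clouds is precisely what this debate is about.
```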
There has been a vigorous debate in the scientific literature over the past few years between Andrew Dessler at Texas A&M University and others who argue that satellite data suggests a positive feedback from clouds, and Lindzen, Spencer, and others who argue that it shows an uncertain or negative feedback. The recent article by Spencer and Braswell is in many ways a response to the criticisms of their earlier paper by Dessler in 2010.
Much of the debate revolves around the question of how much of the temperature variation over the past 10 years is a result of El Niño events and how much is “forced” by clouds. Traditionally, changes in cloud cover were treated as a feedback to external changes in the system, rather than as a driver of change in and of themselves.
Spencer and Braswell argue that natural variability in cloud cover could make it difficult to diagnose the magnitude of feedbacks and may bias earlier studies that showed a positive cloud feedback. Specifically, they point out that there may be non-radiative forcing as a result of natural changes in cloud conditions that complicate the picture, and they conclude that “without knowledge of time-varying radiative forcing components … feedback cannot be accurately diagnosed.”
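Both sides of the debate work from variants of a simple energy-balance model of the climate system; the schematic form below simplifies the notation of the published equations:

```latex
% Schematic energy-balance model (notation simplified; see the papers
% for the exact formulations and assumptions):
C_p \frac{d(\Delta T)}{dt} = S(t) + N(t) - \lambda\,\Delta T
% C_p     : effective heat capacity of the ocean mixed layer
% S(t)    : time-varying radiative forcing (including any from clouds)
% N(t)    : non-radiative forcing (e.g., changes in ocean heat exchange)
% \lambda : net feedback parameter, the quantity being estimated
% Spencer and Braswell's argument: if S(t) is unknown, regressing the
% radiative flux against \Delta T gives a biased estimate of \lambda.
```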
Figure from Spencer and Braswell 2011. The green line shows observations from the Hadley Centre, while the blue and red lines show, respectively, the three most and three least sensitive of the 13 models examined.
In their recent paper, Spencer and Braswell compare the three highest and three lowest sensitivity climate models over the past 10 years to surface temperature records produced by the Hadley Centre in the UK (HadCRUt). They find that the models they examine all do a poor job of replicating the observed temperatures, and the lower sensitivity models are slightly closer to observations than higher sensitivity models, though the lack of error bars precludes determining how significant these differences actually are.
They conclude with a heavily caveated statement that “While this discrepancy is nominally in the direction of lower climate sensitivity of the real climate system, there are a variety of parameters other than feedback … which make accurate feedback diagnosis difficult.”
Their paper contains no statements that are particularly exceptionable, and certainly nothing to justify the misleading headlines that followed. The University of Alabama in Huntsville press release accompanying their paper, however, was titled “Climate models get energy balance wrong, make too hot forecasts of global warming” and contained a number of statements critical of climate models that were not contained in their paper.
From there, the story exploded, and the editor of the journal Remote Sensing resigned to “personally protest against how the authors and like-minded climate skeptics have much exaggerated the paper’s conclusions in public statements.”
Other climate scientists had similar reactions to the affair, with MIT’s Kerry Emanuel remarking that “I have seldom seen such a degree of disconnect between the substance of a paper and what has been said about it.”
Georgia Tech’s Judith Curry in a blog posting summed up her thoughts: “So should the paper by Spencer & Braswell have been published?” Yes, she replies, adding that “Ideally, it would have undergone a more rigorous peer review and have been improved as a result of that process. Spencer & Braswell make some points that are worth considering, but this needs to be done in a more rigorous manner (and with much less hype).”
As of September 11, her posting had prompted more than 300 comments, many of them technical but providing lots of grist for those wanting to drill down.
Andrew Dessler has a response to the Spencer and Braswell paper, which will shortly be published in the journal Geophysical Research Letters.
Dessler begins quite clearly by pointing out how “the usual way to think about clouds in the climate system is that they are a feedback — as the climate warms, clouds change in response and either amplify (positive cloud feedback) or ameliorate (negative cloud feedback) the initial change.”
He suggests that Spencer and Braswell’s formulation — that clouds are both a cause of and feedback on climate change — is rather outside of current norms.
Dessler points out, too, that Spencer and Braswell don’t actually quantify the size of variability in cloud forcings or feedbacks, but rather simply assume it to be large relative to other climate factors. He goes through an exercise of computing these values from satellite measurements and other datasets, and he argues that energy trapped by clouds can explain only a few percent of the surface temperature changes, far less than claimed by Spencer and Braswell.
Figure from Dessler 2011. The blue line shows the temperature record used by Spencer and Braswell (HadCRUt), while the red lines show other major temperature records and reanalyses (NASA’s GISTemp, ERA, and MERRA). The thin black lines are all 13 climate models considered; those marked with + symbols are the ones highlighted by Spencer and Braswell.
Responding to Spencer and Braswell’s comparison of high and low sensitivity models to observations, Dessler points out that Spencer and Braswell exclude a number of models that do reasonably well at matching observations, and also choose the surface temperature set (HadCRUt) with the largest deviation from models over the period in question. He creates his own chart that shows a much more nuanced picture. He also notes that the models that best agree with observations are the ones that do the best in replicating El Niño events, and suggests that “the ability to reproduce ENSO is what’s being tested [by Spencer and Braswell], not anything directly related to equilibrium climate sensitivity.”
The back-and-forth in the scientific literature will likely continue for some time on this subject, and our understanding of the processes involved will likely be better for it. In the meantime, to paraphrase the late Carl Sagan, be skeptical of any extraordinary results that are claimed in the absence of extraordinary evidence.
And yes, that reminder applies to headline writers too.