In the aftermath of the recent hacked e-mails affair, much opprobrium has been cast in the direction of the Climatic Research Unit (CRU) at the University of East Anglia.

In part as a result of confusion over the meaning of the now-notorious “hide the decline” remark, some have assumed that the global temperature record produced jointly by CRU and the Hadley Centre is of dubious quality.

Fox News, for instance, recently ran an article with the alarming headline, “NASA Data Worse Than Climate-Gate Data, Space Agency Admits”, referring to an e-mail obtained by the Competitive Enterprise Institute under the Freedom of Information Act. In that e-mail, one of the lead scientists behind NASA’s GISSTemp temperature series suggested that the Hadley and CRU approach might be superior.

Overseas, Der Spiegel has made similar associations between the e-mails and the perceived validity of the surface temperature record.

All of which makes it worth stepping back and examining exactly how these temperature records are created. At least for land temperatures, it turns out not to be a very complicated process, and recently a slew of climate science bloggers on both sides of the issue have set about replicating the work that CRU and NASA have done, with remarkably similar results.

Global temperature records are assembled using raw temperature data from a number of different sources. Land temperatures come primarily from roughly 7,000 temperature stations in the Global Historical Climatology Network (GHCN). These stations are located all over the world, each with records of varying length and completeness. This data is supplemented with data from additional stations in Antarctica and the United States. A map of the locations of all GHCN stations is shown below.


Figure from Peterson and Vose 1997.

The land temperature data is combined with ocean surface temperature data from ship-borne thermometers (for the historical record) and satellites (for the record after 1979) to produce a global temperature record.

When NASA and Hadley/CRU create a global temperature record, they aren’t estimating the average temperature of the globe per se; rather, they are calculating the global temperature anomaly, the change in temperature from a base period. This is an important distinction, as using anomalies helps avoid a number of location- and site-specific biases that will affect the absolute temperature but not the change in temperature over time (for a more technical discussion of the benefits of anomalies vis-à-vis absolute temperatures, see these three posts).
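To make the distinction concrete, here is a minimal sketch (in Python, using pandas) of how a single station’s monthly temperatures might be converted to anomalies; the column names and the 1961-1990 baseline are illustrative assumptions, not any group’s actual code:

```python
import pandas as pd

# Minimal sketch of the anomaly calculation for one station. Column
# names ('year', 'month', 'temp') and the baseline are illustrative.
def station_anomalies(df: pd.DataFrame,
                      base_start: int = 1961,
                      base_end: int = 1990) -> pd.DataFrame:
    base = df[(df["year"] >= base_start) & (df["year"] <= base_end)]
    # One climatological mean per calendar month, so subtracting it
    # removes both the seasonal cycle and the station's absolute offset.
    climatology = base.groupby("month")["temp"].mean()
    out = df.copy()
    out["anomaly"] = out["temp"] - out["month"].map(climatology)
    return out
```

Because each month is compared against that same month’s baseline mean, site-specific quirks (elevation, sensor siting, seasonal swings) cancel out, leaving only the change over time.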

But calculating anomalies can be complicated when dealing with station records that are often incomplete or relatively short. To compare anomalies from one station with those from another station, a common baseline period is needed. Perhaps the simplest way of calculating anomalies is to identify a specific baseline period (say, 1961-1990) and simply toss out all station records that don’t cover that period. This approach, called the Common Anomaly Method (CAM), is the one taken by NOAA’s National Climatic Data Center (NCDC) in its temperature reconstructions.
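The selection step of CAM might look something like the following sketch; the completeness threshold here is an assumption for illustration, not NCDC’s actual criterion:

```python
import pandas as pd

# Sketch of CAM's selection step: keep only stations with reasonably
# complete coverage of the 1961-1990 baseline. The min_years threshold
# is an illustrative assumption.
def cam_select(stations: dict[str, pd.DataFrame],
               base_start: int = 1961,
               base_end: int = 1990,
               min_years: int = 25) -> dict[str, pd.DataFrame]:
    kept = {}
    for station_id, df in stations.items():
        in_base = df[(df["year"] >= base_start) & (df["year"] <= base_end)]
        if in_base["year"].nunique() >= min_years:
            kept[station_id] = df
    return kept
```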

CAM is less than ideal in some cases, however, as it often requires discarding a lot of useful temperature data. Other approaches make use of stations with more complete records located close to stations with sporadic records, allowing a common anomaly to be estimated for both. Variants of this approach include the Reference Station Method (RSM) and the First Differences Method (FDM).
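A rough sketch of the First Differences idea is below; real implementations handle data gaps and station combination much more carefully, and the table layout assumed here is illustrative:

```python
import pandas as pd

# Rough sketch of the First Differences idea: stations contribute
# year-over-year changes, so no common baseline period is required.
# 'annual' holds one column of annual means per station, indexed by
# year, with NaN for missing years.
def first_differences(annual: pd.DataFrame) -> pd.Series:
    diffs = annual.diff()           # per-station year-over-year changes
    mean_diff = diffs.mean(axis=1)  # average the changes across stations
    # Cumulatively summing the averaged differences recovers a series
    # that is defined only up to an arbitrary constant offset.
    return mean_diff.cumsum()
```

The payoff is that a station active only from, say, 1930 to 1955 still contributes its year-over-year changes, even though it never overlaps a 1961-1990 baseline.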

In addition to calculating anomalies, stations need to be spatially weighted. This step involves dividing the world into grid cells, assigning each station to a particular cell, creating an average anomaly for each cell and, depending on the gridding method used, weighting each cell based on its respective geographic area. Spatial weighting is essential to deal with the fact that stations are not equally distributed over Earth’s surface.

For example, if more than 20 percent of the world’s weather stations are in the United States, the U.S. temperatures should not be reflected as 20 percent of the global land temperature, since the country covers only 5 percent of the global land mass.
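A simple sketch of gridding and area weighting, assuming 5x5 degree cells and cosine-of-latitude weights (a common choice, though the details vary by group); the column names are illustrative:

```python
import numpy as np
import pandas as pd

# Sketch of gridding and area weighting: bin station anomalies into
# 5x5 degree cells, average within each cell, then weight each cell
# by the cosine of its central latitude (proportional to its area).
def gridded_mean(df: pd.DataFrame, cell: float = 5.0) -> float:
    df = df.copy()
    df["lat_bin"] = (df["lat"] // cell) * cell + cell / 2
    df["lon_bin"] = (df["lon"] // cell) * cell + cell / 2
    cells = df.groupby(["lat_bin", "lon_bin"])["anomaly"].mean().reset_index()
    # Without this weighting, regions dense with stations (such as the
    # United States) would dominate the global average.
    weights = np.cos(np.radians(cells["lat_bin"]))
    return float(np.average(cells["anomaly"], weights=weights))
```

Averaging within each cell first means that a cell with 100 stations counts no more than a cell with one; the area weights then keep small polar cells from counting as much as large equatorial ones.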

Armed with spatial weighting and anomaly calculation, scientists can generate a land temperature reconstruction. There are various adjustments made to individual stations or selection criteria to reduce Urban Heat Island effects, station moves, time of observation changes, and other biases, but the net effect of these adjustments vis-à-vis the raw data on a global level is relatively small.
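Putting the pieces together, a toy reconstruction (ignoring all such adjustments) might chain the earlier sketches like this; the long-format input table is again an illustrative assumption:

```python
import pandas as pd

# Toy end-to-end reconstruction, reusing station_anomalies() and
# gridded_mean() from the sketches above. 'records' is a long-format
# table with columns station, year, month, temp, lat, lon (all
# illustrative); no UHI or other adjustments are applied.
def land_reconstruction(records: pd.DataFrame) -> pd.Series:
    anoms = (records.groupby("station", group_keys=False)
                    .apply(station_anomalies))
    annual = (anoms.groupby(["station", "year"])
                   .agg(anomaly=("anomaly", "mean"),
                        lat=("lat", "first"),
                        lon=("lon", "first"))
                   .reset_index())
    # An area-weighted gridded mean for each year yields the global
    # land anomaly series.
    return annual.groupby("year").apply(gridded_mean)
```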

Over the past few months, at least six different groups (including one involving this author) have undertaken independent efforts to reconstruct land temperatures. Each of these series relies on the same basic set of raw station data, but their methods often differ substantially. The reconstructions are:

- Jeff Id and Roman M
- Nick Stokes
- Zeke Hausfather
- Tamino
- Residual Analysis
- Clear Climate Code

Additionally, three groups have long released both land and global land/ocean temperature reconstructions using different methods:

- NOAA’s NCDC
- NASA’s GISSTemp
- Hadley/CRU’s HadCRUT

One can compare the outputs of each of these side-by-side (save for Tamino’s, whose numeric results are not published but which looks similar to the others in the graphs he has posted). The comparison also omits the Clear Climate Code reconstruction, as it is an essentially perfect replication of GISSTemp and is visually indistinguishable from it. Here are land temperatures from 1900 to 2009 for all series:

[Figure: land temperature reconstructions, 1900-2009]

And the same data from 1960-2009:

[Figure: land temperature reconstructions, 1960-2009]

The Jeff Id/Roman M, Nick Stokes, Hausfather, Tamino, and Residual Analysis approaches all use raw GHCN data only. NCDC applies adjustments to the raw data, and HadCRUT and GISSTemp both apply their own adjustments and include additional stations from Antarctica (and, in the case of HadCRUT, from some private station data not included in GHCN).

Note that the two series that have come under the most criticism (GISSTemp and HadCRUT) actually show the lowest land temperature trend of any of the reconstructions, perhaps the result of their Antarctic coverage.

The comparison does strongly suggest that the validity of these temperature reconstructions has in no way been diminished by anything released in the hacked e-mails.

However one slices it, the world has still warmed significantly over the past century.

Zeke is an energy systems analyst and environmental economist with a strong interest in conservation and efficiency. He was previously the chief scientist at C3, an energy management and efficiency company,...