Is today a warmer-than-average day? That seemingly basic question has become fraught in the era of the Anthropocene, as greenhouse gases emitted during a century-plus of fossil fuel use have been warming most everything on our planet.
The most common U.S. yardstick for determining whether a given day is unusually warm or cool – NOAA’s 30-year database of “climate normals” – is now being updated. (Update: The new normals were released on May 4, 2021.) Over the past decade, the averages for the U.S. and many other nations have been based on data from 1981-2010. Now these averages are being revised across the world to reflect the just-ended period of 1991-2020.
As one would expect, the new norms will be warmer in most parts of the world, though not everywhere. And that raises big questions about what qualifies as “normal”. (Note that we’re using “normal” as a proxy for average; it’s actually “normal” for weather to vacillate rather than sticking to the statistical norm day after day.)
For the contiguous U.S., the period 1991-2020 was roughly 0.44°F warmer and 0.34″ wetter than 1981-2010, based on national-scale data published by NOAA’s National Centers for Environmental Information (NCEI). The final station-by-station data are scheduled to be released this spring.
The warming has been even more dramatic in Alaska. The coverage of “tundra” climate – one where the warmest month averages less than 50°F – decreased by more than 9% in the updated Alaska norms, according to climatologist Brian Brettschneider, of the University of Alaska Fairbanks.
The mechanism behind these and other changes is simply the lopping off of one decade, the 1980s, and the addition of another decade, the 2010s. Such a shift can have surprisingly large effects, especially if the decades added and dropped included any unusually warm or cold periods that were particularly sharp and/or prolonged.
For example, the coldest December on record for the contiguous U.S. was in 1983, and the warmest June was in 2016.
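The arithmetic behind this shift is simple to sketch. In the toy example below, the decadal mean temperatures are invented for illustration (they are not NOAA data), but the relationship holds in general: the new 30-year normal moves by exactly one-third of the difference between the decade added and the decade dropped.

```python
# Sketch: how updating a 30-year normal shifts the average when one
# decade is dropped and another is added. Decadal means are
# hypothetical values for a single station (degrees F), not real data.

def thirty_year_normal(decadal_means):
    """Average of three consecutive decadal mean temperatures."""
    assert len(decadal_means) == 3
    return sum(decadal_means) / 3

d1980s, d1990s, d2000s, d2010s = 52.0, 52.6, 53.0, 53.9

old_normal = thirty_year_normal([d1980s, d1990s, d2000s])  # 1981-2010
new_normal = thirty_year_normal([d1990s, d2000s, d2010s])  # 1991-2020

# The shift equals one-third of (decade added - decade dropped):
print(round(new_normal - old_normal, 2))        # 0.63
print(round((d2010s - d1980s) / 3, 2))          # 0.63
```

This is why a single unusually warm or cold decade entering or leaving the window can move the normals more than one might expect.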
Some of the changes reflect natural decade-to-decade variability. That may be the case for the area from Montana and the Dakotas into southern Canada, which is coming in slightly cooler in the new norms (see map above) despite longer-term expectations of warming.
Other shifts, such as moistening across much of the northern tier of states and widespread warming in the Southwest, are more in line with long-term climate-change projections. Heating across the Southwest is leading to “hot droughts” that add to the wildfire threat.
In most places, the bulk of the warming has occurred at night, again in line with long-held expectations. In Minneapolis, for example, one early estimate found that daily highs were about 0.2°F warmer in the new climatological period, but daily lows were about 0.9°F warmer.
Likewise, University of Georgia climatologist Pam Knox described her state’s temperature changes in an email: “Generally getting warmer, with more increases in minimum temperature than maximum temperatures (related to higher humidity and urbanization).”
Globally, the new period was about 0.18°C (0.32°F) warmer in the NOAA database and about 0.19°C (0.34°F) warmer in the NASA database. Differences in how data-sparse areas are handled, particularly the Arctic, account for most of the tiny difference between the NOAA and NASA datasets.
If this pace of warming continues for several more decades, on top of the warming already in place by 1981-2010, it would push global climate well past the Paris Climate Agreement benchmark of 1.5°C of warming over preindustrial times.
Finding the best guide to the future
The practice of using 30-year periods to describe climatology began just after World War II, when the forerunner of the UN-based World Meteorological Organization (WMO) asked each nation to calculate climate averages for the period 1901-1930. Those were followed by 1931-1960 norms. More recently, many nations, including the U.S., have been updating their 30-year climate norms each decade.
The idea is that a 30-year period is long enough for most annual and multiyear climate variations to get canceled out.
The 30-year norms have a far-reaching influence on the nation’s economy. While about half of U.S. states have decoupled utility revenues from energy sales, others rely heavily on the NOAA 30-year averages as a guide to future heating and cooling demand when setting rates. So it’s crucial that the data be as accurate as possible.
Global heating, however, throws a wrench into what “normal” means. For one thing, do the past 30 years really represent what we can expect in the 2020s? Researchers at NOAA and elsewhere have been exploring alternatives. One is an “optimal climate normal”: It’s developed by finding the length of the averaging period that best predicts the following year at each location.
Along these lines, NOAA has posted supplemental monthly temperature normals for hundreds of U.S. stations. These include 5-, 10-, 15-, and 20-year averages through 2010, along with optimal climate normals tailored for each station.
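The optimal-climate-normal idea described above can be sketched in a few lines: for each candidate averaging length, use the trailing mean as a "forecast" of the following year, then keep the length with the smallest average error. The series and candidate lengths below are illustrative assumptions, not NOAA's actual procedure or data.

```python
# Sketch of an "optimal climate normal" search: pick the trailing-mean
# window length that best predicts the next year's value. Candidate
# lengths and the synthetic temperature series are assumptions.

def optimal_normal_length(annual_temps, candidates=(5, 10, 15, 20, 30)):
    best_k, best_err = None, float("inf")
    for k in candidates:
        errors = []
        for i in range(k, len(annual_temps)):
            prediction = sum(annual_temps[i - k:i]) / k  # trailing mean
            errors.append(abs(annual_temps[i] - prediction))
        mean_err = sum(errors) / len(errors)
        if mean_err < best_err:
            best_k, best_err = k, mean_err
    return best_k

# Synthetic series: a warming trend plus a year-to-year wiggle
temps = [50 + 0.03 * yr + (0.4 if yr % 2 else -0.4) for yr in range(60)]
print(optimal_normal_length(temps))
```

The trade-off the search captures: a short window tracks a warming trend more closely, while a long window smooths out year-to-year noise better; the "optimal" length balances the two for a given station.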
Several studies have found that a 15-year average tends to perform best in the current climate regime. With this in mind, NOAA plans to soon replace its variety of supplemental norms with 15-year averages, this time not just for temperature but for a whole gamut of weather conditions. “A single alternative based on our recent research on optimal normals will provide a readily acceptable substitute to meet the needs of a variety of user communities,” said climate scientist Michael Palecki, who is managing the climate normals project at NOAA/NCEI.
‘Normalizing’ human-caused warming
Each update of climate normals brings up another question: Does this practice serve as an inadvertent smokescreen, one that keeps us from fully seeing the relentless march of human-caused warming?
Some observers think so. “By updating every decade, this version of ‘normal’ hides a lot of change, and like the allegorical frog it makes it harder to notice the hot water we are in,” said Robert Rohde, lead scientist for the Berkeley Earth project, on Twitter.
In Miami, for example, December 2020 had an average temperature of 69.4°F. That was 1.1°F below the 1981-2010 climate norm – but when compared to the climate way back in 1931-1960, it was 1.3°F above average. Although urban heat island effects have contributed to some of the temperature rise in cities like Miami, rural areas are warming too.
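The Miami numbers above make the baseline dependence concrete. In the worked arithmetic below, the two baseline values are back-calculated from the departures cited in the text (70.5°F and 68.1°F are implied, not independently sourced):

```python
# Worked version of the Miami example: the same monthly value reads as
# below or above "normal" depending on the baseline chosen. Baselines
# are implied by the departures cited in the text, not official values.
dec_2020 = 69.4            # Miami's December 2020 mean temperature (F)
normal_1981_2010 = 70.5    # implied by the 1.1 F departure
normal_1931_1960 = 68.1    # implied by the 1.3 F departure

print(round(dec_2020 - normal_1981_2010, 1))   # -1.1 (below recent normal)
print(round(dec_2020 - normal_1931_1960, 1))   # 1.3 (above mid-century normal)
```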
There’s clearly a need for updated climate norms. For many purposes, such as judging whether today is unusually warm, it’s most relevant to compare against the climate of recent decades rather than against a distant baseline.
Updated norms are also crucial to how certain phenomena are diagnosed. For example, NOAA and other agencies define El Niño and La Niña based on sea surface temperatures over the central and eastern Pacific Ocean, and whether those are running warmer or cooler than the seasonally adjusted average. If a fixed climate baseline were used, then long-term warming would eventually (and erroneously) make it look as if a permanent El Niño event were in place.
For such reasons, it’s best to see a fixed benchmark not as a replacement for the regular updates, but rather as an alternate measure that could be maintained and used to illustrate how far climate has strayed as a result of human activity.
What period would be best to use as a pre-climate-change benchmark? One option is 1951-1980. That’s the span used by NASA’s Goddard Institute for Space Studies (GISS) when calculating its monthly and yearly departures from average global temperature.
“The primary focus of the GISS analysis are long-term temperature changes over many decades and centuries, and a fixed base period yields anomalies that are consistent over time,” says GISS on its website. “However, organizations like the [National Weather Service], who are more focused on current weather conditions, work with a time frame of days, weeks, or at most a few years. In that situation it makes sense to move the base period occasionally, i.e., to pick a new ‘normal’ so that roughly half the data of interest are above normal and half below.”
According to New Jersey state climatologist David Robinson of Rutgers University, as cited in an online column in Forbes by Marshall Shepherd, of the University of Georgia, “the 1951-1980 period is a viable candidate [for a fixed benchmark], as it is an era with abundant observations and precedes recent decades where the signal of human-induced warming has emerged from a naturally noisy climate signal.”
Global temperatures from 1951 to 1980 were among the lowest of the century. Some natural variability was in the mix, but climate model reconstructions suggest the main culprit was the post-World War II industrial boom and the resulting sun-blocking aerosols spewed at a furious pace by North America and Europe.
Later in the century, growing environmental awareness and regulation helped reduce the prevalence of sun-blocking soot, but the increases in invisible greenhouse-gas emissions continued apace, eventually putting the world on a sustained warming track.
Another option for a fixed benchmark is 1961-1990. That interval includes the relatively cool 1960s and 1970s and also the 1980s, when global warming began to manifest in earnest. Because the 1961-1990 period is also the last universal benchmark mandated by the WMO, those numbers are readily available across the globe.
In fact, the WMO now plans to preserve data for the period 1961-1990 as a reference period for climate change assessment.
“In a world in which the climate is changing rapidly, we need to update the climate normals more frequently than we did in the past to keep them useful,” said NOAA/NCEI principal scientist Thomas Peterson, president of the WMO Commission for Climatology, in a 2015 statement.
“At the same time, we need to keep the historical baseline for the sake of public and scientific understanding about the rate of climate change.”
Of course, the atmosphere doesn’t care how we measure its warming. The absolute temperature increase is the same no matter what we use to assess “above” and “below” normal. Still, there’s a case to be made for keeping two benchmarks: one fixed and one dynamic, each serving a different purpose.
Or, to invoke the venerable British phrase, horses for courses.