Analysis of James Hansen’s 1988 Prediction of Global Temperatures for the Last 30 Years

Guest analysis by Clyde Spencer

Introduction

There have been articles on WUWT recently, here and here, commemorating the 30 years since James Hansen gave Senate committee testimony about his view of the human influence on climate. Some apologists for Hansen have claimed, without doing more than subjectively comparing graphs, that his prediction was extremely accurate. The following is his official 1988 prediction for three different scenarios of future trace gases implicated in anthropogenic global warming:

I have highlighted the observed 1958-1988 annual average temperatures in red to make the line more legible.

The apologists’ claims of extreme accuracy are based on the subjective impression that temperatures over the last 30 years have tracked his prediction of temperatures from forcing under intermediate ‘greenhouse gas,’ other trace-gas, and aerosol assumptions (Scenario B). He assumed two significant volcanic eruptions during that 30-year period. However, there was only one, Mt. Pinatubo (1991, VEI 6). Therefore, had he assumed only one eruption, his estimates would have been higher and would have tracked Scenario B even more poorly than they have. Were it not for two exceptionally strong El Niño events in the last 20 years, it is unlikely that temperatures would be anywhere near as high as they currently are. However, he did not consider the role of El Niños in his computer model. Therefore, it is just luck that his predictions came as close to reality as they did. The greatest intellectual ‘sin’ for a scientist is to be right for the wrong reasons!

Hansen dramatically emphasized that “The most recent two seasons (Dec.-Jan.-Feb. and Mar.-Apr.-May, 1988) are the warmest in the entire record.” This is really a non sequitur. It would be notable if the last point(s) in a long upward-trending series were not the warmest in the series. And, indeed, the 27 seasons preceding the two 1988 record temperatures were all lower than the 1981 seasonal high! (See the next graph, below.) Basically, Hansen got lucky again in that he had a couple of warm seasons that allowed him to make such a statement to impress the uncritical Senators. Otherwise, he would have had to truncate his graph at 1981 to make a similar claim. He also added an extra season of data to his ‘30-year’ time-series, probably to accentuate the claim. Two seasons sounds more impressive than one.

Hansen claimed, “The warming is almost 0.4 degrees Centigrade [sic] by 1987 relative to … the 30 year mean, 1950 to 1980 … The probability of a chance warming of that magnitude is about 1 percent.” The first graph, above, with the red line, shows that 0.3 °C would be a more accurate estimate. One should be suspicious of such a claim when his own data demonstrate that the temperature had already exceeded that value for one season in 1981! Are we to believe that at least two events with a 1% probability occurred within seven years of each other? He then claimed that the recent temperatures were about three times the standard deviation (0.13 °C) of the baseline annual temperature average. Actually, the standard deviation of the annual averages for the 1958 to 1988 period is more like 0.15 °C. Thus, the 1988 quarterly temperatures were about two standard deviations above the previous 30 years of temperatures, not three! He was playing loose with the facts!

In a noise-free, linearly increasing time-series, the values at the beginning and end lie farthest from the mean; it is therefore expected that they will show the largest deviations from the mean. The standard deviation of such a series varies directly with the slope of the trend-line and with the number of samples. To analyze time-series data properly, the series should be de-trended, the mean set to zero, and the residuals used to get an accurate estimate of the probability of a random deviation from the mean. Hansen should know that! He is describing behavior (the first two data points of 1988) that is a function of the slope, not the internal variance of the data. Again, Hansen is trying to snow the Senators. No one is arguing that temperatures aren’t increasing; it is evident from the graphs. Nevertheless, he is offering sophistry to convince the Senate committee that what we are seeing is extremely rare.
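To make the de-trending point concrete, here is a minimal Python sketch. The series below is synthetic (the trend slope, noise level, and seed are illustrative assumptions, not Hansen’s actual data), but it shows how a trend inflates the raw standard deviation relative to the residuals, and it reproduces the simple z-score arithmetic above:

import numpy as np

# Synthetic stand-in for 31 annual anomalies, 1958-1988 (illustrative only).
rng = np.random.default_rng(42)
years = np.arange(1958, 1989)
anomalies = 0.008 * (years - 1958) + rng.normal(0.0, 0.10, years.size)

raw_sd = anomalies.std(ddof=1)  # inflated by the trend itself

# De-trend: fit a least-squares line, subtract it, keep the residuals.
slope, intercept = np.polyfit(years, anomalies, 1)
residuals = anomalies - (slope * years + intercept)
resid_sd = residuals.std(ddof=1)  # reflects internal variance only

print(f"SD of raw series: {raw_sd:.3f} deg C")
print(f"SD of residuals:  {resid_sd:.3f} deg C")
# Hansen's claimed z-score versus the re-estimate in the text:
print(f"0.4 / 0.13 = {0.4 / 0.13:.1f} sigma; 0.3 / 0.15 = {0.3 / 0.15:.1f} sigma")

The raw standard deviation answers a different question than the residual standard deviation, which is the point being made about Hansen’s three-sigma claim.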

However, most of his apologists are engaging in qualitative hand-waving, not using any mathematical or statistical analysis to quantify the quality of his prediction, even when it is based on false assumptions. The question is, “How skillful was Hansen in quantitatively predicting the climate of the next 30 years, based on a computer program that assumes CO2 is the ‘control knob’ of global temperatures?”

Graphical Analysis

If one were to fit a linear, least-squares regression to Hansen’s 30-year data and extrapolate it 30 years to the present, how would it compare with Hansen’s prediction? I’ll call this a naive prediction: a simple extrapolation of past trends, without making any assumptions about the cause of the warming or the presence of extenuating influences such as volcanic aerosols. Implicit in this naive prediction is that it represents “Business as Usual,” the trace-gas assumption made by Hansen for his Scenario A.

The following graph is derived from Hansen’s Figure 2 in his Senate testimony, with higher temporal resolution than the prediction graph above. According to Hansen, the baseline for the calculation of the ΔT (anomaly) is the average temperature for the period of 1950 through 1980, although he does not provide that average temperature.

The R² value for the red trend-line is lower than I would like to see, with time accounting for only about 19% of the variance in temperature. Smoothing with annual averages and removing the seasonality would increase the R² value. Nevertheless, substituting 60 years for x in the regression equation gives a predicted anomaly of 0.467 °C for 2018. That is lower than even Hansen’s Scenario C, the scenario with reductions (“draconian emission cuts!”) in the things driving warming, was supposed to be. Note that the slope of the regression line predicts a change of less than 1 °C per century, about the amount commonly claimed for the last century. Therefore, it may be a reasonable long-term approximation of future warming.

However, the period of 1958 to 1988 probably is not the best interval to use for prediction. The R² value is also low because there are two distinct trends: one negative and a longer one that is positive. It appears that the 30-year interval he used was a pragmatic choice to provide the most current measurements for the Senate testimony. The transition between the two trends occurs between about 1964 and 1970. Calculating the slope of the trend-line for the period of 1964 through 1988 would probably be the best choice. Doing so provides more than a two-fold increase in the R² value and increases the slope of the trend-line (green) to 0.0181 °C per year, for a predicted anomaly of about 0.803 °C for 2018.
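For readers who want to reproduce the naive prediction, here is a hedged Python sketch. The arrays below are a synthetic placeholder; to obtain the numbers quoted above (0.467 °C versus about 0.803 °C for 2018, and the two-fold R² improvement, which depends on the pre-1964 negative trend), one would substitute anomalies actually digitized from Hansen’s Figure 2:

import numpy as np

# Placeholder series standing in for anomalies digitized from Figure 2.
rng = np.random.default_rng(0)
years = np.arange(1958, 1989).astype(float)
anomalies = 0.008 * (years - 1958) + rng.normal(0.0, 0.10, years.size)

def naive_forecast(years, anomalies, start_year, target_year):
    # Fit a least-squares line to [start_year, 1988] and extrapolate it.
    mask = years >= start_year
    x, y = years[mask], anomalies[mask]
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    return slope, slope * target_year + intercept, r2

# Full 30-year window versus the 1964-1988 sub-window discussed above.
for start in (1958.0, 1964.0):
    slope, pred, r2 = naive_forecast(years, anomalies, start, 2018.0)
    print(f"{start:.0f}-1988: slope = {slope:.4f} deg C/yr, "
          f"2018 anomaly = {pred:.3f} deg C, R^2 = {r2:.2f}")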

Let’s compare the above, seasonal temperature graph with the most recent (2018/06/18) annual temperature series produced by Hansen. [ http://www.columbia.edu/~mhs119/Temperature/ ]

Note that the slope of the trend-line (green) for the period from 1970 through 2018 is almost the same as that obtained above: 0.017 versus 0.018 °C per year. In hindsight, it appears that the selection of the period from 1964 through 1988 was a better choice for future prediction than the entire 30-year set. This graph shows an estimated anomaly of about 0.80 °C for 2018. Unfortunately, he does not provide an R² value for his “Best Linear Fit,” nor does he provide error bars for the temperatures; that is typical of his work.

It appears that the average global-temperature trend has been reasonably well behaved since at least 1970, and probably since about 1964. That is, it has not shown the longer-term non-linearity characteristic of Hansen’s 1988 predictions.

Let’s return now to the graph first shown, which was Hansen’s 1988 prediction with Scenarios A, B, and C. I have plotted the naive extrapolation, in green, on top of the original graph.

In the early years, the three scenarios are all close to each other, and probably within the uncertainty of the temperature measurements. Therefore, I’ll focus on the 21st-century behavior. Note that the trend-line (green) best tracks Scenario C, the “draconian emission cuts!” I suspect that Hansen scheduled one of his two hypothetical volcanic eruptions for about 2014, driving Scenario B and C temperatures down temporarily. The trend-line prediction is clearly lower than Scenario B (intermediate trace gases), and much lower than Scenario A, the supposed “Business As Usual.” How much lower, you ask? For 2019, the trend-line predicts an anomaly of about 0.80 °C, while Hansen’s Scenario A is about 1.55 °C; that is almost 94% higher than the trend-line prediction. Looked at another way, Scenario A has a slope (post-1988) of about 0.033 °C per year, while the observed temperatures have a slope of about 0.017 °C per year. That is, Scenario A has about twice the slope of the naive prediction, and it is the naive prediction that matches the observed data recorded since 1988.
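As a quick arithmetic check on those figures (the 0.80 and 1.55 °C anomalies are values read off the graphs, so both are approximate):

# Anomaly values for 2019, read off the graphs (approximate).
naive_2019 = 0.80        # naive trend-line prediction, deg C
scenario_a_2019 = 1.55   # Hansen's Scenario A, deg C
pct_higher = 100.0 * (scenario_a_2019 - naive_2019) / naive_2019
print(f"Scenario A exceeds the naive prediction by {pct_higher:.0f}%")  # ~94%
print(f"Slope ratio: {0.033 / 0.017:.1f}x")  # ~1.9x, i.e., about twice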

Conclusion

Using only Hansen’s own data, the above demonstrates that Hansen was not “extremely accurate” in his 1988 predictions, because a simple, commonly unreliable, linear extrapolation performed better than his model in predicting the last 30 years of temperatures. One consequence of demonstrating that the ‘Business As Usual’ linear extrapolation of past temperatures is superior to the model used by Hansen is that it isn’t necessary to appeal to anthropogenic influences to account for a phenomenon that started 12 millennia ago, with the end of the last major glaciation. Occam’s Razor suggests that, other things being equal, the simplest explanation is to be preferred. That is, there is no compelling need to complicate the explanation with human interference. Climate changes. That is what it does. That is why climatologists use a 30-year average of weather to define a climate regime or episode. While I’m sure that humans are having an impact on climate, it isn’t just their CO2 emissions, and it certainly isn’t fossil fuel combustion that is the primary control of temperature. Notwithstanding how poor Hansen’s predictions actually were, I think we should still keep before us his assessment of computer modeling:

“There are major [my emphasis added] uncertainties in the model, which arise especially from assumptions about (1) global climate sensitivity and (2) heat uptake and transport by the ocean, …”

He should also have mentioned the need for parameterization of clouds in the models. In any event, we should take computer model ‘projections’ with a grain of sea salt, and anything that Hansen says with a block of salt.

Steven Mosher graciously provided the original graphs and quotes from Hansen through a link to a copy of the Senate Committee testimony, which he uploaded to WUWT comments:

[ https://climatechange.procon.org/sourcefiles/1988_Hansen_Senate_Testimony.pdf ]
