Reblogged from Clive Best:
Overview: Figure 1 shows a comparison of the latest HadCRUT4.6 temperatures with CMIP5 models for the Representative Concentration Pathways (RCPs). The temperature data lie significantly below all RCPs, which themselves only diverge after ~2025.
Modern climate models originate from the General Circulation Models (GCMs) used for weather forecasting. These simulate the 3D hydrodynamic flow of the atmosphere and ocean on an earth that rotates daily on its tilted axis and orbits the sun annually. The meridional flow of energy from the tropics to the poles generates convective cells, prevailing winds, ocean currents and weather systems. Energy must be balanced at the top of the atmosphere (TOA) between incoming solar energy and outgoing infra-red energy. This balance depends on changes in solar heating, water vapour, clouds, CO2, ozone etc., and it determines the surface temperature.
The disagreement on the global average surface temperature is huge – a spread of 4C. This implies there is still a problem in achieving overall energy balance at the TOA. Wikipedia tells us that the average temperature should be about 288K or 15C. Despite this discrepancy in reproducing the net surface temperature, the model warming trends for RCP8.5 are similar.
Likewise, weather station measurements of temperature have changed with time and place, so they too do not yield a consistent absolute temperature average. The ‘solution’ to this problem is to use temperature ‘anomalies’ instead, relative to some fixed normal monthly period (baseline). I always use the same baseline as CRU, 1961-1990. Global warming is then measured by the change in such global average temperature anomalies. The implicit assumption is that nearby weather station and/or ocean measurements warm or cool coherently, so that the changes in temperature relative to the baseline can all be spatially averaged together. The usual example is that two nearby stations at different altitudes will have different temperatures but produce similar ‘anomalies’. A similar procedure is used on the model results to produce temperature anomalies. So how do they compare to the data?
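The two-stations example can be sketched in a few lines of Python. This is a minimal illustration, not the actual CRU processing: the station names, temperatures and trends below are invented for the demonstration.

```python
# Sketch of the anomaly calculation described above. The valley/hill
# stations and their temperatures are hypothetical, chosen only to show
# that a fixed offset between records cancels out in the anomalies.

def anomalies(series, baseline_years=range(1961, 1991)):
    """Return {year: temp - baseline mean} for a {year: temp} series."""
    baseline = sum(series[y] for y in baseline_years) / len(baseline_years)
    return {y: t - baseline for y, t in series.items()}

# Hypothetical annual mean temperatures (C): a valley station and a
# hilltop station 5 C colder, both warming at 0.02 C per year.
valley = {y: 10.0 + 0.02 * (y - 1961) for y in range(1961, 2001)}
hill = {y: 5.0 + 0.02 * (y - 1961) for y in range(1961, 2001)}

a_valley = anomalies(valley)
a_hill = anomalies(hill)

# Absolute temperatures differ by 5 C, but the anomalies agree,
# so the two records can be spatially averaged together.
print(round(valley[2000] - hill[2000], 2))      # 5.0
print(round(a_valley[2000] - a_hill[2000], 2))  # 0.0
```

The same cancellation is why model anomalies can be compared to data anomalies even though the models disagree on the absolute temperature.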
Figure 3 shows the result for HadCRUT4.6 compared to the CMIP5 model ensembles run with CO2 forcing levels from RCP8.5, RCP4.5 and RCP2.6, where the anomalies use the same 30y normalisation period.
Note how all models now converge to the zero baseline (1961-1990), eliminating differences in absolute temperatures. This apparently allows models to be compared directly to measured temperature anomalies, although each uses anomalies for a different reason: the data because of poor spatial coverage, the models because of poor agreement in absolute temperatures. The various dips seen in Fig 3. before 2000 are due to historic volcanic eruptions whose cooling effect has been included in the models.
Figure 4 shows a close-up detail from 1950-2050. There is a large spread in model trends even within each RCP ensemble. The data fall below the bulk of model runs after 2005, except briefly during the recent El Niño peak in 2016.
Figure 1 shows that the data are now lower than the mean of every RCP; furthermore, we won’t be able to distinguish between RCPs until after ~2030.
Method: I have downloaded and processed all CMIP5 runs from KNMI Climate Explorer for each RCP. I then calculated annual averages relative to the 1961-1990 baseline and combined them all into a single CSV file. These can each be downloaded using, for example, this URL: RCP85
To retrieve any of the others just change ’85’ to ’60’ or ’45’ or ’26’ in the URL.
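The combining step can be sketched as below. This is an assumed reconstruction of the method, not the author's actual script: the input dictionaries stand in for the annual-mean anomaly series processed from the KNMI downloads, and the values shown are placeholders.

```python
import csv
import io

def combine(series_by_rcp):
    """Merge {rcp_label: {year: value}} into CSV text, one column per RCP."""
    rcps = sorted(series_by_rcp)
    years = sorted({y for s in series_by_rcp.values() for y in s})
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["year"] + [f"RCP{r}" for r in rcps])
    for year in years:
        # Leave the cell blank if an RCP series lacks that year.
        writer.writerow([year] + [series_by_rcp[r].get(year, "") for r in rcps])
    return buf.getvalue()

# Hypothetical annual-mean anomalies (C) standing in for the processed
# KNMI model output; real values come from the downloaded runs.
demo = {
    "2.6": {2020: 1.0, 2021: 1.0},
    "4.5": {2020: 1.1, 2021: 1.2},
    "8.5": {2020: 1.2, 2021: 1.4},
}
print(combine(demo))
```

Writing all RCPs into one table keeps the ensembles aligned year by year, which makes plots like Figures 1 and 3 straightforward to produce.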
Sorry, I do not understand the condition “Energy must be balanced at the top of the atmosphere between incoming solar energy and outgoing infra-red energy”. I think there must be some energy absorbed in the oceans (if the temperature had been lower previously). If we raise the temperature of the system to achieve balance at the TOA, won’t the temperature be overestimated?