Warmer California winters will reduce Sierra Nevada snowpack

Jerry Brown’s myopic view of climate from 2016:

California Governor Edmund G. Brown, Jr.:  “California droughts are expected to be more frequent and persistent, as warmer winter temperatures driven by climate change reduce water held in the Sierra Nevada snowpack and result in drier soil conditions. Recognizing these new conditions, the executive order directs permanent changes to use water more wisely and efficiently, and prepare for more frequent, persistent periods of limited supply.”  9 May 2016.

Source via Wayback Machine: https://web.archive.org/web/20160520060004/https://www.gov.ca.gov/news.php?id=19408

[edit, photo added below from KOLO 8 Reno, 7 Mar 2019]:


 

Ringed and bearded seals, still listed as ‘threatened’, are still doing really well

Dr. Crockford’s key quote up front: “…Quakenbush is willing to admit [in a wishy-washy way] to a journalist that biologists can’t tell the future:

‘…two predictions that we made about what could be bad for walruses, just within a couple of years turned around and were sort of the opposite.’”

Hifast Note: “Sort of the opposite” translates to “we were wrong.”

polarbearscience

This isn’t news but it’s good to hear it again, this time from the mouth of one of the biologists who collects the data: against all odds, the primary prey species of polar bears are doing spectacularly well.

[Photo: ringed seal, Barrow, Alaska. Credit: Brendan Kelly.]

According to leading seal biologist Lori Quakenbush of Alaska Department of Fish and Game, ringed and bearded seals in the Chukchi Sea are doing great (ADN, 11 February 2019, “Seals seem to be adapting to shrinking sea ice off Alaska”):

“We’re seeing fat seals,” said Lori Quakenbush, a wildlife biologist with the Alaska Department of Fish and Game’s Arctic Marine Mammal Program. “They are reproducing earlier than they have in the past, which says they are getting enough nutrition at this point to grow quickly and become reproductive at an earlier age.”

[Photo: Quakenbush looking for ringed and bearded seals in the Chukchi Sea. ADN, 11 February 2019.]

Ringed and bearded seals across the Arctic, including the Chukchi and Bering Seas, were listed as threatened in 2012…

View original post 1,078 more words

Met Office Decadal Forecasts Running Hot

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood

[Chart: Met Office decadal forecast 2016-2021 (detail), updated with observed temperatures.]

https://tallbloke.wordpress.com/2019/02/05/met-office-update-there-is-no-update/

Time to take a closer look at the new Met Office decadal forecast of global temperatures. (By decadal, the Met Office mean five years, apparently!)

Tallbloke has handily updated the Met Office forecast from January 2017 with the actual temperatures recorded since (see the chart above), in order to see how good their forecasting prowess actually was.

As you can see, it was pretty crap in reality!

For the period 2017-21, they predicted an anomaly range between 0.42 and 0.89C.

In stark contrast, the actual anomaly last year was 0.30C, way below the predictions.

It is also worth highlighting that even the retrospective predictions (that is, retrospectively modelling past temperatures using known variables) were at the high end of the bands until the mid-1990s, and have since been trundling along the bottom, with the exception of the record El Nino of 2015/16.

It is even more noticeable that the…

View original post 225 more words

John Christy: Guilty as Charged (DeSmogBlog’s own goal)

Reblogged from Watts Up With That and Master Resource:

By Robert Bradley Jr. — February 5, 2019

“From the Climate Disinformation Database: John R. Christy” reads the headline from DeSmogBlog in its “Climate Denier Spotlight.” This short profile follows (emphasis added):

John R. Christy is a professor of Atmospheric Science and Director of the Earth System Science Center at the University of Alabama in Huntsville. He’s a vocal critic of climate change models and has testified on numerous occasions against the mainstream scientific views on man-made climate change. Christy has affiliations with a number of climate science-denying think tanks, including the Heartland Institute and the Cato Institute. And now Andrew Wheeler has appointed him to serve on the U.S. Environmental Protection Agency’s Science Advisory Board.

Professor Christy is an excellent choice for EPA’s Science Advisory Board. And if you doubt me, please read the quotations below that DeSmogBlog has put up on its website to purportedly discredit EPA Administrator Wheeler’s choice. Christy’s views are mainstream in the world that most of us live in.

February, 2016

“The real world is not going along with rapid warming. The models need to go back to the drawing board.”

June, 2015

“[W]e are not morally bad people for taking carbon and turning it into the energy that offers life to humanity in a world that would otherwise be brutal (think of life before modernity). On the contrary, we are good people for doing so.”

April, 2015

“Carbon dioxide makes things grow. The world used to have five times as much carbon dioxide as it does now. Plants love this stuff. It creates more food. CO2 is not the problem.… There is absolutely no question that carbon energy provides [us] with longer and better lives. There is no question about that.”

August, 2013

“I was at the table with three Europeans, and we were having lunch. And they were talking about their role as lead authors. And they were talking about how they were trying to make the report so dramatic that the United States would just have to sign that Kyoto Protocol.”

February, 2013

“If you choose to make regulations about carbon dioxide, that’s OK.  You as a state can do that; you have a right to do it.  But it’s not going to do anything about the climate. And it’s going to cost, there’s no doubt about that.”

March, 2011

“…it is fairly well agreed that the surface temperature will rise about 1°C as a modest response to a doubling of atmospheric CO2 if the rest of the component processes of the climate system remain independent of this response.”

May, 2009

“As far as the [2003 American Geophysical Union statement], I thought that was a fine statement because it did not put forth a magnitude of the warming. We just said that human effects have a warming influence, and that’s certainly true. There was nothing about disaster or catastrophe. In fact, I was very upset about the latest AGU statement [in 2007]. It was about [as] alarmist as you can get.”

February, 2009

“We utilize energy from carbon, not because we are bad people, but because it is the affordable foundation on which the profound improvements in our standard of living have been achieved – our progress in health and welfare.”

December, 2003

In a 2003 interview with the San Francisco Chronicle, Christy describes himself as  “a strong critic of scientists who make catastrophic predictions of huge increases in global temperatures and tremendous rises in sea levels.”

Christy also added:

“It is scientifically inconceivable that after changing forests into cities, turning millions of acres into farmland, putting massive quantities of soot and dust into the atmosphere and sending quantities of greenhouse gases into the air, that the natural course of climate change hasn’t been increased in the past century.”

Conclusion

The above quotations are neither radical nor errant. They are middle-of-the-roadish. John Christy knows that the climate changes and humans have a warming impact (good news indeed). And yes, the climate models are overpredicting real-world warming, a divergence that is growing, not contracting, as his iconic graph shows.

If Professor Christy sounds like a rational scientist working in a very unsettled field, you are correct. No “pretense of knowledge” here. Compare him to the know-it-all alarmist climatologists such as Andrew Dessler at Texas A&M, whose challenge to Texas Gov. Abbott was critically reviewed last week at MasterResource.

In fact, Dr. Christy (neutral profile here) is a global lukewarmer swimming upstream in a sea of Malthusian snowflakes, defined as those who think that the natural climate is optimal and that change cannot be good. (As Professor Dessler states: “… when it comes to climate, change is bad.” [1])

A polite, learned scientist, John Christy has to be among the most likeable physical scientists you will meet. (He was a star at a Houston Forum Climate Summit back in 1999, another story.) May America get to know him better in his new role.

———-

[1] Dessler, Introduction to Modern Climate Change (Cambridge, UK: Cambridge University Press, 2nd ed., 2016),  p. 146 (his emphasis).


Rob Bradley is the editor of Master Resource, well worth adding to your bookmarks.

Mathematical modeling illusions

Reblogged from Watts Up With That:

The global climate scare – and policies resulting from it – are based on models that do not work

Dr. Jay Lehr and Tom Harris

For the past three decades, human-caused global warming alarmists have tried to frighten the public with stories of doom and gloom. They tell us the end of the world as we know it is nigh because of carbon dioxide emitted into the air by burning fossil fuels.

They are exercising precisely what journalist H. L. Mencken described early in the last century: “The whole point of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.”

The dangerous human-caused climate change scare may well be the best hobgoblin ever conceived. It has half the world clamoring to be led to safety from a threat for which there is not a shred of meaningful physical evidence that climate fluctuations and weather events we are experiencing today are different from, or worse than, what our near and distant ancestors had to deal with – or are human-caused.

Many of the statements issued to support these fear-mongering claims are presented in the U.S. Fourth National Climate Assessment, a 1,656-page report released in late November. But none of their claims have any basis in real world observations. All that supports them are mathematical equations presented as accurate, reliable models of Earth’s climate.

It is important to properly understand these models, since they are the only basis for the climate scare.

Before we construct buildings or airplanes, we make physical, small-scale models and test them against stresses and performance that will be required of them when they are actually built. When dealing with systems that are largely (or entirely) beyond our control – such as climate – we try to describe them with mathematical equations. By altering the values of the variables in these equations, we can see how the outcomes are affected. This is called sensitivity testing, the very best use of mathematical models.
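To make the idea concrete, here is a minimal sensitivity-test sketch (not from the article): a toy response model is run repeatedly while one parameter is varied, and the change in output is recorded. The model, the 0.3 K per W/m² Planck-type parameter and the feedback factors are illustrative assumptions only.

```python
# Minimal illustration of sensitivity testing: run a toy model repeatedly while
# varying one parameter and record how the output responds. The model and all
# numbers here are illustrative assumptions, not taken from any real climate model.
def toy_warming(forcing_w_m2, feedback_factor, planck_sensitivity=0.3):
    """Toy response: warming (K) = feedback_factor * planck_sensitivity * forcing."""
    return feedback_factor * planck_sensitivity * forcing_w_m2

baseline_forcing = 3.7  # W/m^2, roughly the forcing usually assumed for a CO2 doubling
for feedback_factor in (1.0, 2.0, 3.0):
    warming = toy_warming(baseline_forcing, feedback_factor)
    print(f"feedback_factor={feedback_factor:.1f} -> warming={warming:.2f} K")
```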

However, today’s climate models account for only a handful of the hundreds of variables that are known to affect Earth’s climate, and many of the values inserted for the variables they do use are little more than guesses. Dr. Willie Soon of the Harvard-Smithsonian Center for Astrophysics lists the six most important variables in any climate model:

1) Sun-Earth orbital dynamics and their relative positions and motions with respect to other planets in the solar system;

2) Charged-particle output from the Sun (the solar wind) and its modulation of the incoming cosmic rays from the galaxy at large;

3) How clouds influence climate, both blocking some incoming rays/heat and trapping some of the warmth;

4) Distribution of sunlight intercepted in the atmosphere and near the Earth’s surface;

5) The way in which the oceans and land masses store, affect and distribute incoming solar energy;

6) How the biosphere reacts to all these various climate drivers.

Soon concludes that, even if the equations to describe these interactive systems were known and properly included in computer models (they are not), it would still not be possible to compute future climate states in any meaningful way. This is because it would take longer for even the world’s most advanced super-computers to calculate future climate than it would take for the climate to unfold in the real world.

So we could compute the climate (or Earth’s multiple sub-climates) for 40 years from now, but it would take more than 40 years for the models to make that computation.

Although governments have funded more than one hundred efforts to model the climate for the better part of three decades, not one has accurately “predicted” (hindcast) the known past, with the exception of one Russian model, which was fully “tuned” to observational data and so happened to match it. Their average prediction is now a full 1 degree F above what satellites and weather balloons actually measured.

In his February 2, 2016 testimony before the U.S. House of Representatives Committee on Science, Space & Technology, University of Alabama-Huntsville climatologist Dr. John Christy compared the results of atmospheric temperatures as depicted by the average of 102 climate models with observations from satellites and balloon measurements. He concluded: “These models failed at the simple test of telling us ‘what’ has already happened, and thus would not be in a position to give us a confident answer to ‘what’ may happen in the future and ‘why.’ As such, they would be of highly questionable value in determining policy that should depend on a very confident understanding of how the climate system works.”

Similarly, when Christopher Monckton tested the IPCC approach in a paper published in the Science Bulletin of the Chinese Academy of Sciences in 2015, he convincingly demonstrated that official predictions of global warming had been overstated threefold. (Monckton holds several awards for his climate work.)

The paper has been downloaded 12 times more often than any other paper in the entire 60-year archive of that distinguished journal. Monckton’s team of eminent climate scientists is now putting the final touches on a paper proving definitively that – instead of the officially predicted 3.3 degrees Celsius (5.9 F) of warming for every doubling of CO2 levels – there will be only 1.1 degrees C of warming. At a vital point in their calculations, climatologists had neglected to take account of the fact that the Sun is shining!

All problems can be viewed as having five stages: observation, modeling, prediction, verification and validation. Apollo team meteorologist Tom Wysmuller explains: “Verification involves seeing if predictions actually happen, and validation checks to see if the prediction is something other than random correlation. Recent CO2 rise correlating with industrial age warming is an example on point that came to mind.”

As Science and Environmental Policy Project president Ken Haapala notes, “the global climate models relied upon by the IPCC [the United Nations Intergovernmental Panel on Climate Change] and the USGCRP [United States Global Change Research Program] have not been verified and validated.”

An important reason to discount climate models is their lack of testing against historical data. If one enters the correct data for a 1920 Model A into automotive modeling software used to develop a 2020 Ferrari, the software should predict the performance of that Model A with reasonable accuracy. And it will.

But no climate model relied on by the IPCC (or any other model, for that matter) has applied the initial conditions of 1900 and forecast the Dust Bowl of the 1930s – never mind an accurate prediction of the climate in 2000 or 2015. Given the complete lack of testable results, we must conclude that these models have more in common with the “Magic 8 Ball” game than with any scientifically based process.

While one of the most active areas for mathematical modeling is the stock market, no one has ever predicted it accurately. For many years, the Wall Street Journal chose five eminent economic analysts to select a stock they were sure would rise in the following month. The Journal then had a chimpanzee throw five darts at a wall covered with that day’s stock market results. A month later, they determined who performed better at choosing winners: the analysts or the chimpanzee. The chimp usually won.

For these and other reasons, until recently, most people were never foolish enough to make decisions based on predictions derived from equations that supposedly describe how nature or the economy works.

Yet today’s computer modelers claim they can model the climate – which involves far more variables than the economy or stock market – and do so decades or even a century into the future. They then tell governments to make trillion-dollar policy decisions that will impact every aspect of our lives, based on the outputs of their models. Incredibly, the United Nations and governments around the world are complying with this demand. We are crazy to continue letting them get away with it.

Dr. Jay Lehr is the Science Director of The Heartland Institute, which is based in Arlington Heights, Illinois.

Tom Harris is Executive Director of the Ottawa, Canada-based International Climate Science Coalition.

 

Global Mean Surface Temperature: Early 20th Century Warming Period – Models versus Models & Models versus Data

Bob Tisdale - Climate Observations

This is a long post: 3500+ words and 22 illustrations. Regardless, heretics of the church of human-induced global warming who frequent this blog should enjoy it.  Additionally, I’ve uncovered something about the climate models stored in the CMIP5 archive that I hadn’t heard mentioned or seen presented before.  It amazed even me, and I know how poorly these climate models perform.  It’s yet another level of inconsistency between models, and it’s something very basic. It should help put to rest the laughable argument that climate models are based on well-documented physical processes.

View original post 4,910 more words

A comparison of CMIP5 Climate Models with HadCRUT4.6

Reblogged from Clive Best:

Overview: Figure 1 shows a comparison of the latest HadCRUT4.6 temperatures with CMIP5 models for Representative Concentration Pathways (RCPs). The temperature data lies significantly below all RCPs, which themselves only diverge after ~2025.


Fig 1. Model comparisons to data 1950-2050. Spaghetti are individual annual model results for each RCP. Solid curves are model ensemble annual averages.

Modern climate models originate from the General Circulation Models (GCMs) used for weather forecasting. These simulate the 3D hydrodynamic flow of the atmosphere and ocean on Earth as it rotates daily on its tilted axis while orbiting the Sun annually. The meridional flow of energy from the tropics to the poles generates convective cells, prevailing winds, ocean currents and weather systems. Energy must be balanced at the top of the atmosphere (TOA) between incoming solar energy and outgoing infra-red energy. This balance depends on changes in solar heating, water vapour, clouds, CO2, ozone etc. This energy balance determines the surface temperature.
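As a rough illustration of that last point, here is a back-of-the-envelope zero-dimensional balance using standard textbook values (the numbers are not taken from the post): absorbed solar energy is equated to emitted infra-red, giving the effective emission temperature, and the gap to the observed ~288 K surface average is what the greenhouse effect and the models’ feedbacks have to supply.

```python
# Zero-dimensional radiative balance: absorbed solar flux = emitted infra-red flux.
# Standard textbook values, used here purely for illustration.
S = 1361.0        # W/m^2, solar constant
albedo = 0.30     # planetary albedo
sigma = 5.670e-8  # W/m^2/K^4, Stefan-Boltzmann constant

absorbed = S * (1.0 - albedo) / 4.0        # incoming flux averaged over the sphere
t_effective = (absorbed / sigma) ** 0.25   # effective emission temperature, ~255 K

print(f"Absorbed solar flux: {absorbed:.0f} W/m^2")
print(f"Effective emission temperature: {t_effective:.0f} K")
print("Observed mean surface temperature is about 288 K; the ~33 K difference "
      "is what the greenhouse effect, clouds and feedbacks must account for.")
```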

Weather forecasting models use live data assimilation to fix the state of the atmosphere in time and then extrapolate forward one or more days, up to a maximum of a week or so. Climate models, however, run autonomously from some initial state, stepping far into the future on the assumption that they correctly simulate a changing climate due to CO2 levels, incident solar energy, aerosols, volcanoes etc. These models predict past and future surface temperatures, regional climates, rainfall, ice cover etc. So how well are they doing?


Fig 2. Global Surface temperatures from 12 different CMIP5 models run with RCP8.5

The disagreement on the global average surface temperature is huge – a spread of 4C. This implies that there must still be a problem in achieving overall energy balance at the TOA. Wikipedia tells us that the average temperature should be about 288K or 15C. Despite this discrepancy in reproducing the net surface temperature, the model warming trends for RCP8.5 are similar.

Likewise, weather station measurements of temperature have changed with time and place, so they too do not yield a consistent absolute temperature average. The ‘solution’ to this problem is to use temperature ‘anomalies’ instead, relative to some fixed normal monthly period (the baseline). I always use the same baseline as CRU, 1961-1990. Global warming is then measured by the change in such global average temperature anomalies. The implicit assumption is that nearby weather station and/or ocean measurements warm or cool coherently, such that the changes in temperature relative to the baseline can all be spatially averaged together. The usual example is that two nearby stations at different altitudes will have different temperatures but produce similar ‘anomalies’. A similar procedure is used on the model results to produce temperature anomalies. So how do they compare to the data?
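Before looking at that comparison, here is a minimal sketch of the anomaly calculation just described, with two imaginary stations at different altitudes; the data are invented for illustration, and real work would of course use the CRU/HadCRUT records.

```python
# Sketch of the anomaly method: each monthly value is expressed relative to that
# calendar month's 1961-1990 mean (its "climatology"). All data below are invented.
import numpy as np

def monthly_anomalies(temps, years, base_start=1961, base_end=1990):
    """temps: array of shape (n_years, 12); returns anomalies of the same shape."""
    temps = np.asarray(temps, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base_start) & (years <= base_end)
    climatology = temps[in_base].mean(axis=0)   # one normal per calendar month
    return temps - climatology

# Two imaginary nearby stations at different altitudes: absolute temperatures differ
# by 10 C, but the anomalies are identical, which is why anomalies can be averaged.
years = np.arange(1950, 2020)
trend = 0.01 * (years - 1950)[:, None]                   # 0.01 C/yr, illustrative
season = 10.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 12))
valley_station = 15.0 + season + trend
mountain_station = 5.0 + season + trend                  # colder, same variations

print(np.allclose(monthly_anomalies(valley_station, years),
                  monthly_anomalies(mountain_station, years)))   # True
```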

Figure 3 shows the result for HadCRUT4.6 compared to the CMIP5 model ensembles run with CO2 forcing levels from RCP8.5, RCP4.5 and RCP2.6, where the anomalies use the same 30-year normalisation period.

[Fig 3. HadCRUT4.6 compared with the CMIP5 ensemble anomalies, 1880-2100.]

Note how all models now converge to the zero baseline (1961-1990), eliminating differences in absolute temperatures. This apparently allows models to be compared directly to measured temperature anomalies, although each uses anomalies for a different reason: the data because of poor coverage, the models because of poor agreement in absolute temperatures. The various dips seen in Fig 3 before 2000 are due to historic volcanic eruptions whose cooling effect has been included in the models.

[Fig 4. Close-up of the model-data comparison, 1950-2050.]

Figure 4 shows a close-up detail from 1950-2050. There is a large spread in model trends even within each RCP ensemble. The data fall below the bulk of the model runs after 2005, except briefly during the recent El Niño peak in 2016.

Figure 1 shows that the data are now lower than the mean of every RCP; furthermore, we won’t be able to distinguish between RCPs until after ~2030.

Method: I have downloaded and processed all CMIP5 runs from KNMI Climate Explorer for each RCP. I then calculated annual averages relative to the 1961-1990 baseline and combined them all into a single CSV file. These can each be downloaded using, for example, this URL: RCP85

To retrieve any of the others just change ’85’ to ’60’ or ’45’ or ’26’ in the URL.
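A sketch of the kind of processing described in the Method note is below. It assumes a simple layout (a year column followed by one column per model run); the real KNMI Climate Explorer export format may differ, and the file names are hypothetical.

```python
# Sketch of the processing described above: read the annual series for one RCP,
# convert each model run to anomalies on the 1961-1990 baseline, and write a
# combined CSV. Column layout and file names are assumptions for illustration.
import pandas as pd

def to_anomalies(df, base_start=1961, base_end=1990):
    base = df[(df["year"] >= base_start) & (df["year"] <= base_end)]
    baseline_means = base.drop(columns="year").mean()      # one mean per model run
    out = df.copy()
    out.loc[:, out.columns != "year"] = df.drop(columns="year") - baseline_means
    return out

rcp85 = pd.read_csv("cmip5_rcp85_annual.csv")              # hypothetical file name
anoms = to_anomalies(rcp85)
anoms["ensemble_mean"] = anoms.drop(columns="year").mean(axis=1)
anoms.to_csv("cmip5_rcp85_anomalies.csv", index=False)
```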

The credibility gap between predicted and observed global warming

Reblogged from Watts Up With That:

By Christopher Monckton of Brenchley

The prolonged el Niño of 2016-2017, not followed by a la Niña, has put paid to the great Pause of 18 years 9 months in global warming that gave us all such entertainment while it lasted. However, as this annual review of global temperature change will show, the credibility gap between predicted and observed warming remains wide, even after some increasingly desperate and more or less openly prejudiced ever-upward revisions of recent temperatures and ever-downward depressions in the temperatures of the early 20th century in most datasets, with the effect of increasing the apparent rate of global warming. For the Pause continues to exert its influence by keeping down the long-run rate of global warming.

Let us begin with IPCC’s global warming predictions. In 2013 it chose four scenarios, one of which, RCP 8.5, was stated by its authors (Riahi et al, 2007; Rao & Riahi, 2006) to be a deliberately extreme scenario and is based upon such absurd population and energy-use criteria that it may safely be ignored.

For the less unreasonable, high-end-of-plausible RCP 6.0 scenario, the 21st-century net anthropogenic radiative forcing is 3.8 Watts per square meter from 2000-2100:

[Fig. 1: 21st-century net anthropogenic radiative forcing, 2000-2100.]

CO2 concentration of 370 ppmv in 2000 was predicted to rise to 700 ppmv in 2100 (AR5, fig. 8.5) on the RCP 6.0 scenario (thus, the centennial predicted CO2 forcing is 4.83 ln(700/370), or 3.1 Watts per square meter, almost five-sixths of total forcing). Predicted centennial reference sensitivity (i.e., warming before accounting for feedback) is the product of 3.8 Watts per square meter and the Planck sensitivity parameter 0.3 Kelvin per Watt per square meter: i.e., 1.15 K.

The CMIP5 models predict 3.37 K midrange equilibrium sensitivity to CO2 doubling (Andrews+ 2012), against 1 K reference sensitivity before accounting for feedback, implying a midrange transfer function 3.37 / 1 = 3.37. The transfer function, the ratio of equilibrium to reference temperature, encompasses by definition the entire operation of feedback on climate.

Therefore, the 21st-century warming that IPCC should be predicting, on the RCP 6.0 scenario and on the basis of its own estimates of CO2 concentration and the models’ estimates of CO2 forcing and Charney sensitivity, is 3.37 x 1.15, or 3.9 K.

Yet IPCC actually predicts only 1.4 to 3.1 K 21st-century warming on the RCP 6.0 scenario, giving a midrange estimate of just 2.2 K warming in the 21st century and implying a transfer function of 2.2 / 1.15 = 1.9, little more than half the midrange transfer function 3.37 implicit in the equilibrium-sensitivity projections of the CMIP5 ensemble.

[Fig. 2: IPCC predictions of 21st-century global warming under the RCP scenarios.]

Note that Fig. 2 disposes of any notion that global warming is “settled science”. IPCC, taking all the scenarios and hedging its bets, is predicting somewhere between 0.2 K cooling and 4.5 K warming by 2100. Its best estimate is its RCP 6.0 midrange estimate 2.2 K.

Effectively, therefore, given 1 K reference sensitivity to doubled CO2, IPCC’s 21st-century warming prediction implies 1.9 K Charney sensitivity (the standard metric for climate-sensitivity studies, which is equilibrium sensitivity to doubled CO2 after all short-acting feedbacks have operated), and not the 3.4 [2.1, 4.7] K imagined by the CMIP5 models.

Since official predictions are thus flagrantly inconsistent with one another, it is difficult to deduce from them a benchmark midrange value for the warming officially predicted for the 21st century. It is somewhere between the 2.2 K that IPCC gives as its RCP 6.0 midrange estimate and the 3.9 K deducible from IPCC’s midrange estimate of 21st-century anthropogenic forcing using the midrange CMIP5 transfer function.
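The arithmetic behind those figures is easy to reproduce. The sketch below simply restates the quantities quoted above (the 4.83 ln(C/C0) CO2-forcing expression, the 0.3 K per W/m² Planck parameter, the CMIP5 midrange 3.37 K from Andrews et al. 2012 and the IPCC 2.2 K midrange); it checks the quoted 3.1 W/m², 1.15 K, 3.9 K and 1.9 values rather than endorsing them.

```python
import math

# Reproduce the RCP 6.0 arithmetic quoted above (values as stated in the post).
co2_2000, co2_2100 = 370.0, 700.0                    # ppmv
co2_forcing = 4.83 * math.log(co2_2100 / co2_2000)   # ~3.1 W/m^2
total_forcing = 3.8                                  # W/m^2, 21st-century net anthropogenic
planck = 0.3                                         # K per (W/m^2)
reference_sensitivity = total_forcing * planck       # 1.14 K (the post rounds to 1.15)

cmip5_ecs = 3.37                                     # K per CO2 doubling (Andrews et al. 2012)
transfer_function = cmip5_ecs / 1.0                  # equilibrium / reference sensitivity
implied_warming = transfer_function * reference_sensitivity   # ~3.8-3.9 K

ipcc_midrange = 2.2                                  # K, IPCC RCP 6.0 midrange
implied_ipcc_transfer = ipcc_midrange / reference_sensitivity # ~1.9

print(f"CO2 forcing: {co2_forcing:.2f} W/m^2")
print(f"Reference sensitivity: {reference_sensitivity:.2f} K")
print(f"Warming implied by the CMIP5 transfer function: {implied_warming:.1f} K")
print(f"Transfer function implied by the IPCC midrange: {implied_ipcc_transfer:.1f}")
```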

So much for the predictions. But what is actually happening, and does observed warming match prediction? Here are the observed rates of warming in the 40 years 1979-2018. Let us begin with GISS, which suggests that for 40 years the world has warmed at a rate equivalent not to 3.9 C°/century nor even to 2.2 C°/century, but only to 1.7 C°/century.

[GISS global mean surface temperature, 1979-2018: trend equivalent to 1.7 C°/century.]

Next, NCEI. Here, perhaps to make a political point, the dataset is suddenly unavailable:

[NCEI: dataset unavailable.]

Next, HadCRUT4, IPCC’s preferred dataset. The University of East Anglia is rather leisurely in updating its information, so the 40-year period runs from December 1978 to November 2018, but the warming rate is identical to that of GISS, at 1.7 C°/century equivalent, below the RCP 6.0 midrange 2.2 C°/century rate.

[HadCRUT4 global mean surface temperature, December 1978 to November 2018: trend equivalent to 1.7 C°/century.]

Next, the satellite lower-troposphere trends, first from RSS. It is noticeable that, ever since RSS, whose chief scientist publicly describes those who disagree with him about the climate as “deniers”, revised its dataset to eradicate the Pause, it has tended to show the fastest apparent rate of global warming, now at 2 C°/century equivalent.

[RSS lower-troposphere temperature, 1979-2018: trend equivalent to 2 C°/century.]

Finally, UAH, which Professor Ole Humlum (climate4you.com) regards as the gold standard for global temperature records. Before UAH altered its dataset, it used to show more warming than the others. Now it shows the least, at 1.3 C°/century equivalent.

[UAH lower-troposphere temperature, 1979-2018: trend equivalent to 1.3 C°/century.]

How much global warming should have occurred over the 40 years since the satellite record began in 1979? CO2 concentration has risen by 72 ppmv. The period CO2 forcing is thus 0.94 W/m², implying 0.94 x 6/5 = 1.13 W/m² net anthropogenic forcing from all sources. Accordingly, period reference sensitivity is 1.13 x 0.3, or 0.34 K, and period equilibrium sensitivity, using the CMIP5 midrange transfer function 3.37, should have been 1.14 K. Yet the observed period warming was 0.8 K (RSS), 0.7 K (GISS & HadCRUT4) or 0.5 K (UAH): a mean observed warming of about 0.7 K.

A more realistic picture may be obtained by dating the calculation from 1950, when our influence first became appreciable. Here is the HadCRUT4 record:

[HadCRUT4 global mean surface temperature, 1950-2018.]

The CO2 forcing since 1950 is 4.83 ln(410/310), or 1.5 Watts per square meter, which becomes 1.8 Watts per square meter after allowing for non-CO2 anthropogenic forcings, a value consistent with IPCC (2013, Fig. SPM.5). Therefore, period reference sensitivity from 1950-2018 is 1.8 x 0.3, or 0.54 K, while the equivalent equilibrium sensitivity, using the CMIP5 midrange transfer function 3.37, is 0.54 x 3.37 = 1.8 K, of which only 0.8 K actually occurred. Using the revised transfer function 1.9 derived from the midrange predicted RCP 6.0 predicted warming, the post-1950 warming should have been 0.54 x 1.9 = 1.0 K.
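The same recipe (forcing times the 0.3 Planck parameter times a transfer function) can be checked for both periods with the short sketch below. The 1979 and 2018 CO2 concentrations are assumed here (the post only states the 72 ppmv rise); note also that 4.83 ln(410/310) evaluates to about 1.35 W/m² rather than the quoted 1.5 (1.5 corresponds to the more usual 5.35 coefficient), so the post's stated 1.8 W/m² total is used directly.

```python
import math

# Reproduce the period arithmetic above: forcing x Planck parameter x transfer function.
def expected_warming(forcing_w_m2, transfer, planck=0.3):
    return forcing_w_m2 * planck * transfer

# 1979-2018: CO2 forcing 4.83*ln(409/337) ~= 0.94 W/m^2 (assumed start/end
# concentrations giving the stated 72 ppmv rise), scaled by 6/5 for non-CO2 forcings.
forcing_1979_2018 = 4.83 * math.log(409.0 / 337.0) * 6.0 / 5.0   # ~1.13 W/m^2
print(expected_warming(forcing_1979_2018, 3.37))   # ~1.14 K vs ~0.7 K observed

# 1950-2018: the post's stated net anthropogenic forcing of 1.8 W/m^2.
forcing_1950_2018 = 1.8
print(expected_warming(forcing_1950_2018, 3.37))   # ~1.8 K vs ~0.8 K observed
print(expected_warming(forcing_1950_2018, 1.9))    # ~1.0 K with the revised transfer 1.9
```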

It is also worth showing the Central England Temperature Record for the 40 years 1694-1733, long before SUVs, during which the temperature in most of England rose at a rate equivalent to 4.33 C°/century, compared with just 1.7 C°/century equivalent in the 40 years 1979-2018. Therefore, the current rate of warming is not unprecedented.

It is evident from this record that even the large and naturally-occurring temperature change evident not only in England but worldwide as the Sun recovered following the Maunder minimum is small compared with the large annual fluctuations in global temperature.

[Central England Temperature record, 1694-1733: trend equivalent to 4.33 C°/century.]

The simplest way to illustrate the very large discrepancy between predicted and observed warming over the past 40 years is to show the results on a dial.


Overlapping projections by IPCC (yellow & buff zones) and CMIP5 (Andrews et al. 2012: buff & orange zones) of global warming from 1850-2011 (dark blue scale), 1850 to 2xCO2 (dark red scale) and 1850-2100 (black scale) exceed observed warming of 0.75 K from 1850-2011 (HadCRUT4), which falls between the 0.7 K period reference sensitivity to midrange net anthropogenic forcing in IPCC (2013, fig. SPM.5) (cyan needle) and expected 0.9  K period equilibrium sensitivity to that forcing after adjustment for radiative imbalance (Smith et al. 2015) (blue needle). The CMIP5 models’ midrange projection of 3.4 K Charney sensitivity (red needle) is about thrice the value consistent with observation. The revised interval of global-warming predictions (green zone), correcting an error of physics in models, whose feedbacks do not respond to emission temperature, is visibly close to observed warming.

Footnote: I undertook to report on the progress of my team’s paper explaining climatology’s error of physics in omitting from its feedback calculation the observable fact that the Sun is shining. The paper was initially rejected early last year on the ground that the editor of the top-ten journal to which it was sent could not find anyone competent to review it. We simplified the paper, whereupon it was sent out and, after many months’ delay, only two reviews came back. The first was a review of a supporting document giving results of experiments conducted at a government laboratory, but it was clear that the reviewer had not actually read the laboratory’s report, which answered the question the reviewer had raised. The second was ostensibly a review of the paper, but the reviewer stated that, because he found the paper’s conclusions uncongenial, he had not troubled to read the equations that justified those conclusions.

We protested. The editor then obtained a third review. But that, like the previous two reviews, was not a review of the present paper. It was a review of another paper that had been submitted to a different journal the previous year. All of the points raised by that review had long since been comprehensively answered. None of the three reviewers, therefore, had actually read the paper they were ostensibly reviewing.

Nevertheless, the editor saw fit to reject the paper. Next, the journal’s management got in touch to say that it was hoped we were content with the rejection and to invite us to submit further papers in future. I replied that we were not at all satisfied with the rejection, for the obvious reason that none of the reviewers had actually read the paper that the editor had rejected, and that we insisted, therefore, on being given a right of appeal.

The editor agreed to send out the paper for review again, and to choose the reviewers with greater care this time. We suggested, and the editor accepted, that in view of the difficulty the reviewers were having in getting to grips with the point at issue, which was clearly catching them by surprise, we should add to the paper a comprehensive mathematical proof that the transfer function that embodies the entire action of feedback on climate is expressible not only as the ratio of equilibrium sensitivity after feedback to reference sensitivity before feedback but also as the ratio of the entire, absolute equilibrium temperature to the entire, absolute reference temperature.

We said we should explain in more detail that, though the equations for both climatology’s transfer function and ours are valid, climatology’s equation is not useful: even small uncertainties in the sensitivities, which are two orders of magnitude smaller than the absolute temperatures, lead to large uncertainty in the value of the transfer function. By contrast, even large uncertainties in the absolute temperatures lead to only small uncertainty in the transfer function, which can thus be very simply and very reliably derived and constrained without using general-circulation models.
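A quick numerical illustration of that uncertainty argument is below. The input values are loosely based on figures quoted in this post (a ~3.4 K equilibrium estimate with a wide spread, a ~1 K reference value, and absolute temperatures near 288 K and 255 K) but are illustrative only, and the standard first-order error-propagation formula for a ratio is used rather than anything from the paper.

```python
# Illustrative error propagation for a ratio: the relative uncertainty of a ratio is
# roughly the quadrature sum of the relative uncertainties of numerator and denominator.
# Input values are illustrative, loosely based on numbers quoted in this post.
def ratio_with_uncertainty(num, num_err, den, den_err):
    ratio = num / den
    rel_err = ((num_err / num) ** 2 + (den_err / den) ** 2) ** 0.5
    return ratio, ratio * rel_err

# Ratio of small sensitivities (a few kelvin, poorly constrained):
print(ratio_with_uncertainty(3.4, 1.3, 1.05, 0.1))    # ~3.2 with ~1.3 uncertainty

# Ratio of large absolute temperatures (hundreds of kelvin, well constrained):
print(ratio_with_uncertainty(288.0, 1.0, 255.0, 1.0)) # ~1.13 with ~0.006 uncertainty
```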

My impression is that the editor has realized we are right. We are waiting for a new section from our professor of control theory on the derivation of the transfer function from the energy-balance equation via a leading-order Taylor-series expansion. That will be with us at the end of the month, and the editor will then send the paper out for review again. I’ll keep you posted. If we’re right, Charney sensitivity (equilibrium sensitivity to doubled CO2) will be 1.2 [1.1, 1.3] C°, far too little to matter, and not, as the models currently imagine, 3.4 [2.1, 4.7] C°, and that, scientifically speaking, will be the end of the climate scam.

A Sea-Surface Temperature Picture Worth a Few Hundred Words!

Reblogged From Watts Up With That:

We covered this paper when it was first released; here is some commentary on it. – Anthony


Guest essay by PATRICK J. MICHAELS

On January 7 a paper by Veronika Eyring and 28 coauthors, titled “Taking Climate Model Evaluation to the Next Level,” appeared in Nature Climate Change, Nature’s journal devoted exclusively to this one obviously under-researched subject.

For years, you dear readers have been subjected to our railing about the unscientific way in which we forecast this century’s climate: we take 29 groups of models and average them. Anyone, we repeatedly point out, who knows weather forecasting realizes that such an activity is foolhardy. Some models are better than others in certain situations, and others may perform better under different conditions. Consequently, the daily forecast is usually a blend of a subset of available models, or, perhaps (as can be the case for winter storms), only one might be relied upon.

Finally the modelling community (as represented by the football team of authors) gets it. The second sentence of the paper’s abstract says “there is now evidence that giving equal weight to each available model projection is suboptimal.”
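To illustrate what moving away from equal weighting can look like in practice, here is a toy sketch contrasting an equal-weight ensemble mean with a simple inverse-error weighting. This is not Eyring et al.'s method, and the projections and skill scores are invented for the example.

```python
# Toy contrast between an equal-weight ensemble mean and a skill-weighted mean.
# Not Eyring et al.'s method; projections and hindcast errors below are invented.
import numpy as np

projections = np.array([2.1, 3.4, 4.7, 2.8, 1.9])   # hypothetical warming projections, K
hindcast_rmse = np.array([0.3, 0.9, 1.5, 0.6, 0.2]) # hypothetical hindcast errors, K

equal_weight_mean = projections.mean()

weights = 1.0 / hindcast_rmse**2                    # simple inverse-variance weighting
weights /= weights.sum()
skill_weighted_mean = float(np.dot(weights, projections))

print(f"Equal weights:  {equal_weight_mean:.2f} K")
print(f"Skill weighted: {skill_weighted_mean:.2f} K")  # pulled toward the better hindcasters
```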

A map of sea-surface temperature errors calculated when all the models are averaged up shows the problem writ large:

Annual sea-surface temperature error (modelled minus observed) averaged over the current family of climate models. From Eyring et al.

First, the integrated “redness” of the map appears to be a bit larger than the integrated “blueness,” which would be consistent with the oft-repeated (here) observation that the models are predicting more warming than is being observed. But, more important, the biggest errors are over some of the most climatically critical places on earth.

Start with the Southern Ocean. The models have almost the entire circumpolar sea too warm, much of it off by more than 1.5°C. Down around 60°S (the bottom of the map) water temperatures get down to near 0°C (because of its salinity, sea water freezes at around -2.0°C). Making errors in this range means making errors in ice formation. Further, all the moisture that lies upon Antarctica originates in this ocean, and simulating an ocean 1.5° too warm is going to inject an enormous amount of nonexistent moisture into the atmosphere, which will be precipitated over the continent as nonexistent snow.

The problem is that, down there, the models are making errors over massive zones of whiteness, which by their nature absorb very little solar radiation. Where it’s not white, the surface warms up more quickly.

(To appreciate that, sit outside on a sunny but calm winter’s day and change your khakis from light to dark; the latter will be much warmer.)

There are two other error fields that merit special attention: the hot blobs off the coasts of western South America and Africa. These are regions where relatively cool water upwells to the surface, driven in large part by the trade winds that blow toward the earth’s thermal equator. For not-completely-known reasons, these winds sometimes slow or even reverse, upwelling is suppressed, and the warm anomaly known as El Niño emerges (there is a similar, but much more muted, version that sometimes appears off Africa).

There’s a current theory that El Niños are one mechanism contributing to atmospheric warming; it holds that temperature tends to jump in steps after each big one. It’s not hard to see that systematically creating these conditions more persistently than they actually occur could put more nonexistent warming into the forecast.

Finally, to beat ever more manfully on the dead horse—averaging up all the models and making a forecast—we again note that of all the models, one, the Russian INM-CM4, has actually tracked the observed climate quite well. It is by far the best of the lot. [Hifast bold] Eyring et al. also examined the models’ independence from each other—a measure of which are, and which are not, making the same systematic errors. And amongst the most independent, not surprisingly, is INM-CM4.

(Its update, INM-CM5, is slowly being leaked into the literature, but we don’t have the all-important climate sensitivity figures in print yet.)

The Eyring et al. study is a step forward. It brings climate model application into the 20th century.

Hansen’s 1988 Predictions Redux

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

Over in the Tweeterverse, someone sent me the link to the revered climate scientist James Hansen’s 1988 Senate testimony and told me “Here’s what we were told 30 years ago by NASA scientist James Hansen. It has proven accurate.”

I thought … huh? Can that be right?

Here is a photo of His Most Righteousness, Dr. James “Death Train” Hansen, getting arrested for civil disobedience in support of climate alarmism …

I have to confess, I find myself guilty of schadenfreude in noting that he’s being arrested by … Officer Green …

In any case, let me take as my text for this sermon the aforementioned 1988 Epistle of St. James To The Senators, available here. I show the relevant part below, his temperature forecast.

ORIGINAL CAPTION: Fig. 3. Annual mean global surface air temperature computed for trace gas scenarios A, B, and C described in reference 1. [Scenario A assumes continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1 emission growth; scenario B has emission rates approximately fixed at current rates; scenario C drastically reduces trace gas emissions between 1990 and 2000.] The shaded range is an estimate of global temperature during the peak of the current and previous interglacial periods, about 6,000 and 120,000 years before present, respectively. The zero point for observations is the 1951-1980 mean (reference 6); the zero point for the model is the control run mean.

I was interested in “Scenario A”, which Hansen defined as what would happen assuming “continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1”.

To see how well Scenario A fits the period after 1987, which is when Hansen’s observational data ends, I took a look at the rate of growth of CO2 emissions since 1987. Figure 2 shows that graph.

Figure 2. Annual increase in CO2 emissions, percent.

This shows that Hansen’s estimate of future CO2 emissions was quite close, although the actual annual increase in CO2 was about 25% larger than Hansen estimated. As a result, his computer estimate for Scenario A should have shown a bit more warming than we see in Figure 1 above.
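As a rough check on what that difference means cumulatively, the sketch below compounds the two growth rates over the thirty-odd years since the testimony. The "actual" rate is simply taken as 25% above Hansen's 1.5% per year for illustration, not read from the digitized data.

```python
# Rough cumulative comparison of Hansen's Scenario A emissions-growth assumption
# with a rate 25% higher. The "actual" rate here is illustrative, not measured.
hansen_growth = 0.015                  # 1.5% per year, Scenario A assumption
actual_growth = hansen_growth * 1.25   # "~25% more annual increase"

years = 30                             # roughly 1988-2018
print(f"Scenario A cumulative growth: x{(1 + hansen_growth) ** years:.2f}")
print(f"25%-higher-rate cumulative growth: x{(1 + actual_growth) ** years:.2f}")
```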

Next, I digitized Hansen’s graph to compare it to reality. To start with, here is what is listed as “Observations” in Hansen’s graph. I’ve compared Hansen’s observations to the Goddard Institute for Space Studies Land-Ocean Temperature Index (GISS LOTI) and the HadCRUT global surface temperature datasets.

Figure 3. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with modern temperature estimates. All data is expressed as anomalies about the 1951-1980 mean temperature.

OK, so now we have established that:

• Hansen’s “Scenario A” estimate of future growth in CO2 emissions was close, albeit a bit low, and

• Hansen’s historical temperature observations agree reasonably well with modern estimates.

Given that he was pretty accurate in all of that, albeit a bit low on CO2 emissions growth … how did his Scenario A prediction work out?

Well … not so well …

Figure 4. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with his Scenario A, and modern temperature estimates. All observational data is expressed as anomalies about the 1951-1980 mean temperature.

So I mentioned this rather substantial miss (predicted warming twice the actual warming) to the man on the Twitter-Totter, the one who’d said that Hansen’s prediction had been “proven accurate”.

His reply?

He said that Dr. Hansen’s prediction was indeed proven accurate—he’d merely used the wrong value for the climate sensitivity, viz: “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

I loved the part about “best current estimates” of climate sensitivity … here are current estimates, from my post on The Picasso Problem  

Figure 5. Changes over time in the estimate of the climate sensitivity parameter “lambda”. “∆T2x(°C)” is the expected temperature change in degrees Celsius resulting from a doubling of atmospheric CO2, which is assumed to increase the forcing by 3.7 watts per square metre. FAR, SAR, TAR, AR4, AR5 are the UN IPCC 1st, second, third, fourth and fifth Assessment Reports giving an assessment of the state of climate science as of the date of each report. Red dots show recent individual estimates of the climate sensitivity
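The caption's relationship is just a multiplication by the assumed 3.7 W/m² doubling forcing, as the small sketch below shows; the lambda values are illustrative examples, not the specific estimates plotted in Figure 5.

```python
# The relationship described in the Figure 5 caption: warming per CO2 doubling equals
# the sensitivity parameter lambda times the assumed 3.7 W/m^2 doubling forcing.
# Lambda values below are illustrative, not the estimates actually plotted.
doubling_forcing = 3.7                      # W/m^2

for lam in (0.3, 0.5, 0.8, 1.2):            # K per (W/m^2)
    print(f"lambda = {lam:.1f} K/(W/m^2) -> dT2x = {lam * doubling_forcing:.1f} C")
```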

While giving the Tweeterman zero points for accuracy, I did have to applaud him for sheer effrontery and imaginuity. It’s a perfect example of why it is so hard to convince climate alarmists of anything—because to them, everything is a confirmation of their ideas. Whether it is too hot, too cold, too much snow, too little snow, warm winters, brutal winters, or disproven predictions—to the alarmists all of these are clear and obvious signs of the impending Thermageddon, as foretold in the Revelations of St. James of Hansen.

My best to you all, the beat goes on, keep fighting the good fight.

w.