Inside The Sausage Factory

Reblogged from Watts up with That:

Guest Post by Willis Eschenbach

There’s an old saying that “Laws are like sausages. It’s better not to see either one being made” … and I fear the same is true for far too much of what passes for climate “science” these days.

However, ignoring such wise advice, I’ve taken another look under the hood at the data from the abysmal Nature Communications paper entitled “Discrepancies in scientific authority and media visibility of climate change scientists and contrarians.” My previous analysis of the paper is here on WUWT.

In that article, it says that the “Source Data files” for the article are located here. That seemed hopeful, so I looked at that page. There, they say:

We document the media visibility and climate change research achievements of two groups of individuals representing some of  the most prominent figures in their respective domains: 386  climate change contrarians (CCC)  juxtaposed with 386 expert climate change scientists (CCS). These data were collected from the Media Cloud project (MC), an open data project hosted by the MIT Center for Civic Media and the Berkman Klein Center for Internet & Society at Harvard University. 

Enclosed are raw MC data and parsed media article data files obtained from two types of MC database queries: 

(i) ~105,000 media articles derived from the MC search query ”climate AND change AND global AND warming”; 

(ii) 772 individual data files, for each member of the CCC and CCS groups, each derived from a single MC search query ”MemberFullName AND climate”. 

Well hooray, that sounded great, that the raw data was “enclosed”. I was even happier to see that they’d provided the computer code they’d used, viz:

Source code: provided in a Mathematica (v11.1) notebook (MediaSource_Annotated_ALL_2256.nb using MediaSource_Annotated_ALL_2256.txt) reproduces the subpanels for Fig. 5 in the following research article

Outstanding, I thought, I have everything I need to replicate the study—the full code and data as used to do the calculations! That hardly ever happens … but then I noticed the caveat at the top of the page:

Data Files: This dataset is private for peer review and will be released on January 1, 2020.

Grrr … these jokers write a “scientific” paper and then they don’t release the code or the data for six months after publication? That’s not science, that’s a buncha guys engaged in what we used to call “hitchhiking to Chicago”, accompanied by the appropriate obscene one-handed gesture with the thumb extended…

Undeterred, I went to take a look at the “Mediacloud” that they referred to. It’s an interesting dataset of hundreds of thousands of articles, and I’ll likely make use of it in the future. But it turns out that there was a huge problem … you can’t just enter e.g. “Willis Eschenbach” AND climate as their web page fatuously claims. You also need to specify just which sources you are searching, as well as the date range you’re interested in … and their information page says nothing about either one.

Now, in my list of media mentions in the Supplementary Information from their paper, there are only 40 results … but when I searched the entire Mediacloud dataset from 2001-01-01 to the present for my name plus “climate” as they say that they did, I got over 500 results … say what?

I’ve written to the corresponding author listed on that web page for clarification on this matter, but I’m not optimistic about the speed of his response … he may have other things on his mind at the moment.

Frustrated at Mediacloud, I returned to the paper’s data. In total there are over 60,000 media mentions between all of the 386 of us who are identified as “contrarians”. I decided to see which websites got the most mentions. Here are the top twenty, along with the number of times they were referenced:

  • lagunabeachindy.com:           6279
  • climatedepot.com:              4877
  • feedproxy.google.com:          3908
  • huffingtonpost.com:            2543
  • adsabs.harvard.edu:            1442
  • blogs.discovermagazine.com:    1115
  • thinkprogress.org:              871
  • desmogblog.com:                 827
  • freerepublic.com:               709
  • dallasnews.com:                 650
  • en.wikipedia.org:               641
  • theguardian.com:                609
  • democracynow.org:               515
  • examiner.com:                   426
  • jonjayray.comuv.com:            411
  • salon.com:                      398
  • web.archive.org:                384
  • nhinsider.com:                  379
  • wattsupwiththat.com:            355
  • news.yahoo.com:                 334
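
For anyone who wants to repeat that kind of tally, here’s a minimal sketch in R (this is not the authors’ code). It assumes a hypothetical file called mentions.csv with one media mention per row and a column named “url” holding the article link; all it does is pull out the hostnames and count them:

# read the hypothetical mentions file (one row per media mention, column "url")
mentions <- read.csv("mentions.csv", stringsAsFactors = FALSE)

# reduce each URL to its hostname, e.g. "https://www.example.com/page" -> "example.com"
hosts <- sub("^https?://", "", mentions$url)   # strip the scheme
hosts <- sub("/.*$", "", hosts)                # drop everything after the first slash
hosts <- sub("^www\\.", "", hosts)             # drop a leading "www."

# count mentions per website and show the top twenty
sort(table(hosts), decreasing = TRUE)[1:20]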

There are some real howlers in just these top twenty. First, as near as I can tell the most referenced site, the local California newspaper “Laguna Beach Independent” with 6,279 mentions, doesn’t contain any of the 386 listed names. Totally bogus, useless, and distorts the results in every direction.

Next, DeSmogBlog has 827 mentions … all of which will probably be strongly negative. After all, that’s their schtick, negative reviews of “contrarians”. I’ll return to this question of negative and positive mentions in a moment.

Then there’s “jonjayray.comuv.com” with 411 mentions, which is a dead link. Nobody home, the website is not “pining for the fjords” as they say.

And “feedproxy.google.com” seems to be an aggregator which often references a study or news article more than once. Here’s an example of such double-counting, from one person’s list of media mentions:

http://feedproxy.google.com/~r/firedoglake/fdl/~3/8KMa0w83rPo/,en,Firedoglake,809,247540225,CNBC Caught Soliciting Op-Ed Calling Climate Change A ‘Hoax’,2014-6-30

http://feedproxy.google.com/~r/firedoglake/fdl/~3/8KMa0w83rPo/,en,pamshouseblend.com,58791,247551206,CNBC Caught Soliciting Op-Ed Calling Climate Change A ‘Hoax’,2014-6-30

Note that both of these links reference the same underlying document, “CNBC Caught Soliciting Op-Ed Calling Climate Change A ‘Hoax’”, but the document is located on two different websites. I didn’t have the heart or the time to find out how often that occurred … but the example above was from the very first person I looked at who had feedproxy.google.com in their list of mentions.

(I suppose I shouldn’t be surprised by the abysmal lack of quality control on their list of websites, because after all these authors are obviously devout Thermageddians … but still, those egregious errors were a real shock to me. My high school science teacher would have had a fit if we’d done that.)

Next, as I mentioned above, looking at that list I was struck by the fact that there is a huge difference between being mentioned on say DeSmogBlog, which will almost assuredly be a negative review, and being mentioned on ClimateDepot, which is much more likely to be positive in nature. But how could I quantify that?

To answer the question, I went back to Mediacloud. They have about a thousand websites which they have categorized as either Left, Center Left, Center, Center Right, or Right. So I decided to see how many times each category of websites was mentioned in the 60,000 media mentions for contrarians … here are those numbers.

  • Left:            6628
  • Center Left:     4051
  • Center:          2241
  • Center Right:    2056
  • Right:           4582
  • Total Left:     10679
  • Total Right:     6638
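
Again, a minimal sketch of how such a tally might be done in R, assuming a hypothetical lookup table media_bias.csv with columns “host” and “category” (Left, Center Left, Center, Center Right, Right) and the “hosts” vector from the earlier sketch; Mediacloud’s actual export format will differ:

# hypothetical host-to-category lookup table
bias <- read.csv("media_bias.csv", stringsAsFactors = FALSE)

# category of each mention's website; NA where the host isn't rated
mention_cat <- bias$category[ match(hosts, bias$host) ]
counts <- table(mention_cat)
counts

# combine the leaning categories
sum(counts[c("Left", "Center Left")], na.rm = TRUE)    # total left
sum(counts[c("Right", "Center Right")], na.rm = TRUE)  # total right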

As you can see, there are about 60% more mentions on left-leaning websites than on right-leaning ones … so it appears quite possible that, rather than “contrarians” getting more good publicity than mainstream climate scientists as the paper claims, per their calculations “contrarians” are getting more bad publicity than mainstream climentarians.

Finally, before I left the subject and the website behind, I used Mediacloud to see how a couple of other people fared. Recall that all 386 of us “contrarians” garnered about 60,000 media mentions between us.

I first took a look at the media mentions of St. Greta of Thunberg, the Patron Saint of the Easily Led. Since she burst on the scene a few months ago, she has gotten no less than 36,517 mentions in the media, about 60% of the total of all the “contrarians” listed in their study.

I then looked at the man who has made more money out of climate hysteria than any living human being, the multimillionaire Climate Goracle, Mr. Al Gore himself. A search of Mediacloud for ‘”Al Gore” AND climate’ returned a total of 92,718 hits.

So while the clueless authors of this paper are so concerned about how much air time we “contrarians” get, between them just Al Gore and Greta Thunberg alone got twice the number of media mentions as all of us climate contrarians combined ….

Gotta say, every time I look at this heap of steaming bovine waste products it gets worse … but hopefully, this will be the last time I have to look at how this particular sausage was made.

w.

Curious Correlations

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I got to thinking about the relationship between the Equatorial Pacific, where we find the El Nino/La Nina phenomenon, and the rest of the world. I’ve seen various claims about what happens to the temperature in various places at various lag-times after the Nino/Nina changes. So I decided to take a look.

To do that, I’ve gotten the temperature of the NINO34 region of the Equatorial Pacific. The NINO34 region stretches from 90°W, near South America, out to 170° West in the mid-Pacific, and from 5° North to 5° South of the Equator. I’ve calculated how well correlated that temperature is with the temperatures in the whole world, at various time lags.

To start with, here’s the correlation of what the temperature of the NINO34 region is doing with what the rest of the world is doing, with no time lag. Figure 1 shows which areas of the planet move in step with or in opposition to the NINO34 region with no lag.

Figure 1. Correlation of the temperature of the NINO34 region (90°-170°W, 5°N/S) with gridcell temperatures of the rest of the globe. Correlation values greater than 0.6 are all shown in red.

Now, perfect correlation is where two variables move in total lockstep. It has a value of 1.0. And if there is perfect anti-correlation, meaning whenever one variable moves up the other moves down, that has a value of minus 1.0.
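
For those who like to see the machinery, here’s a minimal sketch of the lag-correlation calculation in R. It assumes two objects that are not shown in this post: nino34, a vector of monthly NINO34 temperatures, and grid, a matrix of monthly gridcell temperatures with one column per gridcell and rows aligned with the months of nino34:

# correlation of each gridcell with NINO34, with NINO34 leading by 'lag' months
lagged_cor <- function(nino34, grid, lag = 0) {
  n <- length(nino34)
  if (lag > 0) {
    nino34 <- nino34[1:(n - lag)]                # earlier NINO34 values ...
    grid   <- grid[(1 + lag):n, , drop = FALSE]  # ... against later gridcell values
  }
  apply(grid, 2, function(cell) cor(nino34, cell, use = "pairwise.complete.obs"))
}

# one correlation map (a value per gridcell) for each lag from 0 to 5 months
cor_maps <- sapply(0:5, function(L) lagged_cor(nino34, grid, L))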

There are a couple of interesting points about that first look, showing correlations with no lag. The Indian Ocean moves very strongly in harmony with the NINO34 region (red). Hmmm. However, the Atlantic doesn’t do that. Again hmmm. Also, on average northern hemisphere land is positively correlated with the NINO34 region (orange), and southern hemisphere land is the opposite, negatively correlated (blue).

Next, with a one-month lag to give the Nino/Nina effects time to start spreading around the planet, we see the following:

Figure 2. As in Figure 1, but with a one month lag between the NINO34 temperature and the rest of the world. In other words, we’re comparing each month’s temperature with the previous month’s NINO34 temperature.

Here, after a month, the North Pacific and the North Atlantic both start to feel the effects. Their correlation switches from negative (blues and greens) to positive (red-orange). Next, here’s the situation after a two-month lag.

Figure 3. As in previous figures, but with a two month lag.

I found this result most surprising. Two months after a Nino/Nina change, the entire Northern Hemisphere strongly tends to move in the same direction as the NINO34 region moved two months earlier … and at the same time, the entire Southern Hemisphere moves in opposition to what the NINO34 region did two months earlier.

Hmmm …

And here’s the three-month lag:

Figure 4. As in previous figures, but with a three month lag.

An interesting feature of the above figure is that the strong correlation in the northeastern Pacific Ocean, off the west coast of North America, does not extend over the continent itself.

Finally, after four months, the hemispherical pattern begins to fall apart.

Figure 5. As in previous figures, but with a four & five month lag.

Even at five months, curious patterns remain. In the northern hemisphere, the land is all negatively correlated with NINO34, and the ocean is positively correlated. But in the southern hemisphere, the land is all positively correlated and the ocean negative.

Note that this hemispheric land-ocean difference with a five-month lag is the exact opposite of the land-ocean difference with no lag shown in Figure 1.

Now … what do I make of all this?

The first thing that it brings up for me is the astounding complexity of the climate system. I mean, who would have guessed that the two hemispheres would have totally opposite strong responses to the Nino/Nina phenomenon? And who would have predicted that the land and the ocean would react in opposite directions to the Nino/Nina changes right up to the very coastlines?

Second, it would seem to offer some ability to improve long-range forecasting for certain specific areas. Positive correlation with Hawaii, North Australia, Southern Africa, and Brazil is good up to four to five months out.

Finally, it strikes me that I can run this in reverse. By that, I mean I can find all areas of the planet that are able to predict the future temperature at some pre-selected location. Like, say, what areas of the globe correlate well with whatever the UK will be doing two months from now?

Hmmm indeed …

Warmest regards to all, the mysteries of this wondrous world are endless.

w.

The Cooling Rains

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I took another ramble through the Tropical Rainfall Measurement Mission (TRMM) satellite-measured rainfall data. Figure 1 shows a Pacific-centered and an Atlantic-centered view of the average rainfall from the end of 1997 to the start of 2015 as measured by the TRMM satellite.

Figure 1. Average rainfall, meters per year, on a 1° latitude by 1° longitude basis. The area covered by the satellite data, forty degrees north and south of the Equator, is just under 2/3 of the globe. The blue areas by the Equator mark the InterTropical Convergence Zone (ITCZ). The two black horizontal dashed lines mark the Tropics of Cancer and Capricorn, the lines showing how far north and south the sun travels each year (23.45°, for those interested).

There’s lots of interesting stuff in those two graphs. I was surprised by how much of the planet in general, and the ocean in particular, are bright red, meaning they get less than half a meter (20″) of rain per year.

I was also intrigued by how narrowly the rainfall is concentrated at the average Inter-Tropical Convergence Zone (ITCZ). The ITCZ is where the two great global hemispheres of the atmospheric circulation meet near the Equator. In the Pacific and Atlantic on average the ITCZ is just above the Equator, and in the Indian Ocean, it’s just below the Equator. However, that’s just on average. Sometimes in the Pacific, the ITCZ is below the Equator. You can see kind of a mirror image as a light orange horizontal area just below the Equator.

Here’s an idealized view of the global circulation. On the left-hand edge of the globe, I’ve drawn a cross section through the atmosphere, showing the circulation of the great atmospheric cells.

Figure 2. Generalized overview of planetary atmospheric circulation. At the ITCZ along the Equator, tall thunderstorms take warm surface air, strip out the moisture as rain, and drive the warm dry air vertically. This warm dry air eventually subsides somewhere around 25-30°N and 25-30°S of the Equator, creating the global desert belts at around those latitudes.

The ITCZ is shown in cross-section at the left edge of the globe in Figure 2. You can see the general tropical circulation. Surface air in both hemispheres moves towards the Equator. It is warmed there and rises. This thermal circulation is greatly sped up by air driven vertically at high rates of speed through the tall thunderstorm towers. These thunderstorms form all along the ITCZ. These thunderstorms provide much of the mechanical energy that drives the atmospheric circulation of the Hadley cells.

With all of that as prologue, here’s what I looked at. I got to thinking, was there a trend in the rainfall? Is it getting wetter or drier? So I looked at that using the TRMM data. Figure 3 shows the annual change in rainfall, in millimeters per year, on a 1° latitude by 1° longitude basis.

Figure 3. Annual change in the rainfall, 1° latitude x 1° longitude gridcells.

I note that the increase in rain is greater over the ocean than over the land, is greatest at the ITCZ, and is generally greater in the tropics.
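
In case anyone wants to roll their own version of Figure 3, here’s a minimal sketch in R. It assumes the TRMM data is in a 3-D array called trmm of monthly rainfall (mm per month) with dimensions longitude x latitude x month, which is an assumption about the layout, not the actual object used for the figure:

time_yr <- (seq_len(dim(trmm)[3]) - 1) / 12      # time axis in years

# linear trend for each gridcell, converted to change in annual rainfall (mm/yr per year)
trend <- apply(trmm, c(1, 2), function(series) {
  if (sum(!is.na(series)) < 2) return(NA_real_)  # skip empty gridcells
  coef(lm(series ~ time_yr))[2] * 12             # (mm/month per year) x 12 months
})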

Why is this overall trend in rainfall of interest? It gives us a way to calculate how much this cools the surface. Remember the old saying, what comes down must go up … or perhaps it’s the other way around, same thing. If it rains an extra millimeter of water, somewhere it must have evaporated an extra millimeter of water.

And in the same way that our bodies are cooled by evaporation, the surface of the planet is also cooled by evaporation.

Now, averaged over the area covered by the TRMM data, the trend in Figure 3 works out to an increase of 1.33 millimeters of rain per year. Metric is nice because length, volume, and mass are all directly related. Here’s a great example.

One millimeter of rain falling on one square meter of the surface is one liter of water which is one kilo of water. Nice, huh?

So the extra 1.33 millimeters of rain per year is equal to 1.33 extra liters of water evaporated per square meter of surface area.

Next, how much energy does it take to evaporate that extra 1.33 liters of water per square meter so it can come down as rain? The calculations are in the endnotes. It turns out that this 1.33 extra liters per year represents an additional cooling of a tenth of a watt per square meter (0.10 W/m2).

And how does this compare to the warming from increased longwave radiation due to the additional CO2? Well, again, the calculations are in the endnotes. The answer is, per the IPCC calculations, CO2 alone over the period gave a yearly increase in downwelling radiation of ~ 0.03 W/m2. Generally, they double that number to allow for other greenhouse gases (GHGs), so for purposes of discussion, we’ll call it 0.06 W/m2 per year.

So over the period of this record, we have increased evaporative cooling of 0.10 W/m2 per year, and we have increased radiative warming from GHGs of 0.06 W/m2 per year.

Which means that over that period and that area at least, the calculated increase in warming radiation from GHGs was more than counterbalanced by the observed increase in surface cooling from increased evaporation.

Regards to all,

w.

As usual: please quote the exact words you are discussing so we can all understand exactly what and who you are replying to.

Additional Cooling

Finally, note that this calculation is only evaporative cooling. There are other cooling mechanisms at work that are related to rainstorms. These include:

• Increased cloud albedo reflecting hundreds of watts/square meter of sunshine back to space

• Moving surface air to the upper troposphere where it is above most GHGs and freer to cool to space.

• Increased ocean surface albedo from whitecaps, foam, and spume.

• Cold rain falling from a layer of the troposphere that is much cooler than the surface.

• Rain re-evaporating as it falls to cool the atmosphere

• Cold wind entrained by the rain blowing outwards at surface level to cool surrounding areas

• Dry descending air between rain cells and thunderstorms allowing increased longwave radiation to space.

Between all of these, they form a very strong temperature regulating mechanism that prevents overheating of the planet.

Calculation of energy required to evaporate 1.33 liters of water.

# load the Gibbs SeaWater (GSW) toolbox, which provides gsw_latentheat_evap_t()
> library(gsw)

# seconds per year (not defined in the original post; ~365.25 days assumed here)
> secsperyear = 365.25 * 24 * 60 * 60

# latent heat of evaporation, joules/kg @ salinity 35 psu, temperature 24°C
> latevap = gsw_latentheat_evap_t( 35, 24 ) ; latevap
[1] 2441369

# joules/yr/m2 required to evaporate 1.33 liters/yr/m2
> evapj = latevap * 1.33 ; evapj
[1] 3247021

# convert joules/yr/m2 to W/m2
> evapwm2 = evapj / secsperyear ; evapwm2
[1] 0.1028941

Note: the exact answer varies dependent on seawater temperature, salinity, and density. These only make a difference of a couple percent (say 0.1043 vs 0.1028941). I’ve used average values.

Calculation of downwelling radiation change from CO2 increase.

# coshort: the CO2 record in ppmv, Dec 1997 through Mar 2015, loaded earlier
# in the workspace; last() returns the final value of the series

# starting CO2 ppmv Dec 1997
> thestart = as.double( coshort[1] ) ; thestart
[1] 364.38

# ending CO2 ppmv Mar 2015
> theend = as.double( last( coshort )) ; theend
[1] 401.54

# longwave increase, W/m2 per year, using 3.7 W/m2 per doubling of CO2,
# over the 17 years 4 months (17.33 years) of record
> 3.7 * log( theend / thestart, 2)/17.33
[1] 0.0299117

Planet-Sized Experiments – we’ve already done the 2°C test

Guest Post by Willis Eschenbach

People often say that we’re heading into the unknown with regards to CO2 and the planet. They say we can’t know, for example, what a 2°C warming will do because we can’t do the experiment. This is seen as important because for unknown reasons, people have battened on to “2°C” as being the scary temperature rise that we’re told we have to avoid at all costs.

But actually, as it turns out, we have already done the experiment. Below I show the Berkeley Earth average surface temperature record for Europe. Europe is a good location to analyze, because some of the longest continuous temperature records are from Europe. In addition, there are a lot of stations in Europe that have been taking records for a long time. This gives us lots of good data.

So without further ado, here’s the record of the average European temperature.

Figure 1. Berkeley Earth average European temperature, 1743 – 2013. Red/yellow line is an 8-year Gaussian average. Horizontal red and blue lines are 2°C apart.

Temperatures were fairly steady until about the year 1890, when they began to rise. Note that this was BEFORE the large modern rise in CO2 … but I digress.

And from 1890 or so to 2013, temperatures in Europe rose by about 2°C. Which of course brings up the very important question …

We’ve done the 2°C experiment … so where are the climate catastrophes?

Seriously, folks, we’re supposed to be seeing all kinds of bad stuff. But none of it has happened. No cities gone underwater. No increase in heat waves or cold waves. No islands sinking into the ocean. No increase in hurricanes. No millions of climate refugees. The tragedies being pushed by the failed serial doomcasters for the last 30 years simply haven’t come to pass.

I mean, go figure … I went to Thermageddon and all I got was this lousy t-shirt …

In fact, here’s the truth about the effects of the warming …

Figure 2. Average annual climate-related (blue line) and non-climate-related (red line) deaths in natural disasters. Data from OFDA/CRED International Disaster Database

In just under a century, climate-related deaths, which are deaths from floods, droughts, storms, wildfires, and extreme temperatures, have dropped from just under half a million down to about twenty thousand … and during this same time, temperatures all over the globe have been warming.

So no, folks, there is no climate emergency. Despite children happily skipping class to march in lockstep to the alarmist drumbeat, climate is not the world’s biggest problem, or even in the top ten. Despite the pathetic importunings of “Beta” O’Rourke, this is not World War II redux. Despite Hollywood stars lecturing us as they board their private jets, there are much bigger issues for us to face.

The good news is, the people of the world know that the climate scare is not important. The UN polled almost ten million people as to what issues matter the most to them. The UN did their best to push the climate scare by putting that as the first choice on their ballot … but even with that, climate came in dead last, and by a long margin. Here is what the people of the world actually find important:

Figure 3. Results of the UN “My World” poll. Further analytic data here.

As you can see, there were sixteen categories. People put education, healthcare, and jobs at the top … and way down at the very bottom, “Action taken on climate change” came in at number sixteen.

In summary:

We’ve done the two degree Celsius experiment.

The lack of any climate-related catastrophes indicates that warming is generally either neutral or good for animal and plant life alike.

Climate related deaths are only about a twentieth of what they were a hundred years ago.

The people of the planet generally don’t see climate as an important issue. Fact Check: They are right.

===========================

Here, my gorgeous ex-fiancee and I are wandering on the east side of the Sierra Nevada mountains. We went and looked at Death Valley. It’s a couple hundred feet below sea level, and very, very dry. In the Valley, I saw that there was a temperature station at Stovepipe Wells. So I immediately looked for that essential accessory to any well-maintained temperature station … the air conditioner exhaust. Here you go, you can just see the air conditioner on the right side of this south-looking photo:

But what good is an air conditioner without some good old black heat-absorbing asphalt pavement to balance it out? So of course, they’ve provided that as well … here’s the view looking west. I’m not sure if this station is still in use, but any readings here would certainly be suspect.

Heck, if we’d parked our pickup truck facing outwards in the next stall to the left and revved the engine, we probably could have set a new high temperature record for this date …

Death Valley itself is stunningly stark, with the bones of the earth poking up through the skin …

Ah, dear friends, the world is full of wonders, far too many for any man to see all of them … keep your foot pressed firmly on the accelerator, time is the one thing that none of us have enough of. As Mad Tom o’ Bedlam sang,

With a host of furious fancies, whereof I am commander
With a sword of fire and a steed of air through the universe I wander
By a ghost of rags and shadows, I summoned am to tourney
Ten leagues beyond the wild world’s end … methinks it is no journey.

Today we’re at Boulder Creek, just east of the Sierras by Owens Lake … or Owens Ex-Lake, because all the water that used to fill the lake now waters gardens in LA.

However, the drought is broken in California, and some of the Sierra ski resorts have gotten forty or fifty feet of snow over the winter, so the east slope of the Sierras look like this where we are:

I am put in mind of what the poet said …

Come, my friends,
‘Tis not too late to seek a newer world.
Push off, and sitting well in order smite
The sounding furrows; for my purpose holds
To sail beyond the sunset, and the baths
Of all the western stars, until I die.

My very best to each one of you, sail on, sail beyond …

w.



HiFast Note:

Willis’ photos of the Stevenson screen were taken at the Stovepipe Wells Ranger Station (36.608169, -117.144553)

Stovepipe Wells Ranger Station

The USCRN site ISWC1 is about 700m south (36.601914, -117.145068).

Taking down the latest Washington Post Antarctic scare story on 6x increased ice melt

Reblogged from Watts Up With That:

Ice loss from Antarctica has sextupled since the 1970s, new research finds
An alarming study shows massive East Antarctic ice sheet already is a significant contributor to sea-level rise

Chris Mooney and Brady Dennis

January 14 at 3:00 PM (Washington Post)

Antarctic glaciers have been melting at an accelerating pace over the past four decades thanks to an influx of warm ocean water — a startling new finding that researchers say could mean sea levels are poised to rise more quickly than predicted in coming decades.

The Antarctic lost 40 billion tons of melting ice to the ocean each year from 1979 to 1989. That figure rose to 252 billion tons lost per year beginning in 2009, according to a study published Monday in the Proceedings of the National Academy of Sciences. That means the region is losing six times as much ice as it was four decades ago, an unprecedented pace in the era of modern measurements. (It takes about 360 billion tons of ice to produce one millimeter of global sea-level rise.)

“I don’t want to be alarmist,” said Eric Rignot, an Earth-systems scientist for the University of California at Irvine and NASA who led the work. But he said the weaknesses that researchers have detected in East Antarctica — home to the largest ice sheet on the planet — deserve deeper study.

“The places undergoing changes in Antarctica are not limited to just a couple places,” Rignot said. “They seem to be more extensive than what we thought. That, to me, seems to be reason for concern.”

The findings are the latest sign that the world could face catastrophic consequences if climate change continues unabated. In addition to more-frequent droughts, heat waves, severe storms and other extreme weather that could come with a continually warming Earth, scientists already have predicted that seas could rise nearly three feet globally by 2100 if the world does not sharply decrease its carbon output. But in recent years, there has been growing concern that the Antarctic could push that even higher.

That kind of sea-level rise would result in the inundation of island communities around the globe, devastating wildlife habitats and threatening drinking-water supplies. Global sea levels have already risen seven to eight inches since 1900.

The full drivel here


Why do I call it “drivel”? Three reasons:

1. Anything Chris Mooney writes about climate is automatically in that category, because he can’t separate his fear of doom from his writing.

2. The math doesn’t work in the context of the subheadline. Alarming? Read on.

3. Data back to 1972…where?

First, let’s get some data. Wikipedia, while biased towards alarmism in this reference, at least has the basic data.

https://en.wikipedia.org/wiki/Antarctic_ice_sheet

It covers an area of almost 14 million square kilometres (5.4 million square miles) and contains 26.5 million cubic kilometres (6,400,000 cubic miles) of ice.[2] A cubic kilometer of ice weighs approximately one metric gigaton, meaning that the ice sheet weighs 26,500,000 gigatons.

Now for the math.  

So, if the Antarctic ice sheet weighs 26,500,000 gigatonnes, that’s 26,500,000,000,000,000 tonnes

252 billion tonnes is 252 gigatonnes

Really simple math says: 252 Gt / 26,500,000 Gt x 100 = 9.509433962264151e-4, or about 0.00095% change per year

But this is such a tiny loss in comparison to the total mass of the ice sheet, it’s microscopic…statistically insignificant.

In the email thread that preceded this story (h/t to Marc Morano) I asked people to check my work. Willis Eschenbach responded, corrected an extra zero, and pointed this out:

Thanks, Anthony. One small issue. You’ve got an extra zero in your percentage, should be 0.00095% per year loss.

Which means that the last ice will melt in the year 3079 …

I would also note that 250 billion tonnes of ice is 250 billion cubic meters. Spread out over the ocean, that adds about 0.7 mm/year to the sea level … that’s about 3 inches (7 cm) per century.

As you said … microscopic.

w.
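
For reference, here’s that back-of-envelope arithmetic as a short R sketch. The ocean area of roughly 361 million square kilometres is a standard round figure, not something taken from the article:

ice_sheet_gt <- 26.5e6        # Antarctic ice sheet mass, gigatonnes
loss_gt_yr   <- 252           # reported loss, gigatonnes per year
ocean_m2     <- 3.61e14       # global ocean surface area, square metres

loss_gt_yr / ice_sheet_gt * 100      # fraction lost per year: ~0.00095%
loss_gt_yr * 1e9 / ocean_m2 * 1000   # sea level rise: ~0.7 mm per year
                                     # (1 Gt of melt water = 1e9 cubic metres)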

Paul Homewood noted in the email thread:

Ice losses from Antarctica have tripled since 2012, increasing global sea levels by 0.12 inch (3 millimeters) in that timeframe alone, according to a major new international climate assessment funded by NASA and ESA (European Space Agency).

https://climate.nasa.gov/news/2749/ramp-up-in-antarctic-ice-loss-speeds-sea-level-rise/

0.5mm per year.

Not a lot to worry about.

“They attribute the threefold increase in ice loss from the continent since 2012 to a combination of increased rates of ice melt in West Antarctica and the Antarctic Peninsula, and reduced growth of the East Antarctic ice sheet.”

Translation: The volcano-riddled West/Peninsula is melting a bit more, and the Eastern Sheet is growing a little less than usual.

Paul Homewood adds on his website:

Firstly, according to NASA’s own press release, the study only looks at data since 1992. The Mail’s headline (Taken from the Washington Post – Anthony) that “Antarctica is losing SIX TIMES more ice a year than it was in the 1970s “ is totally fake, as there is no data for the 1970s. Any estimates of ice loss in the 1970s and 80s are pure guesswork, and have never been part of this NASA IMBIE study, or previous ones.


Secondly, the period since 1992 is a ridiculously short period on which to base any meaningful conclusions at all. Changes over the period may well be due to natural, short term fluctuations, for instance ocean cycles. We know, as the NASA study states, that ice loss in West Antarctica is mainly due to the inflow of warmer seas.

The eruption of Pinatubo in 1991 is another factor. Global temperatures fell during the next five years, and may well have slowed down ice melt.

Either way, Pinkstone’s claim that the ice loss is due to global warming is fake. It is a change in ocean current that is responsible, and nothing to do with global warming.

Then there is his pathetic claim that “Antarctica is shedding ice at a staggering rate”. Alarmist scientists, and gullible reporters, love to quote impressive sounding numbers, like 252 gigatons a year. In fact, as NASA point out, the effect on sea level rise since 1992 is a mere 7.6mm, equivalent to 30mm/century.

Given that global sea levels have risen no faster since 1992 than they did in the mid 20thC, there is no evidence that Antarctica is losing ice any faster than then. To call it staggering is infantile.

NASA also reckon that ice losses from Antarctica between 2012 and 2017 increased sea levels by 3mm, equivalent to 60mm/century. Again hardly a scary figure. But again we must be very careful about drawing conclusions from such a short period of time. Since 2012, we have had a record 2-year long El Nino. What effect has this had?

But back to that previous NASA study, carried out by Jay Zwally in 2015, which found:

A new NASA study says that an increase in Antarctic snow accumulation that began 10,000 years ago is currently adding enough ice to the continent to outweigh the increased losses from its thinning glaciers.

The research challenges the conclusions of other studies, including the Intergovernmental Panel on Climate Change’s (IPCC) 2013 report, which says that Antarctica is overall losing land ice.

According to the new analysis of satellite data, the Antarctic ice sheet showed a net gain of 112 billion tons of ice a year from 1992 to 2001. That net gain slowed   to 82 billion tons of ice per year between 2003 and 2008.

https://www.nasa.gov/feature/goddard/nasa-study-mass-gains-of-antarctic-ice-sheet-greater-than-losses 

Far from losing ice, as the new study thinks, Zwally’s 2015 analysis found the opposite, that the ice sheet was growing.

OK, Zwally’s data only went up to 2008, but there are still huge differences. Whereas Zwally estimates ice gain of between 82 and 112 billion tonnes a year between 1992 and 2008, the new effort guesses at a loss of 83 billion tonnes a year.

It is worth pointing out that Zwally’s comment about the IPCC 2013 report refers to the 2012 IMBIE report, which was the forerunner to the new study, the 2018 IMBIE.

Quite simply, nobody has the faintest idea whether the ice cap is growing or shrinking, never mind by how much, as the error margins and uncertainties are so huge.

The best guide to such matters comes from tide gauges around the world. And these continue to show that sea levels are rising no faster than in the mid 20thC, and at a rate of around 8 inches per century.

ARGO—Fit for Purpose?

Reblogged from Watts Up With That:

By Rud Istvan

This is the second of two guest posts on whether ‘big’ climate science missions are fit for purpose, inspired by ctm as seaside lunch speculations.

The first post dealt with whether satellite altimetry, specifically NASA’s newest Jason3 ‘bird’, was fit for sea level rise (SLR) ‘acceleration’ purpose. It found using NASA’s own Jason3 specs that Jason3 (and so also its predecessors) likely was NOT fit–and never could have been–despite its SLR data being reported by NASA to 0.1mm/yr. We already knew that annual SLR is low single digit millimeters. The reasons satellite altimetry cannot provide that level of precision are very basic, and were known to NASA beforehand—Earth’s requisite reference ellipsoid is lumpy, oceans have varying waves, atmosphere has varying humidity—so NASA never really had a chance of achieving what they aspired to: satalt missions to measure sea level rise to fractions of a millimeter per year equivalent to tide gauges. NASA claims they can, but their specifications say they cannot. The post proved lack of fitness via overlap discrepancies between Jason2 and Jason3, plus failure of NASA SLR estimates to close.

This second related guest post asks the same question of ARGO.

Unlike Jason3, ARGO had no comparable pre-existing, tide-gauge-equivalent mission. Its novel oceanographic purposes (below) tried to measure several things ‘rigorously’ for the very first time. ‘Rigorously’ did NOT mean precisely. One, ocean heat content (OHC), was previously very inadequately estimated. OHC is much more than just sea surface temperatures (SSTs). SSTs (roughly but not really surface) were formerly measured by trade-route-dependent buckets/thermometers, or by trade-route- and ship-loading-dependent engine intake cooling water temperatures. The deeper ocean was not measured at all until inherently depth-inaccurate XBT sensors were developed for the Navy.

Whether ARGO is fit for purpose involves a complex unraveling of design intent plus many related facts. The short ARGO answer is probably yes, although OHC error bars are provably understated in ARGO based scientific literature.

For those WUWT readers wishing a deeper examination of this guest post’s summary conclusions, a treasure trove of ARGO history, implementation, and results is available at www.argo.ucsd.edu. Most of this post is either directly derived therefrom, from references found therein, or from Willis Eschenbach’s previous WUWT ARGO posts (many searchable using ARGO), with the four most relevant directly linked below.

This guest post is divided into three parts:

1. What was the ARGO design intent? Unlike simple Jason3 SLR, ARGO has a complex set of overlapping oceanographic missions.

2. What were/are the ARGO design specs relative to its missions?

3. What do facts say about ARGO multiple mission fitness?

Part 1 ARGO Intent

ARGO was intended to explore a much more complicated set of oceanography questions than Jason’s simple SLR acceleration. The ideas were developed by oceanographers at Scripps circa 1998-1999 based on a decade of previous regional ocean research, and were formulated into two intent/design documents agreed by the implementing international ARGO consortium circa 2000. There were several ARGO intended objectives. The three most explicitly relevant to this summary post were:

1. Global ocean heat climatology (OHC with intended accuracy explicitly defined as follows)

2. Ocean ‘fresh water storage’ (upper ocean rainfall salinity dilution)

3. Map of non-surface currents

All providing intended “global coverage of the upper ocean on broad spatial scales and time frames of several months or longer.”

Unlike Jason3, no simple yes/no ‘fit for purpose’ for ARGO’s multiple missions is possible. It depends on which mission over what time frame.

Part 2 ARGO Design

The international design has evolved. Initially, the design was ~3000 floats providing a random roughly 3 degree lat/lon ocean spacing, explicitly deemed sufficient spatial resolution for all ARGO intended oceanographic purposes.

There is an extensive discussion of the array’s accuracy/cost tradeoffs in the original intent/design documentation. The ARGO design “is an ongoing exercise in balancing the array’s requirements against the practical limitations imposed by technology and resources”. Varying perspectives still provided (1998-99) “consistent estimates of what is needed.” Based on previous profiling float experiments, “in approximate terms an array with spacing of a few hundred kilometers is sufficient to determine surface layer heat storage (OHC) with an accuracy of about 10 W/m2 over areas (‘pixels’) about 1000 km on a side.” Note the abouts.

The actual working float number is now about 3800. Each float was to last 4-5 years battery life; the actual is ~4.1 years. Each float was to survive at least 150 profiling cycles; this has been achieved (150 cycles*10 days per cycle/365 days per year equals 4.1 years). Each profile cycle was to be 10 days, drifting randomly at ~1000 meters ‘parking depth’ at neutral buoyancy for 9, then descending to 2000 meters to begin measuring temperature and salinity, followed by a ~6 hour rise to the surface with up to 200 additional measurement sets of pressure (giving depth), temperature, and salinity. This was originally followed by 6-12 hours on the surface transmitting data (now <2 hours using the Iridium satellite system) before sinking back to parking depth.

The basic ARGO float design remains:

[Figure: schematic of the basic ARGO float design]

And the basic ARGO profiling pattern remains:

[Figure: the ARGO 10-day profiling cycle, from 1000 metre parking depth down to 2000 metres and up to the surface]

‘Fit for purpose’ concerning OHC (via the 2000 meter temperature profile) presents two relevant questions. (1) Is 2000 meters deep enough? (2) Are the sensors accurate enough to estimate the 10W/m2 per 1000km/side ‘pixel’?

With respect to depth, there are two differently sourced yet similar ‘yes’ answers for all mission intents.

For salinity, the ARGO profile suffices. Previous oceanographic studies showed (per the ARGO source docs) that salinity is remarkably unvarying below about 750 meters depth in all oceans. This fortunately provides a natural salinity ‘calibration’ for those empirically problematic sensors.

It also means seawater density is roughly constant over about 2/3 of the profile, so pressure is a sufficient proxy for depth (and pressure can also be calibrated by measured salinity above 750 meters translated to density).

For temperature, as the following figure of typical thermocline profiles (in °F, not °C) shows, the ARGO ΔT depth profile does not depend very much on latitude, since 2000 meters (~6500 feet) reaches the approximately constant deep-ocean equilibrium temperature at all latitudes, providing another natural ARGO ‘calibration’. The 2000 meter ARGO profile was a wise intent/design choice.

[Figure: typical ocean thermocline temperature profiles (°F) at different latitudes]

Part 3 Is ARGO fit for purpose?

Some further basics are needed as background to the ARGO objectives.

When an ARGO float surfaces to transmit its data, its position is ascertained via GPS to within about 100 meters. Given the vastness of the oceans, that is an overly precise position measurement for ‘broad spatial scales’ of deep current drift and 1,000,000 km2 OHC/salinity ‘pixels’.

Thanks to salinity stability below 750 meters, ARGO ‘salinity corrected’ instruments are accurate (after float specific corrections) to ±0.01psu, giving reasonable estimates of ‘fresh water storage’. A comparison of 350 retrieved ‘dead battery’ ARGO floats showed that 9% were still out of ‘corrected’ salinity calibration at end of life, unavoidably increasing salinity error a little.

The remaining big ‘sufficient accuracy’ question is OHC, and issues like Trenberth’s infamous “Missing Heat”, covered in the eponymous essay in the ebook Blowing Smoke. OHC is a very tricky sensor question, since the vast heat capacity of ocean water means a very large change in ocean heat storage translates into a very small change in absolute seawater temperature.

How good are the ARGO temperature sensors? On the surface, it might seem to depend, since as an international consortium, ARGO does not have one float design. There are presently five: Provor, Apex, Solo, S2A, and Navis.

However, those 5 only ever embodied two temperature sensors, FS1 and SBE. Turns out—even better for accuracy—FS1 was retired late in 2006 when JPL’s Willis published the first ARGO OHC analysis after full (3000 float) deployment, finding (over too short a time frame, IMO) OHC was decreasing (!). Oops! Further climate science analysis purportedly showed FS1 temperature profiles in a few hundred of the early ARGO floats were probably erroneous. Those floats were taken out of service, leaving just SBE sensors. All five ARGO float designs use current model SBE38 from 2015.

SeaBirdScientific builds that sensor, and its specs can be found at www.seabird.com. The SeaBird SBE 38 sensor spec is the following (sorry, but it doesn’t copy well from their website where all docs are in a funky form of pdf probably intended to prevent partial duplication like for this post).

Measurement Range: -5 to +35 °C
Initial Accuracy [1]: ± 0.001 °C (1 mK)
Typical Stability: 0.001 °C (1 mK) in six months, certified
Resolution: (did not copy cleanly from the spec sheet)
Response Time [2]: 500 msec
Self-heating Error: < 200 μK

[1] NIST-traceable calibration applying over the entire range.
[2] Time to reach 63% of final value following a step change in temperature.

That is a surprisingly good seawater temperature sensor: accurate to a NIST-calibrated 0.001°C, with a certified temperature drift of ±0.001°C per six months (1/8 of a float lifetime). UCSD says in its ARGO FAQs that the ARGO temperature data it provides is accurate to ±0.002°C. That suffices for the intended OHC accuracy of about 10 W/m2 per 1,000,000 km2 ARGO ‘pixel’.
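
As a rough cross-check (mine, not from the ARGO design documents), here is what the stated ±0.002°C accuracy amounts to in heat-flux terms for a 0-2000 metre column, using round assumed seawater values for density and specific heat; sampling coverage, not sensor accuracy, is therefore the thing to watch:

depth_m <- 2000
rho     <- 1025              # kg/m3, approximate seawater density
cp      <- 4000              # J/kg/K, approximate seawater specific heat
dT      <- 0.002             # °C, the stated ARGO temperature accuracy
secs_yr <- 365.25 * 24 * 3600

(depth_m * rho * cp * dT) / secs_yr   # ~0.5 W/m2 over a year, well under the ~10 W/m2 target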

BUT, there is still a major ‘fit for purpose’ problem despite all of ARGO’s strong positives. Climate papers based on ARGO habitually understate the actual resulting OHC uncertainty of about 10 W/m2. (Judith Curry has called this one form of her ‘uncertainty monster’.) Willis Eschenbach has posted extensively here at WUWT (over a dozen guest posts already) on ARGO and its findings. His four posts most relevant to this ‘fit for purpose’ uncertainty question, from 2012-2015, are linked below; WE kindly provided the links via email, and they need no further explanation:


Decimals of Precision

An Ocean of Overconfidence

More Ocean-Sized Errors In Levitus Et Al.

Can We Tell If The Oceans Are Warming

And so we can conclude concerning the ARGO ‘fit for purpose’ question, yes it probably is—but only if ARGO based science papers also correctly provide the associated ARGO intent uncertainty (error bars) for ‘rigorous albeit broad spatial resolution’.

Ocean Heat Content Surprises

Here are Dr. Curry’s summarizing comments:

JC reflections

After reading all of these papers, I would have to conclude that if the CMIP5 historical simulations are matching the ‘observations’ of ocean heat content, then I would say that they are getting the ‘right’ answer for the wrong reasons. Notwithstanding the Cheng et al. paper, the ‘right’ answer (in terms of magnitude of the OHC increase) is still highly uncertain.

The most striking findings from these papers are:

  • the oceans appear to have absorbed as much heat in the early 20th century as in recent decades (stay tuned for a forthcoming blog post on the early 20th century warming)
  • historical model simulations are biased toward overestimating ocean heat uptake when initialized at equilibrium during the Little Ice Age
  • the implied heat loss in the deep ocean since 1750  offsets one-fourth of the global heat gain in the upper ocean.
  • cooling below 2000 m offsets more than one-third of the heat gain above 2000 m.
  • the deep Pacific cooling trend leads to a downward revision of heat absorbed over the 20th century by about 30 percent.
  • an estimated 20% contribution by geothermal forcing to overall global ocean warming over the past two decades.
  • we do not properly understand the centennial to millennia ocean warming patterns, mainly due to a limited understanding of circulation and mixing changes

These findings have implications for:

  • the steric component of sea level rise
  • ocean heat uptake in energy balance estimates of equilibrium climate sensitivity
  • how we initialize global climate models for historical simulations

While each of these papers mentions error bars or uncertainty, in all but the Cheng et al. paper, significant structural uncertainties in the method are discussed. In terms of uncertainties, these papers illustrate numerous different methods of estimating of 20th century ocean heat content.  A much more careful assessment needs to be done than was done by Cheng et al., that includes these new estimates and for a longer period of time (back to 1900), to understand the early 20th century warming.

In an article about the Cheng et al. paper at Inside Climate News, Gavin Schmidt made the following statement:

“The biggest takeaway is that these are things that we predicted as a community 30 years ago,” Schmidt said. “And as we’ve understood the system more and as our data has become more refined and our methodologies more complete, what we’re finding is that, yes, we did know what we were talking about 30 years ago, and we still know what we’re talking about now.”

Sometimes I think we knew more of what we were talking about 30 years ago (circa the time of the IPCC FAR, in 1990) than we do now: “it aint what you don’t know that gets you in trouble. It’s what you know for sure that just aint so.”

The NASA GISS crowd (including Gavin) is addicted to the ‘CO2 as climate control knob’ idea.  I have argued that CO2 is NOT a climate control knob on sub millennial time scales, owing to the long time scales of the deep ocean circulations.

A talking point for ‘skeptics’ has been ‘the warming is caused by coming out of the Little Ice Age.’   The control knob afficionadoes then respond ‘but what’s the forcing.’  No forcing necessary; just the deep ocean circulation doing its job.  Yes, additional CO2 will result in warmer surface temperatures, but arguing that 100% (or more) of the warming since 1950 is caused by AGW completely neglects what is going on in the oceans.

=========

[Hifast Note: The comment thread at Dr. Curry’s Climate Etc. is essential reading as well.]

 

Climate Etc.

by Judith Curry

There have been several interesting papers on ocean heat content published in recent weeks, with some very important implications.


A Small Margin Of Error

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I see that Zeke Hausfather and others are claiming that 2018 is the warmest year on record for the ocean down to a depth of 2,000 metres. Here’s Zeke’s claim:

Figure 1. Change in ocean heat content, 1955 – 2018. Data available from the Institute of Atmospheric Physics (IAP).

When I saw that graph in Zeke’s tweet, my bad-number detector started flashing bright red. What I found suspicious was that the confidence intervals seemed far too small. Not only that, but the graph is measured in a unit that is meaningless to most everyone. Hmmm …

Now, the units in this graph are “zettajoules”, abbreviated ZJ. A zettajoule is a billion trillion joules, or 1E+21 joules. I wanted to convert this to a more familiar number, which is degrees Celsius (°C). So I had to calculate how many zettajoules it takes to raise the temperature of the top two kilometres of the ocean by 1°C.

I go over the math in the endnotes, but suffice it to say at this point that it takes about twenty-six hundred zettajoules to raise the temperature of the top two kilometres of the ocean by 1°C. 2,600 ZJ per degree.
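
Here’s a rough sketch of that conversion in R. The volume of the 0-2,000 metre layer (about 0.65 billion cubic kilometres, since a fair chunk of the ocean is shallower than 2,000 metres) and the seawater properties are round assumed values, not the exact numbers from my endnotes:

vol_m3 <- 0.65e9 * 1e9        # ~0.65 billion km^3 of seawater, in cubic metres
rho    <- 1025                # kg/m^3, approximate seawater density
cp     <- 4000                # J/kg/K, approximate seawater specific heat

zj_per_degC <- vol_m3 * rho * cp / 1e21   # zettajoules to warm that layer by 1°C
zj_per_degC                               # ~2,700 ZJ, in line with the ~2,600 above

95 / zj_per_degC                          # the 1955 error of ±95 ZJ -> ~0.04°C
9  / zj_per_degC                          # the 2018 error of ±9 ZJ  -> ~0.003°C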

Now, look at Figure 1 again. They claim that their error back in 1955 is plus or minus ninety-five zettajoules … and that converts to ± 0.04°C. Four hundredths of one degree celsius … right …

Call me crazy, but I do NOT believe that we know the 1955 temperature of the top two kilometres of the ocean to within plus or minus four hundredths of one degree.

It gets worse. By the year 2018, they are claiming that the error bar is on the order of plus or minus nine zettajoules … which is three thousandths of one degree C. That’s 0.003°C. Get real! Ask any process engineer—determining the average temperature of a typical swimming pool to within three thousandths of a degree would require a dozen thermometers or more …

The claim is that they can achieve this degree of accuracy because of the ARGO floats. These are floats that drift down deep in the ocean. Every ten days they rise slowly to the surface, sampling temperatures as they go. At present, well, three days ago, there were 3,835 Argo floats in operation.

Figure 2. Distribution of all Argo floats which were active as of January 8, 2019.

Looks pretty dense-packed in this graphic, doesn’t it? Maybe not a couple dozen thermometers per swimming pool, but dense … however,  in fact, that’s only one Argo float for every 93,500 square km (36,000 square miles) of ocean. That’s a box that’s 300 km (190 miles) on a side and two km (1.2 miles) deep … containing one thermometer.

Here’s the underlying problem with their error estimate. As the number of observations goes up, the error bar decreases by one divided by the square root of the number of observations. And that means if we want to get one more decimal in our error, we have to have a hundred times the number of data points.

For example, if we get an error of say a tenth of a degree C from ten observations, then if we want to reduce the error to a hundredth of a degree C we need one thousand observations …
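
Here’s a minimal sketch of that square-root law in R, using simulated measurements with a standard deviation of one degree (an arbitrary choice, just to show the scaling):

set.seed(42)
for (n in c(10, 1000)) {
  # standard deviation of the mean of n measurements, estimated from 5000 trials
  means <- replicate(5000, mean(rnorm(n, mean = 15, sd = 1)))
  cat(n, "observations: standard error ~", signif(sd(means), 2), "\n")
}
# going from 10 to 1,000 observations (100 times the data) cuts the error by a factor of ten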

And the same is true in reverse. So let’s assume that their error estimate of ± 0.003°C for 2018 data is correct, and it’s due to the excellent coverage of the 3,835 Argo floats.

That would mean that we would have an error of ten times that, ± 0.03°C if there were only 38 ARGO floats …

Sorry. Not believing it. Thirty-eight thermometers, each taking three vertical temperature profiles per month, to measure the temperature of the top two kilometers of the entire global ocean to plus or minus three hundredths of a degree?

My bad number detector was still going off. So I decided to do a type of “Monte Carlo” analysis. Named after the famous casino, a Monte Carlo analysis implies that you are using random data in an analysis to see if your answer is reasonable.

In this case, what I did was to get gridded 1° latitude by 1° longitude data for ocean temperatures at various depths down to 2000 metres from the Levitus World Ocean Atlas. It contains the long-term average temperature for each month, at each depth, in each gridcell. Then I calculated the global average for each month from the surface down to 2000 metres.

Now, there are 33,713 1°x1° gridcells with ocean data. (I excluded the areas poleward of the Arctic/Antarctic Circles, as there are almost no Argo floats there.) And there are 3,825 Argo floats. On average some 5% of them are in a common gridcell. So the Argo floats are sampling on the order of ten percent of the gridcells … meaning that despite having lots of Argo floats, still at any given time, 90% of the 1°x1° ocean gridcells are not sampled. Just sayin …

To see what difference this might make, I did repeated runs by choosing 3,825 ocean gridcells at random. I then ran the same analysis as before—get the averages at depth, and then calculate the global average temperature month by month for just those gridcells. Here’s a map of typical random locations for simulated Argo locations for one run.

Figure 3. Typical simulated distribution of Argo floats for one run of Monte Carlo Analysis.
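
Here’s a minimal sketch of the resampling idea, not the exact code used for the figures. It assumes cell_temp is a matrix of the Levitus climatological 0-2000 metre column-average temperatures, one row per ocean gridcell and one column per month, and cell_area holds each gridcell’s area (1°x1° cells shrink toward the poles, so a simple area weighting is included):

n_cells  <- nrow(cell_temp)
n_floats <- 3825

one_run <- function() {
  idx <- sample(n_cells, n_floats)       # pick random "Argo" gridcells
  # area-weighted global mean for each month, using only the sampled cells
  colSums(cell_temp[idx, ] * cell_area[idx]) / sum(cell_area[idx])
}

runs <- replicate(1000, one_run())       # 1000 simulated Argo networks
full <- colSums(cell_temp * cell_area) / sum(cell_area)   # full-coverage average
quantile(runs - full, c(0.025, 0.975))   # spread of the sampling error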

And in the event, I found what I suspected I’d find. Their claimed accuracy is not borne out by experiment. Figure 4 shows the results of a typical run. The 95% confidence interval for the results varied from 0.05°C to 0.1°C.

Figure 4. Typical run, average global ocean temperature 0-2,000 metres depth, from Levitus World Ocean Atlas (red dots) and from 3,825 simulated Argo locations. White “whisker” lines show the 95% confidence interval (95%CI). For this run, the 95%CI was 0.07°C. Small white whisker line at bottom center shows the claimed 2018 95%CI of ± 0.003°C.

As you can see, using the simulated Argo locations gives an answer that is quite close to the actual temperature average. Monthly averages are within a tenth of a degree of the actual average … but because the Argo floats only measure about 10% of the 1°x1° ocean gridcells, that is still more than an order of magnitude larger than the claimed 2018 95% confidence interval for the IAP data shown in Figure 1.

So I guess my bad number detector must still be working …

Finally, Zeke says that the ocean temperature in 2018 exceeds that in 2017 by “a comfortable margin”. But in fact, it is warmer by only 8 zettajoules … which is less than the claimed 2018 error. So no, that is not a “comfortable margin”. It’s well within even their unbelievably small claimed error, which they say is ± 9 zettajoules for 2018.

In closing, please don’t rag on Zeke about this. He’s one of the good guys, and all of us are wrong at times. As I myself have proven more often than I care to think about, the American scientist Lewis Thomas was totally correct when he said, “We are built to make mistakes, coded for error”

Best regards to everyone,

w.

Greenland Is Way Cool

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

As a result of a tweet by Steve McIntyre, I was made aware of an interesting dataset. This is a look by Vinther et al. at the last ~12,000 years of temperatures on the Greenland ice cap. The dataset is available here.

Figure 1 shows the full length of the data, along with the change in summer insolation at 75°N, the general location of the ice cores used to create the temperature dataset.

Figure 1. Temperature anomalies of the Greenland ice sheet (left scale, yellow/black line), and the summer insolation in watts per square metre at 75°N (right scale, blue/black line). The red horizontal dashed line shows the average ice sheet temperature 1960-1980.

I’ll only say a few things about each of the graphs in this post. Regarding Figure 1, the insolation swing shown above is about fifty watts per square metre. Over the period in question, the temperature dropped about two and a half degrees from the peak in about 5800 BCE. That would mean the change is on the order of 0.05°C for each watt per square metre change in insolation …
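Spelled out, that figure is just the ratio of the two numbers read off the graph above:

    insolation_swing = 50.0    # W/m², approximate swing in 75°N summer insolation
    temperature_drop = 2.5     # °C, drop from the peak around 5800 BCE

    sensitivity = temperature_drop / insolation_swing
    print(f"{sensitivity:.2f} °C per W/m²")   # prints: 0.05 °C per W/m²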

From about 8300 BCE to 800 BCE, the average temperature of the ice sheet, not the maximum temperature but the average temperature of the ice sheet, was greater than the 1960-1980 average temperature of the ice sheet. That’s 7,500 years of the Holocene when Greenland’s ice sheet was warmer than recent temperatures.

Next, Figure 2 shows the same temperature data as in Figure 1, but this time with the EPICA Dome C ice core CO2 data.

Figure 2. Temperature anomalies of the Greenland ice sheet (left scale, yellow/black line), and EPICA Dome C ice core CO2 data, 9000 BCE – 1515 AD (right scale, blue/black line)

Hmmm … for about 7,000 years, CO2 is going up … and Greenland temperature is going down … who knew?

Finally, here’s the recent Vinther data:

Figure 3. Recent temperature anomalies of the Greenland ice sheet.

Not a whole lot to say about that except that the Greenland ice sheet has been as warm or warmer than the 1960-1980 average a number of times during the last 2000 years.

I also took a look to see whether there were any solar-related or other strong cycles in the Vinther data. Neither a Fourier periodogram nor a CEEMD analysis revealed any significant cycles.
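By way of illustration, a periodogram of that sort takes only a few lines. This is a sketch rather than my actual analysis: it uses a made-up stand-in series in place of the Vinther reconstruction and assumes the data are evenly spaced multi-decadal means.

    import numpy as np
    from scipy.signal import periodogram

    # Stand-in for the ~12,000-year reconstruction: 20-year means, noise only.
    # Swap in the real Vinther temperature anomalies to repeat the test.
    dt = 20.0                              # years between samples
    rng = np.random.default_rng(0)
    temps = rng.normal(0.0, 0.5, size=600)

    freqs, power = periodogram(temps - temps.mean(), fs=1.0 / dt)

    # List the strongest few periods (in years), skipping the zero frequency.
    top = np.argsort(power[1:])[::-1][:5] + 1
    for i in top:
        print(f"period ~ {1.0 / freqs[i]:,.0f} years, power {power[i]:.3g}")

With the real series, a significant cycle would show up as a peak standing well clear of the background.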

And that’s the story of the Vinther reconstruction … here, we’ve had lovely rain for a couple of days now. Our cat wanders the house looking for the door into summer. He goes out time after time hoping for a different outcome … and he is back in ten minutes, wanting to be let in again.

My best to all, rain or shine,

w.

Hansen’s 1988 Predictions Redux

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

Over in the Tweeterverse, someone sent me the link to the revered climate scientist James Hansen’s 1988 Senate testimony and told me “Here’s what we were told 30 years ago by NASA scientist James Hansen. It has proven accurate.”

I thought … huh? Can that be right?

Here is a photo of His Most Righteousness, Dr. James “Death Train” Hansen, getting arrested for civil disobedience in support of climate alarmism …

I have to confess, I find myself guilty of schadenfreude in noting that he’s being arrested by … Officer Green …

In any case, let me take as my text for this sermon the aforementioned 1988 Epistle of St. James To The Senators, available here. I show the relevant part below, his temperature forecast.

ORIGINAL CAPTION: Fig. 3. Annual mean global surface air temperature computed for trace gas scenarios A, B, and C described in reference 1. [Scenario A assumes continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1 emission growth; scenario B has emission rates approximately fixed at current rates; scenario C drastically reduces trace gas emissions between 1990 and 2000.] The shaded range is an estimate of global temperature during the peak of the current and previous interglacial periods, about 6,000 and 120,000 years before present, respectively. The zero point for observations is the 1951-1980 mean (reference 6); the zero point for the model is the control run mean.

I was interested in “Scenario A”, which Hansen defined as what would happen assuming “continued growth rates of trace gas emissions typical of the past 20 years, i.e., about 1.5% yr^-1”.

To see how well Scenario A fits the period after 1987, which is when Hansen’s observational data ends, I took a look at the rate of growth of CO2 emissions since 1987. Figure 2 shows that graph.

Figure 2. Annual increase in CO2 emissions, percent.

This shows that Hansen’s estimate of future CO2 emissions was quite close, although in reality the annual increase was ~25% MORE than Hansen estimated. As a result, a model run using the actual emissions would have shown a bit more warming than the Scenario A curve in Figure 1 above.
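The comparison itself is simple arithmetic. The sketch below uses made-up endpoint values purely to show the calculation; plug in actual global CO2 emissions for 1987 and the latest year (for example from the Global Carbon Project) to get the real answer.

    scenario_a_growth = 1.5          # %/yr, Hansen's Scenario A assumption

    # Made-up emissions values, thirty years apart, for illustration only.
    e_start, e_end = 100.0, 175.0
    years = 30

    observed_growth = 100.0 * ((e_end / e_start) ** (1.0 / years) - 1.0)
    print(f"Observed mean growth: {observed_growth:.2f} %/yr")
    print(f"Relative to Scenario A: "
          f"{100.0 * (observed_growth / scenario_a_growth - 1.0):+.0f}%")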

Next, I digitized Hansen’s graph to compare it to reality. To start with, here is what is listed as “Observations” in Hansen’s graph. I’ve compared Hansen’s observations to the Goddard Institute for Space Studies Land-Ocean Temperature Index (GISS LOTI) and the HadCRUT global surface temperature datasets.

Figure 3. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with modern temperature estimates. All data is expressed as anomalies about the 1951-1980 mean temperature.
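Putting the modern datasets on Hansen's baseline is straightforward. Here's a minimal sketch, assuming you have an annual-mean temperature series indexed by year (the series name in the usage line is hypothetical):

    import pandas as pd

    def to_1951_1980_anomalies(annual: pd.Series) -> pd.Series:
        """Re-express an annual temperature series (indexed by year) as
        anomalies about its 1951-1980 mean, matching Hansen's zero point."""
        baseline = annual.loc[1951:1980].mean()
        return annual - baseline

    # anomalies = to_1951_1980_anomalies(giss_loti_annual)   # hypothetical series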

OK, so now we have established that:

• Hansen’s “Scenario A” estimate of future growth in CO2 emissions was close, albeit a bit low, and

• Hansen’s historical temperature observations agree reasonably well with modern estimates.

Given that he was pretty accurate in all of that, albeit a bit low on CO2 emissions growth … how did his Scenario A prediction work out?

Well … not so well …

Figure 4. The line marked “Observations” in Hansen’s graph shown as Figure 1 above, along with his Scenario A, and modern temperature estimates. All observational data is expressed as anomalies about the 1951-1980 mean temperature.

So I mentioned this rather substantial miss, with predicted warming roughly twice the actual warming, to the man on the Twitter-Totter, the one who’d said that Hansen’s prediction had been “proven accurate”.

His reply?

He said that Dr. Hansen’s prediction was indeed proven accurate—he’d merely used the wrong value for the climate sensitivity, viz: “The only discrepancy in Hansen’s work from 1988 was his estimate of climate sensitivity. Using best current estimates, it plots out perfectly.”

I loved the part about “best current estimates” of climate sensitivity … here are current estimates, from my post on The Picasso Problem:

Figure 5. Changes over time in the estimate of the climate sensitivity parameter “lambda”. “∆T2x(°C)” is the expected temperature change in degrees Celsius resulting from a doubling of atmospheric CO2, which is assumed to increase the forcing by 3.7 watts per square metre. FAR, SAR, TAR, AR4, and AR5 are the first through fifth UN IPCC Assessment Reports, each giving an assessment of the state of climate science as of its publication date. Red dots show recent individual estimates of the climate sensitivity.
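For reference, the relationship the caption describes is simply that the expected warming for a doubling of CO2 is lambda times the 3.7 W/m² of forcing. A tiny sketch, with lambda values chosen only for illustration:

    FORCING_2XCO2 = 3.7          # W/m² per doubling of CO2, as in the caption

    def delta_t2x(lam: float) -> float:
        """Expected warming (°C) for a CO2 doubling, given lambda in °C per W/m²."""
        return lam * FORCING_2XCO2

    for lam in (0.5, 0.8, 1.2):  # illustrative values of lambda
        print(f"lambda = {lam:.1f} °C per W/m²  ->  ∆T2x = {delta_t2x(lam):.1f} °C")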

While giving the Tweeterman zero points for accuracy, I did have to applaud him for sheer effrontery and imaginuity. It’s a perfect example of why it is so hard to convince climate alarmists of anything—because to them, everything is a confirmation of their ideas. Whether it is too hot, too cold, too much snow, too little snow, warm winters, brutal winters, or disproven predictions—to the alarmists all of these are clear and obvious signs of the impending Thermageddon, as foretold in the Revelations of St. James of Hansen.

My best to you all, the beat goes on, keep fighting the good fight.

w.