Half of 21st Century Warming Due to El Nino

Reblogged from Dr.RoySpencer.com  [HiFast bold]

May 13th, 2019 by Roy W. Spencer, Ph. D.

A major uncertainty in figuring out how much of recent warming has been human-caused is knowing how much nature has caused. The IPCC is quite sure that nature is responsible for less than half of the warming since the mid-1900s, but politicians, activists, and various green energy pundits go even further, behaving as if warming is 100% human-caused.

The fact is we really don’t understand the causes of natural climate change on the time scale of an individual lifetime, although theories abound. For example, there is plenty of evidence that the Little Ice Age was real, and so some of the warming over the last 150 years (especially prior to 1940) was natural — but how much?

The answer makes a huge difference to energy policy. If global warming is only 50% as large as is predicted by the IPCC (which would make it only 20% of the problem portrayed by the media and politicians), then the immense cost of renewable energy can be avoided until we have new cost-competitive energy technologies.

The recently published paper Recent Global Warming as Confirmed by AIRS used 15 years of infrared satellite data to obtain a rather strong global surface warming trend of +0.24 C/decade. I and others have raised objections to that study (e.g. here), not least that the 2003-2017 period it addressed had a record-warm El Nino near the end (2015-16), which means the computed warming trend over that period is not entirely human-caused warming.

If we look at the warming over the 19-year period 2000-2018, we see the record El Nino event during 2015-16 (all monthly anomalies are relative to the 2001-2017 average seasonal cycle):

Fig. 1. 21st Century global-average temperature trends (top) averaged across all CMIP5 climate models (gray), HadCRUT4 observations (green), and UAH tropospheric temperature (purple). The Multivariate ENSO Index (MEI, bottom) shows the upward trend in El Nino activity over the same period, which causes a natural enhancement of the observed warming trend.

We also see that the average of all of the CMIP5 models’ surface temperature trend projections (in which natural variability in the many models is averaged out) has a warmer trend than the observations, despite the trend-enhancing effect of the 2015-16 El Nino event.

So, how much of an influence did that warm event have on the computed trends? The simplest way to address that is to use only the data before that event. To be somewhat objective about it, we can take the period over which there is no trend in El Nino (and La Nina) activity, which happens to be 2000 through June, 2015 (15.5 years):

Fig. 2. As in Fig. 1, but for the 15.5 year period 2000 to June 2015, which is the period over which there was no trend in El Nino and La Nina activity.

Note that the observed trend in HadCRUT4 surface temperatures is nearly cut in half compared to the CMIP5 model average warming over the same period, and the UAH tropospheric temperature trend is almost zero.

One might wonder why the UAH LT trend is so low for this period, even though in Fig. 1 it is not that far below the surface temperature observations (+0.12 C/decade versus +0.16 C/decade for the full period through 2018). So, I examined the RSS version of LT for 2000 through June 2015, which had a +0.10 C/decade trend. For a more apples-to-apples comparison, the CMIP5 surface-to-500 hPa layer average temperature averaged across all models is +0.20 C/decade, so even RSS LT (which usually has a warmer trend than UAH LT) has only one-half the warming trend as the average CMIP5 model during this period.
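For context, the trends quoted here (in C/decade) are just ordinary least-squares slopes fit to monthly anomaly series. A minimal sketch in Python, using a synthetic series rather than the actual HadCRUT4, UAH, or RSS data:

```python
import numpy as np

def trend_per_decade(monthly_anoms):
    """OLS slope of a monthly anomaly series, returned in degrees C per decade."""
    t_years = np.arange(len(monthly_anoms)) / 12.0   # time axis in years
    slope_per_year = np.polyfit(t_years, monthly_anoms, 1)[0]
    return slope_per_year * 10.0
```

Fitting the same function to two sub-periods (say, 2000 through June 2015 versus 2000 through 2018) is all it takes to see how much a warm event near the end of a record inflates the computed trend.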

So, once again, we see that the observed rate of warming — when we ignore the natural fluctuations in the climate system (which, along with severe weather events, dominate “climate change” news) — is only about one-half of that projected by climate models at this point in the 21st Century. This fraction is consistent with the global energy budget study of Lewis & Curry (2018), which analyzed 100 years of global temperatures and ocean heat content changes, and also found that the climate system is only about 1/2 as sensitive to increasing CO2 as climate models assume.

It will be interesting to see if the new climate model assessment (CMIP6) produces warming more in line with the observations. From what I have heard so far, this appears unlikely. If history is any guide, this means the observations will continue to need adjustments to fit the models, rather than the other way around.


Chinese UHI study finds 0.34C/century inflation effect on average temperature estimate.

Tallbloke's Talkshop

New study published by Springer today makes interesting reading. Phil Jones’ ears will be burning brightly.

Abstract:
Historical temperature records are often partially biased by the urban heat island (UHI) effect. However, the exact magnitude of these biases is an ongoing, controversial scientific question, especially in regions like China where urbanization has greatly increased in recent decades. Previous studies have mainly used statistical information and selected static population targets, or urban areas in a particular year, to classify urban-rural stations and estimate the influence of urbanization on observed warming trends. However, there is a lack of consideration for the dynamic processes of urbanization. The Beijing-Tianjin-Hebei (BTH), Yangtze River Delta (YRD), and Pearl River Delta (PRD) are three major urban agglomerations in China which were selected to investigate the spatiotemporal heterogeneity of urban expansion effects on observed warming trends in this study. Based on remote sensing (RS) data, urban area expansion…


What’s Wrong With The Surface Temperature Record? Guest: Dr. Roger Pielke Sr.

Reblogged from Watts Up With That:

Dr. Roger Pielke Sr. joins [Anthony Watts] on a podcast to discuss the surface temperature record, the upcoming IPCC report, and climate science moving forward.

Dr. Roger Pielke Sr. explains how the Intergovernmental Panel on Climate Change (IPCC) is incorrectly explaining climate change to the media and public. Pielke highlights how the IPCC ignores numerous drivers of climate aside from CO2, leading to numerous factual inaccuracies in the IPCC reports.

Climate monitoring station in a parking lot at University of Arizona, Tucson

We also cover what is wrong with the surface temperature record – specifically why many temperature readings are higher than the actual temperature.


Pielke is currently a Senior Research Scientist at CIRES and a Senior Research Associate in the Department of Atmospheric and Oceanic Sciences (ATOC) at the University of Colorado Boulder (November 2005 to present). He is also an Emeritus Professor of Atmospheric Science at Colorado State University.

BIG NEWS – Verified by NOAA – Poor Weather Station Siting Leads To Artificial Long Term Warming

Sierra Foothill Commentary

Based on the data collected for the Surface Station Project and analysis papers describing the results, my friend Anthony Watts has been saying for years that “surface temperature measurements (and long term trends) have been affected by encroachment of urbanization on the placement of weather stations used to measure surface air temperature, and track long term climate.”

When Ellen and I traveled across the country in the RV we visited weather stations in the historical weather network and took photos of the temperature measurement stations and the surrounding environments.

Now, NOAA has validated Anthony’s findings: weather station siting can influence the long-term surface temperature record. Here are some samples taken by other volunteers:

Detroit Lakes USHCN station.

Impacts of Small-Scale Urban Encroachment on Air Temperature Observations

Ronald D. Leeper, John Kochendorfer, Timothy Henderson, and Michael A. Palecki
https://journals.ametsoc.org/doi/10.1175/JAMC-D-19-0002.1

Abstract

A field experiment was performed in Oak Ridge, TN, with four…


Bramston Reef Corals – The Other Side of the Mud Flat

Reblogged from Watts Up With That:


Reposted from Jennifer Marohasy’s blog

May 6, 2019 By jennifer

THE first finding handed down by Judge Salvatore Vasta in the Peter Ridd court case concerned Bramston reef off Bowen, and a photograph taken in 1994 that Terry Hughes from James Cook University has been claiming proves Acropora corals that were alive in 1890 are now all dead — the fringing reef reduced to mudflat.

Meanwhile, Peter Ridd from the same university, had photographs taken in 2015 showing live Acropora and the need for quality assurance of Hughes’ claims.

Both sides were preparing evidence for over a year – with the lawyers apparently pocketing in excess of one million dollars – yet there was no interest in an independent assessment of the state of Bramston reef.

It more than once crossed my mind that, with all the money floating around for reef research and lawyers, there could perhaps be some mapping, or just one transect, at this most contentious of locations, supposedly indicative of the state of the Great Barrier Reef more generally.

In his judgment, Judge Salvatore Vasta was left to simply conclude that it was unclear whether there was now mudflat or coral reef where an extensive area of Acropora coral had been photographed back in 1890, but that Peter Ridd nevertheless had the right to ask the question.

Indeed, the court case and the appeal, which must be lodged by tomorrow (Tuesday 7th May), are apparently all about ‘academic freedom’ and ‘employment law’, while the average Australian would perhaps be more likely to care if they got to see some coral and some fish – dead or alive.

I visited Bramston Reef over Easter because I couldn’t wait any longer to know if the corals in Peter Ridd’s 2015 photographs had been smashed by Cyclone Debbie that hovered over Bowen two years later, in April 2017.

As I drove into Bowen, I took a detour towards Edgecombe Bay, but I didn’t stop and explore – because I saw the signage warning of crocodiles.

Peter Ridd had told me that his technicians had approached from the south south-east in a rubber dinghy to get their photographs. The day I arrived (April 18, 2019), and the next, there was a strong south south-easterly wind blowing, and no-one prepared to launch a boat to take me out.

On the afternoon of Easter Friday – ignoring the signage warning of crocodiles – I walked through the mangroves to the water’s edge. I found the mudflat which Terry Hughes had claimed now covers once-healthy Acropora coral and walked across it. On the other side of the mudflat there was reef flat with beds of healthy Halimeda. This area of reef flat over sand extended for nearly one kilometre before it gave way to hectares of Acropora coral.

Professor Hughes had just not walked far enough.

When, with much excitement, I showed my photographs of all the Acropora to a Bowen local, he described them as “rubbish corals”. He seemed ashamed that the corals I had photographed at Bramston reef were not colourful.

For a coral to make the front cover of National Geographic it does need to be exceptionally colourful. Indeed, for a woman model to make the cover of Vogue magazine she needs to be exceptionally thin. But neither thin, nor colourful, is necessarily healthy. Indeed, Acropora corals are generally tan or brown in colour when they have masses of zooxanthellae and are thus growing quickly – and are healthy.

White corals have no zooxanthellae and are often dead, because they have been exposed to temperatures that are too high. Colourful corals, like thin women, are more nutrient starved and often exist in environments of intense illumination – existing near the limits of what might be considered healthy.

Such basic facts are not well understood. Instead there is an obsession with saving the Great Barrier Reef from imminent catastrophe while we are either shown pictures of bleached white dead coral, or spectacularly colourful corals from outer reefs in nutrient-starved waters … while thousands of square kilometres of healthy brown coral is ignored.

Peter Ridd did win his high-profile court case for the right to suggest there is a need for some quality assurance of the research – but I can’t see anyone getting on with this. The Science Show on our National Broadcaster, hosted by a most acclaimed scientist journalist, has reported on the case just this last weekend. Rather than launching a dinghy and having a look at Bramston Reef, Robyn Williams has replayed part of a 2008 interview with Peter Ridd, and let it be concluded that because Peter Ridd holds a minority view he is likely wrong.

Understanding the real state of the Great Barrier Reef is not a trivial question: it has implications for tourism, and the allocation of billions of dollars of public monies … with most currently allocated to those properly networked – but not necessarily knowledgeable or prepared to walk beyond a mudflat to find the corals.

Signage warning of crocodiles.

Photographs of the Acropora out of the water were taken about here.

There is a mudflat to the west of Bramston Reef.

That mudflat is teeming with life, as expected in an intertidal zone.

This Porites coral is a healthy tan colour.

After the mudflat there was reef flat, with coarse sand and lots of Halimeda: all healthy, and typical of an inner Great Barrier Reef.

Halimeda is a green macroalga; it was healthy.

Acropora corals with a view to Gloucester Island.

I did find one bleached coral.

Most of the Acropora was a healthy brown colour, suggesting good growth rather than beauty.

There were also corals to the south east.

Looking across to Gloucester Island, in front of the mangroves when the tide was in, early on 19 April.

Looking towards Gloucester Island, the day before.

To be sure to know when I post pictures at this blog, and to get the latest news regarding the Peter Ridd court case including the possible appeal by James Cook University, subscribe for my irregular email updates.

Curious Correlations

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I got to thinking about the relationship between the Equatorial Pacific, where we find the El Nino/La Nina phenomenon, and the rest of the world. I’ve seen various claims about what happens to the temperature in various places at various lag-times after the Nino/Nina changes. So I decided to take a look.

To do that, I’ve gotten the temperature of the NINO34 region of the Equatorial Pacific. The NINO34 region stretches from 90°W, near South America, out to 170° West in the mid-Pacific, and from 5° North to 5° South of the Equator. I’ve calculated how well correlated that temperature is with the temperatures in the whole world, at various time lags.

To start with, here’s the correlation of what the temperature of the NINO34 region is doing with what the rest of the world is doing, with no time lag. Figure 1 shows which areas of the planet move in step with or in opposition to the NINO34 region with no lag.

Figure 1. Correlation of the temperature of the NINO34 region (90°-170°W, 5°N/S) with gridcell temperatures of the rest of the globe. Correlation values greater than 0.6 are all shown in red.

Now, perfect correlation is where two variables move in total lockstep. It has a value of 1.0. And if there is perfect anti-correlation, meaning whenever one variable moves up the other moves down, that has a value of minus 1.0.

There are a couple of interesting points about that first look, showing correlations with no lag. The Indian Ocean moves very strongly in harmony with the NINO34 region (red). Hmmm. However, the Atlantic doesn’t do that. Again hmmm. Also, on average northern hemisphere land is positively correlated with the NINO34 region (orange), and southern hemisphere land is the opposite, negatively correlated (blue).

Next, with a one-month lag to give the Nino/Nina effects time to start spreading around the planet, we see the following:

Figure 2. As in Figure 1, but with a one month lag between the NINO34 temperature and the rest of the world. In other words, we’re comparing each month’s temperature with the previous month’s NINO34 temperature.
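The lag machinery here is simple: shift each gridcell series later in time by the lag and take a Pearson correlation against NINO34 for every cell. A minimal sketch, assuming hypothetical `nino34` and `grid` anomaly arrays rather than the actual dataset behind these maps:

```python
import numpy as np

def lagged_correlation(nino34, grid, lag):
    """Correlate each gridcell's monthly anomalies with the NINO34
    anomaly `lag` months earlier.

    nino34 : (t,) array of NINO34 monthly anomalies
    grid   : (t, ncells) array of gridcell monthly anomalies
    lag    : non-negative integer; months by which NINO34 leads
    """
    x = nino34[: len(nino34) - lag] if lag else nino34   # leading series
    y = grid[lag:]                                       # lagged responses
    x = x - x.mean()
    y = y - y.mean(axis=0)
    # one Pearson r per gridcell, each in [-1, 1]
    return (x @ y) / (np.sqrt((x ** 2).sum()) * np.sqrt((y ** 2).sum(axis=0)))
```

Mapping the resulting vector of correlations back onto the gridcell locations, with a red/blue color scale, reproduces the kind of figure shown above.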

Here, after a month, the North Pacific and the North Atlantic both start to feel the effects. Their correlation switches from negative (blues and greens) to positive (red-orange). Next, here’s the situation after a two-month lag.

Figure 3. As in previous figures, but with a two month lag.

I found this result most surprising. Two months after a Nino/Nina change, the entire Northern Hemisphere strongly tends to move in the same direction as the NINO34 region moved two months earlier … and at the same time, the entire Southern Hemisphere moves in opposition to what the NINO34 region did two months earlier.

Hmmm …

And here’s the three-month lag:

Figure 4. As in previous figures, but with a three month lag.

An interesting feature of the above figure is that the good correlation of the north-eastern Pacific Ocean off the west coast of North America does not extend over the continent itself.

Finally, after four months, the hemispherical pattern begins to fall apart.

Figure 5. As in previous figures, but with a four & five month lag.

Even at five months, curious patterns remain. In the northern hemisphere, the land is all negatively correlated with NINO34, and the ocean is positively correlated. But in the southern hemisphere, the land is all positively correlated and the ocean negative.

Note that this hemispheric land-ocean difference with a five-month lag is the exact opposite of the land-ocean difference with no lag shown in Figure 1.

Now … what do I make of all this?

The first thing that it brings up for me is the astounding complexity of the climate system. I mean, who would have guessed that the two hemispheres would have totally opposite strong responses to the Nino/Nina phenomenon? And who would have predicted that the land and the ocean would react in opposite directions to the Nino/Nina changes right up to the very coastlines?

Second, it would seem to offer some ability to improve long-range forecasting for certain specific areas. Positive correlation with Hawaii, North Australia, Southern Africa, and Brazil is good up to four-five months out.

Finally, it strikes me that I can run this in reverse. By that, I mean I can find all areas of the planet that are able to predict the future temperature at some pre-selected location. Like, say, what areas of the globe correlate well with whatever the UK will be doing two months from now?

Hmmm indeed …

Warmest regards to all, the mysteries of this wondrous world are endless.

w.

Willis’ Favorite Airport

Reblogged from Watts Up With That:

By Steven Mosher,

AC Osborn made an interesting comment about airports that will give me an opportunity to do two things: Pay tribute to Willis for inspiring me and give you all a few more details about airports and GHCN v4 stations. Think of this as a brief but necessary sideline before returning to the investigation of how many stations in GHCNv4 are “ruralish” or “urbanish”. In his comments AC was most interested in how placement at airports would bias the records and my response was that he was talking about microsite and I would get to that eventually. Also a few other folks had some questions about microsite versus LCZ, so let’s start with a super simple diagram.

fig01

We can define microsite bias as any disturbance/encroachment at the site location which biases the measurement up or down within the “footprint” of the sensor. For a thermometer at 1.5 meters, this range varies from a few meters in unstable conditions to hundreds of meters in stable conditions. In the recent NOAA study, they found bias up to 50 meters away from a disturbance. I’ve drawn this as the red circle, but in practice, depending on the prevailing wind, it is an ellipse. The NOAA experiment (more on that in a future post) put sensors at 4 m, 50 m, and 124 m from a building and found:

The mean urban bias for these conditions quickly dropped from 0.84 °C at tower-A (4 m) to 0.55 and 0.01 °C at towers-B′ and -C located 50 and 124 meters from the small-scale built environment. Despite a mean urban signal near 0.9 °C at tower-A, the mean urban biases were not statistically significant given the magnitude of the towers’ standard deviations: 0.44, 0.40, 0.37, and 0.31 °C for tower-A, -B, -B′, and -C respectively.

Although the result was not statistically significant, they still recommend precaution and suggest that the first 100 m of a site be free of encroachments. In field experiments on the effect of roads on air temperature measured at 1.5 m, a bias of 0.1 °C was found as far as 10 m away from roads. At airports this distance should probably be increased: where a runway is 50 m or more wide, the effect the asphalt has on the air temperature is roughly 1.2 °C at the edge of the runway and diminishes to ~0.1 °C by 150 m away from the runway (Kinoshita, N. (2014). An Evaluation Method of the Effect of the Observation Environment on Air Temperature Measurement. Boundary-Layer Meteorology). Exercising even more caution, I’ve extended this out to 500 m, although it should be noted that this could classify good sites as “bad” sites and reduce differences in a good/bad comparison. Obviously, this range can be tested by sensitivity analysis.
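To put those two runway numbers on a curve: if one assumes the warm bias decays exponentially with distance (the decay shape is my assumption; Kinoshita is quoted here only for the endpoint values), the two figures pin down the e-folding length:

```python
import math

# Illustrative fit to the two quoted values: ~1.2 C at the runway
# edge and ~0.1 C at 150 m away.
B0, B150, D = 1.2, 0.1, 150.0
L = D / math.log(B0 / B150)   # e-folding length, roughly 60 m

def runway_bias(d_m):
    """Assumed exponential decay of the runway warm bias with distance (m)."""
    return B0 * math.exp(-d_m / L)
```

Under that assumed shape the bias is far below 0.01 °C by 500 m, which is why the 500 m buffer described above is an extra-cautious choice.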

Outside the red circle I’ve depicted the “Local Climate Zone”. Per Oke/Stewart this region can extend for kilometers. In simple terms you can think of two kinds of biases: those that arise from the immediate vicinity within the view of the sensor and have a direct impact on the sensor, and those that are outside the view of the sensor and act indirectly (say, that tall set of buildings 800 m away that disturbs the natural airflow to the site). In the previous post, we were discussing the local scale; this is the scale at which we would term the bias “UHI.”

There is another source of bias, from far-away areas, and I will cover that in another post. For now, we will use airports to understand the difference between these two scales. Let’s do that by merely picturing some extremes in our mind: an airport in Hong Kong, and an airport on a small island in the middle of the ocean. Both airports might have microsite bias, but the Hong Kong temperature would be influenced by the urban local climate zone with its artificial ground cover, while the island airport is surrounded by nonurban ocean, with no UHI. Simplistically, the total bias at a site might be seen as a combination of micro bias, local bias, and distant bias.

There are, logically, six conditions we can outline:

Rural (natural):    no micro bias | warm micro bias | cool micro bias
Urban (artificial): no micro bias | warm micro bias | cool micro bias

It is important to remember that micro disturbances can bias in both directions, cooling by shading for example. And note that logically you could find a well sited site in an urban location. This was hypothesized by Peterson long ago:

“In a recent talk at the World Meteorological Organization, T. Oke (2001, personal communication) stated that there has been considerable advancement in the understanding of urban climatology in the last 15 years. He went on to say that urban heat islands should be considered on three different scales. First, there is the mesoscale of the whole city. The second is the local scale on the order of the size of a park. And the third scale is the microscale of the garden and buildings near the meteorological observing site. Of the three scales, the microscale and local-scale effects generally are larger than mesoscale effects….

Gallo et al. (1996) examined the effect of land use/land cover on observed diurnal temperature range, and the results support the notion that microscale influences of land use/land cover are stronger than mesoscale. A metadata survey provided land use information in three radii: 100 m, 1 km, and 10 km. The analysis found that the strongest effect of differences in land use/land cover was for the 100-m radius, while the land use/land cover effect ‘‘remains present even at 10,000 m….

Recent research by Spronken-Smith and Oke (1998) also concluded that there was a marked park cool island effect within the UHI. They report that under ideal conditions the park cool island can be greater than 5 °C, though in midlatitude cities they are typically 1–2 °C. In the cities studied, the nocturnal cooling in parks is often similar to that of rural areas. They reported that the thermal influence of parks on air temperatures appears to be restricted to a distance of about one park width….

Park cool islands are not the only potential mitigating factor for in situ urban temperature observations. Oceans and large lakes can have a significant influence on the temperature of nearby land stations whether the station is rural or urban. The stations used in this analysis that were within 2 km of the shore of a large body of water disproportionally tended to be urban (5.8% of urban were coastal versus 2.4% of rural).

Looking at airports will also help cement the difference between the micro scale and the LCZ in your thinking. With that in mind, we will turn to airports and look at various pictures to understand the difference between the micro and the local: the nearby city, or the nearby ocean or field.

First, a few details about airports. In my metadata I have airports classified as small, medium, and large.

First, the small: some are paved. Pixels (30m) detected as artificial surface are colored orange:

clip_image004

Some are dirt

clip_image006

Now large airports

clip_image008

We will get to medium, but first a few other airports by water. Here is a 10 km look; the blue dot is the station, and the red squares are 30-meter urban cover:

clip_image010

Zooming in

clip_image012

The medium airport I chose was one of Willis’ favorite airports, discussed in this post. Before we get to that visual, I encourage you all to read that post, because it put me on a six-year journey. Willis is rather rare among those who question climate science: he does his own work, and he raises interesting, testable questions. He doesn’t merely speculate; he looks and reads and does actual work. He raised two points I want to highlight:

Many of the siting problems have nothing to do with proximity to an urban area.

Instead, many of them have everything to do with proximity to jet planes, or to air conditioner exhaust, or to the back of a single house in a big field, or to being located over a patch of gravel.

And sadly, even with a map averaged on a 500 metre grid, there’s no way to determine those things.

And that’s why I didn’t expect they would find any difference … because their division into categories has little to do with the actual freedom of the station from human influences on the temperature. Urban vs Rural is not the issue. The real dichotomy is Well Sited vs Poorly Sited.

It is for this reason that I think that the “Urban Heat Island” or UHI is very poorly named. I’ve been agitating for a while to call it the LHI, for the “Local Heat Island”. It’s not essentially urban in nature. It doesn’t matter what’s causing the local heat island, whether it’s shelter from the wind as the trees grow up or proximity to a barbecue pit.

Nor does the local heat island have to be large. A thermometer sitting above a small patch of gravel will show a very different temperature response from one just a short distance away in a grassy field. The local heat island only needs to be big enough to contain the thermometer; one air conditioner exhaust is plenty, as is a jet exhaust.

I think we both agree that the micro, what he calls local, is important. However, the area outside the immediate vicinity cannot be discounted: Hong Kong airport, next to a huge city, is going to be influenced by that locale, whereas a large airport (see above) on an island next to the sea is arguably not going to be biased as much.

The second point Willis made was about the problems with 500-meter data, in particular the MODIS classification system, which required multiple adjacent pixels before a pixel was classified as urban. At that time we did not have a world database at 30 m; today we can look at that station and calculate the artificial area using 30 m data. The next four images show the site at various scales: 500 m, 1000 m, 5000 m, and lastly 10000 m. At the microscale (<500 m) it is classified as greater than 10% artificial, at 1 km greater than 10% artificial, and at 5 km and 10 km less than 10% artificial.

clip_image014clip_image016clip_image018clip_image020

There were some concerns about the temperature at this station being used. However, there has never been enough data from this station to include in any global series, even Berkeley’s. Nevertheless, it lets us see the kind of improvements that can be made now that higher-resolution data is available for the entire world. Also, even when airports are included in the data analysis, the bias can in some cases be reduced; here a 2 °C bias is removed.

One last small airport, to give you some idea of the data that we can produce today:

clip_image022

AC Osborn also wanted to know just how many airports are in GHCN v4, and I think it’s safe to say that many skeptics believe the record is dominated by airport stations. Well, is it? We can count them and see. For this count I will use 1 km as a distance cutoff. There are a couple of ways to determine whether a station is at an airport. The least accurate is to look at the names of the stations, which misses a large number of airports. Instead, I use GPS coordinates compiled for over 55,000 airports worldwide, including small airports, heliports, balloon ports, and seaplane ports. I then calculate the distance between all ~27,000 stations and the 55,000 airports and select the closest airport, then cross-check against those stations in GHCN whose name indicates an airport.
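The matching step described here needs nothing more than a great-circle distance and a minimum over the airport list. A sketch of the idea, with hypothetical coordinate tuples standing in for the actual GHCN and airport metadata:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_airport(station, airports):
    """Return (airport, distance_km) for the airport closest to a station.
    station: (lat, lon); airports: iterable of (name, lat, lon) tuples."""
    return min(
        ((a, haversine_km(station[0], station[1], a[1], a[2])) for a in airports),
        key=lambda pair: pair[1],
    )
```

A brute-force 27,000 × 55,000 scan is on the order of 1.5 billion distance evaluations; in practice one would bin airports by latitude/longitude or use a spatial index, but the brute-force version is exact and easy to verify.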

For this we consider a station within 1 km to be “at an airport”. While this is farther than the microsite boundary, the point of the exercise is to illustrate that not all stations are at airports.

Using 1 km as a cutoff, I find there are 1,129 stations near small airports, 1,830 near medium airports, and 267 near large airports, out of a total of ~27,000 stations.

To assess the ability of the 30 m data to detect airport runways and other artificial surfaces, we can look at the stations that are within 500 meters of a large airport and ask: does our 30 m data show artificial surface? There are 131 stations within 500 m of a large airport. We know that no sensor data/image classification system is perfect, but we can see that in the aggregate the 30 m data performs well.

clip_image024

We can also ask how many large airports are embedded in Local Climate Zones that have less than 10% artificial cover out to 10 km. As expected, large airports sit in local areas that are also built up at levels above 10%. You don't get large airports where there are no people.

clip_image026

Conversely, small airports tend to be embedded in local zones that are not heavily built out, with only a few cases of small airports in Local Climate Zones that are heavily built out.

clip_image028

Summary

Here are the points that I would like to emphasize.

1. We can distinguish at least two types/sources of bias: the close and immediate (microsite) sources, and those more distant.

2. Bias at the short range (micro) can be more important than bias at the long range.

3. A good site can be embedded in a “bad” area or “good” area, similarly for a bad site.

4. 30m data is better than 500m data

5. Skeptics should not argue that all the sites or a majority are at airports. They are not.

6. There are different types of airports.

7. One way to tell if there is a bias is by comparing Airports with Non airports.

Climate data shows no recent warming in Antarctica, instead a slight cooling

Reblogged from Watts Up With That:

Below is a plot from a resource we have not used before on WUWT, "RIMFROST". It depicts the average temperatures for all weather stations in Antarctica. Note the recent cooling, in contrast to the steady warming since about 1959.

Data and plot provided by http://rimfrost.no 

Contrast that with claims by Michael Mann, Eric Steig, and others who used statistical tricks to make Antarctica warm up. Fortunately, their work wasn't just challenged by climate skeptics; it was rebutted in peer review too.

Data provided by http://rimfrost.no 

H/T to Kjell Arne Høyvik‏  on Twitter

ADDED:

No warming has occurred at the South Pole from 1978 to 2019, according to satellite data (UAH v6). The linear trend is flat!

The Cooling Rains

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I took another ramble through the Tropical Rainfall Measurement Mission (TRMM) satellite-measured rainfall data. Figure 1 shows a Pacific-centered and an Atlantic-centered view of the average rainfall from the end of 1997 to the start of 2015 as measured by the TRMM satellite.

Figure 1. Average rainfall, meters per year, on a 1° latitude by 1° longitude basis. The area covered by the satellite data, forty degrees north and south of the Equator, is just under 2/3 of the globe. The blue areas by the Equator mark the InterTropical Convergence Zone (ITCZ). The two black horizontal dashed lines mark the Tropics of Cancer and Capricorn, the lines showing how far north and south the sun travels each year (23.45°, for those interested).

There’s lots of interesting stuff in those two graphs. I was surprised by how much of the planet in general, and the ocean in particular, is bright red, meaning it gets less than half a meter (20″) of rain per year.

I was also intrigued by how narrowly the rainfall is concentrated at the average Inter-Tropical Convergence Zone (ITCZ). The ITCZ is where the two great global hemispheres of the atmospheric circulation meet near the Equator. In the Pacific and Atlantic on average the ITCZ is just above the Equator, and in the Indian Ocean, it’s just below the Equator. However, that’s just on average. Sometimes in the Pacific, the ITCZ is below the Equator. You can see kind of a mirror image as a light orange horizontal area just below the Equator.

Here’s an idealized view of the global circulation. On the left-hand edge of the globe, I’ve drawn a cross section through the atmosphere, showing the circulation of the great atmospheric cells.

Figure 2. Generalized overview of planetary atmospheric circulation. At the ITCZ along the Equator, tall thunderstorms take warm surface air, strip out the moisture as rain, and drive the warm dry air vertically. This warm dry air eventually subsides somewhere around 25-30°N and 25-30°S of the Equator, creating the global desert belts at around those latitudes.

The ITCZ is shown in cross-section at the left edge of the globe in Figure 2. You can see the general tropical circulation. Surface air in both hemispheres moves towards the Equator. It is warmed there and rises. This thermal circulation is greatly sped up by air driven vertically at high rates of speed through the tall thunderstorm towers. These thunderstorms form all along the ITCZ. These thunderstorms provide much of the mechanical energy that drives the atmospheric circulation of the Hadley cells.

With all of that as prologue, here’s what I looked at. I got to thinking, was there a trend in the rainfall? Is it getting wetter or drier? So I looked at that using the TRMM data. Figure 3 shows the annual change in rainfall, in millimeters per year, on a 1° latitude by 1° longitude basis.

Figure 3. Annual change in the rainfall, 1° latitude x 1° longitude gridcells.

I note that the increase in rain is greater over the ocean than over land, is greatest at the ITCZ, and is generally greater in the tropics.

Why is this overall trend in rainfall of interest? It gives us a way to calculate how much this cools the surface. Remember the old saying, what comes down must go up … or perhaps it’s the other way around, same thing. If it rains an extra millimeter of water, somewhere it must have evaporated an extra millimeter of water.

And in the same way that our bodies are cooled by evaporation, the surface of the planet is also cooled by evaporation.

Now, we note above that on average, the increase is 1.33 millimeters of water per year. Metric is nice because depth, volume, and mass are simply related. Here’s a great example.

One millimeter of rain falling on one square meter of the surface is one liter of water which is one kilo of water. Nice, huh?
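That identity can be checked with a line of arithmetic (a trivial sketch, assuming fresh water at roughly 1 kg per liter):

```python
# 1 mm of rain over 1 m²: depth (m) × area (m²) = volume (m³)
volume_m3 = 0.001 * 1.0       # 0.001 m³
litres = volume_m3 * 1000.0   # 1 m³ = 1000 L, so 1 L
kg = litres * 1.0             # fresh water: ~1 kg per liter
```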

So the extra 1.33 millimeters of rain per year is equal to 1.33 extra liters of water evaporated per square meter of surface area.

Next, how much energy does it take to evaporate that extra 1.33 liters of water per square meter so it can come down as rain? The calculations are in the endnotes. It turns out that this 1.33 extra liters per year represents an additional cooling of a tenth of a watt per square meter (0.10 W/m2).

And how does this compare to the warming from increased longwave radiation due to the additional CO2? Well, again, the calculations are in the endnotes. The answer is, per the IPCC calculations, CO2 alone over the period gave a yearly increase in downwelling radiation of ~ 0.03 W/m2. Generally, they double that number to allow for other greenhouse gases (GHGs), so for purposes of discussion, we’ll call it 0.06 W/m2 per year.

So over the period of this record, we have increased evaporative cooling of 0.10 W/m2 per year, and we have increased radiative warming from GHGs of 0.06 W/m2 per year.

Which means that over that period and that area at least, the calculated increase in warming radiation from GHGs was more than counterbalanced by the observed increase in surface cooling from increased evaporation.
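The comparison can be reproduced directly from the numbers in the endnotes (a sketch in Python rather than the endnotes' R; the latent-heat value, CO2 concentrations, and 3.7 W/m2-per-doubling figure are all taken from the endnotes below):

```python
import math

# Evaporative cooling: an extra 1.33 kg of water evaporated per m² per year
latent_heat = 2.441369e6        # J/kg at 35 psu, 24 °C (endnote value)
secs_per_year = 365.25 * 86400  # 31,557,600 s
evap_cooling = latent_heat * 1.33 / secs_per_year  # W/m², year-average

# GHG warming: 3.7 W/m² per CO2 doubling, Dec 1997 (364.38 ppmv)
# to Mar 2015 (401.54 ppmv), i.e. 17.33 years
co2_forcing = 3.7 * math.log2(401.54 / 364.38) / 17.33  # W/m² per yr, CO2 only
ghg_forcing = 2 * co2_forcing   # rough doubling for other GHGs

# evap_cooling ≈ 0.10 W/m², ghg_forcing ≈ 0.06 W/m²
```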

Regards to all,

w.

As usual: please quote the exact words you are discussing so we can all understand exactly what and who you are replying to.

Additional Cooling

Finally, note that this calculation is only evaporative cooling. There are other cooling mechanisms at work that are related to rainstorms. These include:

• Increased cloud albedo reflecting hundreds of watts per square meter of sunshine back to space.

• Moving surface air to the upper troposphere, where it is above most GHGs and freer to cool to space.

• Increased ocean surface albedo from whitecaps, foam, and spume.

• Cold rain falling from a layer of the troposphere that is much cooler than the surface.

• Rain re-evaporating as it falls, cooling the atmosphere.

• Cold wind entrained by the rain blowing outwards at surface level, cooling surrounding areas.

• Dry descending air between rain cells and thunderstorms allowing increased longwave radiation to space.

Together, these form a very strong temperature-regulating mechanism that prevents overheating of the planet.

Calculation of the energy required to evaporate 1.33 liters of water (R, using the gsw package):

# latent heat of evaporation, joules/kg, at salinity 35 psu, temperature 24°C

> latevap = gsw_latentheat_evap_t( 35, 24 ) ; latevap

[1] 2441369

# joules/yr/m2 required to evaporate 1.33 liters/yr/m2

> evapj = latevap * 1.33 ; evapj

[1] 3247021

# convert joules/yr/m2 to W/m2 (secsperyear = 365.25 * 86400 = 31,557,600 s)

> evapwm2 = evapj / secsperyear ; evapwm2

[1] 0.1028941

Note: the exact answer varies with seawater temperature, salinity, and density, but these only make a difference of a couple of percent (say 0.1043 vs 0.1028941). I’ve used average values.

Calculation of the downwelling radiation change from the CO2 increase (coshort holds the monthly CO2 record over the TRMM period):

# starting CO2, ppmv, Dec 1997

> thestart = as.double( coshort[1] ) ; thestart

[1] 364.38

# ending CO2, ppmv, Mar 2015

> theend = as.double( last( coshort )) ; theend

[1] 401.54

# longwave increase, W/m2 per year, over 17 years 4 months (3.7 W/m2 per doubling)

> 3.7 * log( theend / thestart, 2 ) / 17.33

[1] 0.0299117

Fake climate science and scientists

Reblogged from Watts Up With That:

Alarmists game the system to enrich and empower themselves, and hurt everyone else

by Paul Driessen

The multi-colored placard in front of a $2-million home in North Center Chicago proudly proclaimed, “In this house we believe: No human is illegal” – and “Science is real” (plus a few other liberal mantras).

I knew right away where the owners stood on climate change, and other hot-button political issues. They would likely tolerate no dissension or debate on “settled” climate science or any of the other topics.

But they have it exactly backward on the science issue. Real science is not belief – or consensus, 97% or otherwise. Real science constantly asks questions, expresses skepticism, reexamines hypotheses and evidence. If debate, skepticism and empirical evidence are prohibited – it’s pseudo-science, at best.

Real science – and real scientists – seek to understand natural phenomena and processes. They pose hypotheses that they think best explain what they have witnessed, then test them against actual evidence, observations and experimental data. If the hypotheses (and predictions based on them) are borne out by their subsequent findings, the hypotheses become theories, rules, laws of nature – at least until someone finds new evidence that pokes holes in their assessments, or devises better explanations.

Real science does not involve simply declaring that you “believe” something. It’s not immutable doctrine. It doesn’t claim “science is real” – or demand that a particular scientific explanation be carved in stone. Earth-centric concepts gave way to a sun-centered solar system. Miasma disease beliefs surrendered to the germ theory. The certainty that continents are locked in place was replaced by plate tectonics (and the realization that you can’t stop continental drift, any more than you can stop climate change).

Real scientists often employ computers to analyze data more quickly and accurately, depict or model complex natural systems, or forecast future events or conditions. But they test their models against real-world evidence. If the models, observations and predictions don’t match up, real scientists modify or discard the models, and the hypotheses behind them. They engage in robust discussion and debate.

They don’t let models or hypotheses become substitutes for real-world evidence and observations. They don’t alter or “homogenize” raw or historic data to make it look like the models actually work. They don’t hide their data and computer algorithms (AlGoreRythms?), restrict peer review to closed circles of like-minded colleagues who protect one another’s reputations and funding, claim “the debate is over,” or try to silence anyone who dares to ask inconvenient questions or find fault with their claims and models. They don’t concoct hockey stick temperature graphs that can be replicated by plugging in random numbers.

In the realm contemplated by the Chicago yard sign, we ought to be doing all we can to understand Earth’s highly complex, largely chaotic, frequently changing climate system – all we can to figure out how the sun and other powerful forces interact with each other. Only in that way can we accurately predict future climate changes, prepare for them, and not waste money and resources chasing goblins.

But instead, we have people in white lab coats masquerading as real scientists. They’re doing what I just explained true scientists don’t do. They also ignore fluctuations in solar energy output and numerous other powerful, interconnected natural forces that have driven climate change throughout Earth’s history. They look only (or 97% of the time) at carbon dioxide as the principal or sole driving force behind current and future climate changes – and blame every weather event, fire and walrus death on manmade CO2.

Even worse, they let their biases drive their research and use their pseudo-science to justify demands that we eliminate all fossil fuel use, and all carbon dioxide and methane emissions, by little more than a decade from now. Otherwise, they claim, we will bring unprecedented cataclysms to people and planet.

Not surprisingly, their bad behavior is applauded, funded and employed by politicians, environmentalists, journalists, celebrities, corporate executives, billionaires and others who have their own axes to grind, their own egos to inflate – and their intense desire to profit from climate alarmism and pseudo-science.

Worst of all, while they get rich and famous, their immoral actions impoverish billions and kill millions, by depriving them of the affordable, reliable fossil fuel energy that powers modern societies.

And still these slippery characters endlessly repeat the tired trope that they “believe in science” – and anyone who doesn’t agree to “keep fossil fuels in the ground” to stop climate change is a “science denier.”

When these folks and the yard sign crowd brandish the term “science,” political analyst Robert Tracinski suggests, it is primarily to “provide a badge of tribal identity” – while ironically demonstrating that they have no real understanding of or interest in “the guiding principles of actual science.”

Genuine climate scientist (and former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology) Dr. Judith Curry echoes Tracinski. Politicians like Senator Elizabeth Warren use “science” as a way of “declaring belief in a proposition which is outside their knowledge and which they do not understand…. The purpose of the trope is to bypass any meaningful discussion of these separate questions, rolling them all into one package deal – and one political party ticket,” she explains.

The ultimate purpose of all this, of course, is to silence the dissenting voices of evidence- and reality-based climate science, block creation of a Presidential Committee on Climate Science, and ensure that the only debate is over which actions to take first to end fossil fuel use … and upend modern economies.

The last thing fake/alarmist climate scientists want is a full-throated debate with real climate scientists – a debate that forces them to defend their doomsday assertions, methodologies, data manipulation … and claims that solar and other powerful natural forces are minuscule or irrelevant compared to manmade carbon dioxide that constitutes less than 0.02% of Earth’s atmosphere (natural CO2 adds another 0.02%).

Thankfully, there are many reasons for hope. For recognizing that we do not face a climate crisis, much less threats to our very existence. For realizing there is no need to subject ourselves to punitive carbon taxes or the misery, poverty, deprivation, disease and death that banning fossil fuels would cause.

Between the peak of the great global cooling scare in 1975 and around 1998, atmospheric carbon dioxide levels and temperatures did rise in rough conjunction. But then temperatures mostly flat-lined, while CO2 levels kept climbing. Now actual average global temperatures are already 1 degree F below the Garbage In-Garbage Out computer model predictions. Other alarmist forecasts are also out of touch with reality.

Instead of fearing rising CO2, we should thank it for making crop, forest and grassland plants grow faster and better, benefitting nature and humanity – especially in conjunction with slightly warmer temperatures that extend growing seasons, expand arable land and increase crop production.

The rate of sea level rise has not changed for over a century – and much of what alarmists attribute to climate change and rising seas is actually due to land subsidence and other factors.

Weather is not becoming more extreme. In fact, Harvey was the first Category 3-5 hurricane to make US landfall in a record 12 years – and the number of violent F3 to F5 tornadoes has fallen from an average of 56 per year from 1950 to 1985 to only 34 per year since then.

Human ingenuity and adaptability have enabled humans to survive and thrive in all sorts of climates, even during our far more primitive past. Allowed to use our brains, fossil fuels and technologies, we will deal just fine with whatever climate changes might confront us in the future. (Of course, another nature-driven Pleistocene-style glacier pulling 400 feet of water out of our oceans and crushing Northern Hemisphere forests and cities under mile-high walls of ice truly would be an existential threat to life as we know it.)

So if NYC Mayor Bill De Blasio and other egotistical grand-standing politicians and fake climate scientists want to ban fossil fuels, glass-and-steel buildings, cows and even hotdogs – in the name of preventing “dangerous manmade climate change” – let them impose their schemes on themselves and their own families. The rest of us are tired of being made guinea pigs in their fake-science experiments.

Paul Driessen is senior policy advisor for the Committee For A Constructive Tomorrow (CFACT) and author of articles and books on energy, environmental and human rights issues.