July 2019 Was Not the Warmest on Record

Reblogged from DrRoySpencer.com:

August 2nd, 2019 by Roy W. Spencer, Ph.D.

July 2019 was probably the 4th warmest of the last 41 years. Global “reanalysis” datasets need to start being used for monitoring of global surface temperatures.

[NOTE: It turns out that the WMO, which announced July 2019 as a near-record, relies upon the ERA5 reanalysis which apparently departs substantially from the CFSv2 reanalysis, making my proposed reliance on only reanalysis data for surface temperature monitoring also subject to considerable uncertainty].

We are now seeing news reports (e.g. CNN, BBC, Reuters) that July 2019 was the hottest month on record for global average surface air temperatures.

One would think that the very best data would be used to make this assessment. After all, it comes from official government sources (such as NOAA and the World Meteorological Organization [WMO]).

But current official pronouncements of global temperature records come from a fairly limited and error-prone array of thermometers which were never intended to measure global temperature trends. The global surface thermometer network has three major problems when it comes to getting global-average temperatures:

(1) The urban heat island (UHI) effect has caused a gradual warming of most land thermometer sites due to encroachment of buildings, parking lots, air conditioning units, vehicles, etc. These effects are localized, not indicative of most of the global land surface (which remains mostly rural), and not caused by increasing carbon dioxide in the atmosphere. Because UHI warming “looks like” global warming, it is difficult to remove from the data. In fact, NOAA’s efforts to make UHI-contaminated data look like rural data seem to have had the opposite effect. The best strategy would be to simply use only the best (most rural) sited thermometers. This is currently not done.

(2) Ocean temperatures are notoriously uncertain due to changing temperature measurement technologies (canvas buckets thrown overboard to get a sea surface temperature sample long ago, ship engine water intake temperatures more recently, buoys, satellite measurements only since about 1983, etc.).

(3) Both land and ocean temperatures are notoriously incomplete geographically. How does one estimate temperatures in a 1 million square mile area where no measurements exist?

There’s a better way.

A more complete picture: Global Reanalysis datasets

(If you want to ignore my explanation of why reanalysis estimates of monthly global temperatures should be trusted over official government pronouncements, skip to the next section.)

Various weather forecast centers around the world have experts who take a wide variety of data from many sources and figure out which ones have information about the weather and which ones don’t.

But, how can they know the difference? Because good data produce good weather forecasts; bad data don’t.

The data sources include surface thermometers, buoys, and ships (as do the “official” global temperature calculations), but they also add in weather balloons, commercial aircraft data, and a wide variety of satellite data sources.

Why would one use non-surface data to get better surface temperature measurements? Since surface weather affects weather conditions higher in the atmosphere (and vice versa), one can get a better estimate of global average surface temperature if one has satellite measurements of upper air temperatures on a global basis and in regions where no surface data exist. Knowing whether there is a warm or cold airmass there from satellite data is better than knowing nothing at all.

Furthermore, weather systems move. And this is the beauty of reanalysis datasets: because all of the various data sources have been thoroughly researched to see what mixture of them provides the best weather forecasts (including adjustments for possible instrumental biases and drifts over time), we know that the physical consistency of the various data inputs was also optimized.

Part of this process is making forecasts to get “data” where no data exists. Because weather systems continuously move around the world, the equations of motion, thermodynamics, and moisture can be used to estimate temperatures where no data exists by doing a “physics extrapolation” using data observed on one day in one area, then watching how those atmospheric characteristics are carried into an area with no data on the next day. This is how we knew there were going to be some exceedingly hot days in France recently: a hot Saharan air layer was forecast to move from the Sahara desert into western Europe.
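To make the idea concrete, here is a minimal Python sketch of this kind of transport-based gap filling. It is purely illustrative (a one-dimensional toy, not the CFSv2 assimilation system), and every number in it is a made-up placeholder:

```python
# Toy sketch of "physics extrapolation": an air mass observed on day 1 is
# carried by a known wind so its temperature can be estimated on day 2 at a
# location with no observations. Illustrative only; not the CFSv2 scheme.
import numpy as np

n_cells = 36                    # 1-D periodic domain, ~10 degrees longitude per cell
wind_cells_per_day = 3          # assumed uniform eastward transport

# Hypothetical day-1 analysis: a warm air layer centred near cell 10
cells = np.arange(n_cells)
day1_temp_c = 15.0 + 10.0 * np.exp(-0.5 * ((cells - 10) / 2.0) ** 2)

# Advect the whole field forward one day (pure transport, nothing else)
day2_estimate_c = np.roll(day1_temp_c, wind_cells_per_day)

unobserved_cell = 13            # a grid cell with no thermometer on day 2
print(f"Estimated day-2 temperature at cell {unobserved_cell}: "
      f"{day2_estimate_c[unobserved_cell]:.1f} C")
```

Real reanalysis systems of course do this in three dimensions with a full forecast model and many observation types, but the principle is the same: known atmospheric motion carries information into data-sparse regions.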

This kind of physics-based extrapolation (which is what weather forecasting is) is much more realistic than (for example) using land surface temperatures in July around the Arctic Ocean to simply guess temperatures out over the cold ocean water and ice where summer temperatures seldom rise much above freezing. This is actually one of the questionable techniques used (by NASA GISS) to get temperature estimates where no data exists.

If you think the reanalysis technique sounds suspect, once again I point out it is used for your daily weather forecast. We like to make fun of how poor some weather forecasts can be, but the objective evidence is that forecasts out 2-3 days are pretty accurate, and continue to improve over time.

The Reanalysis picture for July 2019

The only reanalysis data I am aware of that is available in near real time to the public is from WeatherBell.com, and comes from NOAA’s Climate Forecast System Version 2 (CFSv2).

The plot of surface temperature departures from the 1981-2010 mean for July 2019 shows a global average warmth of just over 0.3 C (0.5 deg. F) above normal:

[Figure: CFSv2 global surface temperature departures from the 1981-2010 mean, July 2019]

Note from that figure how distorted the news reporting was concerning the temporary hot spells in France, which the media reports said contributed to global-average warmth. Yes, it was unusually warm in France in July. But look at the cold in Eastern Europe and western Russia. Where was the reporting on that? How about the fact that the U.S. was, on average, below normal?

The CFSv2 reanalysis dataset goes back to only 1979, and from it we find that July 2019 was actually cooler than three other Julys: 2016, 2002, and 2017, and so was 4th warmest in 41 years. And being only 0.5 deg. F above average is not terribly alarming.
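For readers curious how such a ranking is produced, the arithmetic is simple: each July's global-mean temperature is expressed as a departure (anomaly) from the 1981-2010 July mean, and the departures are sorted. A minimal sketch, using made-up placeholder numbers rather than the actual CFSv2 values:

```python
# Rank July global-mean temperature anomalies (departures from a 1981-2010
# baseline). The numbers are illustrative placeholders, not CFSv2 output.
july_anomaly_c = {2016: 0.42, 2002: 0.37, 2017: 0.33, 2019: 0.31, 2018: 0.25}

ranked = sorted(july_anomaly_c.items(), key=lambda kv: kv[1], reverse=True)
for rank, (year, anomaly_c) in enumerate(ranked, start=1):
    print(f"{rank}. July {year}: {anomaly_c:+.2f} C ({anomaly_c * 1.8:+.2f} F)")
```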

Our UAH lower tropospheric temperature measurements had July 2019 as the third warmest, behind 1998 and 2016, at +0.38 C above normal.

Why don’t the people who track global temperatures use the reanalysis datasets?

The main limitation with the reanalysis datasets is that most only go back to 1979, though I believe at least one goes back to the 1950s. Since people who monitor global temperature trends want data as far back as possible (at least 1900 or before), they can legitimately say they want to construct their own datasets from the longest record of data: from surface thermometers.

But most warming has (arguably) occurred in the last 50 years, and if one is trying to tie global temperature to greenhouse gas emissions, the period since 1979 (the last 40+ years) seems sufficient since that is the period with the greatest greenhouse gas emissions and so when the most warming should be observed.

So, I suggest that the global reanalysis datasets be used to give a more accurate estimate of changes in global temperature for the purposes of monitoring warming trends over the last 40 years, and going forward in time. They are clearly the most physically-based datasets, having been optimized to produce the best weather forecasts, and are less prone to ad hoc fiddling with adjustments to get what the dataset provider thinks should be the answer, rather than letting the physics of the atmosphere decide.

Leftist Agenda and Climate Change Linked by Indoctrination Tactics

PA Pundits International

Why is the same age group that helped to tear down the Iron Curtain now advocating for policies that would reduce freedoms?

By Joe Bastardi

As a meteorologist in the private sector, wherein success is largely determined by forecasting skill, I cannot afford to be wrong. I was taught that studying the past helps one predict the future. This is the origin of my involvement in the climate debate, since the “worst ever” bloviating we see today can easily be challenged through examination of the past.

My politics are simple. I believe one should have as much freedom as possible to enjoy life, liberty, and pursue happiness. In my opinion, the role of government is to establish standards to maximize these freedoms. I assume no one has anything against life, liberty, and the pursuit of happiness. I also assume there is a large population of young people…

View original post 677 more words

Climate Out of Control

Science Matters

Coors Baseball Field, Denver, Colorado, April 29, 2019

Frank Miele writes at RealClearPolitics: Climate Is Unpredictable, Weather You Like It or Not!
Excerpts in italics with my bolds.

They say all politics is local; so is all weather.

So on behalf of my fellow Westerners, I have to ask: What’s up with all this cold weather? It may not be a crisis yet, but in the two weeks leading up to Memorial Day — the traditional start of summer activities — much of the country has been donning sweaters and turning up the heat.

I know, I know. Weather is not climate, and you can’t generalize from anecdotal evidence of localized weather conditions to a unified theory of thermal dynamics, but isn’t that exactly what the climate alarmists have done, on a larger scale, for the past 25 years?

Haven’t we been brainwashed by political scientists (oops! I mean…

View original post 588 more words

Arctic Sea Ice Volume 20190526

DMI has stopped producing their Ice Volume Charts. 

[Edit: DMI has embedded their Ice Volume charts in the Thickness Chart, available here:

https://i0.wp.com/ocean.dmi.dk/arctic/icethickness/images/FullSize_CICE_combine_thick_SM_EN_20190526.png ]

So here’s the PIOMAS product.

[Figure: PIOMAS Arctic sea ice volume anomaly, 26 May 2019]

A linear trend fit onto a naturally cyclical physical system is ridiculous.

So, here’s a crude hand-drawn curve fitted to their product, for a different perspective.

[Figure: PIOMAS Arctic sea ice volume anomaly, 26 May 2019, with a hand-drawn sine curve overlaid]

Be wary of linear trend lines.
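As an illustration of the point, the sketch below fits both a straight line and a sinusoid to a synthetic cyclical series; the linear "trend" it reports depends entirely on which part of the cycle the record happens to sample. The data are synthetic placeholders, not the PIOMAS record:

```python
# Fit a straight line and a sinusoid to a synthetic, naturally cyclical
# anomaly series. Synthetic data only; not PIOMAS output.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
years = np.arange(1979, 2020, 0.1)
anomaly = 3.0 * np.sin(2 * np.pi * (years - 1979) / 60.0) + rng.normal(0, 0.5, years.size)

# Straight-line fit: reports a "trend" that is really just part of a cycle
slope, intercept = np.polyfit(years, anomaly, 1)

# Sinusoidal fit (initial guesses are assumptions about the cycle)
def sine(t, amp, period_yr, t0, offset):
    return amp * np.sin(2 * np.pi * (t - t0) / period_yr) + offset

(amp, period_yr, t0, offset), _ = curve_fit(sine, years, anomaly,
                                            p0=[3.0, 60.0, 1979.0, 0.0])

print(f"Linear 'trend': {slope:+.3f} units/year")
print(f"Sinusoidal fit: amplitude {amp:.1f}, period {period_yr:.0f} years")
```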


Climate Models Have Been Predicting Too Much Warming

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood

John Christy has a new paper out arguing that climate models are predicting too much warming:


https://www.thegwpf.org/climate-models-have-been-predicting-too-much-warming/

Below is the GWPF press release:

A leading climatologist has said that the computer simulations that are used to predict global warming are failing on a key measure of the climate today, and cannot be trusted.

Speaking to a meeting in the Palace of Westminster in London, Professor John Christy of the University of Alabama in Huntsville told MPs and peers that almost all climate models have predicted rapid warming at high altitudes in the tropics.

A similar discrepancy between empirical measurements and computer predictions has been confirmed at the global level.

And Dr Christy says that lessons are not being learned:

“An early look at some of the latest generation of climate models reveals they are predicting even faster warming. This is simply not credible.”

A paper outlining Dr Christy’s…

View original post 11 more words

Models Wrong About the Past Produce Unbelievable Futures

Science Matters

Models vs. Observations. Christy and McKitrick (2018) Figure 3

The title of this post is the theme driven home by Patrick J. Michaels in his critique of the most recent US National Climate Assessment (NA4). The failure of General Circulation Models (GCMs) is the focal point of his February 14, 2018 presentation, Comments on the Fourth National Climate Assessment. Excerpts in italics with my bolds.

NA4 uses a flawed ensemble of models that dramatically overforecast warming of the lower troposphere, with even larger errors in the upper tropical troposphere. The model ensemble also could not accommodate the “pause” or “slowdown” in warming between the two large El Niños of 1997-8 and 2015-6. The distribution of warming rates within the CMIP5 ensemble is not a true indication of a statistical range of prospective warming, as it is a collection of systematic errors. Despite a glib statement about this Assessment fulfilling…

View original post 1,135 more words

Required Reading: NIPCC 2019 Summary on Fossil Fuels

Science Matters

Those who seek the truth about global warming/climate change should welcome this latest publication from the Nongovernmental International Panel on Climate Change (NIPCC). Excerpts from the Coauthors’ introduction in italics with my bolds. H/T Lubos Motl

Climate Change Reconsidered II: Fossil Fuels assesses the costs and benefits of the use of fossil fuels (principally coal, oil, and natural gas) by reviewing scientific and economic literature on organic chemistry, climate science, public health, economic history, human security, and theoretical studies based on integrated assessment models (IAMs). It is the fifth volume in the Climate Change Reconsidered series and, like the preceding volumes, it focuses on research overlooked or ignored by the United Nations’ Intergovernmental Panel on Climate Change (IPCC).

NIPCC was created by Dr. S. Fred Singer in 2003 to provide an independent peer review of the reports of the United Nations’ Intergovernmental Panel on Climate Change (IPCC). Unlike the…

View original post 500 more words

The Greenhouse Deception Explained

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood

A nice and concise video, well worth watching and circulating:

View original post

Scientific Hubris and Global Warming

Reblogged from Watts Up With That:

Scientific Hubris and Global Warming

Guest Post by Gregory Sloop

Notwithstanding portrayals in the movies as eccentrics who frantically warn humanity about genetically modified dinosaurs, aliens, and planet-killing asteroids, the popular image of a scientist is probably closer to the humble, bookish Professor, who used his intellect to save the castaways on practically every episode of Gilligan’s Island. The stereotypical scientist is seen as driven by a magnificent call, not some common, base motive. Unquestionably, science progresses unerringly to the truth.

This picture was challenged by the influential twentieth-century philosopher of science Thomas Kuhn, who held that scientific “truth” is determined not as much by facts as by the consensus of the scientific community. The influence of thought leaders, awarding of grants, and scorn of dissenters are used to protect mainstream theory. Unfortunately, science only makes genuine progress when the mainstream theory is disproved, what Kuhn called a “paradigm shift.” Data which conflict with the mainstream paradigm are ignored instead of used to develop a better one. Like most people, scientists are ultimately motivated by financial security, career advancement, and the desire for admiration. Thus, nonscientific considerations impact scientific “truth.”

This corruption of a noble pursuit permits scientific hubris to prosper. It can only exist when scientists are less than dispassionate seekers of truth. Scientific hubris condones suppression of criticism, promotes unfounded speculation, and excuses rejection of conflicting data. Consequently, scientific hubris allows errors to persist indefinitely. However, science advances so slowly the public usually has no idea of how often it is wrong.

Reconstructing extinct organisms from fossils requires scientific hubris. The fewer the number of fossils available, the greater the hubris required for reconstruction. The original reconstruction of the peculiar organism Hallucigenia, which lived 505 million years ago, showed it upside down and backwards. This was easily corrected when more fossils were found and no harm was done.

In contrast, scientific hubris causes harm when bad science is used to influence behavior. The 17th century microscopist Nicolaas Hartsoeker drew a complete human within the head of a sperm, speculating that this was what might be beneath the “skin” of a sperm. Belief in preformation, the notion that sperm and eggs contain complete humans, was common at the time. His drawing could easily have been used to demonstrate why every sperm is sacred and masturbation is a sin.

Scientific hubris has claimed many, many lives. In the mid-19th century, the medical establishment rejected Ignaz Semmelweis’ recommendation that physicians disinfect their hands prior to examining pregnant women, despite his unequivocal demonstration that this practice slashed the death rate due to obstetric infections. Because of scientific hubris, “medicine has a dark history of opposing new ideas and those who proposed them.” It was only when the germ theory of disease was established two decades later that the body of evidence supporting Semmelweis’ work became impossible to ignore. The greatest harm caused by scientific hubris is that it slows progress towards the truth.

Record keeping of earth’s surface temperature began around 1880, so there is less than 150 years of quantitative data about climate, which evolves at a glacial pace. Common sense suggests that quantitative data covering multiple warming and cooling periods are necessary to give perspective about the evolution of climate. Only then will scientists be able to make an educated guess whether the 1.5 degrees Fahrenheit increase in earth’s temperature since 1930 is the beginning of sustained warming which will negatively impact civilization, or a transient blip.

The inconvenient truth is that science is in the data acquisition phase of climate study, which must be completed before there is any chance of predicting climate, if it is predictable [vide infra]. Hubris goads scientists into giving answers even when the data are insufficient.

To put our knowledge about climate in perspective, imagine an investor has the first two weeks of data on the performance of a new stock market. Will those data allow the investor to know where the stock market will be in twenty years? No, because the behavior of the many variables which determine the performance of a stock market is unpredictable. Currently, predicting climate is no different.

Scientists use data from proxies to estimate earth’s surface temperature when the real temperature is unknowable. In medicine, these substitutes are called “surrogate markers.” Because hospital laboratories are rigorously inspected and the reproducibility, accuracy, and precision of their data are verified, hospital laboratory practices provide a useful standard for evaluating the quality of any scientific data.

Surrogate markers must be validated by showing that they correlate with “gold standard” data before they are used clinically. Comparison of data from tree growth rings, a surrogate marker for earth’s surface temperature, with the actual temperature shows that correlation between the two is worsening for unknown reasons. Earth’s temperature is only one factor which determines tree growth. Because soil conditions, genetics, rainfall, competition for nutrients, disease, age, fire, atmospheric carbon dioxide concentrations and consumption by herbivores and insects affect tree growth, the correlation between growth rings and earth’s temperature is imperfect.
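This worsening correlation (the "divergence problem") can be illustrated by comparing a proxy with instrumental temperatures over an early calibration window and a more recent window. The sketch below uses synthetic stand-in series, not real tree-ring or temperature data:

```python
# Compare proxy/temperature correlation in an early calibration window
# versus a recent window. Synthetic stand-in data; not real measurements.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2000)
temperature = 0.01 * (years - 1900) + rng.normal(0, 0.1, years.size)

# Proxy tracks temperature early on, then drifts away after 1960
proxy = temperature + rng.normal(0, 0.05, years.size)
proxy[years >= 1960] -= 0.015 * (years[years >= 1960] - 1960)

early = years < 1960
r_early = np.corrcoef(temperature[early], proxy[early])[0, 1]
r_late = np.corrcoef(temperature[~early], proxy[~early])[0, 1]
print(f"Correlation 1900-1959: {r_early:.2f}")
print(f"Correlation 1960-1999: {r_late:.2f}")
```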

Currently, growth rings cannot be regarded as a valid surrogate marker for the temperature of earth’s surface. The cause of the divergence problem must be identified and somehow remedied, and the remedy validated before growth rings are a credible surrogate marker or used to validate other surrogate markers.

Data from ice cores, boreholes, corals, and lake and ocean sediments are also used as surrogate markers. These are said to correlate with each other. Surrogate marker data are interpreted as showing a warm period between c. 950 and c. 1250, which is sometimes called the “Medieval Climate Optimum,” and a cooler period called the “Little Ice Age” between the 16th and 19th centuries. The data from these surrogate markers have not been validated by comparison with a quantitative standard. Therefore, they give qualitative, not quantitative data. In medical terms, qualitative data are considered to be only “suggestive” of a diagnosis, not diagnostic. This level of diagnostic certainty is typically used to justify further diagnostic testing, not definitive therapy.

Anthropogenic global warming is often presented as fact. According to the philosopher Sir Karl Popper, a single conflicting observation is sufficient to disprove a theory. For example, the theory that all swans are white is disproved by one black swan. Therefore, the goal of science is to disprove, not prove, a theory. Popper described how science should be practiced, while Kuhn described how science is actually practiced. Few theories satisfy Popper’s criterion; those that do are highly esteemed and above controversy. These include relativity, quantum mechanics, and plate tectonics. Such theories come as close to settled science as is possible.

Data conflict about anthropogenic global warming. Using data from ice cores and lake sediments, Professor Gernot Patzelt argues that over the last 10,000 years, 65% of the time earth’s temperature was warmer than today. If his data are correct, human deforestation and carbon emissions are not required for global warming and intervention to forestall it may be futile.

The definitive test of anthropogenic global warming would be to study a duplicate earth without humans. Realistically, the only way is to develop a successful computer model. However, modeling climate may be impossible because climate is a chaotic system. Small changes in the initial state of a chaotic system can cause very different outcomes, making them unpredictable. This is commonly called the “butterfly effect” because of the possibility that an action as fleeting as the beating of a butterfly’s wings can affect distant weather. This phenomenon also limits the predictability of weather.
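A standard way to illustrate this sensitivity is the Lorenz system, a toy set of three equations originally derived from a simplified convection model. The sketch below is only a demonstration of chaotic divergence from nearly identical starting points, not a claim about any particular climate model:

```python
# Butterfly effect demo: two Lorenz-system runs whose initial states differ
# by one part in 10^8 end up far apart. An illustration of chaos only.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 30.0)
t_eval = np.linspace(*t_span, 3000)
run_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], t_eval=t_eval)
run_b = solve_ivp(lorenz, t_span, [1.0 + 1e-8, 1.0, 1.0], t_eval=t_eval)

final_gap = abs(run_a.y[0, -1] - run_b.y[0, -1])
print(f"Initial difference: 1e-8; difference in x at t=30: {final_gap:.2f}")
```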

Between 1880 and 1920, increasing atmospheric carbon dioxide concentrations were not associated with global warming. These variables did correlate between 1920 and 1940 and from around 1970 to today. These associations may appear to be compelling evidence for global warming, but associations cannot prove cause and effect. One example of a misleading association was published in a paper entitled “The prediction of lung cancer in Australia 1939–1981.” According to this paper, “Lung cancer is shown to be predicted from petrol consumption figures for a period of 42 years. The mean time for the disease to develop is discussed and the difference in the mortality rate for male and females is explained.” Obviously, gasoline use does not cause lung cancer.
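The statistical trap is easy to reproduce: any two series that each trend upward over the same period will show a high correlation coefficient, whether or not one has anything to do with the other. A toy sketch with made-up numbers:

```python
# Two independently generated upward-trending series correlate strongly,
# even though neither causes the other. All numbers are made up.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1939, 1982)
petrol = 100 + 5.0 * (years - 1939) + rng.normal(0, 10, years.size)       # hypothetical
lung_cancer = 20 + 1.2 * (years - 1939) + rng.normal(0, 3, years.size)    # hypothetical

r = np.corrcoef(petrol, lung_cancer)[0, 1]
print(f"Correlation between two unrelated trending series: r = {r:.2f}")
```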

The idea that an association is due to cause and effect is so attractive that these claims continue to be published. Recently, an implausible association between watching television and chronic inflammation was reported. In their book Follies and Fallacies in Medicine, Skrabanek and McCormick wrote, “As a result of failing to make this distinction [between association and cause], learning from experience may lead to nothing more than learning to make the same mistakes with increasing confidence.” Failure to learn from mistakes is another manifestation of scientific hubris. Those who are old enough to remember the late 1970s may recall predictions of a global cooling crisis based on transient glacial growth and slight global cooling.

The current situation regarding climate change is similar to that confronting cavemen when facing winter and progressively shorter days. Every day there was less time to hunt and gather food and more cold, useless darkness. Shamans must have desperately called for ever harsher sacrifices to stop what otherwise seemed inevitable. Only when science enabled man to predict the return of longer days was sacrifice no longer necessary.

The mainstream position about anthropogenic global warming is established. The endorsement of the United Nations, U.S. governmental agencies, politicians, and the media buttresses this position. This nonscientific input has contributed to the perception that anthropogenic global warming is settled science. A critical evaluation of the available data about global warming, and anthropogenic global warming in particular, allow only a guess about the future climate. It is scientific hubris not to recognize that guess for what it is.

Half of 21st Century Warming Due to El Nino

Reblogged from DrRoySpencer.com [HiFast bold]

May 13th, 2019 by Roy W. Spencer, Ph.D.

A major uncertainty in figuring out how much of recent warming has been human-caused is knowing how much nature has caused. The IPCC is quite sure that nature is responsible for less than half of the warming since the mid-1900s, but politicians, activists, and various green energy pundits go even further, behaving as if warming is 100% human-caused.

The fact is we really don’t understand the causes of natural climate change on the time scale of an individual lifetime, although theories abound. For example, there is plenty of evidence that the Little Ice Age was real, and so some of the warming over the last 150 years (especially prior to 1940) was natural — but how much?

The answer makes a huge difference to energy policy. If global warming is only 50% as large as is predicted by the IPCC (which would make it only 20% of the problem portrayed by the media and politicians), then the immense cost of renewable energy can be avoided until we have new cost-competitive energy technologies.

The recently published paper Recent Global Warming as Confirmed by AIRS used 15 years of infrared satellite data to obtain a rather strong global surface warming trend of +0.24 C/decade. Objections have been made to that study by me (e.g. here) and others, not the least of which is the fact that the 2003-2017 period addressed had a record warm El Nino near the end (2015-16), which means the computed warming trend over that period is not entirely human-caused warming.

If we look at the warming over the 19-year period 2000-2018, we see the record El Nino event during 2015-16 (all monthly anomalies are relative to the 2001-2017 average seasonal cycle):

Fig. 1. 21st Century global-average temperature trends (top) averaged across all CMIP5 climate models (gray), HadCRUT4 observations (green), and UAH tropospheric temperature (purple). The Multivariate ENSO Index (MEI, bottom) shows the upward trend in El Nino activity over the same period, which causes a natural enhancement of the observed warming trend.

We also see that the average of all of the CMIP5 models’ surface temperature trend projections (in which natural variability in the many models is averaged out) has a warmer trend than the observations, despite the trend-enhancing effect of the 2015-16 El Nino event.

So, how much of an influence did that warm event have on the computed trends? The simplest way to address that is to use only the data before that event. To be somewhat objective about it, we can take the period over which there is no trend in El Nino (and La Nina) activity, which happens to be 2000 through June, 2015 (15.5 years):

Fig. 2. As in Fig. 1, but for the 15.5 year period 2000 to June 2015, which is the period over which there was no trend in El Nino and La Nina activity.

Note that the observed trend in HadCRUT4 surface temperatures is nearly cut in half compared to the CMIP5 model average warming over the same period, and the UAH tropospheric temperature trend is almost zero.

One might wonder why the UAH LT trend is so low for this period, even though in Fig. 1 it is not that far below the surface temperature observations (+0.12 C/decade versus +0.16 C/decade for the full period through 2018). So, I examined the RSS version of LT for 2000 through June 2015, which had a +0.10 C/decade trend. For a more apples-to-apples comparison, the CMIP5 surface-to-500 hPa layer average temperature averaged across all models is +0.20 C/decade, so even RSS LT (which usually has a warmer trend than UAH LT) has only one-half the warming trend as the average CMIP5 model during this period.
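For readers who want to check this kind of arithmetic themselves, the per-decade trends quoted above are simply ordinary least-squares slopes fitted to monthly anomalies, computed over the full record and again over the sub-period that stops before the 2015-16 El Nino. The sketch below uses a synthetic placeholder series rather than the actual HadCRUT4, UAH, or RSS data:

```python
# Compute a least-squares trend (degrees C per decade) from monthly anomalies,
# for the full 2000-2018 record and for 2000 through June 2015.
# The anomaly series here is a synthetic placeholder, not a real dataset.
import numpy as np

rng = np.random.default_rng(3)
n_months = 19 * 12                                  # Jan 2000 through Dec 2018
t_years = 2000.0 + np.arange(n_months) / 12.0
anomaly = 0.015 * (t_years - 2000.0) + rng.normal(0, 0.1, n_months)

def trend_per_decade(time_years, series):
    slope_per_year = np.polyfit(time_years, series, 1)[0]
    return 10.0 * slope_per_year

pre_nino = t_years < 2015.5                         # 2000 through June 2015
print(f"2000-2018 trend:      {trend_per_decade(t_years, anomaly):+.2f} C/decade")
print(f"2000-June 2015 trend: {trend_per_decade(t_years[pre_nino], anomaly[pre_nino]):+.2f} C/decade")
```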

So, once again, we see that the observed rate of warming — when we ignore the natural fluctuations in the climate system (which, along with severe weather events, dominate “climate change” news) — is only about one-half of that projected by climate models at this point in the 21st Century. This fraction is consistent with the global energy budget study of Lewis & Curry (2018), which analyzed 100 years of global temperatures and ocean heat content changes, and also found that the climate system is only about 1/2 as sensitive to increasing CO2 as climate models assume.

It will be interesting to see if the new climate model assessment (CMIP6) produces warming more in line with the observations. From what I have heard so far, this appears unlikely. If history is any guide, this means the observations will continue to need adjustments to fit the models, rather than the other way around.