The Greenhouse Deception Explained

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood

A nice and concise video, well worth watching and circulating.


Scientific Hubris and Global Warming

Reblogged from Watts Up With That:

Scientific Hubris and Global Warming

Guest Post by Gregory Sloop

Notwithstanding portrayals in the movies as eccentrics who frantically warn humanity about genetically modified dinosaurs, aliens, and planet-killing asteroids, the popular image of a scientist is probably closer to the humble, bookish Professor, who used his intellect to save the castaways on practically every episode of Gilligan’s Island. The stereotypical scientist is seen as driven by a magnificent call, not some common, base motive. Unquestionably, science progresses unerringly to the truth.

This picture was challenged by the influential twentieth-century philosopher of science Thomas Kuhn, who held that scientific “truth” is determined not so much by facts as by the consensus of the scientific community. The influence of thought leaders, the awarding of grants, and the scorn of dissenters are used to protect mainstream theory. Unfortunately, science only makes genuine progress when the mainstream theory is disproved, in what Kuhn called a “paradigm shift.” Data which conflict with the mainstream paradigm are ignored instead of used to develop a better one. Like most people, scientists are ultimately motivated by financial security, career advancement, and the desire for admiration. Thus, nonscientific considerations impact scientific “truth.”

This corruption of a noble pursuit permits scientific hubris to prosper. It can only exist when scientists are less than dispassionate seekers of truth. Scientific hubris condones suppression of criticism, promotes unfounded speculation, and excuses rejection of conflicting data. Consequently, scientific hubris allows errors to persist indefinitely. However, science advances so slowly the public usually has no idea of how often it is wrong.

Reconstructing extinct organisms from fossils requires scientific hubris. The fewer the number of fossils available, the greater the hubris required for reconstruction. The original reconstruction of the peculiar organism Hallucigenia, which lived 505 million years ago, showed it upside down and backwards. This was easily corrected when more fossils were found and no harm was done.

In contrast, scientific hubris causes harm when bad science is used to influence behavior. The 17th century microscopist Nicholas Hartsoeker drew a complete human within the head of a sperm, speculating that this was what might be beneath the “skin” of a sperm. Belief in preformation, the notion that sperm and eggs contain complete humans, was common at the time. His drawing could easily have been used to demonstrate why every sperm is sacred and masturbation is a sin.

Scientific hubris has claimed many, many lives. In the mid-19th century, the medical establishment rejected Ignaz Semmelweis’ recommendation that physicians disinfect their hands prior to examining pregnant women despite his unequivocal demonstration that this practice slashed the death rate due to obstetric infections. Because of scientific hubris, “medicine has a dark history of opposing new ideas and those who proposed them.” It was only when the germ theory of disease was established two decades later that the body of evidence supporting Semmelweis’ work became impossible to ignore. The greatest harm caused by scientific hubris is that it slows progress towards the truth.

Record keeping of earth’s surface temperature began around 1880, so there is less than 150 years of quantitative data about climate, which evolves at a glacial pace. Common sense suggests that quantitative data covering multiple warming and cooling periods is necessary to give perspective about the evolution of climate. Only then will scientists be able to make an educated guess whether the 1.5 degrees Fahrenheit increase in earth’s temperature since 1930 is the beginning of sustained warming which will negatively impact civilization, or a transient blip.

The inconvenient truth is that science is in the data acquisition phase of climate study, which must be completed before there is any chance of predicting climate, if it is predictable [vide infra]. Hubris goads scientists into giving answers even when the data are insufficient.

To put our knowledge about climate in perspective, imagine an investor has the first two weeks of data on the performance of a new stock market. Will those data allow the investor to know where the stock market will be in twenty years? No, because the behavior of the many variables which determine the performance of a stock market is unpredictable. Currently, predicting climate is no different.

Scientists use data from proxies to estimate earth’s surface temperature when the real temperature is unknowable. In medicine, these substitutes are called “surrogate markers.” Because hospital laboratories are rigorously inspected and the reproducibility, accuracy, and precision of their data is verified, hospital laboratory practices provide a useful standard for evaluating the quality of any scientific data.

Surrogate markers must be validated by showing that they correlate with “gold standard” data before they are used clinically. Comparison of data from tree growth rings, a surrogate marker for earth’s surface temperature, with the actual temperature shows that correlation between the two is worsening for unknown reasons. Earth’s temperature is only one factor which determines tree growth. Because soil conditions, genetics, rainfall, competition for nutrients, disease, age, fire, atmospheric carbon dioxide concentrations and consumption by herbivores and insects affect tree growth, the correlation between growth rings and earth’s temperature is imperfect.
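
To make the validation requirement concrete, here is a minimal sketch (in Python, with invented numbers, not real tree-ring or instrumental data) of the split-sample test commonly applied to proxies: calibrate ring widths against the instrumental record over one window, then check whether the correlation holds in a withheld verification window.

```python
# A minimal sketch with synthetic data: calibrate a proxy against the
# instrumental record, then verify it on a withheld window. All numbers
# here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
temp = 0.01 * (years - 1900) + rng.normal(0, 0.2, years.size)  # "instrumental" record
rings = 0.8 * temp + rng.normal(0, 0.15, years.size)           # proxy tracks temperature...
rings[years >= 1960] -= 0.01 * (years[years >= 1960] - 1960)   # ...until a toy "divergence"

cal = years < 1960                                             # calibration window
r_cal = np.corrcoef(rings[cal], temp[cal])[0, 1]
r_ver = np.corrcoef(rings[~cal], temp[~cal])[0, 1]
print(f"calibration r = {r_cal:.2f}, verification r = {r_ver:.2f}")
# A valid surrogate marker should hold its correlation in the withheld window;
# a drop like the one printed here is the signature of the divergence problem.
```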

Currently, growth rings cannot be regarded as a valid surrogate marker for the temperature of earth’s surface. The cause of the divergence problem must be identified and somehow remedied, and the remedy validated before growth rings are a credible surrogate marker or used to validate other surrogate markers.

Data from ice cores, boreholes, corals, and lake and ocean sediments are also used as surrogate markers. These are said to correlate with each other. Surrogate marker data are interpreted as showing a warm period between c. 950 and c. 1250, which is sometimes called the “Medieval Climate Optimum,” and a cooler period called the “Little Ice Age” between the 16th and 19th centuries. The data from these surrogate markers have not been validated by comparison with a quantitative standard. Therefore, they give qualitative, not quantitative data. In medical terms, qualitative data are considered to be only “suggestive” of a diagnosis, not diagnostic. This level of diagnostic certainty is typically used to justify further diagnostic testing, not definitive therapy.

Anthropogenic global warming is often presented as fact. According to the philosopher Sir Karl Popper, a single conflicting observation is sufficient to disprove a theory. For example, the theory that all swans are white is disproved by one black swan. Therefore, the goal of science is to disprove, not prove, a theory. Popper described how science should be practiced, while Kuhn described how science is actually practiced. The few theories that satisfy Popper’s criterion are highly esteemed and above controversy; these include relativity, quantum mechanics, and plate tectonics. These theories come as close to settled science as is possible.

Data conflict about anthropogenic global warming. Using data from ice cores and lake sediments, Professor Gernot Patzelt argues that over the last 10,000 years, 65% of the time earth’s temperature was warmer than today. If his data are correct, human deforestation and carbon emissions are not required for global warming and intervention to forestall it may be futile.

The definitive test of anthropogenic global warming would be to study a duplicate earth without humans. Realistically, the only way is develop a successful computer model. However, modeling climate may be impossible because climate is a chaotic system. Small changes in the initial state of a chaotic system can cause very different outcomes, making them unpredictable. This is commonly called the “butterfly effect” because of the possibility that an action as fleeting as the beating of a butterfly’s wings can affect distant weather. This phenomenon also limits the predictability of weather.
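
The sensitivity is easy to demonstrate. The sketch below integrates the textbook Lorenz ’63 equations (a standard toy model of chaotic flow, not a climate model) from two initial states that differ by one part in a million; the trajectories soon bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions in the Lorenz '63 system,
# integrated with a simple Euler step (adequate for illustration).
import numpy as np

def step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])       # the "butterfly": a one-in-a-million nudge
for i in range(1, 3001):
    a, b = step(a), step(b)
    if i % 500 == 0:
        print(f"t = {i * 0.01:4.0f}   separation = {np.linalg.norm(a - b):.3e}")
# The separation grows roughly exponentially until it saturates at the size of
# the attractor; beyond that horizon the exact state is unpredictable.
```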

Between 1880 and 1920, increasing atmospheric carbon dioxide concentrations were not associated with global warming. These variables did correlate between 1920 and 1940 and from around 1970 to today. These associations may appear to be compelling evidence for global warming, but associations cannot prove cause and effect. One example of a misleading association was published in a paper entitled “The prediction of lung cancer in Australia 1939–1981.” According to this paper, “Lung cancer is shown to be predicted from petrol consumption figures for a period of 42 years. The mean time for the disease to develop is discussed and the difference in the mortality rate for male and females is explained.” Obviously, gasoline use does not cause lung cancer.
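
The mechanism behind such misleading associations is mundane: any two series that trend in the same direction over the same period will correlate strongly, whatever the causal story. A toy demonstration, using invented stand-ins for the petrol and lung cancer figures:

```python
# Two independent, upward-trending series correlate strongly; remove the
# shared trend (by differencing) and the "relationship" vanishes.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1939, 1981)                                  # 42 years
petrol = 100 + 5.0 * (years - 1939) + rng.normal(0, 10, years.size)
cancer = 20 + 0.9 * (years - 1939) + rng.normal(0, 2, years.size)

print(f"raw correlation:    r = {np.corrcoef(petrol, cancer)[0, 1]:.2f}")  # ~0.97
print(f"after differencing: r = {np.corrcoef(np.diff(petrol), np.diff(cancer))[0, 1]:.2f}")  # ~0
```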

The idea that an association is due to cause and effect is so attractive that these claims continue to be published. Recently, an implausible association between watching television and chronic inflammation was reported. In their book Follies and Fallacies in Medicine, Skrabanek and McCormick wrote, “As a result of failing to make this distinction [between association and cause], learning from experience may lead to nothing more than learning to make the same mistakes with increasing confidence.” Failure to learn from mistakes is another manifestation of scientific hubris. Those who are old enough to remember the late 1970’s may recall predictions of a global cooling crisis based on transient glacial growth and slight global cooling.

The current situation regarding climate change is similar to that confronting cavemen when facing winter and progressively shorter days. Every day there was less time to hunt and gather food and more cold, useless darkness. Shamans must have desperately called for ever harsher sacrifices to stop what otherwise seemed inevitable. Only when science enabled man to predict the return of longer days was sacrifice no longer necessary.

The mainstream position about anthropogenic global warming is established. The endorsement of the United Nations, U.S. governmental agencies, politicians, and the media buttresses this position. This nonscientific input has contributed to the perception that anthropogenic global warming is settled science. A critical evaluation of the available data about global warming, and anthropogenic global warming in particular, allows only a guess about the future climate. It is scientific hubris not to recognize that guess for what it is.

Half of 21st Century Warming Due to El Nino

Reblogged from DrRoySpencer.com:

May 13th, 2019 by Roy W. Spencer, Ph. D.

A major uncertainty in figuring out how much of recent warming has been human-caused is knowing how much nature has caused. The IPCC is quite sure that nature is responsible for less than half of the warming since the mid-1900s, but politicians, activists, and various green energy pundits go even further, behaving as if warming is 100% human-caused.

The fact is we really don’t understand the causes of natural climate change on the time scale of an individual lifetime, although theories abound. For example, there is plenty of evidence that the Little Ice Age was real, and so some of the warming over the last 150 years (especially prior to 1940) was natural — but how much?

The answer makes a huge difference to energy policy. If global warming is only 50% as large as is predicted by the IPCC (which would make it only 20% of the problem portrayed by the media and politicians), then the immense cost of renewable energy can be avoided until we have new cost-competitive energy technologies.

The recently published paper Recent Global Warming as Confirmed by AIRS used 15 years of infrared satellite data to obtain a rather strong global surface warming trend of +0.24 C/decade. Objections have been made to that study by me (e.g. here) and others, not the least of which is the fact that the 2003-2017 period addressed had a record warm El Nino near the end (2015-16), which means the computed warming trend over that period is not entirely human-caused warming.

If we look at the warming over the 19-year period 2000-2018, we see the record El Nino event during 2015-16 (all monthly anomalies are relative to the 2001-2017 average seasonal cycle):

Fig. 1. 21st Century global-average temperature trends (top) averaged across all CMIP5 climate models (gray), HadCRUT4 observations (green), and UAH tropospheric temperature (purple). The Multivariate ENSO Index (MEI, bottom) shows the upward trend in El Nino activity over the same period, which causes a natural enhancement of the observed warming trend.

We also see that the average of all of the CMIP5 models’ surface temperature trend projections (in which natural variability in the many models is averaged out) has a warmer trend than the observations, despite the trend-enhancing effect of the 2015-16 El Nino event.

So, how much of an influence did that warm event have on the computed trends? The simplest way to address that is to use only the data before that event. To be somewhat objective about it, we can take the period over which there is no trend in El Nino (and La Nina) activity, which happens to be 2000 through June 2015 (15.5 years):

Fig. 2. As in Fig. 1, but for the 15.5 year period 2000 to June 2015, which is the period over which there was no trend in El Nino and La Nina activity.

Note that the observed trend in HadCRUT4 surface temperatures is nearly cut in half compared to the CMIP5 model average warming over the same period, and the UAH tropospheric temperature trend is almost zero.
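
For readers who want to check this kind of arithmetic themselves, here is a minimal sketch, using synthetic monthly anomalies (not the actual HadCRUT4 or UAH data), of how an ordinary least-squares trend responds when a warm event sits near the end of the record:

```python
# OLS trends over the full record versus the ENSO-trend-free subperiod,
# computed on synthetic monthly anomalies with a toy warm event in 2015-16.
import numpy as np

rng = np.random.default_rng(2)
t = 2000 + np.arange(12 * 19) / 12.0                # Jan 2000 .. Dec 2018, monthly
anom = 0.017 * (t - 2000) + rng.normal(0, 0.1, t.size)
anom[(t >= 2015.5) & (t < 2016.5)] += 0.3           # toy 2015-16 El Nino warmth

def trend(tt, yy):
    return 10.0 * np.polyfit(tt, yy, 1)[0]          # slope in deg C per decade

pre = t < 2015.5                                    # 2000 through June 2015
print(f"2000-2018 trend:     {trend(t, anom):+.2f} C/decade")
print(f"2000-Jun 2015 trend: {trend(t[pre], anom[pre]):+.2f} C/decade")
# The warm event near the end of the record pulls the full-period slope up.
```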

One might wonder why the UAH LT trend is so low for this period, even though in Fig. 1 it is not that far below the surface temperature observations (+0.12 C/decade versus +0.16 C/decade for the full period through 2018). So, I examined the RSS version of LT for 2000 through June 2015, which had a +0.10 C/decade trend. For a more apples-to-apples comparison, the CMIP5 surface-to-500 hPa layer average temperature averaged across all models is +0.20 C/decade, so even RSS LT (which usually has a warmer trend than UAH LT) has only one-half the warming trend of the average CMIP5 model during this period.

So, once again, we see that the observed rate of warming — when we ignore the natural fluctuations in the climate system (which, along with severe weather events, dominate “climate change” news) — is only about one-half of that projected by climate models at this point in the 21st Century. This fraction is consistent with the global energy budget study of Lewis & Curry (2018), which analyzed 100 years of global temperatures and ocean heat content changes, and also found that the climate system is only about 1/2 as sensitive to increasing CO2 as climate models assume.

It will be interesting to see if the new climate model assessment (CMIP6) produces warming more in line with the observations. From what I have heard so far, this appears unlikely. If history is any guide, this means the observations will continue to need adjustments to fit the models, rather than the other way around.

BIG NEWS – Verified by NOAA – Poor Weather Station Siting Leads To Artificial Long Term Warming

Sierra Foothill Commentary

Based on the data collected for the Surface Station Project and analysis papers describing the results, my friend Anthony Watts has been saying for years that “surface temperature measurements (and long term trends) have been affected by encroachment of urbanization on the placement of weather stations used to measure surface air temperature, and track long term climate.”

When Ellen and I traveled across the country in the RV we visited weather stations in the historical weather network and took photos of the temperature measurement stations and the surrounding environments.

Now, NOAA has validated Anthony’s findings — weather station siting can influence the long-term surface temperature record. Here are some samples that were taken by other volunteers:

(Photo: Detroit Lakes USHCN weather station.)

Impacts of Small-Scale Urban Encroachment on Air Temperature Observations

Ronald D. Leeper, John Kochendorfer, Timothy Henderson, and Michael A. Palecki
https://journals.ametsoc.org/doi/10.1175/JAMC-D-19-0002.1

Abstract

A field experiment was performed in Oak Ridge, TN, with four…


Climate data shows no recent warming in Antarctica, instead a slight cooling

Reblogged from Watts Up With That:

Below is a plot from a resource we have not used before on WUWT, “RIMFROST”. It depicts the average temperatures for all weather stations in Antarctica. Note that there has been some recent cooling, in contrast to the steady warming seen since about 1959.

Data and plot provided by http://rimfrost.no 

Contrast that with claims by Michael Mann, Eric Steig, and others who used statistical tricks to make Antarctica warm up. Fortunately, their work wasn’t just challenged by climate skeptics; it was rebutted in peer review too.

Data provided by http://rimfrost.no 

H/T to Kjell Arne Høyvik on Twitter

ADDED:

No warming has occurred on the South Pole from 1978 to 2019 according to satellite data (UAH V6). The linear trend is flat!

SVENSMARK’s Force Majeure, The Sun’s Large Role in Climate Change

Reblogged from Watts Up With That:

GUEST: HENRIK SVENSMARK

By H. Sterling Burnett

By bombarding the Earth with cosmic rays and being a driving force behind cloud formation, the sun plays a much larger role in climate than “consensus scientists” care to admit.

The Danish National Space Institute’s Dr. Henrik Svensmark has assembled a powerful array of data and evidence in his recent study, Force Majeure: The Sun’s Large Role in Climate Change.

The study shows that throughout history and now, the sun plays a powerful role in climate change. Solar activity impacts cosmic rays, which are tied to cloud formation. Clouds, whether abundant or scarce, directly affect the earth’s climate.

Climate models don’t accurately account for the role of clouds or solar activity in climate change, with the result that they assume the earth is much more sensitive to greenhouse gas levels than it is. Unfortunately, the impacts of clouds and the sun on climate are understudied because climate science has become so politicized.

Full audio interview here: Interview with Dr. Henrik Svensmark

 

H. Sterling Burnett, Ph.D. is a Heartland senior fellow on environmental policy and the managing editor of Environment & Climate News.

The reproducibility crisis in science

Reblogged from Watts Up With That:

Dorothy Bishop describes how threats to reproducibility, recognized but unaddressed for decades, might finally be brought under control.

From Nature:

More than four decades into my scientific career, I find myself an outlier among academics of similar age and seniority: I strongly identify with the movement to make the practice of science more robust. It’s not that my contemporaries are unconcerned about doing science well; it’s just that many of them don’t seem to recognize that there are serious problems with current practices. By contrast, I think that, in two decades, we will look back on the past 60 years — particularly in biomedical science — and marvel at how much time and money has been wasted on flawed research.

How can that be? We know how to formulate and test hypotheses in controlled experiments. We can account for unwanted variation with statistical techniques. We appreciate the need to replicate observations.

Yet many researchers persist in working in a way almost guaranteed not to deliver meaningful results. They ride with what I refer to as the four horsemen of the reproducibility apocalypse: publication bias, low statistical power, P-value hacking and HARKing (hypothesizing after results are known). My generation and the one before us have done little to rein these in.

In 1975, psychologist Anthony Greenwald noted that science is prejudiced against null hypotheses; we even refer to sound work supporting such conclusions as ‘failed experiments’. This prejudice leads to publication bias: researchers are less likely to write up studies that show no effect, and journal editors are less likely to accept them. Consequently, no one can learn from them, and researchers waste time and resources on repeating experiments, redundantly.

That has begun to change for two reasons. First, clinicians have realized that publication bias harms patients. If there are 20 studies of a drug and only one shows a benefit, but that is the one that is published, we get a distorted view of drug efficacy. Second, the growing use of meta-analyses, which combine results across studies, has started to make clear that the tendency not to publish negative results gives misleading impressions.
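
The drug example is easy to quantify. In the sketch below (invented numbers: 500 small trials of a drug with no true effect), publishing only the trials that “show a benefit” leaves the literature with a wildly inflated effect estimate:

```python
# Publication bias by simulation: many small null trials, but only the ones
# that "show a benefit" (p < 0.05, drug better) get written up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
published = []
for _ in range(500):                          # 500 small trials; the drug does nothing
    drug = rng.normal(0.0, 1.0, 30)
    placebo = rng.normal(0.0, 1.0, 30)
    diff = drug.mean() - placebo.mean()
    if diff > 0 and stats.ttest_ind(drug, placebo).pvalue < 0.05:
        published.append(diff)
print(f"published: {len(published)} of 500 trials")
print(f"mean published 'effect': {np.mean(published):+.2f} SD (true effect: 0)")
```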

Low statistical power followed a similar trajectory. My undergraduate statistics courses had nothing to say on statistical power, and few of us realized we should take it seriously. Simply put, if a study has a small sample size, and the effect of an experimental manipulation is small, then odds are you won’t detect the effect — even if one is there.
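
A quick simulation makes the point. Assuming a true effect of 0.3 standard deviations and 20 subjects per group (numbers chosen purely for illustration), a conventional t-test detects the effect only about 15% of the time:

```python
# Statistical power by brute force: simulate many small studies of a real
# but modest effect and count how often p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_sims, n = 5000, 20
hits = 0
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(0.3, 1.0, n)          # true effect: 0.3 SD
    hits += stats.ttest_ind(treated, control).pvalue < 0.05
print(f"power ~ {hits / n_sims:.2f}")          # roughly 0.15: the effect is usually missed
```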

I stumbled on the issue of P-hacking before the term existed. In the 1980s, I reviewed the literature on brain lateralization (how sides of the brain take on different functions) and developmental disorders, and I noticed that, although many studies described links between handedness and dyslexia, the definition of ‘atypical handedness’ changed from study to study — even within the same research group. I published a sarcastic note, including a simulation to show how easy it was to find an effect if you explored the data after collecting results (D. V. M. Bishop J. Clin. Exp. Neuropsychol. 12, 812–816; 1990). I subsequently noticed similar phenomena in other fields: researchers try out many analyses but report only the ones that are ‘statistically significant’.

This practice, now known as P-hacking, was once endemic to most branches of science that rely on P values to test significance of results, yet few people realized how seriously it could distort findings. That started to change in 2011, with an elegant, comic paper in which the authors crafted analyses to prove that listening to the Beatles could make undergraduates younger (J. P. Simmons et al. Psychol. Sci. 22, 1359–1366; 2011). “Undisclosed flexibility,” they wrote, “allows presenting anything as significant.”
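
The same style of simulation illustrates P-hacking itself. In the sketch below there is no true effect at all, but trying five post-hoc definitions of a grouping variable and keeping the best p-value yields “significant” results far more often than the nominal 5%:

```python
# P-hacking by simulation: a null relationship plus flexible analysis
# choices produces an inflated false-positive rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_sims, n = 2000, 50
cutoffs = [-0.5, -0.25, 0.0, 0.25, 0.5]        # five post-hoc group definitions
false_pos = 0
for _ in range(n_sims):
    handedness = rng.normal(0, 1, n)           # unrelated to the outcome by construction
    outcome = rng.normal(0, 1, n)
    p_best = min(stats.ttest_ind(outcome[handedness > c],
                                 outcome[handedness <= c]).pvalue for c in cutoffs)
    false_pos += p_best < 0.05
print(f"false-positive rate = {false_pos / n_sims:.2f}")   # well above the nominal 0.05
```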

Analysis of new NASA AIRS study: 80% of U.S. Warming has been at Night

Reblogged from Watts Up With That:

By Dr. Roy Spencer

I have previously addressed the NASA study that concluded the AIRS satellite temperatures “verified global warming trends“. The AIRS is an infrared temperature sounding instrument on the NASA Aqua satellite, providing data since late 2002 (over 16 years). All results in that study, and presented here, are based upon infrared measurements alone, with no microwave temperature sounder data being used in these products.

That reported study addressed only the surface “skin” temperature measurements, but the AIRS is also used to retrieve temperature profiles throughout the troposphere and stratosphere — that’s 99.9% of the total mass of the atmosphere.

Since AIRS data are also used to retrieve a 2 meter temperature (the traditional surface air temperature measurement height), I was curious why that wasn’t used instead of the surface skin temperature. Also, AIRS allows me to compare to our UAH tropospheric deep-layer temperature products.

So, I downloaded the entire archive of monthly average AIRS temperature retrievals on a 1 deg. lat/lon grid (85 GB of data). I’ve been analyzing those data over various regions (global, tropical, land, ocean). While there are a lot of interesting results I could show, today I’m going to focus just on the United States.

AIRS temperature trend profiles averaged over the contiguous United States, Sept. 2002 through March 2019. Gray represents an average of day and night. Trends are based upon monthly departures from the average seasonal cycle during 2003-2018. The UAH LT temperature trend (and its approximate vertical extent) is in violet, and NOAA surface air temperature trends (Tmax, Tmin, Tavg) are indicated by triangles. The open circles are the T2m retrievals, which appear to be less trustworthy than the Tskin retrievals.

Because the Aqua satellite observes at nominal local times of 1:30 a.m. and 1:30 p.m., this allows separation of data into “day” and “night”. It is well known that recent warming of surface air temperatures (both in the U.S. and globally) has been stronger at night than during the day, but the AIRS data shows just how dramatic the day-night difference is… keeping in mind this is only the most recent 16.6 years (since September 2002):

The AIRS surface skin temperature trend at night (1:30 a.m.) is a whopping +0.57 C/decade, while the daytime (1:30 p.m.) trend is only +0.15 C/decade. This is a bigger diurnal difference than indicated by the NOAA Tmax and Tmin trends (triangles in the above plot). Admittedly, 1:30 a.m. and 1:30 p.m. are not when the lowest and highest temperatures of the day occur, but I wouldn’t expect as large a difference in trends as is seen here, at least at night.
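
As a sketch of the bookkeeping involved (with synthetic numbers, not the actual AIRS retrievals), one computes departures from the mean seasonal cycle separately for each observation time and then fits a trend to each:

```python
# Separate day and night trends: remove the mean seasonal cycle per calendar
# month, then fit OLS slopes. Synthetic series with a built-in nighttime
# warming excess, roughly mimicking the numbers quoted above.
import numpy as np

rng = np.random.default_rng(5)
m = np.arange(12 * 16)                               # ~16 years of monthly samples
t = m / 12.0
seasonal = 10.0 * np.sin(2 * np.pi * m / 12)         # shared annual cycle

def trend(series):
    anom = series.astype(float)
    for k in range(12):                              # departures from each month's mean
        anom[m % 12 == k] -= anom[m % 12 == k].mean()
    return 10.0 * np.polyfit(t, anom, 1)[0]          # deg C per decade

night = seasonal + 0.057 * t + rng.normal(0, 0.3, m.size)   # 1:30 a.m. series
day   = seasonal + 0.015 * t + rng.normal(0, 0.3, m.size)   # 1:30 p.m. series
print(f"night: {trend(night):+.2f} C/decade   day: {trend(day):+.2f} C/decade")
```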

Furthermore, these day-night differences extend up through the lower troposphere, to higher than 850 mb (about 5,000 ft altitude), even showing up at 700 mb (about 12,000 ft. altitude).

This behavior also shows up in globally-averaged land areas, and reverses over the ocean (but with a much weaker day-night difference). I will report on this at some point in the future.

If real, these large day-night differences in temperature trends are fascinating behavior. My first suspicion is that it has something to do with a change in moist convection and cloud activity during warming. For instance, more clouds would reduce daytime warming but increase nighttime warming. But I looked at the seasonal variations in these signatures and (unexpectedly) the day-night difference is greatest in winter (DJF), when there is the least convective activity, and weakest in summer (JJA), when there is the most convective activity.

One possibility is that there is a problem with the AIRS temperature retrievals (now at Version 6). But it seems unlikely that this problem would extend through such a large depth of the lower troposphere. I can’t think of any reason why there would be such a large bias between day and night retrievals when it can be seen in the above figure that there is essentially no difference from the 500 mb level upward.

It should be kept in mind that the lower tropospheric and surface temperatures can only be measured by AIRS in the absence of clouds (or in between clouds). I have no idea how much of an effect this sampling bias would have on the results.

Finally, note how well the AIRS low- to mid-troposphere temperature trends match the bulk trend in our UAH LT product. I will be examining this further for larger areas as well.

Fake climate science and scientists

Reblogged from Watts Up With That:

Alarmists game the system to enrich and empower themselves, and hurt everyone else

by Paul Driessen

The multi-colored placard in front of a $2-million home in North Center Chicago proudly proclaimed, “In this house we believe: No human is illegal” – and “Science is real” (plus a few other liberal mantras).

I knew right away where the owners stood on climate change, and other hot-button political issues. They would likely tolerate no dissension or debate on “settled” climate science or any of the other topics.

But they have it exactly backward on the science issue. Real science is not belief – or consensus, 97% or otherwise. Real science constantly asks questions, expresses skepticism, reexamines hypotheses and evidence. If debate, skepticism and empirical evidence are prohibited – it’s pseudo-science, at best.

Real science – and real scientists – seek to understand natural phenomena and processes. They pose hypotheses that they think best explain what they have witnessed, then test them against actual evidence, observations and experimental data. If the hypotheses (and predictions based on them) are borne out by their subsequent findings, the hypotheses become theories, rules, laws of nature – at least until someone finds new evidence that pokes holes in their assessments, or devises better explanations.

Real science does not involve simply declaring that you “believe” something. It’s not immutable doctrine. It doesn’t claim “science is real” – or demand that a particular scientific explanation be carved in stone. Earth-centric concepts gave way to a sun-centered solar system. Miasma disease beliefs surrendered to the germ theory. The certainty that continents are locked in place was replaced by plate tectonics (and the realization that you can’t stop continental drift, any more than you can stop climate change).

Real scientists often employ computers to analyze data more quickly and accurately, depict or model complex natural systems, or forecast future events or conditions. But they test their models against real-world evidence. If the models, observations and predictions don’t match up, real scientists modify or discard the models, and the hypotheses behind them. They engage in robust discussion and debate.

They don’t let models or hypotheses become substitutes for real-world evidence and observations. They don’t alter or “homogenize” raw or historic data to make it look like the models actually work. They don’t hide their data and computer algorithms (AlGoreRythms?), restrict peer review to closed circles of like-minded colleagues who protect one another’s reputations and funding, claim “the debate is over,” or try to silence anyone who dares to ask inconvenient questions or find fault with their claims and models. They don’t concoct hockey stick temperature graphs that can be replicated by plugging in random numbers.

In the realm contemplated by the Chicago yard sign, we ought to be doing all we can to understand Earth’s highly complex, largely chaotic, frequently changing climate system – all we can to figure out how the sun and other powerful forces interact with each other. Only in that way can we accurately predict future climate changes, prepare for them, and not waste money and resources chasing goblins.

But instead, we have people in white lab coats masquerading as real scientists. They’re doing what I just explained true scientists don’t do. They also ignore fluctuations in solar energy output and numerous other powerful, interconnected natural forces that have driven climate change throughout Earth’s history. They look only (or 97% of the time) at carbon dioxide as the principal or sole driving force behind current and future climate changes – and blame every weather event, fire and walrus death on manmade CO2.

Even worse, they let their biases drive their research and use their pseudo-science to justify demands that we eliminate all fossil fuel use, and all carbon dioxide and methane emissions, by little more than a decade from now. Otherwise, they claim, we will bring unprecedented cataclysms to people and planet.

Not surprisingly, their bad behavior is applauded, funded and employed by politicians, environmentalists, journalists, celebrities, corporate executives, billionaires and others who have their own axes to grind, their own egos to inflate – and their intense desire to profit from climate alarmism and pseudo-science.

Worst of all, while they get rich and famous, their immoral actions impoverish billions and kill millions, by depriving them of the affordable, reliable fossil fuel energy that powers modern societies.

And still these slippery characters endlessly repeat the tired trope that they “believe in science” – and anyone who doesn’t agree to “keep fossil fuels in the ground” to stop climate change is a “science denier.”

When these folks and the yard sign crowd brandish the term “science,” political analyst Robert Tracinski suggests, it is primarily to “provide a badge of tribal identity” – while ironically demonstrating that they have no real understanding of or interest in “the guiding principles of actual science.”

Genuine climate scientist (and former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology) Dr. Judith Curry echoes Tracinski. Politicians like Senator Elizabeth Warren use “science” as a way of “declaring belief in a proposition which is outside their knowledge and which they do not understand…. The purpose of the trope is to bypass any meaningful discussion of these separate questions, rolling them all into one package deal – and one political party ticket,” she explains.

The ultimate purpose of all this, of course, is to silence the dissenting voices of evidence- and reality-based climate science, block creation of a Presidential Committee on Climate Science, and ensure that the only debate is over which actions to take first to end fossil fuel use … and upend modern economies.

The last thing fake/alarmist climate scientists want is a full-throated debate with real climate scientists – a debate that forces them to defend their doomsday assertions, methodologies, data manipulation … and claims that solar and other powerful natural forces are minuscule or irrelevant compared to manmade carbon dioxide that constitutes less than 0.02% of Earth’s atmosphere (natural CO2 adds another 0.02%).

Thankfully, there are many reasons for hope. For recognizing that we do not face a climate crisis, much less threats to our very existence. For realizing there is no need to subject ourselves to punitive carbon taxes or the misery, poverty, deprivation, disease and death that banning fossil fuels would cause.

Between the peak of the great global cooling scare in 1975 until around 1998, atmospheric carbon dioxide levels and temperatures did rise in rough conjunction. But then temperatures mostly flat-lined, while CO2 levels kept climbing. Now actual average global temperatures are already 1 degree F below the Garbage In-Garbage Out computer model predictions. Other alarmist forecasts are also out of touch with reality.

Instead of fearing rising CO2, we should thank it for making crop, forest and grassland plants grow faster and better, benefitting nature and humanity – especially in conjunction with slightly warmer temperatures that extend growing seasons, expand arable land and increase crop production.

The rate of sea level rise has not changed for over a century – and much of what alarmists attribute to climate change and rising seas is actually due to land subsidence and other factors.

Weather is not becoming more extreme. In fact, Harvey was the first Category 3-5 hurricane to make US landfall in a record 12 years – and the number of violent F3 to F5 tornadoes has fallen from an average of 56 per year from 1950 to 1985 to only 34 per year since then.

Human ingenuity and adaptability have enabled humans to survive and thrive in all sorts of climates, even during our far more primitive past. Allowed to use our brains, fossil fuels and technologies, we will deal just fine with whatever climate changes might confront us in the future. (Of course, another nature-driven Pleistocene-style glacier pulling 400 feet of water out of our oceans and crushing Northern Hemisphere forests and cities under mile-high walls of ice truly would be an existential threat to life as we know it.)

So if NYC Mayor Bill De Blasio and other egotistical grand-standing politicians and fake climate scientists want to ban fossil fuels, glass-and-steel buildings, cows and even hotdogs – in the name of preventing “dangerous manmade climate change” – let them impose their schemes on themselves and their own families. The rest of us are tired of being made guinea pigs in their fake-science experiments.

Paul Driessen is senior policy advisor for the Committee For A Constructive Tomorrow (CFACT) and author of articles and books on energy, environmental and human rights issues.

UAH, RSS, NOAA, UW: Which Satellite Dataset Should We Believe?

Reblogged from DrRoySpencer.com:

April 23rd, 2019 by Roy W. Spencer, Ph. D.

NOTE: See the update from John Christy below, addressing the use of RATPAC radiosonde data.

This post has two related parts. The first has to do with the recently published study of AIRS satellite-based surface skin temperature trends. The second is our response to a rather nasty Twitter comment maligning our UAH global temperature dataset that was a response to that study.

The AIRS Study

NASA’s Atmospheric InfraRed Sounder (AIRS) has thousands of infrared channels and has provided a large quantity of new remote sensing information since the launch of the Aqua satellite in early 2002. AIRS has even demonstrated how increasing CO2 in the last 15+ years has reduced the infrared cooling to outer space at the wavelengths impacted by CO2 emission and absorption, the first observational evidence I am aware of that increasing CO2 can alter — however minimally — the global energy budget.

The challenge for AIRS as a global warming monitoring instrument is that it is cloud-limited, a problem that worsens as one gets closer to the surface of the Earth. It can only measure surface skin temperatures when there are essentially no clouds present. The skin temperature is still “retrieved” in partly- (and even mostly-) cloudy conditions from other channels higher up in the atmosphere, and with “cloud clearing” algorithms, but these exotic numerical exercises can never get around the fact that the surface skin temperature can only be observed with satellite infrared measurements when no clouds are present.

Then there is the additional problem of comparing surface skin temperatures to traditional 2 meter air temperatures, especially over land. There will be large biases at the 1:30 a.m./p.m. observation times of AIRS. But I would think that climate trends in skin temperature should be reasonably close to trends in air temperature, so this is not a serious concern with me (although Roger Pielke, Sr. disagrees with me on this).

The new paper by Susskind et al. describes a 15-year dataset of global surface skin temperatures from the AIRS instrument on NASA’s Aqua satellite. ScienceDaily proclaimed that the study “verified global warming trends“, even though the period addressed (15 years) is too short to say anything of much value about global warming trends, especially since there was a record-setting warm El Nino near the end of that period.

Furthermore, that period (January 2003 through December 2017) shows significant warming even in our UAH lower tropospheric temperature (LT) data, with a trend 0.01 warmer than the “gold standard” HadCRUT4 surface temperature dataset (all deg. C/decade):

AIRS: +0.24
GISTEMP: +0.22
ECMWF: +0.20
Cowtan & Way: +0.19
UAH LT: +0.18
HadCRUT4: +0.17

I’m pretty sure the Susskind et al. paper was meant to prop up Gavin Schmidt’s GISTEMP dataset, which generally shows greater warming trends than the HadCRUT4 dataset that the IPCC tends to favor more. It remains to be seen whether the AIRS skin temperature dataset, with its “clear sky bias”, will be accepted as a way to monitor global temperature trends into the future.

What Satellite Dataset Should We Believe?

Of course, the short period of record of the AIRS dataset means that it really can’t address the pre-2003 adjustments made to the various global temperature datasets which significantly impact temperature trends computed with 40+ years of data.

What I want to specifically address here is a public comment made by Dr. Scott Denning on Twitter, maligning our (UAH) satellite dataset. He was responding to someone who objected to the new study, claiming our UAH satellite data shows minimal warming. While the person posting this objection didn’t have his numbers right (and as seen above, our trend even agrees with HadCRUT4 over the 2003-2017 period), Denning took it upon himself to take a swipe at us (see his large-font response, below):

(Screenshot: Dr. Scott Denning’s tweet.)

First of all, I have no idea what Scott is talking about when he lists “towers” and “aircraft”…there have been no comprehensive comparisons of such data sources to global satellite data, mainly because there isn’t nearly enough geographic coverage by towers and aircraft.

Secondly, in the 25+ years that John Christy and I have pioneered the methods that others now use, we made only one “error” (found by RSS, and which we promptly fixed, having to do with an early diurnal drift adjustment). The additional finding by RSS of the orbit decay effect was not an “error” on our part any more than our finding of the “instrument body temperature effect” was an error on their part. All satellite datasets now include adjustments for both of these effects.

Nevertheless, as many of you know, our UAH dataset is now considered the “outlier” among the satellite datasets (which also include RSS, NOAA, and U. of Washington), with the least amount of global-average warming since 1979 (although we agree better in the tropics, where little warming has occurred). So let’s address the remaining claim of Scott Denning’s: that we disagree with independent data.

The only direct comparisons to satellite-based deep-layer temperatures are from radiosondes and global reanalysis datasets (which include all meteorological observations in a physically consistent fashion). What we will find is that RSS, NOAA, and UW have remaining errors in their datasets which they refuse to make adjustments for.

From late 1998 through 2004, there were two satellites operating: NOAA-14 with the last of the old MSU series of instruments on it, and NOAA-15 with the first new AMSU instrument on it. In the latter half of this overlap period there was considerable disagreement that developed between the two satellites. Since the older MSU was known to have a substantial measurement dependence on the physical temperature of the instrument (a problem fixed on the AMSU), and the NOAA-14 satellite carrying that MSU had drifted much farther in local observation time than any of the previous satellites, we chose to cut off the NOAA-14 processing when it started disagreeing substantially with AMSU. (Engineer James Shiue at NASA/Goddard once described the new AMSU as the “Cadillac” of well-calibrated microwave temperature sounders).

Despite the most obvious explanation that the NOAA-14 MSU was no longer usable, RSS, NOAA, and UW continue to use all of the NOAA-14 data through its entire lifetime and treat it as just as accurate as NOAA-15 AMSU data. Since NOAA-14 was warming significantly relative to NOAA-15, this puts a stronger warming trend into their satellite datasets, raising the temperature of all subsequent satellites’ measurements after about 2000.
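
A toy calculation (not the actual satellite processing) illustrates the claimed mechanism: if the merge trusts a drifting instrument through the whole overlap, the spurious offset estimated there is inherited by every later satellite and shows up as long-term trend:

```python
# Merging two satellite records via their overlap: the "old" instrument
# drifts warm late in life, the "new" one is stable, and the truth is flat.
import numpy as np

yrs_old = np.arange(1995, 2005)
yrs_new = np.arange(1999, 2015)
old = np.zeros(yrs_old.size)
old[yrs_old >= 2001] += 0.05 * (yrs_old[yrs_old >= 2001] - 2000)  # calibration drift
new = np.zeros(yrs_new.size)                                      # stable instrument

offset = (old[np.isin(yrs_old, yrs_new)] - new[np.isin(yrs_new, yrs_old)]).mean()
merged = np.concatenate([old[yrs_old < 1999], new + offset])
yrs = np.concatenate([yrs_old[yrs_old < 1999], yrs_new])
print(f"offset inherited by the new record: {offset:+.3f} C")
print(f"merged trend: {10 * np.polyfit(yrs, merged, 1)[0]:+.3f} C/decade")  # positive, though truth is flat
# Truncating the drifting record before the drift begins leaves the offset
# near zero and the merged trend flat.
```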

But rather than just asserting the new AMSU should be believed over the old (drifting) MSU, let’s look at some data. Since Scott Denning mentions weather balloon (radiosonde) data, let’s look at our published comparisons between the 4 satellite datasets and radiosondes (as well as global reanalysis datasets) and see who agrees with independent data the best:

Trend differences 1979-2005 between 4 satellite datasets and either radiosondes (blue) or reanalyses (red) for the MSU2/AMSU5 tropospheric channel in the tropics. The balloon trends are calculated from the subset of gridpoints where the radiosonde stations are located, whereas the reanalyses contain complete coverage of the tropics. For direct comparisons of full versus station-only grids see the paper.

Clearly, the RSS, NOAA, and UW satellite datasets are the outliers when it comes to comparisons to radiosondes and reanalyses, having too much warming compared to independent data.

But you might ask, why do those 3 satellite datasets agree so well with each other? Mainly because UW and NOAA have largely followed the RSS lead… using NOAA-14 data even when its calibration was drifting, and using similar strategies for diurnal drift adjustments. Thus, NOAA and UW are, to a first approximation, slightly altered versions of the RSS dataset.

Maybe Scott Denning was just having a bad day. In the past, he has been reasonable, being the only climate “alarmist” willing to speak at a Heartland climate conference. Or maybe he has since been pressured into toeing the alarmist line, and not being allowed to wander off the reservation.

In any event, I felt compelled to defend our work in response to what I consider (and the evidence shows) to be an unfair and inaccurate attack in social media of our UAH dataset.

UPDATE from John Christy (11:10 CDT April 26, 2019):

In response to comments about the RATPAC radiosonde data having more warming, John Christy provides the following:

The comparison with RATPAC-A referred to in the comments below is unclear (no area mentioned, no time frame).  But be that as it may, if you read our paper, RATPAC-A2 was one of the radiosonde datasets we used.  RATPAC-A2 has virtually no adjustments after 1998, so it contains warming shifts known to have occurred in the Australian and U.S. VIZ sondes, for example.  The IGRA dataset used in Christy et al. 2018 utilized 564 stations, whereas RATPAC uses about 85 globally, and far fewer in the tropics, where the comparison shown in this post was made.  RATPAC-A warms relative to the other radiosonde and reanalysis datasets since 1998 (which use over 500 sondes), but was included anyway in the comparisons in our paper. The warming bias relative to 7 other radiosonde and reanalysis datasets can be seen in the following plot:

(Plot: RATPAC-A warming bias relative to 7 other radiosonde and reanalysis datasets.)