10 fallacies about Arctic sea ice & polar bear survival: teachers & parents take note

polarbearscience

Summer sea ice loss is finally ramping up: first-year ice is disappearing, as it has done every year since ice came to the Arctic millions of years ago. But critical misconceptions, fallacies, and disinformation abound regarding Arctic sea ice and polar bear survival. Ahead of Arctic Sea Ice Day (15 July), here are 10 fallacies that teachers and parents especially need to know about.


The cartoon above was done by Josh: you can drop off the price of a beer (or more) for his efforts here.

As always, please contact me if you would like to examine any of the references included in this post. These references are what make my efforts different from those of the activist organization Polar Bears International: PBI virtually never provides references within its content, including material it presents as ‘educational’. Links to previous posts of mine that provide expanded explanations, images, and…



Climate Out of Control

Science Matters

Coors Field, Denver, Colorado, April 29, 2019

Frank Miele writes at RealClearPolitics: Climate Is Unpredictable, Weather You Like It or Not!
Excerpts in italics with my bolds.

They say all politics is local; so is all weather.

So on behalf of my fellow Westerners, I have to ask: What’s up with all this cold weather? It may not be a crisis yet, but in the two weeks leading up to Memorial Day — the traditional start of summer activities — much of the country has been donning sweaters and turning up the heat.

I know, I know. Weather is not climate, and you can’t generalize from anecdotal evidence of localized weather conditions to a unified theory of thermal dynamics, but isn’t that exactly what the climate alarmists have done, on a larger scale, for the past 25 years?

Haven’t we been brainwashed by political scientists (oops! I mean…


Climate Models Have Been Predicting Too Much Warming

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood

John Christy has a new paper out showing that climate models are predicting too much warming:


https://www.thegwpf.org/climate-models-have-been-predicting-too-much-warming/

Below is the GWPF press release:

A leading climatologist has said that the computer simulations that are used to predict global warming are failing on a key measure of the climate today, and cannot be trusted.

Speaking to a meeting in the Palace of Westminster in London, Professor John Christy of the University of Alabama in Huntsville told MPs and peers that almost all climate models have predicted rapid warming at high altitudes in the tropics.

A similar discrepancy between empirical measurements and computer predictions has been confirmed at the global level.

And Dr Christy says that lessons are not being learned:

“An early look at some of the latest generation of climate models reveals they are predicting even faster warming. This is simply not credible.”

A paper outlining Dr Christy’s…


Models Wrong About the Past Produce Unbelievable Futures

Science Matters

Models vs. Observations. Christy and McKitrick (2018) Figure 3

The title of this post is the theme driven home by Patrick J. Michaels in his critique of the most recent US National Climate Assessment (NCA4). The failure of General Circulation Models (GCMs) is the focal point of his presentation of February 14, 2018, Comments on the Fourth National Climate Assessment. Excerpts in italics with my bolds.

NCA4 uses a flawed ensemble of models that dramatically overforecast warming of the lower troposphere, with even larger errors in the upper tropical troposphere. The model ensemble also could not accommodate the “pause” or “slowdown” in warming between the two large El Niños of 1997-8 and 2015-6. The distribution of warming rates within the CMIP5 ensemble is not a true indication of a statistical range of prospective warming, as it is a collection of systematic errors. Despite a glib statement about this Assessment fulfilling…


The Greenhouse Deception Explained

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood

A nice and concise video, well worth watching and circulating:


Scientific Hubris and Global Warming

Reblogged from Watts Up With That:

Scientific Hubris and Global Warming

Guest Post by Gregory Sloop

Notwithstanding portrayals in the movies as eccentrics who frantically warn humanity about genetically modified dinosaurs, aliens, and planet-killing asteroids, the popular image of a scientist is probably closer to the humble, bookish Professor, who used his intellect to save the castaways on practically every episode of Gilligan’s Island. The stereotypical scientist is seen as driven by a magnificent call, not some common, base motive. Unquestionably, science progresses unerringly to the truth.

This picture was challenged by the influential twentieth-century philosopher of science Thomas Kuhn, who held that scientific “truth” is determined not so much by facts as by the consensus of the scientific community. The influence of thought leaders, the awarding of grants, and the scorn of dissenters are used to protect mainstream theory. Unfortunately, science only makes genuine progress when the mainstream theory is disproved, the event Kuhn called a “paradigm shift.” Data which conflict with the mainstream paradigm are ignored instead of being used to develop a better one. Like most people, scientists are ultimately motivated by financial security, career advancement, and the desire for admiration. Thus, nonscientific considerations impact scientific “truth.”

This corruption of a noble pursuit permits scientific hubris to prosper. It can only exist when scientists are less than dispassionate seekers of truth. Scientific hubris condones suppression of criticism, promotes unfounded speculation, and excuses rejection of conflicting data. Consequently, scientific hubris allows errors to persist indefinitely. However, science advances so slowly the public usually has no idea of how often it is wrong.

Reconstructing extinct organisms from fossils requires scientific hubris. The fewer the number of fossils available, the greater the hubris required for reconstruction. The original reconstruction of the peculiar organism Hallucigenia, which lived 505 million years ago, showed it upside down and backwards. This was easily corrected when more fossils were found and no harm was done.

In contrast, scientific hubris causes harm when bad science is used to influence behavior. The 17th century microscopist Nicholas Hartsoeker drew a complete human within the head of a sperm, speculating that this was what might be beneath the “skin” of a sperm. Belief in preformation, the notion that sperm and eggs contain complete humans, was common at the time. His drawing could easily have been used to demonstrate why every sperm is sacred and masturbation is a sin.

Scientific hubris has claimed many, many lives. In the mid-19th century, the medical establishment rejected Ignaz Semmelweis’ recommendation that physicians disinfect their hands prior to examining pregnant women, despite his unequivocal demonstration that this practice slashed the death rate due to obstetric infections. Because of scientific hubris, “medicine has a dark history of opposing new ideas and those who proposed them.” It was only when the germ theory of disease was established two decades later that the body of evidence supporting Semmelweis’ work became impossible to ignore. The greatest harm caused by scientific hubris is that it slows progress towards the truth.

Record keeping of earth’s surface temperature began around 1880, so there is less than 150 years of quantitative data about climate, which evolves at a glacial pace. Common sense suggests that quantitative data covering multiple warming and cooling periods is necessary to give perspective about the evolution of climate. Only then will scientists be able to make an educated guess whether the 1.5 degrees Fahrenheit increase in earth’s temperature since 1930 is the beginning of sustained warming which will negatively impact civilization, or a transient blip.

The inconvenient truth is that science is in the data acquisition phase of climate study, which must be completed before there is any chance of predicting climate, if it is predictable [vide infra]. Hubris goads scientists into giving answers even when the data are insufficient.

To put our knowledge about climate in perspective, imagine an investor has the first two weeks of data on the performance of a new stock market. Will those data allow the investor to know where the stock market will be in twenty years? No, because the behavior of the many variables which determine the performance of a stock market is unpredictable. Currently, predicting climate is no different.

Scientists use data from proxies to estimate earth’s surface temperature when the real temperature is unknowable. In medicine, these substitutes are called “surrogate markers.” Because hospital laboratories are rigorously inspected and the reproducibility, accuracy, and precision of their data is verified, hospital laboratory practices provide a useful standard for evaluating the quality of any scientific data.

Surrogate markers must be validated by showing that they correlate with “gold standard” data before they are used clinically. Comparison of data from tree growth rings, a surrogate marker for earth’s surface temperature, with the actual temperature shows that correlation between the two is worsening for unknown reasons. Earth’s temperature is only one factor which determines tree growth. Because soil conditions, genetics, rainfall, competition for nutrients, disease, age, fire, atmospheric carbon dioxide concentrations and consumption by herbivores and insects affect tree growth, the correlation between growth rings and earth’s temperature is imperfect.

Currently, growth rings cannot be regarded as a valid surrogate marker for the temperature of earth’s surface. The cause of the divergence problem must be identified and somehow remedied, and the remedy validated before growth rings are a credible surrogate marker or used to validate other surrogate markers.

Data from ice cores, boreholes, corals, and lake and ocean sediments are also used as surrogate markers. These are said to correlate with each other. Surrogate marker data are interpreted as showing a warm period between c. 950 and c. 1250, which is sometimes called the “Medieval Climate Optimum,” and a cooler period called the “Little Ice Age” between the 16th and 19th centuries. The data from these surrogate markers have not been validated by comparison with a quantitative standard. Therefore, they give qualitative, not quantitative, data. In medical terms, qualitative data are considered to be only “suggestive” of a diagnosis, not diagnostic. This level of diagnostic certainty is typically used to justify further diagnostic testing, not definitive therapy.

Anthropogenic global warming is often presented as fact. According to the philosopher Sir Karl Popper, a single conflicting observation is sufficient to disprove a theory. For example, the theory that all swans are white is disproved by one black swan. Therefore, the goal of science is to disprove, not prove, a theory. Popper described how science should be practiced, while Kuhn described how science is actually practiced. Few theories satisfy Popper’s criterion; those that do are highly esteemed and above controversy. These include relativity, quantum mechanics, and plate tectonics. Such theories come as close to settled science as is possible.

Data conflict about anthropogenic global warming. Using data from ice cores and lake sediments, Professor Gernot Patzelt argues that over the last 10,000 years, earth’s temperature was warmer than today 65% of the time. If his data are correct, human deforestation and carbon emissions are not required for global warming, and intervention to forestall it may be futile.

The definitive test of anthropogenic global warming would be to study a duplicate earth without humans. Realistically, the only way is to develop a successful computer model. However, modeling climate may be impossible because climate is a chaotic system. Small changes in the initial state of a chaotic system can cause very different outcomes, making it unpredictable. This is commonly called the “butterfly effect” because of the possibility that an action as fleeting as the beating of a butterfly’s wings can affect distant weather. This phenomenon also limits the predictability of weather.

Between 1880 and 1920, increasing atmospheric carbon dioxide concentrations were not associated with global warming. These variables did correlate between 1920 and 1940 and from around 1970 to today. These associations may appear to be compelling evidence for global warming, but associations cannot prove cause and effect. One example of a misleading association was published in a paper entitled “The prediction of lung cancer in Australia 1939–1981.” According to this paper, “Lung cancer is shown to be predicted from petrol consumption figures for a period of 42 years. The mean time for the disease to develop is discussed and the difference in the mortality rate for male and females is explained.” Obviously, gasoline use does not cause lung cancer.
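The petrol/lung-cancer example works because both series rose steadily over the same decades; any two strongly trending series will correlate, causal link or not. A minimal sketch with invented numbers (not the paper's actual data):

```python
# Sketch (synthetic data): two unrelated series that share an upward trend
# still show a very high correlation coefficient.
import random
random.seed(3)

years = range(42)  # e.g. 1939-1981, as in the petrol/lung-cancer paper
petrol = [10 + 2.0 * t + random.gauss(0, 3) for t in years]  # rising series A
cancer = [5 + 1.5 * t + random.gauss(0, 3) for t in years]   # rising series B

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"correlation: {corr(petrol, cancer):.2f}")  # near 1.0, yet no causation
```

The shared trend, not any causal mechanism, produces the correlation; this is exactly the trap the lung-cancer paper illustrates.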

The idea that an association is due to cause and effect is so attractive that these claims continue to be published. Recently, an implausible association between watching television and chronic inflammation was reported. In their book Follies and Fallacies in Medicine, Skrabanek and McCormick wrote, “As a result of failing to make this distinction [between association and cause], learning from experience may lead to nothing more than learning to make the same mistakes with increasing confidence.” Failure to learn from mistakes is another manifestation of scientific hubris. Those who are old enough to remember the late 1970s may recall predictions of a global cooling crisis based on transient glacial growth and slight global cooling.

The current situation regarding climate change is similar to that confronting cavemen when facing winter and progressively shorter days. Every day there was less time to hunt and gather food and more cold, useless darkness. Shamans must have desperately called for ever harsher sacrifices to stop what otherwise seemed inevitable. Only when science enabled man to predict the return of longer days was sacrifice no longer necessary.

The mainstream position about anthropogenic global warming is established. The endorsement of the United Nations, U.S. governmental agencies, politicians, and the media buttresses this position. This nonscientific input has contributed to the perception that anthropogenic global warming is settled science. A critical evaluation of the available data about global warming, and anthropogenic global warming in particular, allow only a guess about the future climate. It is scientific hubris not to recognize that guess for what it is.

Half of 21st Century Warming Due to El Nino

Reblogged from Dr.RoySpencer.com  [HiFast bold]

May 13th, 2019 by Roy W. Spencer, Ph. D.

A major uncertainty in figuring out how much of recent warming has been human-caused is knowing how much nature has caused. The IPCC is quite sure that nature is responsible for less than half of the warming since the mid-1900s, but politicians, activists, and various green energy pundits go even further, behaving as if warming is 100% human-caused.

The fact is we really don’t understand the causes of natural climate change on the time scale of an individual lifetime, although theories abound. For example, there is plenty of evidence that the Little Ice Age was real, and so some of the warming over the last 150 years (especially prior to 1940) was natural — but how much?

The answer makes a huge difference to energy policy. If global warming is only 50% as large as is predicted by the IPCC (which would make it only 20% of the problem portrayed by the media and politicians), then the immense cost of renewable energy can be avoided until we have new cost-competitive energy technologies.

The recently published paper Recent Global Warming as Confirmed by AIRS used 15 years of infrared satellite data to obtain a rather strong global surface warming trend of +0.24 C/decade. Objections have been made to that study by me (e.g. here) and others, not the least of which is the fact that the 2003-2017 period addressed had a record warm El Nino near the end (2015-16), which means the computed warming trend over that period is not entirely human-caused warming.

If we look at the warming over the 19-year period 2000-2018, we see the record El Nino event during 2015-16 (all monthly anomalies are relative to the 2001-2017 average seasonal cycle):

Fig. 1. 21st Century global-average temperature trends (top) averaged across all CMIP5 climate models (gray), HadCRUT4 observations (green), and UAH tropospheric temperature (purple). The Multivariate ENSO Index (MEI, bottom) shows the upward trend in El Nino activity over the same period, which causes a natural enhancement of the observed warming trend.

We also see that the average of all of the CMIP5 models’ surface temperature trend projections (in which natural variability in the many models is averaged out) has a warmer trend than the observations, despite the trend-enhancing effect of the 2015-16 El Nino event.

So, how much of an influence did that warm event have on the computed trends? The simplest way to address that is to use only the data before that event. To be somewhat objective about it, we can take the period over which there is no trend in El Nino (and La Nina) activity, which happens to be 2000 through June, 2015 (15.5 years):

Fig. 2. As in Fig. 1, but for the 15.5 year period 2000 to June 2015, which is the period over which there was no trend in El Nino and La Nina activity.

Note that the observed trend in HadCRUT4 surface temperatures is nearly cut in half compared to the CMIP5 model average warming over the same period, and the UAH tropospheric temperature trend is almost zero.

One might wonder why the UAH LT trend is so low for this period, even though in Fig. 1 it is not that far below the surface temperature observations (+0.12 C/decade versus +0.16 C/decade for the full period through 2018). So, I examined the RSS version of LT for 2000 through June 2015, which had a +0.10 C/decade trend. For a more apples-to-apples comparison, the CMIP5 surface-to-500 hPa layer average temperature averaged across all models is +0.20 C/decade, so even RSS LT (which usually has a warmer trend than UAH LT) has only one-half the warming trend as the average CMIP5 model during this period.
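The trend arithmetic in these comparisons is a least-squares slope over monthly anomalies, quoted in C/decade. Here is a sketch on synthetic data (the series, the noise level, and the 0.25 C warm spike are invented for illustration; this is not the actual HadCRUT4 or UAH data), showing both how the slope is computed and how a warm event near the end of a record inflates it:

```python
# Sketch (synthetic data): fitting a linear trend in C/decade to monthly
# anomalies, and the effect of a late warm spike on that trend.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(12 * 19)                  # 19 years of monthly data, 2000-2018
years = months / 12.0
# Underlying trend of 0.012 C/yr (= 0.12 C/decade) plus monthly noise.
anom = 0.012 * years + rng.normal(0, 0.08, months.size)

def trend_per_decade(t_years, x):
    slope = np.polyfit(t_years, x, 1)[0]     # least-squares slope, C per year
    return slope * 10.0                       # convert to C per decade

base = trend_per_decade(years, anom)

spiked = anom.copy()
spiked[-36:-12] += 0.25                      # a 2-year warm event near the end
with_el_nino = trend_per_decade(years, spiked)

print(f"trend without spike:   {base:+.2f} C/decade")
print(f"trend with late spike: {with_el_nino:+.2f} C/decade")
```

Because the spike sits well after the midpoint of the record, it necessarily pulls the fitted slope upward, which is the arithmetic behind restricting the analysis to the period with no trend in El Nino activity.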

So, once again, we see that the observed rate of warming — when we ignore the natural fluctuations in the climate system (which, along with severe weather events, dominate “climate change” news) — is only about one-half of that projected by climate models at this point in the 21st Century. This fraction is consistent with the global energy budget study of Lewis & Curry (2018), which analyzed 100 years of global temperatures and ocean heat content changes, and also found that the climate system is only about 1/2 as sensitive to increasing CO2 as climate models assume.

It will be interesting to see if the new climate model assessment (CMIP6) produces warming more in line with the observations. From what I have heard so far, this appears unlikely. If history is any guide, this means the observations will continue to need adjustments to fit the models, rather than the other way around.

Continuous observations in the North Atlantic challenge the current view about ocean circulation variability

Reblogged from Watts Up With That:

Kevin Kilty

May 10, 2019

[HiFast Note:  Figures A and B added.]

Figure A. OSNAP Array Schematic, source:  https://www.o-snap.org/

Figure B. OSNAP Array, source:  https://www.o-snap.org/observations/configuration/

Figure 1: Transect of the North Atlantic basins showing color coded salinity, and gray vertical lines showing mooring locations of OSNAP sensor arrays. (Figure from OSNAP Configuration page)

From Physics Today (April 2019 issue, p. 19) [1]:

The overturning of water in the North Atlantic depends on meridional overturning circulation (MOC), wherein warm surface waters in the tropical Atlantic move to higher latitudes, losing heat and moisture to the atmosphere along the way. In the North Atlantic and Arctic this water, now saline and cold, sinks to produce North Atlantic Deep Water (NADW). It completes its circulation by flowing back toward the tropics or into other ocean basins at depth, and then subsequently upwelling through a variety of mechanisms. The time scale of this overturning is 600 years or so [2].

The MOC transports large amounts of heat from the tropics toward the poles, and is thought to be responsible for the relatively mild climate of northern Europe. The heat being transferred from the ocean surface back into the atmosphere at high latitudes is as large as 50 W/m2, which is roughly equivalent to solar radiation reaching the surface at high latitudes during winter months [2].

In order to evaluate models of ocean overturning, oceanographers have relied upon hydrographic research cruises. But the time increment between successive cruises is often long, and infrequent sampling cannot measure long-term trends reliably nor gauge current ocean dynamics.

To get a better handle on MOC behavior, an array of sensors to continuously monitor temperature, salinity, and velocity, known as the Overturning in the Subpolar North Atlantic Program (OSNAP), was recently deployed across the region at multiple depths. Figure 1 shows sensor moorings in relation to the various ocean basins of the North Atlantic. Figure 2 shows data from the first 21 months of operation, and displays a rather large variability of overturning in the eastern North Atlantic between Greenland and Scotland that reaches +/-10 Sv (1 Sverdrup = one million cubic meters per second) monthly, and amounts to one-half the MOC’s total annual transport. Researchers had thought that such variability was only possible on time scales of decades or longer.

Figure 2: Twenty-one months of observational data showing large month to month variation in MOC flows.


The original experimental design for sensor placement in OSNAP was predicated on much smaller variability of a few Sv per month [3]. The report does not address what impact this surprising level of transport variability has on the validity of the experiment design; but the surprisingly large variations in flow challenge expectations derived from climate models regarding the relative amount of overturning between the Labrador Sea and the gateway to the Arctic between Greenland and Scotland.

As one oceanographer put it, the process of deep water formation and sinking of the MOC is more complex than people believed, and these results should prepare people to modify their ideas about how the oceans work. This improved data should not only help test and improve climate models, but also produce more realistic estimates of CO2 uptake and storage.

References:

1. Alex Lopatka, Atlantic water carried northward sinks farther east of previous estimates, Physics Today, 72, 4, 19 (2019).

2. J. Robert Toggweiler, The Ocean’s Overturning Circulation, Physics Today, 47, 11, 45(1994).

3. Susan Lozier, Bill Johns, Fiamma Straneo, and Amy Bower, Workshop for the Design of a Subpolar North Atlantic Observing System, URL= https://www.whoi.edu/fileserver.do?id=163724&pt=2&p=175489, accessed 05/10/2019.

SVENSMARK’s Force Majeure, The Sun’s Large Role in Climate Change

Reblogged from Watts Up With That:

GUEST: HENRIK SVENSMARK

By H. Sterling Burnett

By bombarding the Earth with cosmic rays and being a driving force behind cloud formations, the sun plays a much larger role on climate than “consensus scientists” care to admit.

The Danish National Space Institute’s Dr. Henrik Svensmark has assembled a powerful array of data and evidence in his recent study, Force Majeure: The Sun’s Large Role in Climate Change.

The study shows that throughout history and now, the sun plays a powerful role in climate change. Solar activity impacts cosmic rays which are tied to cloud formation. Clouds, their abundance or dearth, directly affects the earth’s climate.

Climate models don’t accurately account for the role of clouds or solar activity in climate change, with the result that they assume the earth is much more sensitive to greenhouse gas levels than it is. Unfortunately, the impact of clouds and the sun on climate is understudied because climate science has become so politicized.

Full audio interview here:  Interview with Dr. Henrik Svensmark


H. Sterling Burnett, Ph.D. is a Heartland senior fellow on environmental policy and the managing editor of Environment & Climate News.

The reproducibility crisis in science

Reblogged from Watts Up With That:

Dorothy Bishop describes how threats to reproducibility, recognized but unaddressed for decades, might finally be brought under control.

From Nature:

More than four decades into my scientific career, I find myself an outlier among academics of similar age and seniority: I strongly identify with the movement to make the practice of science more robust. It’s not that my contemporaries are unconcerned about doing science well; it’s just that many of them don’t seem to recognize that there are serious problems with current practices. By contrast, I think that, in two decades, we will look back on the past 60 years — particularly in biomedical science — and marvel at how much time and money has been wasted on flawed research.

How can that be? We know how to formulate and test hypotheses in controlled experiments. We can account for unwanted variation with statistical techniques. We appreciate the need to replicate observations.

Yet many researchers persist in working in a way almost guaranteed not to deliver meaningful results. They ride with what I refer to as the four horsemen of the reproducibility apocalypse: publication bias, low statistical power, P-value hacking and HARKing (hypothesizing after results are known). My generation and the one before us have done little to rein these in.

In 1975, psychologist Anthony Greenwald noted that science is prejudiced against null hypotheses; we even refer to sound work supporting such conclusions as ‘failed experiments’. This prejudice leads to publication bias: researchers are less likely to write up studies that show no effect, and journal editors are less likely to accept them. Consequently, no one can learn from them, and researchers waste time and resources on repeating experiments, redundantly.

That has begun to change for two reasons. First, clinicians have realized that publication bias harms patients. If there are 20 studies of a drug and only one shows a benefit, but that is the one that is published, we get a distorted view of drug efficacy. Second, the growing use of meta-analyses, which combine results across studies, has started to make clear that the tendency not to publish negative results gives misleading impressions.
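The 20-studies example can be made concrete with a small simulation; the numbers here (1,000 trials of a drug with zero true benefit, and a 0.3 cutoff standing in for a significance filter) are my own illustrative choices, not from the column:

```python
# Sketch (simulated data): why publishing only the studies that "worked"
# distorts the apparent effect of a drug that has no real benefit.
import random
random.seed(1)

def run_study(n=30, true_effect=0.0):
    """Observed mean effect in one small trial (pure noise here)."""
    return sum(random.gauss(true_effect, 1.0) for _ in range(n)) / n

all_studies = [run_study() for _ in range(1000)]
# Publication filter: only studies whose observed effect clears a threshold
# (roughly the upper 'significant' tail) get written up and accepted.
published = [e for e in all_studies if e > 0.3]

mean_all = sum(all_studies) / len(all_studies)
mean_published = sum(published) / len(published)
print(f"true effect 0.0; mean of all studies:   {mean_all:+.3f}")
print(f"mean of 'published' studies: {mean_published:+.3f}")
```

Averaging only the published studies yields a solidly positive effect for a drug that does nothing, which is exactly the distortion meta-analyses began to expose.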

Low statistical power followed a similar trajectory. My undergraduate statistics courses had nothing to say on statistical power, and few of us realized we should take it seriously. Simply, if a study has a small sample size, and the effect of an experimental manipulation is small, then odds are you won’t detect the effect — even if one is there.
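A short simulation (my own sketch, with an assumed true effect of 0.3 standard deviations) shows how sharply detection rates fall when the sample is small:

```python
# Sketch (simulation): with a small sample and a small true effect, most
# studies fail to detect the effect even though it is real.
import random
import statistics as st
random.seed(0)

def significant(n, effect):
    """One two-sample 'study': is the test statistic beyond ~1.96?"""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    se = (st.variance(a) / n + st.variance(b) / n) ** 0.5
    return abs((st.mean(b) - st.mean(a)) / se) > 1.96

def power(n, effect=0.3, trials=1000):
    """Fraction of simulated studies that detect the effect."""
    return sum(significant(n, effect) for _ in range(trials)) / trials

print(f"power with n=20 per group:  {power(20):.2f}")   # low: effect usually missed
print(f"power with n=200 per group: {power(200):.2f}")  # much higher
```

With 20 subjects per group, most of these simulated studies come up "negative" even though the effect is genuinely there; the literature then fills with underpowered null results and lucky overestimates.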

I stumbled on the issue of P-hacking before the term existed. In the 1980s, I reviewed the literature on brain lateralization (how sides of the brain take on different functions) and developmental disorders, and I noticed that, although many studies described links between handedness and dyslexia, the definition of ‘atypical handedness’ changed from study to study — even within the same research group. I published a sarcastic note, including a simulation to show how easy it was to find an effect if you explored the data after collecting results (D. V. M. Bishop J. Clin. Exp. Neuropsychol. 12, 812–816; 1990). I subsequently noticed similar phenomena in other fields: researchers try out many analyses but report only the ones that are ‘statistically significant’.
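In the same spirit as that simulation (this is my own sketch, not Bishop's code), running several analyses on pure noise and reporting any "hit" inflates the false-positive rate well beyond the nominal 5%:

```python
# Sketch (simulation): trying many post-hoc analyses on null data and
# reporting whichever is 'significant' -- the essence of P-hacking.
import random
random.seed(2)

def one_test(n=30):
    """One analysis on pure-noise data; 'significant' about 5% of the time."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    se = (sum((x - mean) ** 2 for x in xs) / (n - 1) / n) ** 0.5
    return abs(mean / se) > 2.0

def hacked(analyses=10):
    """Report 'an effect' if any of several analyses comes out significant."""
    return any(one_test() for _ in range(analyses))

rate = sum(hacked() for _ in range(2000)) / 2000
print(f"false-positive rate with 10 analyses: {rate:.2f}")  # far above 0.05
```

Ten shifting definitions of 'atypical handedness' behave just like the ten analyses here: each is honest in isolation, but choosing among them after seeing the data makes a spurious finding the likely outcome.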

This practice, now known as P-hacking, was once endemic to most branches of science that rely on P values to test significance of results, yet few people realized how seriously it could distort findings. That started to change in 2011, with an elegant, comic paper in which the authors crafted analyses to prove that listening to the Beatles could make undergraduates younger (J. P. Simmons et al. Psychol. Sci. 22, 1359–1366; 2011). “Undisclosed flexibility,” they wrote, “allows presenting anything as significant.”