Scientific Hubris and Global Warming

Reblogged from Watts Up With That:

Guest Post by Gregory Sloop

Notwithstanding portrayals in the movies as eccentrics who frantically warn humanity about genetically modified dinosaurs, aliens, and planet-killing asteroids, the popular image of a scientist is probably closer to the humble, bookish Professor, who used his intellect to save the castaways on practically every episode of Gilligan's Island. The stereotypical scientist is seen as driven by a magnificent call, not some common, base motive. In this view, science progresses unerringly to the truth.

This picture was challenged by the influential twentieth-century philosopher of science Thomas Kuhn, who held that scientific "truth" is determined not as much by facts as by the consensus of the scientific community. The influence of thought leaders, the awarding of grants, and the scorn of dissenters are used to protect the mainstream theory. Unfortunately, science only makes genuine progress when the mainstream theory is disproved, in what Kuhn called a "paradigm shift." Data which conflict with the mainstream paradigm are ignored instead of being used to develop a better one. Like most people, scientists are ultimately motivated by financial security, career advancement, and the desire for admiration. Thus, nonscientific considerations impact scientific "truth."

This corruption of a noble pursuit permits scientific hubris to prosper. It can only exist when scientists are less than dispassionate seekers of truth. Scientific hubris condones suppression of criticism, promotes unfounded speculation, and excuses rejection of conflicting data. Consequently, scientific hubris allows errors to persist indefinitely. However, science advances so slowly the public usually has no idea of how often it is wrong.

Reconstructing extinct organisms from fossils requires scientific hubris. The fewer the number of fossils available, the greater the hubris required for reconstruction. The original reconstruction of the peculiar organism Hallucigenia, which lived 505 million years ago, showed it upside down and backwards. This was easily corrected when more fossils were found and no harm was done.

In contrast, scientific hubris causes harm when bad science is used to influence behavior. The 17th-century microscopist Nicolaas Hartsoeker drew a complete human within the head of a sperm, speculating that this was what might be beneath the "skin" of a sperm. Belief in preformation, the notion that sperm and eggs contain complete humans, was common at the time. His drawing could easily have been used to demonstrate why every sperm is sacred and masturbation is a sin.

Scientific hubris has claimed many, many lives. In the mid-19th century, the medical establishment rejected Ignaz Semmelweis' recommendation that physicians disinfect their hands prior to examining pregnant women, despite his unequivocal demonstration that this practice slashed the death rate due to obstetric infections. Because of scientific hubris, "medicine has a dark history of opposing new ideas and those who proposed them." It was only when the germ theory of disease was established two decades later that the body of evidence supporting Semmelweis' work became impossible to ignore. The greatest harm caused by scientific hubris is that it slows progress towards the truth.

Record keeping of earth’s surface temperature began around 1880, so there is less than 150 years of quantitative data about climate, which evolves at a glacial pace. Common sense suggests that quantitative data covering multiple warming and cooling periods is necessary to give perspective about the evolution of climate. Only then will scientists be able to make an educated guess whether the 1.5 degrees Fahrenheit increase in earth’s temperature since 1930 is the beginning of sustained warming which will negatively impact civilization, or a transient blip.

The inconvenient truth is that science is in the data acquisition phase of climate study, which must be completed before there is any chance of predicting climate, if it is predictable [vide infra]. Hubris goads scientists into giving answers even when the data are insufficient.

To put our knowledge about climate in perspective, imagine an investor has the first two weeks of data on the performance of a new stock market. Will those data allow the investor to know where the stock market will be in twenty years? No, because the behavior of the many variables which determine the performance of a stock market is unpredictable. Currently, predicting climate is no different.

Scientists use data from proxies to estimate earth’s surface temperature when the real temperature is unknowable. In medicine, these substitutes are called “surrogate markers.” Because hospital laboratories are rigorously inspected and the reproducibility, accuracy, and precision of their data is verified, hospital laboratory practices provide a useful standard for evaluating the quality of any scientific data.

Surrogate markers must be validated by showing that they correlate with "gold standard" data before they are used clinically. Comparison of data from tree growth rings, a surrogate marker for earth's surface temperature, with the actual temperature shows that the correlation between the two is worsening for unknown reasons, a discrepancy known as the "divergence problem." Earth's temperature is only one factor which determines tree growth. Because soil conditions, genetics, rainfall, competition for nutrients, disease, age, fire, atmospheric carbon dioxide concentrations and consumption by herbivores and insects affect tree growth, the correlation between growth rings and earth's temperature is imperfect.

Currently, growth rings cannot be regarded as a valid surrogate marker for the temperature of earth’s surface. The cause of the divergence problem must be identified and somehow remedied, and the remedy validated before growth rings are a credible surrogate marker or used to validate other surrogate markers.
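To make the idea of validating a surrogate marker concrete, here is a minimal R sketch of the kind of check involved. The two series are synthetic stand-ins invented purely for illustration, not real proxy or instrumental data; the point is only the mechanics of measuring calibration and watching for divergence.

set.seed(1)
years <- 1900:2000
# synthetic "instrumental" temperature and a proxy that partly tracks it (illustration only)
temperature <- 0.01 * (years - 1900) + rnorm(length(years), sd = 0.2)
ring_index  <- 0.8 * temperature + rnorm(length(years), sd = 0.2)

# calibration: correlation over the full period of overlap
cor(ring_index, temperature)

# divergence check: correlation within a sliding 31-year window
window <- 31
roll_cor <- sapply(seq_len(length(years) - window + 1), function(i) {
  idx <- i:(i + window - 1)
  cor(ring_index[idx], temperature[idx])
})
plot(years[window:length(years)], roll_cor, type = "l",
     xlab = "end year of 31-year window", ylab = "correlation")

A validated surrogate would show a stable, high correlation across the whole record; a falling correlation in the most recent windows is the signature of the divergence problem described above.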

Data from ice cores, boreholes, corals, and lake and ocean sediments are also used as surrogate markers. These are said to correlate with each other. Surrogate marker data are interpreted as showing a warm period between c. 950 and c. 1250, which is sometimes called the "Medieval Climate Optimum," and a cooler period called the "Little Ice Age" between the 16th and 19th centuries. The data from these surrogate markers have not been validated by comparison with a quantitative standard. Therefore, they give qualitative, not quantitative, data. In medical terms, qualitative data are considered to be only "suggestive" of a diagnosis, not diagnostic. This level of diagnostic certainty is typically used to justify further diagnostic testing, not definitive therapy.

Anthropogenic global warming is often presented as fact. According to the philosopher Sir Karl Popper, a single conflicting observation is sufficient to disprove a theory. For example, the theory that all swans are white is disproved by one black swan. Therefore, the goal of science is to disprove, not prove, a theory. Popper described how science should be practiced, while Kuhn described how science is actually practiced. Few theories satisfy Popper's criterion; those that do, such as relativity, quantum mechanics, and plate tectonics, are highly esteemed and above controversy. These theories come as close to settled science as is possible.

Data conflict about anthropogenic global warming. Using data from ice cores and lake sediments, Professor Gernot Patzelt argues that over the last 10,000 years, 65% of the time earth’s temperature was warmer than today. If his data are correct, human deforestation and carbon emissions are not required for global warming and intervention to forestall it may be futile.

The definitive test of anthropogenic global warming would be to study a duplicate earth without humans. Realistically, the only way is to develop a successful computer model. However, modeling climate may be impossible because climate is a chaotic system. Small changes in the initial state of a chaotic system can cause very different outcomes, making it unpredictable. This is commonly called the "butterfly effect" because of the possibility that an action as fleeting as the beating of a butterfly's wings can affect distant weather. This phenomenon also limits the predictability of weather.

Between 1880 and 1920, increasing atmospheric carbon dioxide concentrations were not associated with global warming. These variables did correlate between 1920 and 1940 and from around 1970 to today. These associations may appear to be compelling evidence for global warming, but associations cannot prove cause and effect. One example of a misleading association was published in a paper entitled “The prediction of lung cancer in Australia 1939–1981.” According to this paper, “Lung cancer is shown to be predicted from petrol consumption figures for a period of 42 years. The mean time for the disease to develop is discussed and the difference in the mortality rate for male and females is explained.” Obviously, gasoline use does not cause lung cancer.

The idea that an association is due to cause and effect is so attractive that these claims continue to be published. Recently, an implausible association between watching television and chronic inflammation was reported. In their book Follies and Fallacies in Medicine, Skrabanek and McCormick wrote, “As a result of failing to make this distinction [between association and cause], learning from experience may lead to nothing more than learning to make the same mistakes with increasing confidence.” Failure to learn from mistakes is another manifestation of scientific hubris. Those who are old enough to remember the late 1970’s may recall predictions of a global cooling crisis based on transient glacial growth and slight global cooling.

The current situation regarding climate change is similar to that confronting cavemen when facing winter and progressively shorter days. Every day there was less time to hunt and gather food and more cold, useless darkness. Shamans must have desperately called for ever harsher sacrifices to stop what otherwise seemed inevitable. Only when science enabled man to predict the return of longer days was sacrifice no longer necessary.

The mainstream position about anthropogenic global warming is established. The endorsement of the United Nations, U.S. governmental agencies, politicians, and the media buttresses this position. This nonscientific input has contributed to the perception that anthropogenic global warming is settled science. A critical evaluation of the available data about global warming, and anthropogenic global warming in particular, allows only a guess about the future climate. It is scientific hubris not to recognize that guess for what it is.


Half of 21st Century Warming Due to El Nino

Reblogged from Dr.RoySpencer.com  [HiFast bold]

May 13th, 2019 by Roy W. Spencer, Ph. D.

A major uncertainty in figuring out how much of recent warming has been human-caused is knowing how much nature has caused. The IPCC is quite sure that nature is responsible for less than half of the warming since the mid-1900s, but politicians, activists, and various green energy pundits go even further, behaving as if warming is 100% human-caused.

The fact is we really don’t understand the causes of natural climate change on the time scale of an individual lifetime, although theories abound. For example, there is plenty of evidence that the Little Ice Age was real, and so some of the warming over the last 150 years (especially prior to 1940) was natural — but how much?

The answer makes a huge difference to energy policy. If global warming is only 50% as large as is predicted by the IPCC (which would make it only 20% of the problem portrayed by the media and politicians), then the immense cost of renewable energy can be avoided until we have new cost-competitive energy technologies.

The recently published paper Recent Global Warming as Confirmed by AIRS used 15 years of infrared satellite data to obtain a rather strong global surface warming trend of +0.24 C/decade. Objections have been made to that study by me (e.g. here) and others, not the least of which is the fact that the 2003-2017 period addressed had a record warm El Nino near the end (2015-16), which means the computed warming trend over that period is not entirely human-caused warming.

If we look at the warming over the 19-year period 2000-2018, we see the record El Nino event during 2015-16 (all monthly anomalies are relative to the 2001-2017 average seasonal cycle):

Fig. 1. 21st Century global-average temperature trends (top) averaged across all CMIP5 climate models (gray), HadCRUT4 observations (green), and UAH tropospheric temperature (purple). The Multivariate ENSO Index (MEI, bottom) shows the upward trend in El Nino activity over the same period, which causes a natural enhancement of the observed warming trend.

We also see that the average of all of the CMIP5 models’ surface temperature trend projections (in which natural variability in the many models is averaged out) has a warmer trend than the observations, despite the trend-enhancing effect of the 2015-16 El Nino event.

So, how much of an influence did that warm event have on the computed trends? The simplest way to address that is to use only the data before that event. To be somewhat objective about it, we can take the period over which there is no trend in El Nino (and La Nina) activity, which happens to be 2000 through June, 2015 (15.5 years):

Fig. 2. As in Fig. 1, but for the 15.5 year period 2000 to June 2015, which is the period over which there was no trend in El Nino and La Nina activity.

Note that the observed trend in HadCRUT4 surface temperatures is nearly cut in half compared to the CMIP5 model average warming over the same period, and the UAH tropospheric temperature trend is almost zero.

One might wonder why the UAH LT trend is so low for this period, even though in Fig. 1 it is not that far below the surface temperature observations (+0.12 C/decade versus +0.16 C/decade for the full period through 2018). So, I examined the RSS version of LT for 2000 through June 2015, which had a +0.10 C/decade trend. For a more apples-to-apples comparison, the CMIP5 surface-to-500 hPa layer average temperature averaged across all models is +0.20 C/decade, so even RSS LT (which usually has a warmer trend than UAH LT) has only one-half the warming trend as the average CMIP5 model during this period.
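For readers who want to reproduce this sort of comparison, here is a minimal R sketch of the trend calculation. It assumes a data frame anoms with columns date (class Date) and anom (monthly global-mean anomaly in °C); the object name, the column names, and the simple least-squares fit are illustrative assumptions, not a description of how the figures above were actually produced.

# anoms: assumed data frame of monthly anomalies with columns 'date' and 'anom'
trend_per_decade <- function(anoms, start, end) {
  sub <- anoms[anoms$date >= as.Date(start) & anoms$date <= as.Date(end), ]
  t_years <- as.numeric(sub$date - sub$date[1]) / 365.25   # elapsed time in years
  fit <- lm(sub$anom ~ t_years)                            # ordinary least squares
  10 * unname(coef(fit)[2])                                # slope in deg C per decade
}

# full period versus the sub-period with no trend in El Nino / La Nina activity
trend_per_decade(anoms, "2000-01-01", "2018-12-31")
trend_per_decade(anoms, "2000-01-01", "2015-06-30")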

So, once again, we see that the observed rate of warming — when we ignore the natural fluctuations in the climate system (which, along with severe weather events, dominate “climate change” news) — is only about one-half of that projected by climate models at this point in the 21st Century. This fraction is consistent with the global energy budget study of Lewis & Curry (2018), which analyzed 100 years of global temperatures and ocean heat content changes, and also found that the climate system is only about 1/2 as sensitive to increasing CO2 as climate models assume.

It will be interesting to see if the new climate model assessment (CMIP6) produces warming more in line with the observations. From what I have heard so far, this appears unlikely. If history is any guide, this means the observations will continue to need adjustments to fit the models, rather than the other way around.

Mighty Greenland glacier slams on brakes

Tallbloke's Talkshop

Jakobshavn glacier, West Greenland [image credit: Wikipedia]
Even the climate-alarm-oriented BBC has finally had to admit the inconvenient truth about Greenland’s largest glacier. Instead of dropping in height by 20 m a year, it’s now thickening by 20 m a year. This isn’t supposed to happen when one of the stock phrases of the fearmongering media is ‘the rapidly melting Arctic’. Of course logic says that since glaciers can grow naturally they can also retreat naturally, despite attempts to blame humans.

European satellites have detailed the abrupt change in behaviour of one of Greenland’s most important glaciers, says BBC News.

In the 2000s, Jakobshavn Isbrae was the fastest flowing ice stream on the island, travelling at 17 km a year.

As it sped to the ocean, its front end also retreated and thinned, dropping in height by as much as 20 m a year.

But now it’s all change. Jakobshavn is travelling…


Continuous observations in the North Atlantic challenge the current view about ocean circulation variability

Reblogged from Watts Up With That:

Kevin Kilty

May 10, 2019

[HiFast Note: Figures A and B added:

Figure A. OSNAP Array Schematic, source: https://www.o-snap.org/

Figure B. OSNAP Array, source: https://www.o-snap.org/observations/configuration/]

Figure 1: Transect of the North Atlantic basins showing color-coded salinity, and gray vertical lines showing mooring locations of OSNAP sensor arrays. (Figure from OSNAP Configuration page)

From Physics Today (April 2019 issue, p. 19) [1]:

The overturning of water in the North Atlantic depends on the meridional overturning circulation (MOC), wherein warm surface waters in the tropical Atlantic move to higher latitudes, losing heat and moisture to the atmosphere along the way. In the North Atlantic and Arctic this water, now saline and cold, sinks to produce North Atlantic Deep Water (NADW). It completes its circulation by flowing back toward the tropics or into other ocean basins at depth, and then subsequently upwelling through a variety of mechanisms. The time scale of this overturning is 600 years or so [2].

The MOC transports large amounts of heat from the tropics toward the poles, and is thought to be responsible for the relatively mild climate of northern Europe. The heat being transferred from the ocean surface back into the atmosphere at high latitudes is as large as 50 W/m2, which is roughly equivalent to the solar radiation reaching the surface at high latitudes during winter months [2].

In order to evaluate models of ocean overturning, oceanographers have relied upon hydrographic research cruises. But the time increment between successive cruises is often long, and infrequent sampling can neither measure long-term trends reliably nor gauge current ocean dynamics.

To get a better handle on MOC behavior, an array of sensors known as the Overturning in the Subpolar North Atlantic Program (OSNAP), which continuously monitors temperature, salinity, and velocity, was recently deployed across the region at multiple depths. Figure 1 shows sensor moorings in relation to the various ocean basins of the North Atlantic. Figure 2 shows data from the first 21 months of operation, and displays a rather large variability of overturning in the eastern North Atlantic between Greenland and Scotland that reaches ±10 sverdrups (Sv; 1 Sv = one million cubic meters per second) from month to month and amounts to one-half of the MOC’s total annual transport. Researchers had thought that such variability was only possible on time scales of decades or longer.

Figure 2: Twenty-one months of observational data showing large month-to-month variation in MOC flows.

The original experimental design for sensor placement in OSNAP was predicated on much smaller variability of a few Sv per month [3]. The report does not address what impact this surprising level of transport variability has on the validity of the experimental design, but the large variations in flow challenge expectations derived from climate models regarding the relative amount of overturning between the Labrador Sea and the gateway to the Arctic between Greenland and Scotland.

As one oceanographer put it, the process of deep water formation and sinking of the MOC is more complex than people believed, and these results should prepare people to modify their ideas about how the oceans work. This improved data should not only help test and improve climate models, but also produce more realistic estimates of CO2 uptake and storage.

References:

1. Alex Lopatka, "Atlantic water carried northward sinks farther east of previous estimates," Physics Today 72(4), 19 (2019).

2. J. Robert Toggweiler, "The Ocean's Overturning Circulation," Physics Today 47(11), 45 (1994).

3. Susan Lozier, Bill Johns, Fiamma Straneo, and Amy Bower, "Workshop for the Design of a Subpolar North Atlantic Observing System," https://www.whoi.edu/fileserver.do?id=163724&pt=2&p=175489, accessed 05/10/2019.

Curious Correlations

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I got to thinking about the relationship between the Equatorial Pacific, where we find the El Nino/La Nina phenomenon, and the rest of the world. I’ve seen various claims about what happens to the temperature in various places at various lag-times after the Nino/Nina changes. So I decided to take a look.

To do that, I’ve gotten the temperature of the NINO34 region of the Equatorial Pacific. The NINO34 region stretches from 90°W, near South America, out to 170° West in the mid-Pacific, and from 5° North to 5° South of the Equator. I’ve calculated how well correlated that temperature is with the temperatures in the whole world, at various time lags.

To start with, here’s the correlation of what the temperature of the NINO34 region is doing with what the rest of the world is doing, with no time lag. Figure 1 shows which areas of the planet move in step with or in opposition to the NINO34 region with no lag.

Figure 1. Correlation of the temperature of the NINO34 region (90°-170°W, 5°N/S) with gridcell temperatures of the rest of the globe. Correlation values greater than 0.6 are all shown in red.

Now, perfect correlation is where two variables move in total lockstep. It has a value of 1.0. And if there is perfect anti-correlation, meaning whenever one variable moves up the other moves down, that has a value of minus 1.0.

There are a couple of interesting points about that first look, showing correlations with no lag. The Indian Ocean moves very strongly in harmony with the NINO34 region (red). Hmmm. However, the Atlantic doesn’t do that. Again hmmm. Also, on average northern hemisphere land is positively correlated with the NINO34 region (orange), and southern hemisphere land is the opposite, negatively correlated (blue).

Next, with a one-month lag to give the Nino/Nina effects time to start spreading around the planet, we see the following:

Figure 2. As in Figure 1, but with a one month lag between the NINO34 temperature and the rest of the world. In other words, we’re comparing each month’s temperature with the previous month’s NINO34 temperature.
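As a sketch of the calculation behind these maps, the following R fragment shows one way to compute lagged correlations. It assumes a monthly NINO34 temperature series nino34 and a matrix grid_temp with one row per month (aligned with nino34) and one column per gridcell; those names and that data layout are assumptions for illustration, not the author's actual code.

# correlate each gridcell's temperature with the NINO34 temperature 'lag' months earlier
lagged_correlation <- function(nino34, grid_temp, lag = 0) {
  n <- length(nino34)
  if (lag > 0) {
    x <- nino34[1:(n - lag)]                      # earlier NINO34 values
    y <- grid_temp[(1 + lag):n, , drop = FALSE]   # later gridcell values
  } else {
    x <- nino34
    y <- grid_temp
  }
  apply(y, 2, function(cell) cor(x, cell, use = "pairwise.complete.obs"))
}

cor_lag0 <- lagged_correlation(nino34, grid_temp, lag = 0)   # as in Figure 1
cor_lag1 <- lagged_correlation(nino34, grid_temp, lag = 1)   # as in Figure 2

Each resulting vector of correlations can then be mapped back onto its gridcells to produce plots like the ones shown here.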

Here, after a month, the North Pacific and the North Atlantic both start to feel the effects. Their correlation switches from negative (blues and greens) to positive (red-orange). Next, here’s the situation after a two-month lag.

Figure 3. As in previous figures, but with a two month lag.

I found this result most surprising. Two months after a Nino/Nina change, the entire Northern Hemisphere strongly tends to move in the same direction as the NINO34 region moved two months earlier … and at the same time, the entire Southern Hemisphere moves in opposition to what the NINO34 region did two months earlier.

Hmmm …

And here’s the three-month lag:

Figure 4. As in previous figures, but with a three month lag.

An interesting feature of the above figure is that the good correlation of the north-eastern Pacific Ocean off the west coast of North America does not extend over the continent itself.

Finally, after four months, the hemispherical pattern begins to fall apart.

Figure 5. As in previous figures, but with a four & five month lag.

Even at five months, curious patterns remain. In the northern hemisphere, the land is all negatively correlated with NINO34, and the ocean is positively correlated. But in the southern hemisphere, the land is all positively correlated and the ocean negative.

Note that this hemispheric land-ocean difference with a five-month lag is the exact opposite of the land-ocean difference with no lag shown in Figure 1.

Now … what do I make of all this?

The first thing that it brings up for me is the astounding complexity of the climate system. I mean, who would have guessed that the two hemispheres would have totally opposite strong responses to the Nino/Nina phenomenon? And who would have predicted that the land and the ocean would react in opposite directions to the Nino/Nina changes right up to the very coastlines?

Second, it would seem to offer some ability to improve long-range forecasting for certain specific areas. Positive correlation with Hawaii, North Australia, Southern Africa, and Brazil is good up to four to five months out.

Finally, it strikes me that I can run this in reverse. By that, I mean I can find all areas of the planet that are able to predict the future temperature at some pre-selected location. Like, say, what areas of the globe correlate well with whatever the UK will be doing two months from now?

Hmmm indeed …

Warmest regards to all, the mysteries of this wondrous world are endless.

w.

Comparison of global climatologies confirms warming of the global ocean

Reblogged from Watts Up With That:

Institute of Atmospheric Physics, Chinese Academy of Sciences


IMAGE: Deployment of an APEX float from a German research ship.

Credit: Argo

The global ocean represents the most important component of the Earth climate system. The oceans accumulate heat energy and transport heat from the tropics to higher latitudes, responding very slowly to changes in the atmosphere. Digital gridded climatologies of the global ocean provide helpful background information for many oceanographic, geochemical and biological applications. Because both the global ocean and the observational basis are changing, periodic updates of ocean climatologies are needed, which is in line with the World Meteorological Organization’s recommendations to provide decadal updates of atmospheric climatologies.

“Constructing ocean climatologies consists of several steps, including data quality control, adjustments for instrumental biases, and filling the data gaps by means of a suitable interpolation method”, says Professor Viktor Gouretski of the University of Hamburg and a scholarship holder of the Chinese Academy of Sciences’ President’s International Fellowship Initiative (PIFI) at the Institute of Atmospheric Physics, Chinese Academy of Sciences, and the author of a report recently published in Atmospheric and Oceanic Science Letters.

“Sea water is essentially a two-component system, with a nonlinear dependency of density on temperature and salinity, with the mixing in the ocean interior taking place predominantly along isopycnal surfaces. Therefore, interpolation of oceanic parameters should be performed on isopycnals rather than on isobaric levels, to minimize production of artificial water masses. The differences between these two methods of data interpolation are most pronounced in the high-gradient regions like the Gulf Stream, Kuroshio, and Antarctic Circumpolar Current,” continues Professor Gouretski.

In his recent report, Professor Gouretski presents a new World Ocean Circulation Experiment/ARGO Global Hydrographic Climatology (WAGHC), with temperature and salinity averaged on local isopycnal surfaces. Based on high-quality ship-board data and temperature and salinity profiles from ARGO floats, the new climatology has a monthly resolution and is available on a 1/4° latitude-longitude grid.

“We have compared the WAGHC climatology with NOAA’s WOA13 gridded climatology. These climatologies represent alternative digital products, but the WAGHC has benefited from the addition of new ARGO float data and hydrographic data from the North Polar regions”, says Professor Gouretski. “The two climatologies characterize mean ocean states that are 25 years apart, and the zonally averaged section of the WAGHC-minus-WOA13 temperature difference clearly shows the ocean warming signal, with a mean temperature increase of 0.05°C for the upper 1500-m layer since 1984”.
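As a rough illustration of the kind of comparison described, here is an R sketch that computes an area- and depth-weighted mean temperature difference between two gridded climatologies over the upper 1500 m. The array names, the assumption of a common [longitude, latitude, depth] grid, and the simple cosine-latitude weighting are all illustrative; this is not the procedure used in the published report.

# waghc_T, woa13_T: assumed arrays [lon, lat, depth] of temperature on a common grid
# lat: latitude vector in degrees; depth: depth vector in meters, increasing downward
upper_ocean_diff <- function(T_new, T_old, lat, depth, max_depth = 1500) {
  k  <- which(depth <= max_depth)
  dz <- diff(c(0, depth))[k]                        # layer thicknesses, m
  w_lat <- cos(lat * pi / 180)                      # gridcell area shrinks with latitude
  d <- T_new[, , k] - T_old[, , k]
  w <- outer(rep(1, dim(d)[1]), outer(w_lat, dz))   # weight array matching d
  sum(w * d, na.rm = TRUE) / sum(w * !is.na(d))     # weighted mean difference, deg C
}

upper_ocean_diff(waghc_T, woa13_T, lat, depth)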

Levin Interviews Dr. Patrick Michaels On Climate

From Musings from the Chiefio:

 

Fourteen minutes well spent that show how wrong the “Climate Models” are, and why that matters to all of us, because the EPA “Endangerment Finding” is based 100% on those broken models.

I find it interesting that the Russians have a climate model that works. Wonder if it is open-sourced? If anyone knows, or knows how to get a copy, let me know! It would save a lot of time trying to make one that works from the crap that doesn’t…

The Cooling Rains

Reblogged from Watts Up With That:

Guest Post by Willis Eschenbach

I took another ramble through the Tropical Rainfall Measurement Mission (TRMM) satellite-measured rainfall data. Figure 1 shows a Pacific-centered and an Atlantic-centered view of the average rainfall from the end of 1997 to the start of 2015 as measured by the TRMM satellite.

Figure 1. Average rainfall, meters per year, on a 1° latitude by 1° longitude basis. The area covered by the satellite data, forty degrees north and south of the Equator, is just under 2/3 of the globe. The blue areas by the Equator mark the InterTropical Convergence Zone (ITCZ). The two black horizontal dashed lines mark the Tropics of Cancer and Capricorn, the lines showing how far north and south the sun travels each year (23.45°, for those interested).

There’s lots of interesting stuff in those two graphs. I was surprised by how much of the planet in general, and the ocean in particular, are bright red, meaning they get less than half a meter (20″) of rain per year.

I was also intrigued by how narrowly the rainfall is concentrated at the average Inter-Tropical Convergence Zone (ITCZ). The ITCZ is where the two great global hemispheres of the atmospheric circulation meet near the Equator. In the Pacific and Atlantic on average the ITCZ is just above the Equator, and in the Indian Ocean, it’s just below the Equator. However, that’s just on average. Sometimes in the Pacific, the ITCZ is below the Equator. You can see kind of a mirror image as a light orange horizontal area just below the Equator.

Here’s an idealized view of the global circulation. On the left-hand edge of the globe, I’ve drawn a cross section through the atmosphere, showing the circulation of the great atmospheric cells.

Figure 2. Generalized overview of planetary atmospheric circulation. At the ITCZ along the Equator, tall thunderstorms take warm surface air, strip out the moisture as rain, and drive the warm dry air vertically. This warm dry air eventually subsides somewhere around 25-30°N and 25-30°S of the Equator, creating the global desert belts at around those latitudes.

The ITCZ is shown in cross-section at the left edge of the globe in Figure 2. You can see the general tropical circulation. Surface air in both hemispheres moves towards the Equator. It is warmed there and rises. This thermal circulation is greatly sped up by air driven vertically at high rates of speed through the tall thunderstorm towers. These thunderstorms form all along the ITCZ. These thunderstorms provide much of the mechanical energy that drives the atmospheric circulation of the Hadley cells.

With all of that as prologue, here’s what I looked at. I got to thinking, was there a trend in the rainfall? Is it getting wetter or drier? So I looked at that using the TRMM data. Figure 3 shows the annual change in rainfall, in millimeters per year, on a 1° latitude by 1° longitude basis.

Figure 3. Annual change in the rainfall, 1° latitude x 1° longitude gridcells.
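Here is a minimal R sketch of how such a per-gridcell trend map can be built. It assumes the TRMM rainfall has been read into a 3-D array rain with dimensions [longitude, latitude, month]; the array name, the layout, and the simple least-squares fit are assumptions for illustration rather than the author's actual processing.

# rain: assumed array [lon, lat, month] of monthly rainfall
rainfall_trend <- function(rain) {
  nmonth  <- dim(rain)[3]
  t_years <- (seq_len(nmonth) - 1) / 12            # time axis in years
  apply(rain, c(1, 2), function(series) {
    if (sum(!is.na(series)) < 2) return(NA_real_)
    unname(coef(lm(series ~ t_years))[2])          # slope: rainfall change per year
  })
}

trend_map <- rainfall_trend(rain)
# simple (unweighted) average of the map; a proper global mean would weight by cos(latitude)
mean(trend_map, na.rm = TRUE)

The average of that trend map is the overall figure used in the discussion below.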

I note that the increase in rain is greater on the ocean vs land, is greatest at the ITCZ, and is generally greater in the tropics.

Why is this overall trend in rainfall of interest? It gives us a way to calculate how much this cools the surface. Remember the old saying, what comes down must go up … or perhaps it’s the other way around, same thing. If it rains an extra millimeter of water, somewhere it must have evaporated an extra millimeter of water.

And in the same way that our bodies are cooled by evaporation, the surface of the planet is also cooled by evaporation.

Now, we note above that on average, the increase is 1.33 millimeters of water per year. Metric is nice because volume and size are related. Here’s a great example.

One millimeter of rain falling on one square meter of the surface is one liter of water which is one kilo of water. Nice, huh?

So the extra 1.33 millimeters of rain per year is equal to 1.33 extra liters of water evaporated per square meter of surface area.

Next, how much energy does it take to evaporate that extra 1.33 liters of water per square meter so it can come down as rain? The calculations are in the endnotes. It turns out that this 1.33 extra liters per year represents an additional cooling of a tenth of a watt per square meter (0.10 W/m2).

And how does this compare to the warming from increased longwave radiation due to the additional CO2? Well, again, the calculations are in the endnotes. The answer is, per the IPCC calculations, CO2 alone over the period gave a yearly increase in downwelling radiation of ~ 0.03 W/m2. Generally, they double that number to allow for other greenhouse gases (GHGs), so for purposes of discussion, we’ll call it 0.06 W/m2 per year.

So over the period of this record, we have increased evaporative cooling of 0.10 W/m2 per year, and we have increased radiative warming from GHGs of 0.06 W/m2 per year.

Which means that over that period and that area at least, the calculated increase in warming radiation from GHGs was more than counterbalanced by the observed increase in surface cooling from increased evaporation.

Regards to all,

w.

As usual: please quote the exact words you are discussing so we can all understand exactly what and who you are replying to.

Additional Cooling

Finally, note that this calculation is only evaporative cooling. There are other cooling mechanisms at work that are related to rainstorms. These include:

• Increased cloud albedo reflecting hundreds of watts/square meter of sunshine back to space

• Moving surface air to the upper troposphere where it is above most GHGs and freer to cool to space.

• Increased ocean surface albedo from whitecaps, foam, and spume.

• Cold rain falling from a layer of the troposphere that is much cooler than the surface.

• Rain re-evaporating as it falls to cool the atmosphere

• Cold wind entrained by the rain blowing outwards at surface level to cool surrounding areas

• Dry descending air between rain cells and thunderstorms allowing increased longwave radiation to space.

Together, these form a very strong temperature-regulating mechanism that prevents overheating of the planet.

Calculation of energy required to evaporate 1.33 liters of water.

library(gsw)   # Gibbs SeaWater (TEOS-10) toolbox, provides gsw_latentheat_evap_t()

secsperyear <- 365.25 * 24 * 3600   # seconds per year (not in the original snippet; chosen to reproduce the result below)

# latent heat of evaporation, joules/kg, @ salinity 35 psu, temperature 24°C
latevap <- gsw_latentheat_evap_t(35, 24)
latevap
# [1] 2441369

# joules/yr/m2 required to evaporate 1.33 liters/yr/m2 (1 liter of water = 1 kg)
evapj <- latevap * 1.33
evapj
# [1] 3247021

# convert joules/yr/m2 to W/m2
evapwm2 <- evapj / secsperyear
evapwm2
# [1] 0.1028941

Note: the exact answer varies depending on seawater temperature, salinity, and density. These only make a difference of a couple of percent (say 0.1043 vs 0.1028941). I’ve used average values.

Calculation of downwelling radiation change from CO2 increase.

# coshort: the author's monthly CO2 series (ppmv), Dec 1997 through Mar 2015
# last() as in the xts/zoo (or dplyr) packages

# starting CO2, ppmv, Dec 1997
thestart <- as.double(coshort[1])
thestart
# [1] 364.38

# ending CO2, ppmv, Mar 2015
theend <- as.double(last(coshort))
theend
# [1] 401.54

# longwave forcing increase, W/m2 per year, over 17 years 4 months
# (3.7 W/m2 per doubling of CO2, the standard simplified forcing expression)
3.7 * log(theend / thestart, 2) / 17.33
# [1] 0.0299117

Fake climate science and scientists

Reblogged from Watts Up With That:

Alarmists game the system to enrich and empower themselves, and hurt everyone else

by Paul Driessen

The multi-colored placard in front of a $2-million home in North Center Chicago proudly proclaimed, “In this house we believe: No human is illegal” – and “Science is real” (plus a few other liberal mantras).

I knew right away where the owners stood on climate change, and other hot-button political issues. They would likely tolerate no dissension or debate on “settled” climate science or any of the other topics.

But they have it exactly backward on the science issue. Real science is not belief – or consensus, 97% or otherwise. Real science constantly asks questions, expresses skepticism, reexamines hypotheses and evidence. If debate, skepticism and empirical evidence are prohibited – it’s pseudo-science, at best.

Real science – and real scientists – seek to understand natural phenomena and processes. They pose hypotheses that they think best explain what they have witnessed, then test them against actual evidence, observations and experimental data. If the hypotheses (and predictions based on them) are borne out by their subsequent findings, the hypotheses become theories, rules, laws of nature – at least until someone finds new evidence that pokes holes in their assessments, or devises better explanations.

Real science does not involve simply declaring that you “believe” something. It’s not immutable doctrine. It doesn’t claim “science is real” – or demand that a particular scientific explanation be carved in stone. Earth-centric concepts gave way to a sun-centered solar system. Miasma disease beliefs surrendered to the germ theory. The certainty that continents are locked in place was replaced by plate tectonics (and the realization that you can’t stop continental drift, any more than you can stop climate change).

Real scientists often employ computers to analyze data more quickly and accurately, depict or model complex natural systems, or forecast future events or conditions. But they test their models against real-world evidence. If the models, observations and predictions don’t match up, real scientists modify or discard the models, and the hypotheses behind them. They engage in robust discussion and debate.

They don’t let models or hypotheses become substitutes for real-world evidence and observations. They don’t alter or “homogenize” raw or historic data to make it look like the models actually work. They don’t hide their data and computer algorithms (AlGoreRythms?), restrict peer review to closed circles of like-minded colleagues who protect one another’s reputations and funding, claim “the debate is over,” or try to silence anyone who dares to ask inconvenient questions or find fault with their claims and models. They don’t concoct hockey stick temperature graphs that can be replicated by plugging in random numbers.

In the realm contemplated by the Chicago yard sign, we ought to be doing all we can to understand Earth’s highly complex, largely chaotic, frequently changing climate system – all we can to figure out how the sun and other powerful forces interact with each other. Only in that way can we accurately predict future climate changes, prepare for them, and not waste money and resources chasing goblins.

But instead, we have people in white lab coats masquerading as real scientists. They’re doing what I just explained true scientists don’t do. They also ignore fluctuations in solar energy output and numerous other powerful, interconnected natural forces that have driven climate change throughout Earth’s history. They look only (or 97% of the time) at carbon dioxide as the principal or sole driving force behind current and future climate changes – and blame every weather event, fire and walrus death on manmade CO2.

Even worse, they let their biases drive their research and use their pseudo-science to justify demands that we eliminate all fossil fuel use, and all carbon dioxide and methane emissions, by little more than a decade from now. Otherwise, they claim, we will bring unprecedented cataclysms to people and planet.

Not surprisingly, their bad behavior is applauded, funded and employed by politicians, environmentalists, journalists, celebrities, corporate executives, billionaires and others who have their own axes to grind, their own egos to inflate – and their intense desire to profit from climate alarmism and pseudo-science.

Worst of all, while they get rich and famous, their immoral actions impoverish billions and kill millions, by depriving them of the affordable, reliable fossil fuel energy that powers modern societies.

And still these slippery characters endlessly repeat the tired trope that they “believe in science” – and anyone who doesn’t agree to “keep fossil fuels in the ground” to stop climate change is a “science denier.”

When these folks and the yard sign crowd brandish the term “science,” political analyst Robert Tracinski suggests, it is primarily to “provide a badge of tribal identity” – while ironically demonstrating that they have no real understanding of or interest in “the guiding principles of actual science.”

Genuine climate scientist (and former chair of the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology) Dr. Judith Curry echoes Tracinski. Politicians like Senator Elizabeth Warren use “science” as a way of “declaring belief in a proposition which is outside their knowledge and which they do not understand…. The purpose of the trope is to bypass any meaningful discussion of these separate questions, rolling them all into one package deal – and one political party ticket,” she explains.

The ultimate purpose of all this, of course, is to silence the dissenting voices of evidence- and reality-based climate science, block creation of a Presidential Committee on Climate Science, and ensure that the only debate is over which actions to take first to end fossil fuel use … and upend modern economies.

The last thing fake/alarmist climate scientists want is a full-throated debate with real climate scientists – a debate that forces them to defend their doomsday assertions, methodologies, data manipulation … and claims that solar and other powerful natural forces are minuscule or irrelevant compared to manmade carbon dioxide that constitutes less than 0.02% of Earth’s atmosphere (natural CO2 adds another 0.02%).

Thankfully, there are many reasons for hope. For recognizing that we do not face a climate crisis, much less threats to our very existence. For realizing there is no need to subject ourselves to punitive carbon taxes or the misery, poverty, deprivation, disease and death that banning fossil fuels would cause.

Between the peak of the great global cooling scare in 1975 until around 1998, atmospheric carbon dioxide levels and temperatures did rise in rough conjunction. But then temperatures mostly flat-lined, while CO2 levels kept climbing. Now actual average global temperatures are already 1 degree F below the Garbage In-Garbage Out computer model predictions. Other alarmist forecasts are also out of touch with reality.

Instead of fearing rising CO2, we should thank it for making crop, forest and grassland plants grow faster and better, benefitting nature and humanity – especially in conjunction with slightly warmer temperatures that extend growing seasons, expand arable land and increase crop production.

The rate of sea level rise has not changed for over a century – and much of what alarmists attribute to climate change and rising seas is actually due to land subsidence and other factors.

Weather is not becoming more extreme. In fact, Harvey was the first Category 3-5 hurricane to make US landfall in a record 12 years – and the number of violent F3 to F5 tornadoes has fallen from an average of 56 per year from 1950 to 1985 to only 34 per year since then.

Human ingenuity and adaptability have enabled humans to survive and thrive in all sorts of climates, even during our far more primitive past. Allowed to use our brains, fossil fuels and technologies, we will deal just fine with whatever climate changes might confront us in the future. (Of course, another nature-driven Pleistocene-style glacier pulling 400 feet of water out of our oceans and crushing Northern Hemisphere forests and cities under mile-high walls of ice truly would be an existential threat to life as we know it.)

So if NYC Mayor Bill De Blasio and other egotistical grand-standing politicians and fake climate scientists want to ban fossil fuels, glass-and-steel buildings, cows and even hotdogs – in the name of preventing “dangerous manmade climate change” – let them impose their schemes on themselves and their own families. The rest of us are tired of being made guinea pigs in their fake-science experiments.

Paul Driessen is senior policy advisor for the Committee For A Constructive Tomorrow (CFACT) and author of articles and books on energy, environmental and human rights issues.

Emperor Penguins “Wiped Out”

NOT A LOT OF PEOPLE KNOW THAT

By Paul Homewood


Thousands of emperor penguin chicks drowned when the sea-ice on which they were being raised was destroyed in severe weather.

The catastrophe occurred in 2016 in Antarctica’s Weddell Sea.

Scientists say the colony at the edge of the Brunt Ice Shelf has collapsed with adult birds showing no sign of trying to re-establish the population.

And it would probably be pointless for them to try as a giant iceberg is about to disrupt the site.

The dramatic loss of the young emperor birds is reported by a team from the British Antarctic Survey (BAS).

Drs Peter Fretwell and Phil Trathan noticed the disappearance of the so-called Halley Bay colony in satellite pictures.

It is possible even from 800 km up to spot the animals’ excrement, or guano, on the white ice and then to estimate the likely size of any gathering.

But the Brunt population, which had sustained…
