Albeit without all the variables in place? Here's yet another effect not taken into account in AGW computer models either. Sigh.
New map charts a 'leaky' Earth
Crucially, the researchers at the University of British Columbia say, it could help unravel the hidden underground movements of most of the planet's fresh water -- water that is not taken into account in computer models used to predict climate.
"Groundwater makes up 99 percent of the fresh unfrozen water on Earth," says UBC researcher Tom Gleeson. "That huge store could somehow modulate the climate. There may be really complex interactions that we don't appreciate."
Some climate alarmists would have us believe that these storms are yet another baleful consequence of man-made CO2 emissions. In addition to the latest weather events, they also point to recent cyclones in Burma, last winter's fatal chills in Nepal and Bangladesh, December's blizzards in Britain, and every other drought, typhoon and unseasonable heat wave around the world.
But is it true? To answer that question, you need to understand whether recent weather trends are extreme by historical standards. The Twentieth Century Reanalysis Project is the latest attempt to find out, using super-computers to generate a dataset of global atmospheric circulation from 1871 to the present.
As it happens, the project's initial findings, published last month, show no evidence of an intensifying weather trend. "In the climate models, the extremes get more extreme as we move into a doubled CO2 world in 100 years," atmospheric scientist Gilbert Compo, one of the researchers on the project, tells me from his office at the University of Colorado, Boulder. "So we were surprised that none of the three major indices of climate variability that we used show a trend of increased circulation going back to 1871."
In other words, researchers have yet to find evidence of more-extreme weather patterns over the period, contrary to what the models predict. "There's no data-driven answer yet to the question of how human activity has affected extreme weather," adds Roger Pielke Jr., another University of Colorado climate researcher.
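As an aside for anyone who wants to see what "no trend in the indices" means in practice: the basic check is to fit a least-squares line to an annual index and look at the slope. Here's a minimal sketch in Python; the series is a made-up placeholder, not the project's actual data.

```python
import numpy as np

# Hypothetical placeholder: an annual circulation index, 1871 to present.
years = np.arange(1871, 2011)
index = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=years.size)

# Least-squares linear fit; the slope is the trend in index units per year.
slope, intercept = np.polyfit(years, index, deg=1)
print(f"trend: {slope:+.4f} per year")
# A slope statistically indistinguishable from zero is "no trend" --
# which is what the project reports for all three indices.
```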
For those who believed that LEDs were a clean source of light:
LED lights' toxic chemicals
Looks like tungsten is still the best of the bunch.
The report says they found 'varying amounts' of these toxins, and its recommendation is to reduce them.
They still use far less energy than incandescents, so I'd say it's still worth pursuing. Good on them for calling this out, though, to push for improvements.
I don't have a high opinion of computer models in the first place, because they are simply that: models. Guesstimates based on only part of the equation (rarely all of it, and never all of it in the case of climate models), with the programmers' bias built in. Wonderful little exercises in "what if," but certainly nothing you can extract hard science from.
What the author has done is plot sunspots and the Pacific Decadal Oscillation plus the Atlantic Multidecadal Oscillation (the most significant ocean oscillations) against the temperature record... and come up with a 0.96 correlation (a perfect correlation being 1)!!
Compare that to the 0.44 correlation of CO2 vs. temperature.
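For anyone who wants to check numbers like these themselves, here's a minimal sketch of how such a correlation is computed (the standard Pearson formula). The arrays are hypothetical placeholders, not the author's data; with real annual series you'd load them from the source records.

```python
import numpy as np

# Hypothetical placeholder series -- substitute the actual annual values of a
# combined sunspot/PDO/AMO index and the temperature anomaly record.
combined_index = np.array([-0.5, -0.2, 0.1, 0.3, 0.6, 0.9])
temp_anomaly   = np.array([-0.4, -0.1, 0.0, 0.2, 0.5, 0.8])

def pearson_r(x, y):
    """Standard Pearson correlation: covariance divided by the product
    of standard deviations. Returns a value in [-1, 1]."""
    x = x - x.mean()
    y = y - y.mean()
    return (x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum())

print(f"r = {pearson_r(combined_index, temp_anomaly):.2f}")
```

np.corrcoef(combined_index, temp_anomaly)[0, 1] returns the same value, if you'd rather not roll your own.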
Just a link to an article showing that James Hansen's data have been altered between his original 1999 article and the more contemporary version.
In '99, he believed that 1934 was the hottest year in recorded US history. Since then, the data has been changed to show that 1998 is the winner.
In the original version, 1934 was much warmer than 1998. Sometime in the year 2000, the data was “adjusted” to make 1998 warmer. The blink comparator shows the radical change which occurred.
The original graph was corrupted and is no longer visible, as seen below. I do image processing professionally, and I compared the underlying byte data of the image with the original version. I don't see how this could have happened without someone having touched the file. Files don't normally change unless someone writes to them.
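For what it's worth, the check he describes is easy to reproduce yourself. A minimal sketch, assuming you've saved both versions of the image (the filenames here are placeholders):

```python
import hashlib
from pathlib import Path

# Placeholder filenames -- substitute the archived original and current images.
original = Path("fig1_original.gif").read_bytes()
current = Path("fig1_current.gif").read_bytes()

print("identical bytes:", original == current)
print("original sha256:", hashlib.sha256(original).hexdigest())
print("current  sha256:", hashlib.sha256(current).hexdigest())

# Locate the first byte offset where the two files diverge, if any.
for offset, (a, b) in enumerate(zip(original, current)):
    if a != b:
        print(f"first difference at byte offset {offset}: {a:#04x} vs {b:#04x}")
        break
```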
Further to my observations yesterday about computer models (and my statement about programmer bias) is this analysis of a recently updated paper by Dr. Noor van Andel that asks the question "Why do climate scientists continue to adjust the data to fit their models instead of fixing their models to agree with the data, the 'normal' procedure?"
There has been a large activity to bring models and observations in line, strangely only by adjusting the measurements instead of adjusting the models. The radiosonde measurements are adjusted so that they show the larger warming trend around 300 hPa that the models must assume to exist to get anthropogenic CO2 induced warming, or to attribute the surface warming to increased CO2. Scores of publications and discussions try to prove this “atmospheric hot spot” must exist in the real world because the models say so.
The main error in the climate models is that they suppose heating and moistening, and thus higher θe [equivalent potential temperature], of the upper troposphere by CO2, in contradiction with radiosonde and satellite measurements. This assumed heating & moistening leads the model to assume an increase of θe at this height, which makes deep convection decrease as a result of increasing SST, very unphysical as we have seen here above.
In the first, the analysis of the paper notes that cloud resolving models, or CRMs:
"have become one of the primary tools to develop the physical parameterizations of moist and other subgrid-scale processes in global circulation and climate models," and that CRMs could someday be used in place of traditional cloud parameterizations in such models.
Cool. Now, do they work?
In light of these several significant findings, it is clear that CRMs still have a long way to go before they are ready for "prime time" in the complex quest to properly assess the roles of various types of clouds and forms of precipitation in the future evolution of earth's climate in response to variations in numerous anthropogenic and background forcings. This evaluation is not meant to denigrate the CRMs in any way; it is merely done to indicate that the climate modeling enterprise is not yet at the stage where implicit faith should be placed in what it currently suggests about earth's climatic response to the ongoing rise in the air's CO2 content.
OK, how about the second, which analyzes extremes in precipitation and temperature?
Kiktev et al. introduce their study by stating the obvious (but extremely important) fact that "comparing climate modeling results with historical observations is important to further develop climate models and to understand the capabilities and limitations of climate change projections [italics added]."
Agreed. So, what happens?
According to the international research team, hailing from Australia, Japan, Russia and the United Kingdom, "the results mostly show moderate skill for temperature indices and low skill or its absence for precipitation indices [italics added]."
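To put "skill" in concrete terms: the paper doesn't spell out the metric in this excerpt, but a common choice is an MSE-based skill score against a naive reference forecast such as climatology, where 1 is perfect, 0 means no better than the reference, and negative means worse. A minimal sketch with made-up numbers:

```python
import numpy as np

def skill_score(observed, predicted, reference):
    """MSE-based skill score: 1 - MSE(model)/MSE(reference).
    1 = perfect, 0 = no better than the reference, < 0 = worse."""
    mse_model = np.mean((observed - predicted) ** 2)
    mse_ref = np.mean((observed - reference) ** 2)
    return 1.0 - mse_model / mse_ref

# Hypothetical placeholder values for an observed index, a model prediction,
# and a climatological reference (the long-term mean of the observations).
obs = np.array([1.2, 0.8, 1.5, 0.9, 1.1])
model = np.array([1.0, 0.9, 1.3, 1.1, 1.0])
clim = np.full_like(obs, obs.mean())

print(f"skill = {skill_score(obs, model, clim):.2f}")
```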
If climate model results are utilized as the basis for mandating a complete overhaul of the world's energy system - as the world's climate alarmists are attempting to do - the models should possess considerably more than moderate skill at what they do. But they definitely should not have low skill. And to employ models that have an absence of skill is the height of folly.