I like MacDoc... really! But I sense he simply has no patience to explain himself to others. I don't know if it's sheer arrogance or indifference to fully explaining his thinking, but it is unfortunate, and his style repeatedly undermines his own arguments. Plenty of us have attempted to pin him down and asked him to refrain from resorting to multiple links with little else save a few cheap shots, a sprinkling of emoticons and, as Manny says, ellipses a'plenty (as if those make all the connections he needs).
In his mind, the connections and arguments are all self-evident. Those who disagree are branded as fools or lap puppies. He never deigns to respond to such criticisms, either. He has already moved on to wage fresh campaigns.
To be fair, MD is being stabbed in the back. Every time he finds a study that seems to support his point of view, it turns out that someone fudged the data.
It is hard to prove AGW when you have not taken the time to figure out what the planet's temperature is or was.
Predicting exceptionally mild winters which turn out to be brutally cold also fails to advance the AGW cause.
Perhaps he should relegate any studies by Mann, NASA or Hadley to the trash heap then start from there.
It's not merely the gazillions of links. It's more about MD refraining from using his own words to state and expand his opinion... if he bothered to back up his links with his own thoughts, I'd be far more forgiving (not that he seeks my forgiveness on anything, mind you!). Because he doesn't, it just looks like an almost spasmodic, involuntary parroting of links.
Anyway, it's no way to have a good discussion. But this is the net, after all. It's a platform for abuse and soapboxing as much as it is for enlightenment and sharing.
Remember the hockey stick article by the Team and the Climategate email that considered "Mike's trick" to "hide the decline"?
Dr. Judith Curry has decided she can no longer, in good conscience, sit on her hands & has waded into the fray. The response, from both sides, has been nothing short of spectacular: she has netted nearly 2500 comments (at this time) on her three-part blog series.
The link takes you to RealClimate and the meat of the matter: two graphs, one of which illustrates proxy tree-ring data only (including the ending "decline" that does not agree with the team's mandate) and a second which splices thermometer data onto the proxy data (which "hides the decline" and supports their position).
If you wish to visit Dr. Curry's original postings on her blog, the links are provided at the bottom of the article.
Once again, YESSSS!!! (and further to a question I asked in GHG 1 or 2 about manuscript submissions)
In the February 11, 2011 issue of Science Magazine, an editorial titled "Making Data Maximally Available" announces a new policy on requirements for authors, from the pen of Bruce Alberts, the Editor-in-Chief, and two Deputy Editors.
The new Science policy:
Science’s policy for some time has been that “all data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science” (see Science/AAAS: Science Magazine: About the Journal: Information for Authors). Besides prohibiting references to data in unpublished papers (including those described as “in press”), we have encouraged authors to comply in one of two ways: either by depositing data in public databases that are reliably supported and likely to be maintained or, when such a database is not available, by including their data in the SOM.
However, online supplements have too often become unwieldy, and journals are not equipped to curate huge data sets. For very large databases without a plausible home, we have therefore required authors to enter into an archiving agreement, in which the author commits to archive the data on an institutional Web site, with a copy of the data held at Science. But such agreements are only a stopgap solution; more support for permanent, community-maintained archives is badly needed.
To address the growing complexity of data and analyses, Science is extending our data access requirement listed above to include computer codes involved in the creation or analysis of data.
To provide credit and reveal data sources more clearly, we will ask authors to produce a single list that combines references from the main paper and the SOM (this complete list will be available in the online version of the paper).
And to improve the SOM, we will provide a template to constrain its content to methods and data descriptions, as an aid to reviewers and readers.
We will also ask authors to provide a specific statement regarding the availability and curation of data as part of their acknowledgements, requesting that reviewers consider this a responsibility of the authors.
We recognize that exceptions may be needed to these general requirements; for example, to preserve the privacy of individuals, or in some cases when data or materials are obtained from third parties, and/or for security reasons. But we expect these exceptions to be rare.
As Willis notes:
This is indeed excellent news. It is a huge step, from Michael Mann claiming that to ask for his code was “intimidation” and he would not reveal it, to Science Magazine requiring code as a condition of publication.
More accountability. Who can argue with that?
Now, let's see it in practice and see how many other journals follow suit...
Ran across an article this morning that got me thinking about my own university years & the parallels with the current discussion on including computer code with submitted papers.
For the first two years I was there I took a lot of computer science courses, some of them programming in different languages (BASIC, Pascal, C, Fortran, etc.). You were given an assignment and, in order to fulfill its requirements, you needed to include an algorithm and a printout of the code and output, and a copy of the code had to be uploaded to the professor's or TA's server. The code also needed to be commented: you would describe what each subroutine was doing, including descriptions of the variables, where the input was coming from, what was happening to it & where the output was going. Pretty standard stuff, really.
Failure to follow any of these basic requirements resulted in a failed assignment. Period. It didn't matter if the output was correct; you needed to illustrate how you got there.
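For anyone who never sat through one of those courses, here's a minimal sketch of the sort of commenting those assignments demanded (the function, data, and numbers are my own invention, not from any actual coursework):

```python
def running_mean(values, window):
    """Return the running mean of `values` over a sliding window.

    Algorithm: for each index i from window-1 to the end of the list,
    average the `window` values ending at position i.

    Inputs:
        values -- list of floats (e.g. daily temperature readings)
        window -- int, number of points per average (must be >= 1)
    Output:
        list of floats, length len(values) - window + 1
    """
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    means = []
    for i in range(window - 1, len(values)):
        # Average the `window` values ending at position i.
        chunk = values[i - window + 1 : i + 1]
        means.append(sum(chunk) / window)
    return means

# Example: 3-point running mean of five readings.
print(running_mean([1.0, 2.0, 3.0, 4.0, 5.0], 3))  # [2.0, 3.0, 4.0]
```

The point being: a grader (or a reviewer) can follow the input, the transformation, and the output without guessing.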
Fast forward to today's link and the following observations:
There are many good reasons to publish science source code, including:
* The paper usually does not include a full description of the algorithm: its parameters, a full list of the processing steps, the details of individual steps, and so on. Even if the description is complete, it may not be accurate.
* The code may not perform the intended algorithm. In other words, it may contain bugs. Almost all software does, after all. Distributing your code allows these bugs to be discovered and fixed.
* By making your code available, you allow other scientists to see, criticise, and learn from it, just as you do by describing the rest of your method. Publication, in the broadest sense, is key to scientific progress.
* Quite separately: if you release your code under an open-source license, you allow other scientists to reuse and adapt it for their own work.
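The second bullet is worth a concrete illustration. Here's a contrived toy example (both functions and the data are invented for this post) of a bug that no reader could ever detect from the paper's text alone, but which is obvious the moment the code is released:

```python
# A paper might state: "we report the mean of the last 10 observations."
def described_mean(obs):
    # The intended algorithm, exactly as described in the paper's text.
    return sum(obs[-10:]) / 10

def published_code_mean(obs):
    # Hypothetical as-released version: the slice drops one value,
    # an off-by-one invisible in the prose description.
    return sum(obs[-9:]) / 9

obs = list(range(20))  # 0..19; the last 10 values are 10..19
print(described_mean(obs))       # 14.5
print(published_code_mean(obs))  # 15.0 -- the results silently differ
```

Without the source, the two numbers are indistinguishable; with it, the discrepancy takes a reviewer about thirty seconds to spot.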
The article goes on to cover a climate-related case study & reaches the following conclusion:
This report included a good algorithmic description, and has been accompanied by source code. We greatly welcome both of these departures from the norm, as setting a good example and following the report’s own recommendation. These facts also allow us to illustrate particular reasons why code release is important, and why science software skills should be improved.
The four separate bugs – in the description, in the code, in the configuration, and in the expectation of the reader – are, in this case, trivial and unimportant – they do not affect the broad results of the report in any way. However, each is characteristic of problems with science software which can be more serious, and which are impossible to discover unless code is released.
By releasing the code, and opening it to general review and criticism, the report has allowed us to illustrate and discuss general problems in science software development, and to work towards resolving them. In this way, science software, and science in general, benefits from publication.
This sums up very nicely why code should be included in manuscript submissions.
I must confess major surprise that including the code, as was done in this particular case, is by far the exception rather than the rule.
Basically, the authors have noted an increase in temperature of 0.7 degrees in Finland over the 20th century. The model-based claim of "climate-alarmists the world over" that "both droughts and floods are expected to intensify" is refuted, once again, by real-world observation.
In light of the above, once again, we have another part of the planet that does not behave as climate alarmists say it should; and, in this case, that misbehavior resides in Finland's hydrological responses to global warming. What is more, the misbehavior occurs at both ends of the available moisture spectrum. At the high end, where flooding may occur, there has been no change in the magnitude of flows that can lead to that unwelcome phenomenon. And at the low end, where droughts may occur, there has actually been an increase in flow magnitude; and that increase either acts to prevent or leads to less frequent and/or less severe episodes of this other unwelcome phenomenon.
Bad planet. Bad!
In the second analysis (modeling surface air temps over the Arctic Ocean), the authors
assessed how well the current day state-of-the-art reanalyses and CGCMs [coupled global climate models] are reproducing the annual mean, seasonal cycle, variability and trend of the observed SAT [surface air temperature] over the Arctic Ocean for the late 20th century (where sea ice changes are largest).
From the analysis:
...not only does it appear that state-of-the-art climate models have a long way to go before they can adequately simulate even the past climate of the Arctic Ocean (much less predict its future), we have the word of the six scientists who evaluated them in this study that their creators have made "no obvious improvement" in the models' simulation ability since the time of the Third Assessment Report several years earlier.
Do ya think they're using the wrong parameters? Hmmm?
Thirdly, an analysis of 16 (!) models that attempt to simulate Arctic cloud cover & sea ice:
Hence, they say that they were "forced to ask how the GCM simulations produce such similar present-day ice conditions in spite of the differences in simulated downward longwave radiative fluxes?"
Answering their own question, the three researchers state that "a frequently used approach" to resolving this problem "is to tune the parameters associated with the ice surface albedo" to get a more realistic answer. "In other words," as they continue, "errors in parameter values are being introduced to the GCM sea ice components to compensate simulation errors in the atmospheric components."
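To see how "compensating errors" can produce identical-looking ice from different physics, here's a toy surface energy budget (all numbers invented for illustration; real GCM radiation schemes are vastly more involved):

```python
# Toy surface energy budget (illustrative numbers only):
# absorbed = (1 - albedo) * shortwave_down + longwave_down
def absorbed_flux(albedo, shortwave_down, longwave_down):
    return (1.0 - albedo) * shortwave_down + longwave_down

SW = 200.0  # W/m^2 of shortwave reaching the ice surface (invented)

# Model A: a "true" downward longwave flux with a realistic ice albedo.
a = absorbed_flux(albedo=0.60, shortwave_down=SW, longwave_down=300.0)

# Model B: longwave flux biased 20 W/m^2 low, but albedo tuned down
# until the absorbed energy -- and hence the simulated ice -- matches
# Model A. Two wrongs making an apparent right.
b = absorbed_flux(albedo=0.50, shortwave_down=SW, longwave_down=280.0)

print(a, b)  # both 380.0: identical ice state, different (wrong) physics
```

Both models reproduce present-day ice equally well, which is exactly why the researchers say such models can't be trusted for predictions: the tuning only cancels the bias under today's conditions.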
In consequence of the above findings, the three researchers conclude that "the thinning of Arctic sea ice over the past half-century can be explained by minuscule changes of the radiative forcing that cannot be detected by current observing systems and require only exceedingly small adjustments of the model-generated radiation fields," and, therefore, that "the results of current GCMs cannot be relied upon at face value for credible predictions of future Arctic sea ice."
Huh? Introducing errors to compensate for other errors?
The six scientists from NASA's Goddard Institute for Space Studies report finding what they describe as "unexpected significant disagreements at the pixel level as well as between long-term and spatially averaged aerosol properties." In fact, they say that "the only point on which both datasets seem to fully agree is that there may have been a weak increasing tendency in the globally averaged aerosol optical thickness (AOT) over the land and no long-term AOT tendency over the oceans." As a result, the bottom line for the NASA scientists is quite succinct: "our new results suggest that the current knowledge of the global distribution of the AOT and, especially, aerosol microphysical characteristics remains unsatisfactory." And since this knowledge is indispensable "for use in various assessments of climate and climate change," it would appear that current assessments of greenhouse-gas forcing of climate made by the very best models in use today may be of very little worth in describing the real world of nature.
Manuscript subtitle: We still can't predict future climate responses at low and high latitudes, which constrains our ability to forecast changes in atmospheric dynamics and regional climate.
As to what Rind's analysis of the climate modeling enterprise suggests about the future, he writes that "real progress will be the result of continued and newer observations along with modeling improvements based on these observations," which is a conclusion we can readily endorse, as it clearly and rightly indicates that modeling improvements should be based on "continued and new observations," which must provide the basis for evaluating all model implications. So difficult will this task be, however, that he says "there is no guarantee that these issues will be resolved before a substantial global warming impact is upon us." However, because of the large uncertainties -- and unknowns -- surrounding many aspects of earth's complex climatic system, there is also no guarantee there even will be any "substantial global warming impact" due to a doubling, or more, of the air's CO2 content. And this fact suggests to us that the massive world-economy-altering measures that are being promoted by Al Gore and James Hansen to "solve" a "climate crisis" that may not even exist are preposterously premature and, therefore, ill-advised at best and actually dangerous in the extreme.
Wonder what their colleague at Goddard, James Hansen, thought about these results. Bet the lunch room was rather...chilly.
First, a couple of quotes, just to establish tone:
My point is that until you test, really test your model by comparing the output to reality in the most exacting tests you can imagine, you have nothing more than a complicated toy of unknown veracity. And even after extensive testing, models can still be wrong about the real world. That’s why Boeing still has test flights of new planes, despite using the best computer models that billion$ can buy, and despite the fact that modeling airflow around a plane is orders of magnitude simpler than the modeling global climate.
Call me crazy, but when your results represent the output of four computer models, which are fed into a fifth computer model, whose output goes to a sixth computer model, which is calibrated against a seventh computer model, and then your results are compared to a series of different results from the fifth computer model but run with different parameters, in order to demonstrate that flood risks have changed from increasing GHGs … well, when you do that, you need to do more than wave your hands to convince me that your flood risk results are not only a valid representation of reality, but are in fact a sufficiently accurate representation of reality to guide our future actions.
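Willis's demand -- test the model against reality, not against other models -- is standard out-of-sample validation. A minimal sketch of the idea (all numbers invented; a real skill assessment would use far more data and multiple metrics):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between model output and observations."""
    assert len(predicted) == len(observed)
    squared = [(p - o) ** 2 for p, o in zip(predicted, observed)]
    return math.sqrt(sum(squared) / len(squared))

# Invented numbers: tune the model on one period, then score it ONLY
# against held-out observations it never saw during tuning.
held_out_obs = [14.1, 14.3, 14.2, 14.6, 14.8]
model_output = [14.0, 14.2, 14.5, 14.4, 14.9]

score = rmse(model_output, held_out_obs)
print(round(score, 3))  # about 0.18
```

Comparing one model's output to another model's output, by contrast, tells you only that the two models agree with each other.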
First, emphasis mine. Second, WTF?
The observation about test flights should drive the point home...
Nature magazine, a couple of weeks back, attempted to explain the widespread flooding in SW England & Wales during the fall of 2000 with, you guessed it, computer models.