Wednesday, January 20, 2010
Blind spot on climate research
The continuing defensive "point of view" reaction from the climate science community to the leaked East Anglia Climatic Research Unit emails is not surprising. See the recent editorial in Nature (Nature 463, 269; January 21, 2010), which is reproduced below, followed by counterpoints: www.nature.com/nature/journal/v463/n7279/full/463269a.html
Climate of Suspicion ... with sceptics waiting to pounce on any scientific uncertainties, researchers need a sophisticated strategy for communication.
Climate science, like any active field of research, has some major gaps in understanding ... Yet the political stakes have grown so high in this field, and the public discourse has become so heated, that climate researchers find it hard to talk openly about those gaps. The small coterie of individuals who deny humanity's influence on climate will try to use any perceived flaw in the evidence to discredit the entire picture. So how can researchers honestly describe the uncertainty in their work without it being misconstrued?
The emails leaked last year from the Climatic Research Unit at the University of East Anglia, UK, painted a picture of scientists grappling with this question, sometimes awkwardly. Some of the researchers' online discussion reflected a pervasive climate of suspicion - their sense that any findings they released to the public could and would be distorted by sceptics.
Over the years, the climate community has acquired some hard-won wisdom about treading this minefield. Perhaps the most important lesson is that researchers must be frank about their uncertainties and gaps in understanding - but without conveying the message that nothing is known or knowable. They must emphasize that - although many holes remain to be filled - there is little uncertainty about the overall conclusions: greenhouse gas emissions are rising sharply, they are very likely to be the cause of recent global warming and precipitation changes, and the world is on a trajectory that will shoot far past 2 deg C of warming unless emissions are cut substantially. Researchers should also emphasize that cities and countries can begin to prepare for the effects of climate change through both mitigation and adaptation, even though they do not know the exact course of the changes.
The United Nations Intergovernmental Panel on Climate Change (IPCC) has taken this approach in its ongoing series of assessment reports, and it has done an admirable job of highlighting the important conclusions while acknowledging the caveats. It has made some errors, such as its use of questionable data about the retreat of Himalayan glaciers, but these mistakes are exceedingly rare in reports that can total more than 1,000 pages - a testament to the IPCC's rigorous peer-review process.
No matter how evident climate change becomes, however, other factors will ultimately determine whether the public accepts the facts. Empirical evidence shows that people tend to react to reports on issues such as climate change according to their personal values. Those who favour individualism are more likely to reject evidence of climate change and calls to restrict emissions. And, the messenger matters perhaps just as much as the message. People have more trust in experts - and scientists - when the speaker shares their values. The climate-research community would thus do well to use a diverse set of voices, from different backgrounds, when communicating with policy makers and the public. And scientists should be careful not to disparage those on the other side of a debate: a respectful tone makes it easier for people to change their minds if they share something in common with that other side.
As comforting as it may be to think that the best evidence will eventually convince the public on its own, climate scientists can no longer afford to make that naive assumption: people consider many factors beyond facts when making decisions. Even as climate science advances, it will be just as important to invest in research on how best to communicate environmental risks. Otherwise scientific knowledge will not have the role it should have in the shaping of public policy.
Comment: Science or Special Interests?
The IPCC is culpable not just of fiddling with the data but also of trying to manipulate public opinion using models that are not capable of informing on long-term warming of the planet. The urgency for cap-and-tax and other massive programs is driven by the long-term model projections of SRES market scenarios for 2000-2100. The small, select and closed group of lead IPCC scientists takes the output from the suite of scenarios, assigns subjective probabilities to the projections, and ranks them as likely, more likely, highly likely and so on. The models all assume that CO2 is the major forcing, producing a range of temperature rise of 1.5-5 deg C, with the so-called likely (sic) case being 4 deg C.
Thermodynamics assures that the physical properties of CO2 make it a GHG. But how much warming is actually caused by CO2 and other GHGs? The significance of the GHGs is not established by model forecasts but requires empirical validation - testing against a null hypothesis (in the IPCC modeling there is no null hypothesis). The IPCC even say that their long-term projections rest on a poor understanding of the feedbacks and processes operating over long decadal time scales. However, this is lost in the dumbing down of the science for the propaganda message to the general public, mainstream media, advocates and policymakers - done to generate a tide of public support. Unable to quantify the probabilities or the validity of the claims, the IPCC carry out a Delphi-type poll among a select group of IPCC scientists, arbitrarily assigning probabilities along the lines of: likely - 80%, highly likely - 90%, most highly likely - 95%. This is subject to huge confirmation and selection bias and is not verifiable - there is no way to validate the forecasts. Nevertheless, the urgency for policy is said to be based on the so-called "scientific consensus." Just where (and why) is the disconnect in the scientific process? The IPCC make a leap to further their beliefs and clothe it in an improper use of the scientific process.
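To make the null-hypothesis point concrete, here is a minimal sketch - using purely synthetic data, with an assumed trend and AR(1) noise parameters invented solely for illustration - of what testing a warming trend against an explicit null of "no trend, only persistent natural variability" could look like:

```python
# Hypothetical illustration only: synthetic anomalies, assumed parameters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1950, 2010)

# AR(1) "natural variability" noise plus an assumed linear trend
noise = np.zeros(years.size)
for t in range(1, years.size):
    noise[t] = 0.6 * noise[t - 1] + rng.normal(0.0, 0.1)
anomalies = 0.01 * (years - years[0]) + noise

# H0: slope = 0 (variability only); H1: slope != 0
fit = stats.linregress(years, anomalies)
print(f"slope = {fit.slope:.4f} deg C/yr, p-value = {fit.pvalue:.4g}")
# Caveats: AR(1) autocorrelation inflates apparent significance, so a
# careful test would adjust the effective degrees of freedom; and even
# a rejected null says nothing about WHICH forcing caused the trend.
```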
The models fail to explain the periods of the past century that lacked warming, specifically the 1940s to mid-1970s and 1998 to the present (as Kevin Trenberth anguished about in his CRU emails). In addition, there is no direct empirical evidence confirming that CO2 Granger-causes warming of the earth; in fact, millennial data show that past temperature rises have consistently preceded rather than followed CO2 concentration increases. Therefore, the (theoretical) models do not support (demonstrate) the conjecture that CO2 climate forcing is in fact the major factor. That the temperature anomaly is not explained means the models are under- or mis-specified and missing key forcings - for example, solar cycles, aerosols, carbon/soot, atmospheric moisture, clouds and earth albedo - or radiation back into space. If the claimed attribution is poor, go back and do the research, get the data, and update and improve the proposition. Don't avoid criticism; use it to improve scientific knowledge. As Henri Poincaré said, "the physicist who has just given up one of his hypotheses should, on the contrary, rejoice, for he has found an unexpected opportunity of discovery. If it (his cherished hypothesis) is not verified, it is because there is something unexpected and extraordinary about it, because we are on the point of finding something unknown and new! Has the hypothesis thus rejected been sterile? Far from it! It may even be said that it has rendered more service than would a true hypothesis."
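For readers unfamiliar with the term, a Granger-causality test asks only whether one series helps predict another. A minimal sketch on synthetic series (the lag structure and coefficients below are invented for illustration, not estimated from any climate data), using statsmodels:

```python
# Hypothetical illustration only: synthetic "CO2" and "temperature" series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 200
co2 = np.cumsum(rng.normal(0.1, 1.0, n))   # random-walk "CO2" series
temp = np.zeros(n)
temp[3:] = 0.8 * co2[:-3]                  # "temp" follows "co2" by 3 steps
temp += rng.normal(0.0, 1.0, n)

# Difference first: Granger tests on nonstationary levels can be spurious
data = pd.DataFrame({"temp": temp, "co2": co2}).diff().dropna()

# Column order matters: this tests whether the 2nd column (co2)
# Granger-causes the 1st column (temp)
results = grangercausalitytests(data[["temp", "co2"]], maxlag=5)
for lag, (tests, _) in results.items():
    f_stat, p_val = tests["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f_stat:.2f}, p = {p_val:.4g}")
# Note: Granger causality is predictive precedence within the sample;
# it is not, by itself, evidence of physical causation.
```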
The proposed models are not hypotheses but conjectures - and there is a big difference. Even though one cannot demonstrate causation or proof, models can help move the discussion and provide insights. And we know correlation does not demonstrate causality. In mathematics a conjecture is an unproven proposition - call it a pre-theorem - which appears or is assumed to be correct but has not yet been demonstrated. In the case of anthropogenic global warming, the physical models can be used to test the hypothesis - how well the model describes the data over the range of the data ("in sample"): 1) this cannot illuminate cause and effect; 2) it cannot reliably be used with confidence far outside the range of the data - the mechanism of the hypothesis may not hold over long projections in time and space ("out of sample"); but, on the other hand, 3) it can lead to improved understanding and allow direct or indirect inference linking premise to conclusion. Unfortunately, the IPCC's "urgency for policy prescription" is based on the 100-year model projections of warming of 1.5-5 deg C, with most of the "damage" coming near the end of the 100-year period. Yet the IPCC contradict themselves, saying that their decadal forecasts are not reliable.
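The in-sample/out-of-sample distinction in point 2 can be shown in a few lines. This sketch fits a simple curve to synthetic data (the "true" process and the polynomial degree are arbitrary choices for illustration) and compares errors inside and far outside the fitting range:

```python
# Hypothetical illustration only: a fit that looks fine in sample
# can fail badly when extrapolated out of sample.
import numpy as np

rng = np.random.default_rng(2)

def truth(x):
    return np.sin(x / 3.0)                      # assumed "true" process

x_in = np.linspace(0.0, 10.0, 100)
y_in = truth(x_in) + rng.normal(0.0, 0.05, x_in.size)

coeffs = np.polyfit(x_in, y_in, deg=3)          # fit in sample only

x_out = np.linspace(10.0, 30.0, 100)            # far outside the data range
err_in = np.abs(np.polyval(coeffs, x_in) - truth(x_in)).mean()
err_out = np.abs(np.polyval(coeffs, x_out) - truth(x_out)).mean()
print(f"mean in-sample error:     {err_in:.3f}")
print(f"mean out-of-sample error: {err_out:.3f}")  # typically far larger
```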
The IPCC program has been subject to extensive political meddling and motivations, and it exhibits a number of critical cognitive biases: for one, confirmation bias - making the outcome fit your beliefs; secondly, selection bias - for example, emphasizing data that support your position; and a big one - bias in, and misuse of, "trust the expert." The UNFCCC (1992) mandate to the IPCC and member countries on climate change was based on circular logic - a faulty syllogism: "... the ultimate objective ... is to achieve stabilization of GHG concentrations in the atmosphere ... to prevent dangerous anthropogenic interference with the climate system ... guided by, inter alia, taking precautionary measures to anticipate, prevent or minimize the causes of (anthropogenic) climate change and to mitigate its adverse effects ... lack of full scientific certainty should not be used as a reason for postponing ... (corrective) measures." The conclusion is the premise, and all "precautionary measures" must be taken. This gives carte blanche to do anything one wishes at any cost if it fits one's purpose, even though the scientific data may not sufficiently allow attribution of cause and effect.
The issue is not whether one believes or does not believe (there is no place in the scientific process for branding those who question as skeptics - indeed, skeptics should be rewarded, not castigated). The issue is that this is a policy matter.
IPCC models - policy models, not theories
René Magritte, the surrealist, made a famous painting of a pipe titled "Ceci n'est pas une pipe" (this is not a pipe). Of course, it's a painting of a pipe. The image is a model seen through the lens of our minds ... how well it fits your idea of a pipe depends on your own lens or world view of a pipe, not on looking through the eyes of the painter.
The IPCC's probabilities are not statistical tests of scientific hypotheses about whether a model describes an outcome (the IPCC-assigned probabilities are truly nonsense). The model outputs are not capable of being verified or falsified by testing a hypothesis. Defining values for GHG "forcings" in the IPCC work is science ... applying them in a nonverifiable way to long-term forecasts of warming of the earth's temperature is not science. It is computer code ... and computer code is not science. The models do not explain important departures from the theory, as in the 1940s through mid-1970s and in the past decade.
Policy models are not required to be "correct" models of actual working systems. They simply have to be deemed acceptable for making policy decisions. They are based on:
Selection of policy variables
Selection of one or more policy models to be tested
Development of the models, and
Evaluation of the model results
Evaluation of policy models does not have to involve testing of alternative policy prescriptions. The CBO's Douglas Elmendorf uses policy models to evaluate Senate or House bills, such as the proposed health care reform - to vet the assumptions and confirm the costs and benefits claimed. However, he is not much concerned with evaluating other policy options or with seriously questioning Congress's detailed assumptions, although he may use comparators, including no policy at all (the nonexistent "business as usual" case), to see how that would have played out, giving a baseline if nothing at all is done.
In the case of climate change we have 14 IPCC-endorsed models running 40 SRES scenarios (4 storylines, each having 10 scenarios) - depicting economic growth, maturing economies, population growth, etc. ... a universe of cases to cover all possible outcomes. So far this sounds an awful lot like macroeconomic forecasting.
Macro models use cost functions to put the outcome in a reference framework - income, jobs, trade, etc. The economist might conduct a few experiments, fiddling with the observed data set so that the control variables change, and then ask whether the conditional forecast of the model - knowing the values of the control variables - is closer to the actual outcome for the variable of interest than the unconditional forecast. Not knowing the future makes using this kind of model pretty neat - you don't have to validate it.
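As a rough sketch of that conditional-versus-unconditional comparison (everything here - the control variable, the linear relation, the sample split - is an assumption made up for illustration):

```python
# Hypothetical illustration only: does knowing the control variable's
# actual path bring the forecast closer to the actual outcome?
import numpy as np

rng = np.random.default_rng(4)
n, train = 120, 100
control = np.cumsum(rng.normal(0.0, 1.0, n))       # synthetic control variable
y = 2.0 + 0.8 * control + rng.normal(0.0, 0.5, n)  # variable of interest

slope, intercept = np.polyfit(control[:train], y[:train], 1)

# Unconditional: hold the control at its last observed value
uncond = np.full(n - train, intercept + slope * control[train - 1])
# Conditional: plug in the control's actual future path
cond = intercept + slope * control[train:]

actual = y[train:]
print(f"unconditional mean abs error: {np.abs(uncond - actual).mean():.3f}")
print(f"conditional mean abs error:   {np.abs(cond - actual).mean():.3f}")
```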
As opposed to changes in a policy variable, there can be changes in the policy function or rule. This is a structural change or break, in which the value of the policy variable chosen for each period corresponds to changes in the parameter values of the equation for that rule. This is the well-known "Lucas critique," correctly suggesting that there is parameter instability, which could be due to parameter change in the behavioral equations of the model.
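Parameter instability of the kind the Lucas critique warns about can at least be tested for. Here is a minimal sketch of a Chow test for a structural break at a known date, on synthetic data with a deliberate slope change (all numbers are illustrative assumptions):

```python
# Hypothetical illustration only: Chow test for a break at a known point.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, split = 120, 60
x = np.arange(n, dtype=float)
# Regime change: slope 0.5 before the break, 1.5 after
y = np.where(x < split, 0.5 * x, 0.5 * split + 1.5 * (x - split))
y = y + rng.normal(0.0, 2.0, n)

def ssr(xs, ys):
    """Sum of squared residuals from a straight-line fit."""
    slope, intercept = np.polyfit(xs, ys, 1)
    return float(np.sum((ys - (slope * xs + intercept)) ** 2))

k = 2                                  # parameters per regime
ssr_pooled = ssr(x, y)                 # one line for the whole sample
ssr_split = ssr(x[:split], y[:split]) + ssr(x[split:], y[split:])
f_stat = ((ssr_pooled - ssr_split) / k) / (ssr_split / (n - 2 * k))
p_value = stats.f.sf(f_stat, k, n - 2 * k)
print(f"Chow F = {f_stat:.2f}, p = {p_value:.4g}")  # small p => break
```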
Pesaran has said there are three criteria for the evaluation of policy models: relevance, consistency and adequacy. Relevance - does the model meet its required purpose? Consistency - is it consistent with what else is known that may be useful? Adequacy - the usual statistical measures of goodness of fit. Consistency and adequacy are important when building a model, but a generalized form of relevance is the most important thing in evaluating a model. Dagum (1989) said "knowledge is of value only when it is put to use," and Marschak said "knowledge is useful if it helps to make the best decisions." Keynes likewise said of Alfred Marshall, "Marshall arrived early at the point of view that the bare bones of economic theory are not worth much in themselves and do not carry one directly to any useful practical conclusions. The whole point lies in applying them to the interpretation of current economic life."
So they would say of economic policy modeling that the theory or model should be evaluated on the basis of the quality of the decisions made with it. If a model cannot be used to make decisions, or is incomplete, then it is simply an intermediate tool, not a complete model, and should not be treated as one. Nevertheless, it can bring insights to the decision making. To say that it is a description of the world is a huge leap of faith and calls for refining the model specification.
If the policy is defined before the model is completed and validated, it is a policy model and should not be mislabeled as a scientific model.
If the prescription involves trillions of dollars of tax burdens and serious restrictions on economic growth resulting in a restructuring of the world economy, it is intellectually dishonest to hide behind the label of consensus (which in any case must be verifiable) what are in fact nothing more than policy models, with the purpose of promoting your world view of a pipe.
If the policies are being pushed by those who don't give a damn about the science but are only looking after their own agendas - especially personal gain - then they are more than intellectually dishonest; they are doing the work of the devil.
Labeling something as consensus does not give one permission to eschew the scientific process. And labeling as skeptics those who have reasonable, legitimate questions - even though they may disagree with your world view of a pipe - is a basic form of propaganda. The Oxford historian Norman Davies outlined five basic rules of propaganda in "Europe - a History" (Oxford University Press, 1996, pp. 500-501):
Simplification - reducing all data to a simple confrontation between Good and Bad, Friend and Foe.
Disfiguration - discrediting the opposition by crude smears and parodies.
Transfusion - manipulating the consensus values of the target audience for one's own ends.
Unanimity - presenting one's viewpoint as if it were the unanimous opinion of all right-thinking people; drawing the doubting individual into agreement by the appeal of "experts" and "star performers," by social pressure, and by "psychological contagion."
Orchestration - endlessly repeating the same message in different variations.
Danley Wolfe
January 20, 2010