Global Warming and Model Risk

Jim responds with thoughts about how to think about non-GDP losses from global warming. You should read it. Several very smart people have now brought up Indur Goklany to me, so I’ll hold off on commenting until I become very familiar with his work. One more thing about climate change:

In economics there is something called Knightian uncertainty. In quant circles, it’s called “model risk.” In everyday circles, it’s called “you don’t know what the f*** you are talking about.” Depending on your line of work, you’ve probably been there. You see someone present a model, a presentation, a research idea, or a business investment plan, with all kinds of charts and diagrams and numbers and PowerPoint. During the Q&A, if you are lucky, someone will ask “What if you are wrong?”, and they’ll respond “well, if the distribution is misspecified…” and hopefully they’ll be cut off: “No, what if everything you have done is completely wrong? Where would that leave us?”
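To make the “what if the distribution is misspecified” question concrete, here is a toy sketch of my own (all numbers invented, nothing assumed beyond numpy and scipy): two models are fit to the same data, agree near the middle of the distribution, and part ways out in the tail.

    import numpy as np
    from scipy import stats

    # Toy illustration of model risk: the data are secretly fat-tailed
    # (Student-t), but the analyst's first model assumes normal tails.
    rng = np.random.default_rng(42)
    returns = stats.t.rvs(df=3, scale=0.01, size=2000, random_state=rng)

    # Model 1: assume the returns are normally distributed.
    mu, sigma = returns.mean(), returns.std()
    var99_normal = stats.norm.ppf(0.01, loc=mu, scale=sigma)

    # Model 2: fit a fat-tailed Student-t to the same data.
    df_fit, loc_fit, scale_fit = stats.t.fit(returns)
    var99_t = stats.t.ppf(0.01, df_fit, loc=loc_fit, scale=scale_fit)

    print(f"1-in-100 loss, normal tails:    {var99_normal:.4f}")
    print(f"1-in-100 loss, Student-t tails: {var99_t:.4f}")
    # The two models roughly agree on the average but disagree badly at
    # the 1% tail, and nothing inside either model tells you which is wrong.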

As someone doing financial engineering, I probably would have benefited from being asked that question more often over the past decade. So I want to ask: “What if all these climate models are radically under-predicting black swans and other tail risk?” To get a sense, I borrowed an idea from Weitzman’s paper on uncertainty and went to the IPCC’s “The Physical Science Basis” (Chapter 10, Box 10.2), which collects a dozen and a half models that have tried to predict the increase in temperatures. These are all peer-reviewed, and (at the time, in 2007) considered the latest and best research across the relevant fields. Here’s how they compare to each other:
[Figure 1: probability density functions of the models’ predicted temperature increases]
[Figure 2: cumulative distribution functions of the same predictions, cumulative probability on the y-axis]

I’m particularly interested in the second diagram. Note that at the median outcome (0.5 cumulative probability on the y-axis), most of the models are bunched up at the 3 C (5.4 F) mark, with a few below it. On average, these models all predict the same thing. Now look at the 0.9 mark: the outcome with only a 10% chance of being exceeded. If we have under-predicted tail risk, if there are feedback and acceleration mechanisms we did not anticipate, we’ll end up out here. And here the models are all over the place. You can pick any degree between 4 C (7.2 F) and 8.5 C (15.3 F)* and find a model to support it. There’s a bit of a mass around a 5 C (9 F) increase, but nothing like the agreement at the median.

Now, that is the 10% interval. Weitzman, when he crunches this chart (and a companion set of charts in Chapter 9, Table 9.3), finds the 5% interval at, on average, 7 C (12.6 F), and the 1% interval at 10 C (18 F). I won’t go further into the implications of this for pricing global warming (see here for a good overview).
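The mechanics of that exercise are simple enough to sketch (a minimal version of my own; the probability/temperature points below are invented stand-ins for the published model CDFs): invert each model’s CDF at a given cumulative probability and look at the cross-model spread.

    import numpy as np

    # Invented (cumulative probability, degrees C) points standing in
    # for three of the published model CDFs.
    models = {
        "model_a": ([0.1, 0.5, 0.9, 0.95, 0.99], [1.5, 3.0, 4.5, 5.5, 7.0]),
        "model_b": ([0.1, 0.5, 0.9, 0.95, 0.99], [2.0, 3.0, 6.0, 7.5, 10.0]),
        "model_c": ([0.1, 0.5, 0.9, 0.95, 0.99], [1.8, 3.2, 8.0, 9.0, 11.0]),
    }

    for p in (0.5, 0.9, 0.95, 0.99):
        # Temperature at cumulative probability p, by linear interpolation
        # along each model's CDF.
        temps = [np.interp(p, probs, degs) for probs, degs in models.values()]
        print(f"p={p:.2f}: {min(temps):.1f}C to {max(temps):.1f}C "
              f"(spread {max(temps) - min(temps):.1f}C)")
    # The pattern in the charts: tight agreement at the median, models
    # all over the place in the tail.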

I find it very helpful in modeling, especially with tail risk, to use different models, implementations, and assumptions carried out by different people, and to see how they relate to each other. Looking at this, it seems everyone is in agreement on average. But if things go badly, they can go way worse than expected. Having just lived through an empirical experiment in how well the best modeling can predict tail risk, I tend to look closely at that 10% marker. And the uncertainty there has me worried.

I’m reading this as “on average” everyone is in agreement, but “if things go worse than planned” everything is up for grabs, presumably because everyone is looking at different things that could go wrong.

* – See what I meant by having the Fahrenheit unit there?**
** – I’ve started Infinite Summer, so get ready for lots of blogging footnotes.


9 Responses to Global Warming and Model Risk

  1. ramster says:

    If the 10% marker has you worried, the 1% marker should be freaking you (and everyone else) out. ALL the models converge at a 1% probability of a 10 degree C temperature increase. That’s a catastrophic outcome that could kill billions. If someone said there was a 1% chance of a mile-wide asteroid hitting the earth, we’d be spending trillions to stop it.

  2. csissoko says:

    Connecting “Knightian uncertainty” with the financial crisis is one of my pet peeves. I have a post commenting on yours here: http://syntheticassets.wordpress.com/2009/06/30/knightian-uncertainty-and-the-financial-crisis/

  3. Matt Frost says:

    While we’re at it, what about the risk that the climate won’t respond as planned to successful emission reduction efforts? Modeling the global response to stabilized emissions seems a qualitatively different thing from extrapolating along a BAU model.

    The risk of higher-than-expected temperatures is an incentive for action on the one hand, but it also means that a nasty temperature surprise could swamp even the best efforts at mitigation, leaving us both hotter and poorer. That would suck.

  4. Lilguy says:

    Let me start by saying that I accept the findings that the earth is warming, driven largely by manmade activities, notably CO2 emissions.

    But let me take a look at the models & Rortybomb’s comments:

    First, none of the models apparently allows any probability that the world’s temperatures will drop in the next hundred years or so. They all start at zero and go up. I presume this has to do with the assumptions on which the model is based.

    Second, there appears to be a skewness in the distribution toward the left (lower temperature increases) at the mid-point. That is, while most of the lines cluster/cross at about the 3.5C mid-point, there are more lines to the left than to the right.

    Third, there is a worrisome (as in, “about the models”) sameness about the results at the near-100% cumulative probability (i.e., extremely low probability) of about 10C. Like the failure to consider a temperature decrease, this outcome suggests that the modelers built features into their models to rule out much higher “illogical” or “unrealistic” outcomes.

    Two thoughts come out of that: First, I worry that the researchers built models to fit their preconceptions (the extremes suggest this) and, second, while a mean 3.5C forecast temperature increase is bad, these studies taken together suggest we are more likely to end up below that median/mode than above it. Given all that, it is no wonder that people who don’t want to change the way they live or do business give these kinds of forecasts such short shrift.

    • Greg M says:

      Could it just be that no thorough climate model foresees any chance of lower temperatures in the next century, absent a one-off event (like a series of Krakatoa-sized eruptions, or much-better-than-expected efforts at human intervention in our climate)?

      Also, with these types of forecasting models, they get much less precise at the extreme margins. The fact that they all converge at 10 degrees does worry me, but in the opposite way that it worries you. I worry that modelers have an internal bias against results that are too high, and that we may have a systematic risk of underestimating the worst case scenario. If there’s a 1% chance of a 10 degree increase, is there not a 0.2% chance of a 12 degree increase (see the sketch below)? In fact, is this not just the sort of “model risk” Rortybomb was talking about? Once events happen that the models were 97% certain would NOT happen, we are, by definition, in territory where the models don’t really understand the forces in play. Positive feedback loops in the global ecosystem could mean we really don’t know where the ceiling is in a “worst case scenario”.

      These models are far more useful for discussing events between the 5% and 95% probability lines.
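      A back-of-envelope version of that extrapolation (a sketch only: the tail index alpha below is made up, and the point is that the answer depends entirely on it):

          # Suppose the tail were a power law anchored at P(T > 10C) = 1%:
          # P(T > t) = 0.01 * (t / 10) ** -alpha. What does it imply at 12C?
          for alpha in (3.0, 5.0, 8.0):
              p12 = 0.01 * (12.0 / 10.0) ** -alpha
              print(f"alpha={alpha}: P(T > 12C) ~ {p12:.3%}")
          # alpha=3 gives ~0.58%, alpha=8 gives ~0.23%; the published
          # models barely constrain this region at all.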

  5. Pingback: Will Dearman Lifestream » Daily Digest for July 1st, 2009

  6. Chris says:

    That graph that you’re discussing is the equilibrium climate sensitivity, is it not? That’s the expected change if the CO2 concentration doubles (from its pre-industrial value of 270 ppm) to 540 ppm AND stops there. But that’s not what’s going to happen. If you look at Figure 10.20 of Chapter 10 of the IPCC report, you’ll see that we’ll probably reach 540 ppm sometime between 2040 and 2060, and we may well increase another 50% by the end of the century. So while the difference between the 50% likely and 10% likely is important, it’s far from the whole story.

  7. ramster says:

    The models don’t necessarily all converge at 10 degrees. It just looks that way because the y-axis is linear. On a logarithmic y-axis, we might see that the various models have non-zero probabilities of temperatures above 10 degrees that we can’t see in the above plot. After all, the lines in the plot are about 1% thick so it’s pretty hard to identify 0.1%, let alone 0.01%.
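    A minimal sketch of that re-plot (using a synthetic stand-in distribution, since the real model curves aren’t reproduced here):

        import numpy as np
        import matplotlib.pyplot as plt
        from scipy import stats

        temps = np.linspace(0, 15, 300)
        # Stand-in "model": a lognormal distribution of temperature increase.
        cdf = stats.lognorm.cdf(temps, s=0.5, scale=3.0)

        fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3.5))
        ax1.plot(temps, cdf)
        ax1.set(title="CDF, linear axis", xlabel="degrees C", ylabel="P(T <= t)")
        # On a log axis, the survival function 1 - F(t) shows tail mass
        # above 10C that the linear axis renders invisible.
        ax2.semilogy(temps, 1.0 - cdf)
        ax2.set(title="1 - CDF, log axis", xlabel="degrees C", ylabel="P(T > t)")
        plt.tight_layout()
        plt.show()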

  8. Pingback: Will Dearman Professional Musings » Interesting Observations in Finance from the Week of June 29
