The ABC has been running alarmist reports based on a Nature paper suggesting that methane released from Arctic permafrost could cause $60 trillion worth of damage to the global economy.
I’m watching Emma Alberici on Lateline, and surprisingly, she’s actually putting some criticisms of the paper to one of the authors. That’s not the Emma I expected to see.
Perhaps that is because real life is starting to fracture the simplistic fairytale version of climate change touted by any number of rent-seeking scientists. Climate change establishment figures like Richard Tol and Gavin Schmidt are prepared to criticise rubbish, and ABC researchers can find their criticisms on the Internet, including on Twitter.
The Arctic has, in relatively recent history, been both much warmer and much colder than it is now, so the real historical record we have suggests there is not much to worry about, even if the temperature-increase claims of the paper’s authors are borne out.
Chances are that they won’t be.
Steve McIntyre at Climate Audit has been looking at history too; in his case, at historical models of CO2 forcing. One of the early twentieth-century investigators of the phenomenon, Guy Callendar, developed a simple mathematical model that ignores things like water vapour feedback (which accounts for the majority of the presumed temperature increase from CO2 emissions).
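McIntyre’s post has the details, but the essence of Callendar’s approach is a roughly logarithmic response of temperature to CO2 concentration. Here is a minimal sketch of that idea in Python; the baseline concentration and the per-doubling sensitivity are my illustrative placeholders, not Callendar’s fitted coefficients:

```python
import numpy as np

def callendar_style_temp(co2_ppm, co2_baseline=280.0, sensitivity_per_doubling=1.7):
    """Temperature anomaly (deg C) from a logarithmic CO2 response.

    A Callendar-style model in spirit: warming scales with the log of the
    CO2 ratio. Parameter values are illustrative, not Callendar's fit.
    """
    return sensitivity_per_doubling * np.log2(co2_ppm / co2_baseline)

# e.g. anomaly at ~400 ppm relative to a 280 ppm pre-industrial baseline
print(callendar_style_temp(400.0))  # ~0.87 deg C
```

The point is not the particular numbers but that the whole model is essentially one line, which is what makes the benchmark result below so striking.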
It turns out that Callendar’s model outperforms 10 out of the 12 climate models that McIntyre benchmarked it against, and ties with the other two.
As McIntyre writes: “In today’s post, I’ll describe Callendar’s formula in more detail. I’ll also present skill scores for global temperature (calculated in a conventional way) for all 12 CMIP5 RCP4.5 models for 1940-2013 relative to simple application of the Callendar formula. Remarkably, none of the 12 GCM’s outperform Callendar and 10 of 12 do much worse.”
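For readers wondering what “calculated in a conventional way” means: a standard choice is the mean-squared-error skill score, one minus the ratio of the model’s MSE to a reference’s MSE, so a positive score beats the reference and a negative one does worse. A minimal sketch, with hypothetical array names for the observed, modelled, and Callendar-baseline series:

```python
import numpy as np

def skill_score(observed, model, reference):
    """Conventional MSE skill score of `model` against `reference`.

    Positive: model beats the reference; zero: no better; negative: worse.
    """
    mse_model = np.mean((observed - model) ** 2)
    mse_reference = np.mean((observed - reference) ** 2)
    return 1.0 - mse_model / mse_reference

# e.g. score each GCM's 1940-2013 anomaly series against the Callendar
# baseline (all names here are hypothetical):
# for name, gcm_series in gcm_runs.items():
#     print(name, skill_score(obs_series, gcm_series, callendar_series))
```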
Not only that, but the result suggests that global temperature may be less sensitive to CO2 emissions than the IPCC reports claim.
So why are we spending so much money on “sophisticated” climate models which have less skill than the most basic ones? And why are we basing so much government policy on them?
There are a number of reasons for this, but the strongest is that we humans like the idea of certainty, and like to believe we can actually know the future.
Denser, less comprehensible models convince us that we are getting more predictability and certainty, when the opposite is often the case.
That’s certainly my experience in building financial and business models: I’ve rarely found the results to be much better than back-of-the-envelope calculations, except when it comes to persuading bankers to part with their cash.
So when I talk about skill in a model, I’m not talking about its ability to predict but its ability to persuade. Why should climate change be any different?
Oh, and my back-of-the-envelope calculations are generally pretty good!