Curry’s Dilemma

jwalsh @13, it would be nice if you in fact let the IPCC explain, rather than cutting them off in mid explanation.

To start with, as shown in Fig 10.4 below, the models are used to determine relative contribution but are scaled to match actual temperature increases.  Thus if a model shows an anthropogenic temperature increase of 0.8 C and a total increase of 0.7 C, while the observed increase is 0.65 C, then the anthropogenic increase is scaled by 0.65/0.7 to determine the anthropogenic contribution.  Thus any tendency to overestimate the temperature trend is eliminated as a factor in determining attribution.  All that remains is the relative responsiveness to particular forcings.  With respect to that, it is well known that the combined natural forcings from 1951-2010 are slightly negative, or neutral at best.

Further, as the IPCC says:

“We moderate our likelihood assessment and report likely ranges rather than the very likely ranges directly implied by these studies in order to account for residual sources of uncertainty including sensitivity to EOF truncation and analysis period (e.g., Ribes and Terray, 2013).”

That is, they multiplied the uncertainty by a factor of 1.36, thus substantially expanding the uncertainty range to account for any additional uncertainty relating to the methods used.  The models, note, only overestimate recent temperature trends by 18%, half the expansion of the uncertainty range – and that overestimation has been eliminated from the attribution by scaling in any event.
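To make the arithmetic above concrete, here is a minimal Python sketch of the two steps: scaling the modelled anthropogenic contribution so the modelled total matches the observed warming, and expanding an uncertainty range by the 1.36 factor. The numbers are the illustrative ones from the example (0.8 C, 0.7 C, 0.65 C), not values from an actual attribution study.

```python
# Minimal sketch of the scaling step described above (illustrative numbers only).
model_ant = 0.80     # modelled anthropogenic warming, deg C (example value)
model_total = 0.70   # modelled total warming, deg C (example value)
observed = 0.65      # observed warming, deg C

# Scale model contributions so the modelled total matches the observations.
scale = observed / model_total
ant_contribution = model_ant * scale
print(f"scaled anthropogenic contribution: {ant_contribution:.2f} C")  # ~0.74 C

# The expansion of an uncertainty half-width by a factor of 1.36, as the IPCC
# describes, to cover residual methodological uncertainty (example half-width).
very_likely_halfwidth = 0.20
likely_halfwidth = 1.36 * very_likely_halfwidth
print(f"expanded half-width: {likely_halfwidth:.2f} C")
```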

Finally, they go on to say:

“The assessment is supported additionally by a complementary analysis in which the parameters of an Earth System Model of Intermediate Complexity (EMIC) were constrained using observations of near-surface temperature and ocean heat content, as well as prior information on the magnitudes of forcings, and which concluded that GHGs have caused 0.6°C to 1.1°C (5 to 95% uncertainty) warming since the mid-20th century (Huber and Knutti, 2011); an analysis by Wigley and Santer (2013), who used an energy balance model and RF and climate sensitivity estimates from AR4, and they concluded that there was about a 93% chance that GHGs caused a warming greater than observed over the 1950–2005 period; and earlier detection and attribution studies assessed in the AR4 (Hegerl et al., 2007b).”

 

Yes, it goes on into further detail.  But I fundamentally stand by my original assessment. The Figure 10.5 graph was primarily derived from a very small group of papers discussing model outputs.  Therefore, I think a statement like “The green bar shows the amount of warming caused by human greenhouse gas emissions during that time.” is potentially misleading. The green bar is derived from climate model outputs.

And I don’t think “while Judith Curry from Georgia Tech represented the opinions of 2–4% of climate experts that we could be responsible for less than half of that warming.” is well supported by the evidence, or by the IPCC here.  In the same section, they go on to say that:

“We conclude, consistent with Hegerl et al. (2007b), that more than half of the observed increase in GMST from 1951 to 2010 is very likely due to the observed anthropogenic increase in GHG concentrations”

 

“Very likely”, in IPCC parlance, is 90–100%.  And that’s if you agree with their conclusions there, and not every climate scientist does.  But I certainly wouldn’t want anyone to take my word for it. Verheggen et al. 2014 asked a number of climate scientists to provide a figure for attribution, and roughly two-thirds reported above 50% anthropogenic; the remainder reported either less than that or were uncertain.  Setting aside methodological criticisms of the paper itself (it is close enough for this purpose), how does one reconcile this with the 2–4% estimate?  For that matter, where does 2–4% come from? Not from any study I have read.  Were too many climate scientists unaware of the CMIP5 and other model results?

For his part, Schmidt referenced the most recent IPCC report. The IPCC summarises the latest and greatest climate science research, so there is no better single source. The figure below from the IPCC report illustrates why 96–97% of climate science experts and peer-reviewed research agree that humans are the main cause of global warming.

 

IPCC attribution statements redux: A response to Judith Curry


— gavin @ 27 August 2014

I have written a number of times about the procedure used to attribute recent climate change (here in 2010, in 2012 (about the AR4 statement), and again in 2013 after AR5 was released). For people who want a summary of what the attribution problem is, how we think about the human contributions and why the IPCC reaches the conclusions it does, read those posts instead of this one.

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.



The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.
If you are still here, I should be clear that this post is focused on a specific claim Judith Curry has recently blogged about supporting a “50-50” attribution (i.e. that trends since the middle of the 20th Century are 50% human-caused, and 50% natural, a position that would center her pdf at 0.5 in the figure above). She also commented about her puzzlement about why other scientists don’t agree with her. Reading over her arguments in detail, I find very little to recommend them, and perhaps the reasoning for this will be interesting for readers. So, here follows a line-by-line commentary on her recent post. Please excuse the length.

 

G. It is worth pointing out that there can be no assumption that natural contributions must be positive – indeed for any random time period of any length, one would expect natural contributions to be cooling half the time.  This is not right: natural variation may be 50% over thousands of years, but not over 100 years; furthermore, the IPCC only estimated a 0.1 degree change, and it is obviously more.

 

 

G. Is expert judgment about the structural uncertainties in a statistical procedure associated with various assumptions that need to be made different from ‘making things up’? Actually, yes – it is.

 

This is very confused. The basis of the AR5 calculation is summarised in figure 10.5:


Figure 10.5 IPCC AR5
The best estimate of the warming due to anthropogenic forcings (ANT) is the orange bar (noting the 1σ uncertainties). Reading off the graph, it is 0.7±0.2ºC (5-95%) with the observed warming 0.65±0.06 (5-95%). The attribution then follows as having a mean of ~110%, with a 5-95% range of 80–130%. This easily justifies the IPCC claims of having a mean near 100%, and a very low likelihood of the attribution being less than 50% (p < 0.0001!). Note there is no ‘downweighting’ of any argument here – both statements are true given the numerical distribution. However, there must be some expert judgement to assess what potential structural errors might exist in the procedure. For instance, the assumption that fingerprint patterns are linearly additive, or uncertainties in the pattern because of deficiencies in the forcings or models etc. In the absence of any reason to think that the attribution procedure is biased (and Judith offers none), structural uncertainties will only serve to expand the spread. Note that one would need to expand the uncertainties by a factor of 3 in both directions to contradict the first part of the IPCC statement. That seems unlikely in the absence of any demonstration of some huge missing factors.
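As a rough cross-check of those numbers, the sketch below draws the attribution fraction ANT/observed by Monte Carlo, under the simplifying assumption that the two quantities are independent Gaussians with the 5-95% ranges read off the figure (0.7±0.2ºC and 0.65±0.06ºC). This is not the AR5 procedure – the actual range comes out of the fingerprint regression itself – but the toy calculation still gives a distribution peaking near 110% with essentially all of its probability above the 50% line.

```python
# Rough Monte Carlo sketch of the attribution fraction ANT / observed warming.
# Simplifying assumption (not the AR5 method): ANT and observed warming are
# independent Gaussians with the 5-95% half-widths quoted above.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
to_sigma = 1 / 1.645  # a 5-95% half-width is ~1.645 standard deviations

ant = rng.normal(0.70, 0.20 * to_sigma, n)  # anthropogenic warming, deg C
obs = rng.normal(0.65, 0.06 * to_sigma, n)  # observed warming, deg C

fraction = ant / obs
lo, hi = np.percentile(fraction, [5, 95])
print(f"mean attribution fraction: {fraction.mean():.0%}")
print(f"5-95% range: {lo:.0%} to {hi:.0%}")
print(f"P(fraction < 50%): {np.mean(fraction < 0.5):.3%}")
```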

The 50-50 argument

There are multiple lines of evidence supporting the 50-50 (middle tercile) attribution argument. Here are the major ones, to my mind.

Sensitivity

The 100% anthropogenic attribution from climate models is derived from climate models that have an average equilibrium climate sensitivity (ECS) around 3C. One of the major findings from AR5 WG1 was the divergence in ECS determined via climate models versus observations. This divergence led the AR5 to lower the likely bound on ECS to 1.5C (with ECS very unlikely to be below 1C).

Judith’s argument misstates how forcing fingerprints from GCMs are used in attribution studies. Notably, they are scaled to get the best fit to the observations (along with the other terms). If the models all had sensitivities of either 1ºC or 6ºC, the attribution to anthropogenic changes would be the same as long as the pattern of change was robust. What would change would be the scaling – less than one would imply a better fit with a lower sensitivity (or smaller forcing), and vice versa (see figure 10.4).
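A toy illustration of that scaling step may help: in its simplest (non-optimal) form, the attribution regresses the observations onto model-derived anthropogenic and natural fingerprints and reads the contributions off the fitted scaling factors, so a model whose response is too large simply receives a scaling factor below one. Everything below is synthetic and purely illustrative; real studies use spatio-temporal patterns and noise-weighted (optimal) regression.

```python
# Toy fingerprint scaling: regress observations onto model-derived ANT and NAT
# response patterns. Synthetic data only; real detection-and-attribution work
# uses optimal (noise-weighted) regression on spatio-temporal fields.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1951, 2011)

# Hypothetical fingerprints (deg C): a smooth anthropogenic rise and a small
# natural (solar/volcanic) wiggle. Both are made up for illustration.
ant_fp = 0.012 * (years - years[0])                          # ~0.7 C by 2010
nat_fp = 0.05 * np.sin(2 * np.pi * (years - years[0]) / 22)

# Synthetic "observations": a slightly weaker-than-modelled anthropogenic
# response plus the natural signal and internal variability.
obs = 0.9 * ant_fp + nat_fp + rng.normal(0.0, 0.08, years.size)

# Least-squares scaling factors (beta_ant, beta_nat).
X = np.column_stack([ant_fp, nat_fp])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print(f"scaling factors: ANT = {beta[0]:.2f}, NAT = {beta[1]:.2f}")

# The attributed warming is the *scaled* fingerprint, so the result does not
# depend on the model's absolute sensitivity as long as the pattern is robust.
print(f"attributed anthropogenic warming: {beta[0] * ant_fp[-1]:.2f} C")
```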

She also misstates how ECS is constrained – all constraints come from observations (whether from long-term paleo-climate observations, transient observations over the 20th Century or observations of emergent properties that correlate to sensitivity) combined with some sort of model. The divergence in AR5 was between constraints based on the transient observations using simplified energy balance models (EBM), and everything else. Subsequent work (for instance by Drew Shindell) has shown that the simplified EBMs are missing important transient effects associated with aerosols, and so the divergence is very likely less than AR5 assessed.

If true climate sensitivity is only 50-65% of the magnitude that is being simulated by climate models, then it is not unreasonable to infer that attribution of late 20th century warming is not 100% caused by anthropogenic factors, and attribution to anthropogenic forcing is in the middle tercile (50-50).

The IPCC’s attribution statement does not seem logically consistent with the uncertainty in climate sensitivity.

This is related to a paper by Tung and Zhou (2013). Note that the attribution statement has again shifted to the last 25 years of the 20th Century (1976-2000). But there are a couple of major problems with this argument. First of all, Tung and Zhou assumed that all multi-decadal variability was associated with the Atlantic Multi-decadal Oscillation (AMO) and did not assess whether anthropogenic forcings could project onto this variability. It is circular reasoning to then use this paper to conclude that all multi-decadal variability is associated with the AMO.

The second problem is more serious. Lewis’ argument up until now has been that the best fit to the transient evolution over the 20th Century is with a relatively small sensitivity and small aerosol forcing (as opposed to a larger sensitivity and larger opposing aerosol forcing). However, in both these cases the attribution of the long-term trend to the combined anthropogenic effects is actually the same (near 100%). Indeed, one valid criticism of the recent papers on transient constraints is precisely that the simple models used do not have sufficient decadal variability!

Climate variability since 1900

From HadCRUT4:

HadCRUT4

The IPCC does not have a convincing explanation for:

  • warming from 1910-1940
  • cooling from 1940-1975
  • hiatus from 1998 to present

The IPCC purports to have a highly confident explanation for the warming since 1950, but it was only during the period 1976-2000 when the global surface temperatures actually increased.

The absence of convincing attribution of periods other than 1976-present to anthropogenic forcing leaves natural climate variability as the cause – some combination of solar (including solar indirect effects), uncertain volcanic forcing, natural internal (intrinsic variability) and possible unknown unknowns.

This point is not an argument for any particular attribution level. As is well known, using an argument of total ignorance to assume that the choice between two arbitrary alternatives must be 50/50 is a fallacy.

 

I gave a basic attribution for the 1910-1940 period above. The 1940-1975 average trend in the CMIP5 ensemble is -0.01ºC/decade (range -0.2 to 0.1ºC/decade), compared to -0.003 to -0.03ºC/decade in the observations, and is therefore a reasonable fit. The GHG-driven trends for this period are ~0.1ºC/decade, implying that there is a roughly opposite forcing coming from aerosols and volcanoes in the ensemble.

What about natural variability as well?

The situation post-1998 is a little different because of the CMIP5 design, and ongoing reevaluations of recent forcings (Schmidt et al., 2014; Huber and Knutti, 2014). Better information about ocean heat content is also available to help there, but this is still a work in progress and is a great example of why it is harder to attribute changes over small time periods.

 

In the GCMs, the importance of internal variability to the trend decreases as a function of time. For 30-year trends, internal variations can have an impact of ±0.12ºC/decade or so on the trend; for 60-year trends, closer to ±0.08ºC/decade.

 

For an expected anthropogenic trend of around 0.2ºC/decade, the signal will be clearer over the longer term. Thus cutting down the period to ever-shorter periods of years increases the challenges and one can end up simply cherry picking the noise instead of seeing the signal.
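Using the numbers just quoted, a short back-of-the-envelope sketch shows how the signal-to-noise ratio improves with trend length (the 0.2ºC/decade forced trend and the ±0.12 and ±0.08ºC/decade internal-variability figures are the ones from the text; shorter windows would be noisier still):

```python
# Signal-to-noise sketch using the figures quoted in the text above.
forced_trend = 0.20  # expected anthropogenic trend, deg C per decade

# Approximate impact of internal variability on trends of different lengths.
internal_noise = {30: 0.12, 60: 0.08}  # deg C per decade

for length, noise in internal_noise.items():
    print(f"{length}-yr trend: signal/noise ~ {forced_trend / noise:.1f}")
# ~1.7 for 30-year trends vs ~2.5 for 60-year trends: the anthropogenic signal
# stands out more clearly as the window lengthens, while very short windows
# mostly sample the noise.
```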

The main relevant deficiencies of climate models are:

  • climate sensitivity that appears to be too high, probably associated with problems in the fast thermodynamic feedbacks (water vapor, lapse rate, clouds)
  • failure to simulate the correct network of multidecadal oscillations and their correct phasing
  • substantial uncertainties in aerosol indirect effects
  • unknown and uncertain solar indirect effects

The sensitivity argument is irrelevant (given that it isn’t zero of course). Simulation of the exact phasing of multi-decadal internal oscillations in a free-running GCM is impossible so that is a tough bar to reach! There are indeed uncertainties in aerosol forcing (not just the indirect effects) and, especially in the earlier part of the 20th Century, uncertainties in solar trends and impacts. Indeed, there is even uncertainty in volcanic forcing. However, none of these issues really affect the attribution argument because a) differences in magnitude of forcing over time are assessed by way of the scales in the attribution process, and b) errors in the spatial pattern will end up in the residuals, which are not large enough to change the overall assessment.

Nonetheless, it is worth thinking about what effect plausible variations in the aerosol or solar forcings could have. Given that we are talking about the net anthropogenic effect, the playing off of negative aerosol forcing and climate sensitivity within bounds actually has very little effect on the attribution, so that isn’t particularly relevant. A much bigger role for solar would have an impact, but the trend would need to be about 5 times stronger over the relevant period to change the IPCC statement, and I am not aware of any evidence to support this (and much that doesn’t).

In regard to the 50/50 argument

by Judith Curry

Pick one:

a) Warming since 1950 is predominantly (more than 50%) caused by humans.
b) Warming since 1950 is predominantly caused by natural processes.

When faced with a choice between a) and b), I respond: ‘I can’t choose, since I think the most likely split between natural and anthropogenic causes to recent global warming is about 50-50’.  Gavin thinks I’m ‘making things up’, so I promised yet another post on this topic.

The issue here is the likelihood that human-induced CO2 production caused the global warming detected from 1950 to 2014.

The basis of this argument is that CO2, increasing at 1.3 ppm a year [2.07 for the last decade] from a base of 312 ppm to 400 ppm now, is all human-induced, and that this should cause a rise in average global temperature of 0.2 degrees a decade.

The attribution of the warming is made from assumptions [G and Curry] from models, and the models are all programmed to input a 0.2 degree rise a decade [the rise that “must” occur when CO2 is going up at this rate]. The climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations.

Gavin writes:

The basis of the AR5 calculation is summarised in figure 10.5:

Figure 10.5 IPCC AR5
And herein lie a number of problems:

Firstly, anthropogenic global warming is really GHG [greenhouse gas] warming, as humans are supposed to make all of the excess GHG. This is a lot more than the observed warming over this time, as 0.2 degrees a decade for 64 years is 1.28 degrees. Strangely, this is 130% of the observed warming that has occurred. Guess the models did not predict the pause after all.

Secondly, the rate of CO2 increase has risen, decade by decade, from 0.75 ppm to 2.07 ppm per year, but the models were set with the lower levels. At the same time as we should be seeing an increasing rate of temperature rise, we instead have a pause.

Thirdly, anthropogenic global warming [ANT] is still put at greater than 100%, i.e. 110%, after taking off the supposed negative aerosol effect [OA], which is so unknown that the error bars are bigger than the guesstimate. This is where Gavin obtains the 110% likely range of anthropogenic warming that he attributes to the IPCC. This is 1.28 degrees minus the largest guesstimate, offered with a straight face, for the aerosol effect.

Fourthly, Natural Variation gets a guernsey with the ridiculously low figure of 0.1 degree over 64 years either way – no guesswork here. Judith’s point that AO and PO oscillations and multidecadal waves may go in 60, 80 or 100 year cycles is completely ignored, by saying that natural variation should be ignored over a long time as it reverts to the mean. In the time frame given there is every possibility that natural variation, possibly in the order of 0.2 degrees a decade, could be happening, which could mean that the 1990s rise was not caused by humans at all.  In a different context on another matter Gavin himself said: “in framing this as a binary choice, it gives implicit (but invalid) support to the idea that each choice is equally likely. That this is invalid reasoning should be obvious by simply replacing 50% with any other value and noting that the half/half argument could be made independent of any data.”

Fifthly, as Tom Curtis at Skeptical Science kindly pointed out to the maths-challenged Russ R – who stated that G = +0.9±0.4°C and OA = -0.25±0.35°C, so, by simple math, ANT = 0.65±0.75°C, and that the PDF would therefore be centered around 100% (not 110%) of the observed warming, with (5-95%) uncertainty of ±115% [see first comment at blog, Judith, Russ R. at 06:27 AM on 16 September, 2014] – that would be a giant variability due to the OA uncertainty range, if correct.
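For what it is worth, the disputed arithmetic in that exchange can be written out directly. Adding the two half-widths linearly does give the ±0.75°C (about ±115% of the observed 0.65°C) quoted above; treating the G and OA uncertainties as independent and combining them in quadrature, the more usual choice, gives a noticeably tighter range, and the ±0.2°C range read off Fig. 10.5 is tighter still, presumably because ANT is estimated directly rather than by differencing G and OA. A minimal sketch, assuming the quoted figures are symmetric 5-95% half-widths:

```python
# The comment-thread arithmetic, done two ways. Assumes the quoted ranges are
# symmetric 5-95% half-widths (an assumption, not something stated in AR5).
g, g_err = 0.9, 0.4       # greenhouse-gas contribution, deg C
oa, oa_err = -0.25, 0.35  # other anthropogenic (mostly aerosol), deg C
observed = 0.65           # observed warming, deg C

ant = g + oa  # combined anthropogenic best estimate = 0.65 C
linear_err = g_err + oa_err                # straight addition: +/-0.75 C
quad_err = (g_err**2 + oa_err**2) ** 0.5   # independent errors: ~+/-0.53 C

print(f"ANT best estimate: {ant:.2f} C")
print(f"linear half-width:     +/-{linear_err:.2f} C "
      f"(+/-{linear_err / observed:.0%} of observed warming)")
print(f"quadrature half-width: +/-{quad_err:.2f} C "
      f"(+/-{quad_err / observed:.0%} of observed warming)")
```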

 

 

 

 

 

 
