Curry’s Dilemma

jwalsh @13, it would be nice if you in fact let the IPCC explain, rather than cutting them off in mid explanation.

To start with, as shown in Fig 10.4 below, the models are used to determine relative contribution but are scaled to match actual temperature increases.  Thus if a model shows an anthropogenic temperature increase of 0.8 C and a total increase of 0.7 C, the anthropogenic increase is scaled by 0.65/0.7 (the ratio of the observed increase to the modelled total) to determine the anthropogenic contribution.  Thus any tendency to overestimate the temperature trend is eliminated as a factor in determining attribution.  All that remains is the relative responsiveness to particular forcings.  With respect to that, it is well known that the combined natural forcings from 1951-2010 are slightly negative, or neutral at best.

Further, as the IPCC says:

“We moderate our likelihood assessment and report likely ranges rather than the very likely ranges directly implied by these studies in order to account for residual sources of uncertainty including sensitivity to EOF truncation and analysis period (e.g., Ribes and Terray, 2013).”

That is, they multiplied uncertainty by a factor of 1.36, thus substantially expanding the uncertainty range to account for any additional uncertainty relating to the methods used.  The models, note, only overestimate recent temperature trends by 18%, half the expansion of the uncertainty range, and that overestimation has been eliminated from the attribution by scaling in any event.
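A minimal sketch of that scaling arithmetic, using the comment’s illustrative numbers (0.8, 0.7 and 0.65 C are the example values above, not IPCC figures):

```python
# Illustrative only: scaling a model's anthropogenic signal to the observed total,
# using the example numbers from the comment above (not actual IPCC figures).
model_anthro = 0.8   # degC warming the model attributes to anthropogenic forcing
model_total  = 0.7   # degC total warming simulated by the model
observed     = 0.65  # degC observed warming over the same period

scale = observed / model_total           # how much the model overshoots the real world
anthro_contribution = model_anthro * scale

print(f"scaling factor: {scale:.2f}")
print(f"scaled anthropogenic warming: {anthro_contribution:.2f} degC")
print(f"fraction of observed warming: {anthro_contribution / observed:.0%}")
```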

Finally, they go on to say:

“The assessment is supported additionally by a complementary analysis in which the parameters of an Earth System Model of Intermediate Complexity (EMIC) were constrained using observations of near-surface temperature and ocean heat content, as well as prior information on the magnitudes of forcings, and which concluded that GHGs have caused 0.6°C to 1.1°C (5 to 95% uncertainty) warming since the mid-20th century (Huber and Knutti, 2011); an analysis by Wigley and Santer (2013), who used an energy balance model and RF and climate sensitivity estimates from AR4, and they concluded that there was about a 93% chance that GHGs caused a warming greater than observed over the 1950–2005 period; and earlier detection and attribution studies assessed in the AR4 (Hegerl et al., 2007b).”

 

Yes, it goes on into further detail.  But I fundamentally stand by my original assessment. The 10.5 graph was primarily derivative of a very small group of papers discussing model outputs.  Therefore, I think a statement like “The green bar shows the amount of warming caused by human greenhouse gas emissions during that time.” is potentially misleading. The green bar is derived from climate model outputs.

And I don’t think “while Judith Curry from Georgia Tech represented the opinions of 2–4% of climate experts that we could be responsible for less than half of that warming.” is well supported by evidence, or by the IPCC here.  In the same section, they go on to say that:

“We conclude, consistent with Hegerl et al. (2007b), that more than half of the observed increase in GMST from 1951 to 2010 is very likely due to the observed anthropogenic increase in GHG concentrations”

 

Very likely, in IPCC parlance, is 90-100%.  And that’s if you agree with their conclusions there, and not every climate scientist does.  But I certainly wouldn’t want anyone to take my word for it. Verheggen et al. 2014 asked a number of climate scientists to provide a figure for attribution, and roughly two-thirds reported above 50% anthropogenic; the remainder reported either less or uncertain.  Setting aside method criticisms of the paper itself (close enough for this purpose), how does one reconcile this with the 2-4% estimate?  For that matter, where does 2-4% come from? Not from any study I have read.  Were too many climate scientists unaware of the CMIP5 and other model results?

For his part, Schmidt referenced the most recent IPCC report. The IPCC summarises the latest and greatest climate science research, so there is no better single source. The figure below from the IPCC report illustrates why 96–97% of climate science experts and peer-reviewed research agree that humans are the main cause of global warming.

 

IPCC attribution statements redux: A response to Judith Curry


— gavin @ 27 August 2014

I have written a number of times about the procedure used to attribute recent climate change (here in 2010, in 2012 (about the AR4 statement), and again in 2013 after AR5 was released). For people who want a summary of what the attribution problem is, how we think about the human contributions and why the IPCC reaches the conclusions it does, read those posts instead of this one.

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.



The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.
If you are still here, I should be clear that this post is focused on a specific claim Judith Curry has recently blogged about supporting a “50-50” attribution (i.e. that trends since the middle of the 20th Century are 50% human-caused, and 50% natural, a position that would center her pdf at 0.5 in the figure above). She also commented about her puzzlement about why other scientists don’t agree with her. Reading over her arguments in detail, I find very little to recommend them, and perhaps the reasoning for this will be interesting for readers. So, here follows a line-by-line commentary on her recent post. Please excuse the length.

 

G. It is worth pointing out that there can be no assumption that natural contributions must be positive – indeed for any random time period of any length, one would expect natural contributions to be cooling half the time. This is not right: natural variation may be 50% over thousands of years, but not over 100 years; furthermore the IPCC only estimated a 0.1 degree change, and it is obviously more.

 

 

G Is expert judgment about the structural uncertainties in a statistical procedure associated with various assumptions that need to be made different from ‘making things up’? Actually, yes – it is.

 

This is very confused. The basis of the AR5 calculation is summarised in figure 10.5:


Figure 10.5 IPCC AR5
The best estimate of the warming due to anthropogenic forcings (ANT) is the orange bar (noting the 1σ uncertainties). Reading off the graph, it is 0.7±0.2ºC (5-95%) with the observed warming 0.65±0.06 (5-95%). The attribution then follows as having a mean of ~110%, with a 5-95% range of 80–130%. This easily justifies the IPCC claims of having a mean near 100%, and a very low likelihood of the attribution being less than 50% (p < 0.0001!). Note there is no ‘downweighting’ of any argument here – both statements are true given the numerical distribution. However, there must be some expert judgement to assess what potential structural errors might exist in the procedure. For instance, the assumption that fingerprint patterns are linearly additive, or uncertainties in the pattern because of deficiencies in the forcings or models etc. In the absence of any reason to think that the attribution procedure is biased (and Judith offers none), structural uncertainties will only serve to expand the spread. Note that one would need to expand the uncertainties by a factor of 3 in both directions to contradict the first part of the IPCC statement. That seems unlikely in the absence of any demonstration of some huge missing factors.
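To make the arithmetic behind that attribution fraction concrete, here is a rough Monte Carlo sketch of my own. It treats the two quoted numbers as independent Gaussians, which is a simplification; the real analysis uses the full regression output, so its 80–130% range is a little tighter and its tail probability smaller than this toy version, but the story is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# 5-95% ranges quoted above, converted to Gaussian standard deviations (1.645 sigma each side)
ant = rng.normal(0.70, 0.20 / 1.645, n)   # anthropogenic warming, degC
obs = rng.normal(0.65, 0.06 / 1.645, n)   # observed warming, degC

frac = ant / obs                           # fraction of observed warming attributed to ANT

lo, hi = np.percentile(frac, [5, 95])
print(f"mean attribution:  {frac.mean():.0%}")      # close to 110%
print(f"5-95% range:       {lo:.0%} to {hi:.0%}")
print(f"P(fraction < 50%): {np.mean(frac < 0.5):.1e}")
```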

The 50-50 argument

There are multiple lines of evidence supporting the 50-50 (middle tercile) attribution argument. Here are the major ones, to my mind.

Sensitivity

The 100% anthropogenic attribution from climate models is derived from climate models that have an average equilibrium climate sensitivity (ECS) around 3C. One of the major findings from AR5 WG1 was the divergence in ECS determined via climate models versus observations. This divergence led the AR5 to lower the likely bound on ECS to 1.5C (with ECS very unlikely to be below 1C).

Judith’s argument misstates how forcing fingerprints from GCMs are used in attribution studies. Notably, they are scaled to get the best fit to the observations (along with the other terms). If the models all had sensitivities of either 1ºC or 6ºC, the attribution to anthropogenic changes would be the same as long as the pattern of change was robust. What would change would be the scaling – less than one would imply a better fit with a lower sensitivity (or smaller forcing), and vice versa (see figure 10.4).

She also misstates how ECS is constrained – all constraints come from observations (whether from long-term paleo-climate observations, transient observations over the 20th Century or observations of emergent properties that correlate to sensitivity) combined with some sort of model. The divergence in AR5 was between constraints based on the transient observations using simplified energy balance models (EBM), and everything else. Subsequent work (for instance by Drew Shindell) has shown that the simplified EBMs are missing important transient effects associated with aerosols, and so the divergence is very likely less than AR5 assessed.

If true climate sensitivity is only 50-65% of the magnitude that is being simulated by climate models, then it is not unreasonable to infer that attribution of late 20th century warming is not 100% caused by anthropogenic factors, and attribution to anthropogenic forcing is in the middle tercile (50-50).

The IPCC’s attribution statement does not seem logically consistent with the uncertainty in climate sensitivity.

This is related to a paper by Tung and Zhou (2013). Note that the attribution statement has again shifted to the last 25 years of the 20th Century (1976-2000). But there are a couple of major problems with this argument. First of all, Tung and Zhou assumed that all multi-decadal variability was associated with the Atlantic Multi-decadal Oscillation (AMO) and did not assess whether anthropogenic forcings could project onto this variability. It is circular reasoning to then use this paper to conclude that all multi-decadal variability is associated with the AMO.

The second problem is more serious. Lewis’ argument up until now has been that the best fit to the transient evolution over the 20th Century is with a relatively small sensitivity and small aerosol forcing (as opposed to a larger sensitivity and larger opposing aerosol forcing). However, in both these cases the attribution of the long-term trend to the combined anthropogenic effects is actually the same (near 100%). Indeed, one valid criticism of the recent papers on transient constraints is precisely that the simple models used do not have sufficient decadal variability!

Climate variability since 1900

From HadCRUT4:

HadCRUT4

The IPCC does not have a convincing explanation for:

  • warming from 1910-1940
  • cooling from 1940-1975
  • hiatus from 1998 to present

The IPCC purports to have a highly confident explanation for the warming since 1950, but it was only during the period 1976-2000 when the global surface temperatures actually increased.

The absence of convincing attribution of periods other than 1976-present to anthropogenic forcing leaves natural climate variability as the cause – some combination of solar (including solar indirect effects), uncertain volcanic forcing, natural internal (intrinsic variability) and possible unknown unknowns.

This point is not an argument for any particular attribution level. As is well known, using an argument of total ignorance to assume that the choice between two arbitrary alternatives must be 50/50 is a fallacy.

 

I gave a basic attribution for the 1910-1940 period above. The 1940-1975 average trend in the CMIP5 ensemble is -0.01ºC/decade (range -0.2 to 0.1ºC/decade), compared to -0.003 to -0.03ºC/decade in the observations, and is therefore a reasonable fit. The GHG driven trends for this period are ~0.1ºC/decade, implying that there is a roughly opposite forcing coming from aerosols and volcanoes in the ensemble.

? NATURAL VARIABILITY AS WELL?

The situation post-1998 is a little different because of the CMIP5 design, and ongoing reevaluations of recent forcings (Schmidt et al, 2014; Huber and Knutti, 2014). Better information about ocean heat content is also available to help there, but this is still a work in progress and is a great example of why it is harder to attribute changes over small time periods.

 

In the GCMs, the importance of internal variability to the trend decreases as a function of time. For 30 year trends, internal variations can have a ±0.12ºC/decade or so impact on trends, for 60 year trends, closer to ±0.08ºC/decade.

 

For an expected anthropogenic trend of around 0.2ºC/decade, the signal will be clearer over the longer term. Thus cutting down the period to ever-shorter periods of years increases the challenges and one can end up simply cherry picking the noise instead of seeing the signal.
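A toy synthetic sketch of that point: generate red-noise “internal variability”, add a 0.2ºC/decade forced trend, and see how the fitted trend scatters over 30-year versus 60-year windows. The AR(1) noise parameters here are arbitrary choices of mine, not GCM-derived, so the spreads will not match the ±0.12/±0.08ºC/decade figures above; the point is only that the spread shrinks as the window lengthens.

```python
import numpy as np

rng = np.random.default_rng(1)

def ar1_noise(n_years, sigma=0.12, phi=0.6):
    """Toy AR(1) 'internal variability' in degC (parameters are illustrative only)."""
    x = np.zeros(n_years)
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.normal(0, sigma)
    return x

def trend_per_decade(series):
    years = np.arange(series.size)
    return np.polyfit(years, series, 1)[0] * 10.0   # fitted slope, degC/decade

forced = 0.02   # degC/year, i.e. the ~0.2 degC/decade anthropogenic signal mentioned above

for length in (30, 60):
    trends = np.array([
        trend_per_decade(forced * np.arange(length) + ar1_noise(length))
        for _ in range(2000)
    ])
    print(f"{length}-yr trends: mean {trends.mean():.2f}, "
          f"spread (1 sigma) +/-{trends.std():.2f} degC/decade")
```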

The main relevant deficiencies of climate models are:

  • climate sensitivity that appears to be too high, probably associated with problems in the fast thermodynamic feedbacks (water vapor, lapse rate, clouds)
  • failure to simulate the correct network of multidecadal oscillations and their correct phasing
  • substantial uncertainties in aerosol indirect effects
  • unknown and uncertain solar indirect effects

The sensitivity argument is irrelevant (given that it isn’t zero of course). Simulation of the exact phasing of multi-decadal internal oscillations in a free-running GCM is impossible so that is a tough bar to reach! There are indeed uncertainties in aerosol forcing (not just the indirect effects) and, especially in the earlier part of the 20th Century, uncertainties in solar trends and impacts. Indeed, there is even uncertainty in volcanic forcing. However, none of these issues really affect the attribution argument because a) differences in magnitude of forcing over time are assessed by way of the scales in the attribution process, and b) errors in the spatial pattern will end up in the residuals, which are not large enough to change the overall assessment.

Nonetheless, it is worth thinking about what impact plausible variations in the aerosol or solar effects could have. Given that we are talking about the net anthropogenic effect, the playing off of negative aerosol forcing and climate sensitivity within bounds actually has very little effect on the attribution, so that isn’t particularly relevant. A much bigger role for solar would have an impact, but the trend would need to be about 5 times stronger over the relevant period to change the IPCC statement and I am not aware of any evidence to support this (and much that doesn’t).

In regard to the 50/50 argument

by Judith Curry

Pick one:
a) Warming since 1950 is predominantly (more than 50%) caused by humans.
b) Warming since 1950 is predominantly caused by natural processes.

When faced with a choice between a) and b), I respond: ‘I can’t choose, since I think the most likely split between natural and anthropogenic causes to recent global warming is about 50-50’. Gavin thinks I’m ‘making things up’, so I promised yet another post on this topic.

The issue here is the likelihood that human-induced CO2 production caused the global warming detected from 1950 to 2014.

The basis of this argument is that CO2, increasing at 1.3 ppm a year [2.07 for the last decade] from a base of 312 ppm to now 400 ppm, is all human-induced, and that this should cause a rise in average global temperature of 0.2 degrees a decade.

The attribution of the warming is made from assumptions [G and Curry] from models, and the models are all programmed to input a 0.2 degrees rise a decade [the rise that “must” occur when CO2 is going up at this rate]: the climate models ‘detect’ AGW by comparing natural forcing simulations with anthropogenically forced simulations.

Gavin writes

The basis of the AR5 calculation is summarised in figure 10.5:

Figure 10.5 IPCC AR5
and herein lie a number of problems

Firstly, anthropogenic global warming is really GHG [greenhouse gas] warming, as humans are supposed to make all of the excess GHG. This is a lot more than the observed warming over this time, as 0.2 degrees a decade for 64 years is 1.28 degrees. Strangely this is 130% of the observed warming that has occurred. Guess the models did not predict the pause after all.

Secondly, the rate of CO2 increase has gone up from 0.75 ppm to 2.07 ppm per year, but the models were set with the lower levels. At the same time as we should be seeing an increase in the rate of temperature rise, we instead have a pause.

Thirdly, anthropogenic global warming [ANT] is still put at greater than 100%, i.e. 110%, after taking off the supposed negative aerosol effect [OA], which is so unknown that the error bars are bigger than the guesstimate. This is where Gavin obtains the 110% likely range of anthropogenic warming that he attributes to the IPCC. This is 1.28 degrees minus the largest aerosol guesstimate that can be offered with a straight face.

Fourthly, natural variation gets a guernsey with the ridiculously low figure of 0.1 degree over 64 years either way; no guesswork here. Judith’s point about AO and PO oscillations and multidecadal waves, which may go in 60, 80 or 100 year cycles, is completely ignored by saying that natural variation should be ignored over a long time as it reverts to the mean. In the time frame given there is every possibility that natural variation, possibly in the order of 0.2 degrees a decade, could be happening, but this could mean that the 1990’s rise was not caused by humans at all. In a different context on another matter Gavin himself said: “in framing this as a binary choice, it gives implicit (but invalid) support to the idea that each choice is equally likely. That this is invalid reasoning should be obvious by simply replacing 50% with any other value and noting that the half/half argument could be made independent of any data.”

Fifthly, as kindly pointed out by Tom Curtis at Skeptical Science in response to the maths-challenged Russ R, who stated that G = +0.9±0.4°C and OA = -0.25±0.35°C, so by simple math ANT = 0.65±0.75°C, and that the PDF would therefore be centered around 100% (not 110%) of the observed warming with (5-95%) uncertainty of ±115% [see first comment at the blog, Judith, Russ R. at 06:27 AM on 16 September, 2014]. That would be a giant variability, due to the OA uncertainty range, if correct.
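For what it’s worth, a minimal sketch of how those two quoted ranges combine if G and OA are treated as independent Gaussians (an assumption on my part): adding the 5-95% half-widths linearly gives the ±0.75°C in the quote, while quadrature gives a tighter ±0.53°C. In the full attribution analysis the two scaling factors are not independent at all, which is part of why the ANT range Gavin reads off Figure 10.5 (±0.2ºC) is tighter still.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Quoted 5-95% ranges, treated here as independent Gaussians (an assumption)
g  = rng.normal(0.90, 0.40 / 1.645, n)    # GHG warming, degC
oa = rng.normal(-0.25, 0.35 / 1.645, n)   # other anthropogenic (mostly aerosol), degC

ant = g + oa                               # combined anthropogenic warming

half_width = 1.645 * ant.std()
print(f"ANT = {ant.mean():.2f} +/- {half_width:.2f} degC (5-95%)")
# Quadrature: sqrt(0.40**2 + 0.35**2) is about 0.53, noticeably tighter than the
# +/-0.75 obtained by adding the two half-widths linearly.
```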

 

 

 

 

 

 


mosher

Steven Mosher says:

Some folks seem to be confused by my position, and Anthony’s post aims at finding agreement.

So, let me state some things clearly.

My position:
1. Averaging absolutes as Goddard does is not the best method to use, especially when records are missing.
A) It’s not the best method to calculate a global average.
B) It’s not the best method to assess the impact of adjustments.
2. IF you choose a method that requires long continuous records then you have to adjust for station changes:
A) changes in location
B) changes in TOBS
C) changes in instrument.
3. The alternative to adjusting (#2) is to slice stations.
A) When a station moves, it’s a new fricking station, because temperature is a function of SITING.
B) When the instrument changes, it’s a new fricking station.
C) When you change the time of observation, it’s a new station.
4. Another alternative to 2 is to pre-select stations according to criteria of goodness.

On #1. The method of averaging absolutes is unreliable. Sometimes it will work, sometimes it will give you biases in both directions. Deciding which method to use should be done with a systematic study using synthetic data. This is not a skeptic versus warmist argument. This is a pure method question.
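In the spirit of that suggestion, here is a toy synthetic-data sketch (my construction, not Mosher’s code): two invented stations with different baseline climates, one of which stops reporting halfway through. Averaging absolutes produces a spurious jump when the cooler station drops out; averaging anomalies, each station relative to its own baseline, does not.

```python
import numpy as np

years = np.arange(1950, 2011)
true_trend = 0.02 * (years - years[0])        # a common 0.2 degC/decade warming signal

# Two hypothetical stations with different climatologies (e.g. valley vs mountain)
station_a = 15.0 + true_trend                  # warm site, reports the whole period
station_b = 5.0 + true_trend                   # cool site, stops reporting after 1980
station_b[years > 1980] = np.nan

# Method 1: average the absolute temperatures of whatever stations report
avg_absolute = np.nanmean(np.vstack([station_a, station_b]), axis=0)

# Method 2: average anomalies, each station relative to its own 1951-1980 baseline
base = (years >= 1951) & (years <= 1980)
anom_a = station_a - station_a[base].mean()
anom_b = station_b - np.nanmean(station_b[base])
avg_anomaly = np.nanmean(np.vstack([anom_a, anom_b]), axis=0)

print("jump in absolute average when station B drops out: "
      f"{avg_absolute[years == 1981][0] - avg_absolute[years == 1980][0]:.2f} degC")
print("same step in the anomaly average: "
      f"{avg_anomaly[years == 1981][0] - avg_anomaly[years == 1980][0]:.2f} degC")
```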

On #2. This approach means that every adjustment you do will be subject to examination. You will never ever get them all correct. Since adjustment codes are based on statistical models, you might be right 95% of the time and wrong 5% of the time. There are 40,000 stations. Go figure 5% of that.

On #3. This is my preferred approach versus #2. Why? Because when the station changes it’s a new station. It’s measuring something different. The person who changed my mind about this was Willis. I used to like #2.

On #4, I’m all for it. However, the choice of station rating must be grounded in field tests: actual field tests of what makes a site good and what disqualifies a site. Site rating needs to be objective (based on measurable properties) and not merely visual inspection. Humans need to be taken out of rating, or strict rating protocols must be established and tested.

Now, let the personal attacks commence. Or you can look at 1-4 and say whether you agree or disagree.

Endogenous [Feedbacks] and Exogenous [Forcings] are a bit user defined.
To my way of thinking turning the light up or down, ie the sun increasing or decreasing output or moving towards or away from the earth is an obvious exogenous forcing.
Having an eclipse of the earth during the day turns off 100 million Hiroshima atomic bombs of heat during a typical 3 minute eclipse. Enough eclipses [21.3] and we would solve our purported energy imbalance.
Other exogenous sources are harder to understand eg background radiation from the Universe.
One might allow volcanoes grudgingly along with the fact that the earth gives some small radiative forcing from its core heat.
The rest is all endogenous as far as I am concerned.
Soot, methane and water come from natural sources and variations, as does CO2.
Plants absorb the most CO2 but also produce the most CO2 when they rot or are burnt as coal. Humans producing CO2 is as natural as cows producing methane. The earth has a great inbuilt capacity to stay normal despite the minute efforts of humans.
The sea has been the same alkalinity for over a billion years. It is kept that way by the relatively infinite constant amounts of water and minerals. Why is the pH where it is? Because it cannot go anywhere else given the substrate composition.

ushcn

angech (Comment #131480)  at Blackboard

but you still have goddarians out there.
Only a complete munchkin would argue that we should use the “raw” data in deference to quality controlled data. Steve Goddard is that munchkin, and it’s not surprising to see his surrogates here making the same stupid arguments here.

I repeat, I am not interested in Goddard or waiting for him.
Calling him a munchkin is a really good argument, Carrick. Thanks for including that brilliant riposte. Better than Tamino when on a losing argument.
Using raw data, real data, in deference to quality-controlled, model-inferred “data” [let’s call it gloop], is better?
Note there are no tree rings or proxies in real data.
Let’s note that all data is adjusted [rounded] at some level by a person or program.
There is nothing wrong with this, and rounding can go up as well as down, Mosher; it evens out.
Real data from a thermometer is pretty accurate. It was taken on the day, it was written down, and it is still amazingly extant in “the true original value, it must be retrieved from DSI-3210”.

Now to correct some minor subterfuge.
The first USHCN datasets defined a network of 1,219 stations in the contiguous United States.
24 of the 1,218 stations (about 2 percent) have complete data from the time they were established.
The initial USHCN daily data set contained a 138-station subset of the USHCN.
Even though there is supposed to be a network of 1,218 stations from which the model is derived, for most of its life since 1987 USHCN has used variations on a smaller critical subset to issue its temperature model, roughly the 138/1218 or 10% of the stations [do not get picky on my maths].
Steven said “USHCN version 1 data comprise about 5% of station months, generally in the earliest years of the station records.”
This is not correct if referring to USHCN, which he seems to be, though he may mean USHCN compared to all US CONUS.
Furthermore:
“Monthly values calculated from GHCN-Daily are merged with the USHCN version 1 monthly data to form a more comprehensive dataset of serial monthly temperature and precipitation values for each HCN station”
Err, no. USHCN is supposed to be worked out from its 1,218 stations, infilled from surrounding non-recognized stations when data is missing, and then incorporated into GHCN, smaller to larger, not using the world data to fool the American data, surely, please.
I understand the reams of data, Steve, so when you josh around telling less able people like myself to go and do the work that a highly trained person like yourself found almost too hard, it is not even comedic, just sad and not helpful.
Let’s have real history and explain that we use models for science, but they are not real.

 

 

mosher

2011-09-23 10:03:27
grypogryposaurus@gmail…
Zeke left a comment.  He’s trustworthy.  I’ll fix his link and thank him.

2011-10-08
Kevin Cowtan, from York, UK. I’ve got a PhD in computational physics, and am a long standing post-doc with fellowship-in-the-pipeline working on computational methods development in X-ray crystallography.
2010-08-15 Robert Way
I am a Masters student at Memorial University of Newfoundland in Eastern Canada, having previously studied at the University of Ottawa in Geography with a minor in Geomatics and Spatial Analysis.
My primary interests lie in paleoclimatology, remote sensing techniques for glaciers and ice sheets, and ocean-atmospheric dynamics.
My 2 poster boys.

://climateaudit.org/2012/07/31/surface-stations/

angech | July 7, 2014 at 10:59 pm | Reply

Judith, I and others I’m sure would like to do a more formal rebuttal of Zeke’s approach if allowed and only if well written and argued.
Mine would focus on 3 key points.
The first of adjustment of past temperatures from current ones.
The second of a possible flaw in TOBS as used.
The third on the number of what are referred to as Zombie stations.
1. Zeke says this is incremental and unavoidable, using current temperatures as the best guide and adjusting backwards.
“NCDC assumes that the current set of instruments recording temperature is accurate, so any time of observation changes or PHA-adjustments are done relative to current temperatures. Because breakpoints [TOBS] are detected through pair-wise comparisons, new data coming in may SLIGHTLY change the magnitude of recent adjustments by providing a more comprehensive difference series between neighboring stations.

When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint. This means that slight modifications of recent breakpoints will impact all past temperatures at the station in question though a constant offset.”

The incremental changes add up to WHOPPING changes of over 1.5 degrees over 100 years to past records and 1.0 degree to 1930 records. Zeke says the TOBS changes at the actual times are only in the range of 0.2 to 0.25 degrees. This would mean a cumulative change of 1.3 degrees colder in the distant past on his figures, everywhere.
Note he is only technically right to say this “will impact all past temperatures at the station in question though a constant offset.”
But he is not changing the past by 0.2 degrees. It alters all the past TOBS changes, which causes the massive change of up to 1.5 degrees in only 100 years.

angech (Comment #130582)

Zeke, if a record high temp was recorded in Death Valley, or Texas, or Alaska on one of the 1218 stations in 2010, it is by your own admission no longer a record on your system, because it has had to be adjusted down by the dropping out of the warmer stations you mention.
Nick currently denies this over at WUWT even though he knows the record is being adjusted down.
In his eyes and yours one needs to correct the past records to maintain the purity of the current records under your adjustment system, merely for mathematical predictions.
The folly of this perfectly correct mathematical approach is that we live in a real-life world, not a maths-and-graphs world.
We cling to truth in real past records at individual sites, not wanting your attempt to do perfectly correct modelling of the US and world temperatures at the cost of throwing out the past, which is what it does.
You are building the same giant clockwork device believing that the universe goes round the earth not the sun, rather than a simple model which incorporates the truth but is still usable for future projection.
The more you put in, the more cumbersome and impractical and divorced from reality it becomes.

angech (Comment #129994)

So to be clear:
there were 1218 real stations (USHCN) in the late 1980s.
There are now [???] original real stations left; my guess, half: 609.
There are [???] total real stations; my guess, eyeballing, 870.
There are 161 new real stations, all in airports or cities, added to the graph.
There are 348 made up stations and 161 selected new stations.
The number of the original 1218 has to be kept, like the stock exchange, to have a mythical representative temperature or temperature anomaly over this number of sites.
Nobody has put up a new thermometer in rural USA in the last 30 years, and none has considered using any of the rural thermometers, of which there are possibly 3000 among the discarded 5,782 cooperative network stations.
And all this is Steve Goddard’s fault.
Can someone confirm these figures are accurate, and if so why any trust should be put in this Michael Mann-like ensemble of real stations, adjusted real stations and computer-infilled models.

Zeke (Comment #130058)

Mosh,

Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.

The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.

An alternative approach would be to assume that the initial temperature reported by a station when it joins the network is “true”, and remove breakpoints relative to the start of the network rather than the end. It would have no effect at all on the trends over the period, of course, but it would lead to less complaining about distant past temperatures changing at the expense of more present temperatures changing.
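A toy sketch of the two conventions Zeke describes (my illustration, not NCDC code): remove a detected step change either by shifting everything before the breakpoint to match the present, or by shifting everything from the breakpoint onward to match the start. As he says, the trend over the period is identical either way; only which end of the record gets rewritten differs.

```python
import numpy as np

years = np.arange(1900, 2015)
series = 10.0 + 0.007 * (years - years[0])      # a slow warming trend, degC

# Introduce a +0.3 degC step in 2006 (e.g. a station move), as in Zeke's example
step_year, step = 2006, 0.3
observed = series + np.where(years >= step_year, step, 0.0)

# Convention 1 (as described above): treat current values as "true",
# so the whole record before the breakpoint is shifted up by the step size.
adj_to_present = observed + np.where(years < step_year, step, 0.0)

# Convention 2 (the alternative): treat the start as "true",
# so values from the breakpoint onward are shifted down instead.
adj_to_start = observed - np.where(years >= step_year, step, 0.0)

fit = lambda y: np.polyfit(years, y, 1)[0] * 10   # trend, degC/decade
print(f"trend, adjusted to present: {fit(adj_to_present):.3f} degC/decade")
print(f"trend, adjusted to start:   {fit(adj_to_start):.3f} degC/decade")
print(f"constant offset between the two versions: {np.mean(adj_to_present - adj_to_start):.2f} degC")
```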
.
angech,

As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, mostly due to volunteer observers dying or otherwise stopping reporting. No stations have been added to the network to make up for this loss, so there are closer to 900 stations reporting on the monthly basis today.
.
To folks in general,

If you don’t like infilling, don’t use infilled values and create a temperature record only from the 900 stations that are still reporting, or from all the non-infilled stations in each month. As the first graph in the post shows, infilling has no effect on CONUS-average temperatures.

angech (Comment #130074)

Carrick, your link to Moyhu showed Nick Stokes attempting to discredit SG with 6 diagrams talking about a spike in 2014, but all 6 graphs only went to 2000. Why the heck is that?
Zeke has a post at SG where he admits that there are only 650 real stations out of 1218. This is a lot less than the 918 that he alludes to above. Why would he say 650 to SG (May 12th, 3.00 pm) and instead, in #130058 at the Blackboard, say about 300 of the 1218 stations have closed down?
Can Zeke give clarity on the number of real stations (raw data) and the number of unreal stations using filled-in data among the 1218 stations?

Nick Stokes (Comment #130077)

angech (Comment #130074)
“6 diagrams talking about a spike in 2014 but all 6 graphs only went to 2000 why the heck is that.”

You’re not very good at reading graphs. The x axis is marked (by R) in years multiple of 20. The data shown is up to date.

“Zeke has a post at SG where he admits that there are only 650 real stations out of 1218 . This is a lot less than only 918 that he alludes to above.”

When I last looked a few weeks ago, in 2014 numbers reporting were Jan 891, Feb 883, Mar 883, and 645 for April. Many are staffed by volunteers and some reports are late. So 918 sounds right.

angech (Comment #130078)

Nick, I cannot understand your post. It seems that you split your data into real and infilled subgroups.
There appear to be a large number of these infilled stations: 1218 - 650 = 568, according to Zeke at SG and here.
There are claims that the real data is not located in the right areas to be useful for graphing the areas, due to differences in latitude and elevation.
The artificial sites at the best locations give a “true grid” for the 1218 “stations”.
One knows what the true readings for these artificial sites “should be”, puts them in, and then adjusts the real sites to what the artificial sites say the temperature should be. Zeke says each month one takes the infilled data from the unreal stations. I guess it “comes in” from the computer programme primed with a need to rise up as CO2 goes up, otherwise known as Steven’s Climate Sensitivity factor, which is being adjusted downwards from 3.0 to 1.4 currently due to the satellite pause.
One then has to look for non-climatic break points, AKA real data behaving badly, which has to be removed.
Fortunately, when you do this, the difference between the raw R1 data and the final F1 data is almost eliminated, as Nick so elegantly shows. Bravo for the shell trick.

angech (Comment #130314)

Thank you Zeke for putting this post up. Hopefully it will result in greater openness and sharing of information; though you may not be feeling this yet, you are trying, which a lot of your colleagues do not want to do. The level of vitriol reflects the extreme importance of doing the data collection and models openly, so all sides can feel confident that their arguments are on standard ground. As you know this is not the case at the moment and has not been the case for skeptics for a long time.
Incidentally, Mosher described the principle that if site A is closer to site B than site C is, then site A is more likely to be similar to site B than C is, as a fundamental theorem of geostatistics, at JC (10.50, 12/6/2014, asymmetric responses of Arctic and Antarctic).
My question to you is that Robert Way has stated at Skeptical Science that this is not true when calculating the Arctic infilling as used in Cowtan and Way, and my understanding is that this faulty principle may now be being used in your current Arctic infilling. Can you assure us whether you use Steven’s fundamental principle or Robert Way’s new improved principle?

angech (Comment #130316)

See “How global warming broke the thermometer records” by Kevin Cowtan at Skeptical Science, 25/4/2014, speaking of his and Robert Way’s finding that the GISTEMP conundrum was due to actual GHCN Arctic data and infilling showing a cooling “bias” when compared to their model-only method.
This occurred supposedly by violating the assumption that neighbouring regions of the planet’s surface warm at a similar rate.

Zeke (Comment #130317)

angech,

The problem in the arctic is one of station density; Cowtan and Way actually discovered the problem with GHCN’s adjustments by comparing them to Berkeley’s results, which are more accurate for those stations given the denser network. There is always a challenge in very sparsely sampled areas of misclassifying abrupt changes (in this case an abrupt warming trend) as local biases rather than true regional effects. Larger station networks can help ameliorate this.
.
Will Nitschke,

Some sort of automated homogenization is necessary. We’ve been working on ways to test to ensure that homogenization is not introducing bias. The Williams et al paper makes a compelling case, for example: ftp://ftp.ncdc.noaa.gov/pub/da…..al2012.pdf

Our recent UHI paper also looks at this by redoing homogenization using only rural stations to detect breakpoints/trend biases.

The reason I suspect that the Amundsen-Scott results are a bug due to the extreme cold is that they are flagged as regional climatology outliers. I’ll suggest that the Berkeley team look into it in more detail next week.
.
JD Ohio,

Using that approach, observations are still within the 95% CI for models, though they are close to the bottom. As I mention in the article you reference, the next few years will be key in seeing if they become clearly inconsistent. I have an updated graph here: http://www.yaleclimatemediafor…..in-review/
.
For other folks: sorry for being slow in responding; other things in life have been pulling me away from blogging, and I’m about to head out on a camping trip with no internet access fo

Zeke (Comment #130839)

The exact number of real stations reporting each month in USHCN version 2 is shown in this figure: http://rankexploits.com/musing…..-Count.png

You can download all the raw station data yourself here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/

I really have no clue why people keep harping on this “exact number of active real stations” question when its trivial to answer…
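For what it’s worth, the count itself is a few lines of code once the raw monthly files have been parsed into a table. The sketch below uses a small hypothetical table with columns station, year, month, tavg and an estimated flag for infilled values; this is my assumed layout for illustration, not NCDC’s fixed-width file format, which is documented in the readme at the ftp site.

```python
import pandas as pd

# Hypothetical layout after parsing the USHCN monthly files into one table:
# one row per station-month, NaN for missing values, and a boolean "estimated"
# column flagging infilled ("zombie") values. Parsing the real files is left out here.
df = pd.DataFrame({
    "station":   ["USH001", "USH002", "USH003", "USH001", "USH002", "USH003"],
    "year":      [2014, 2014, 2014, 2014, 2014, 2014],
    "month":     [3, 3, 3, 4, 4, 4],
    "tavg":      [5.1, 4.8, 5.0, 9.2, None, 9.1],
    "estimated": [False, False, True, False, False, True],
})

reporting = (
    df[df["tavg"].notna() & ~df["estimated"]]
    .groupby(["year", "month"])["station"]
    .nunique()
)
print(reporting)   # count of stations with real (non-infilled) data, per month
```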

Judith, I and others I’m sure would like to do a more formal rebuttal of Zeke’s approach if allowed and only if well written and argued.
Mine would focus on 3 key points.
The first of adjustment of past temperatures from current ones.
The second of a possible flaw in TOBS as used.
The third on the number of what are referred to as Zombie stations.
2. TOBS and break adjustments are made on stations which do not have data taken at the correct time.
The process is automated in the PHA.
Infilling is done on stations missing data, i.e. not at the correct time. Zombie stations have made-up data, i.e. not at the correct time.
This means that potentially half the 1218 stations, the zombie ones and the ones missing data, have an automatic cooling of the past done every day, with the result of compounding the alterations to past temperatures.
This should not be allowed to happen.
Once a TOBS change has originally been made in the past, e.g. 1900 should have been 0.2 warmer, then this altered estimate should stay forever and not be affected by future changes.
Judith, I and others I’m sure would like to do a more formal rebuttal of Zeke’s approach if allowed and only if well written and argued.
Mine would focus on 3 key points.
The first of adjustment of past temperatures from current ones.
The second of a possible flaw in TOBS as used.
The third on the number of what are referred to as Zombie stations.
Going for a bike ride
3. I will comment on Zeke’s and others’ obfuscation on this vital issue when I return.
Samples. Ponder this:

Zeke (Comment #130839)   July 6th, 2014 at 12:22 pm
The exact number of real stations reporting each month in USHCN version 2 is shown in this figure: http://rankexploits.com/musing…..-Count.png
[HERE HE SHOWS AN OUT OF DATE GRAPH, check it out]

You can download all the raw station data yourself here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/
No, it is not labelled as raw data, and it certainly does not give a number for the raw stations.

I really have no clue why people keep harping on this “exact number of active real stations” question when its trivial to answer…

PLEASE, please answer, Zeke.
Trivial to answer, but you refuse point blank to give an answer.
How many real original stations giving data are there since 1987 out of 1218?
How many new ones have been added?
How many do not report each month?
How many zombie stations were used in March 2014, now you have all the data in?
Judith, Chief, Fan, Joshua, Climate etc:
if Zeke is honestly presenting his case, which he is, why will he not answer a trivial question?

“angech The TOBS adjustment would be done once
the adjustment that would/could change on a daily basis is PHA.
wait for Monday. the entire process will be explained.
But wait and read what is coming out on Monday
Then if you don’t like what NCDC does with USHCN, we could just dump all of USHCN, dump all of the US, and the answer wouldn’t change much.

Man are you dense. Zeke has written a paper describing all this.
Personally, I don’t look at USHCN and I don’t use it. For years.
But I am able to read Zeke’s paper and suggest that you read it on Monday.
Seriously you are descending to Goddarian levels of argument.

(Comment #130800) July 4th, 2014 at 9:48 pm
Still no count given [ever] of the exact number of active real stations and it is obvious no one will be giving one.
The silence is deafening.

Go to the ftp
download the data
do a count

personally, in my own work, I don’t look at USHCN. My advice to NOAA is to drop GHCN-M and USHCN and just supply GHCN Daily.

I am not interested in doing your homework.
you could find a million errors in USHCN and none of it matters to me because I’m upstream. get that. Not doing your homework. don’t care what your issues are. they are moot.

Zeke (Comment #130839)   July 6th, 2014 at 12:22 pm
I really have no clue why people keep harping on this “exact number of active real stations” question when its trivial to answer…

Steven Mosher (Comment #130831) July 6th, 2014 at 2:54 am
Man are you dense. Zeke has written a paper describing all this.
Personally, I don’t look at USHCN and I don’t use it. For years.

Still no count given [ever] of the exact number of active real stations, and it is obvious no one will be giving one. The silence is deafening.

Go to the ftp, download the data, do a count.
personally, in my own work, I don’t look at USHCN. My advice to NOAA is to drop GHCN-M and USHCN and just supply GHCN Daily.
I am not interested in doing your homework.
you could find a million errors in USHCN and none of it matters to me because I’m upstream. get that. Not doing your homework. don’t care what your issues are. they are moot.

Interesting that you could find a million errors in USHCN and not care, and that you don’t care but have 51 posts here, and that no one of the triumvirate Stokes/Zeke/Steven will give a count, having spent hours running away from it. Says something.

climate sensitivity or no climate sensitivity

The Antarctic is about to set a record for measured sea ice extent in the modern satellite era, which means there is more ice at the South Pole than ever before measured in the last 34 years.

This is at the same time that CO2 levels have risen from 328 to 401 ppm, a rise of 2 ppm/year, which has been put down to increased burning of fossil fuel.

CO2 is known to cause a warming effect in the atmosphere due to its absorption of infrared radiation, as do all the other gases and water, both as a gas and a vapor. The effect of an increase in CO2 levels is postulated to cause a 1 degree rise in temperature for a doubling of the CO2 level from 35 years ago.

This rise in temperature of the earth at sea surface level for a doubling of CO2 is referred to as the Climate Sensitivity [CS]. The Climate Sensitivity, which is easy to define, is in practice impossible to estimate or measure.

The reasons involve natural variation, which can also be looked at as not being able to measure with precision the multiple effects of winds, waves, currents, forests, deserts, cloud and albedo, to mention just some; hence the weather daily, weekly and seasonally cannot be fully predicted. Another is the presence of positive and negative feedbacks in the climate system, which are even harder to work out.

Some people have stated that Climate Sensitivity cannot be negative, that there must be some positive increment to a forcing and any feedbacks must of necessity be less than the original input.

A measure of the degree of warming is logically that cold areas should warm up and areas of ice should melt.

Here is the conundrum. The Antarctic sea ice should be melting. At a Climate Sensitivity of 1, the world temperature should be 0.7 degrees warmer over the last 35 years, and this should show in retreating Antarctic sea ice extent.

The fact that the area of sea ice is now 1.8 million square kilometers greater than the average over the last 35 years would, on its own, imply a negative Climate Sensitivity to CO2 increase.

There are arguments why this might not be correct.

We might be having a very, very long lasting natural variation in temperature which is overriding CS.
Temperature changes are different in the 2 hemispheres due to different land mass sizes.
Numerous other explanations have been attempted which fall over for 2 reasons.

The first is that all of them involve measures which are inherently counterintuitive. An example would be: hot seas cause more clouds, which cause more snow, which causes more ice buildup. This logic loop is ultimately self-defeating; two plausible ideas are put together but the outcome fails due to the argument on climate sensitivity above. An input should not usually cause a bigger feedback than the input itself.

Other arguments include trade winds blowing faster, put forward ten years ago, and trade winds blowing slower, put forward last year. Ozone holes are another argument which fails the logic test. Melting causing fresh water, which is lighter and sinks, allowing colder water to form more ice, is another. Melting glaciers is another.

The second reason is that there are so many of these counterintuitive arguments still around, with few in the close-knit Antarctic community having the gumption to say no, this is wrong. Hence we have nearly 10 reasons for why there is more ice in the Antarctic. If even half of them were right, this would mean that there should be 5 times as much ice in the Antarctic as there currently is.

This leads to the question of whether there are feedback loops that prevent our climate from changing too drastically whatever the local input. With the input of palaeontology it is obvious that the earth has had massive eons of life producing the fossil fuels in the first place, possibly for over a billion years. The earth’s atmosphere may originally have been devoid of oxygen and CO2 [see stromatolites]. The upheavals of the earth’s crust have had super volcanoes and eras where burning coal would have produced more CO2 than mankind could ever produce. Yet we are here.

CO2 does warm the air, and rising levels with no negative feedbacks should cause a rise in the earth’s temperature, yet one of the biggest, easiest to measure objective measurements says very plainly this is not happening. There may be a bit of transfer of heat to the Northern Hemisphere, for the North-South divide to exist, that is not yet understood. The most likely answer is that Climate Sensitivity is a lot lower than most climate scientists are prepared to admit.

a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming
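For reference, the standard simplified expression behind those numbers is the Myhre et al. (1998) formula, forcing = 5.35 ln(C/C0) W/m2, which gives about 3.7 W/m2 for a doubling; dividing 1 °C by that gives the roughly 0.27 °C per W/m2 no-feedback response implied above. A quick sketch, applied to the 328 to 401 ppm rise mentioned earlier in this piece:

```python
import math

def co2_forcing(c_now, c_ref):
    """Simplified CO2 radiative forcing (Myhre et al. 1998): 5.35 * ln(C/C0), in W/m2."""
    return 5.35 * math.log(c_now / c_ref)

no_feedback_sensitivity = 1.0 / co2_forcing(2.0, 1.0)   # ~0.27 degC per W/m2, if 2xCO2 -> ~1 degC

f_doubling = co2_forcing(560, 280)
f_recent   = co2_forcing(401, 328)        # the 328 -> 401 ppm rise quoted in the text

print(f"forcing for doubled CO2:   {f_doubling:.2f} W/m2")
print(f"forcing for 328 -> 401 ppm: {f_recent:.2f} W/m2")
print(f"no-feedback warming for 328 -> 401 ppm: "
      f"{f_recent * no_feedback_sensitivity:.2f} degC")
```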

 

 


humans are changing Earth’s climate. – Royal Society
This should have the proviso
“by increasing the CO2 concentration in the atmosphere”
and the follow-up proviso
“causing the earth to warm more than it would naturally”.
The earth is warming naturally as it comes out of the last ice age.
The cause of ice ages is still not clear despite good theories, and is not a concern, as we will all be dead many centuries before another; and ice ages are not good.
The degree of variation of warmth of the atmosphere, earth surface and sea yearly has only been measured for a short time, and quite large variations can occur from year to year.
The amount of variation in temperature over the last 40 years is well within the bounds of natural variation [given the large amounts that have been shown to occur in a single year like 1997], of no cause for concern and no predictor of future temperatures on a century scale.
As understanding of the causes of temperature changes is still in its infancy, all we can say is that the earth is expected to get a little warmer overall in the next century and the next millennium.

In regard to CO2, the amount of distorting the facts is incredible. Most people, including the Royal Society, are aware that CO2 rises have followed the temperature rises, not caused them.
This is due to more carbonate substrate dissolving into the warmer sea and the water giving off more of the CO2 it now contains.
In fact, in the slight warming that we have had, more of the CO2 rise might be just that, a natural response, and the “fingerprint” of burning carbon unimportant.
Human CO2 remaining in the atmosphere for thousands of years is also emotive rubbish. How much will remain? After a few years, virtually none.
The amount remaining is important; after all, each breath we take has a molecule of CO2 breathed out by Julius Caesar [fact], but I am not going to say that he is the cause of CO2 increase / climate change.
1 molecule in every ton is not important.
When the earth cools again a little in the next 5, 10, 50 years, as it will, the CO2 levels will fall.
The earth and sea are a giant buffer system that keeps a balance, and the more CO2 we produce the more the earth will absorb.

I hope and trust that the Royal Society is right that humans can change the Earth’s climate. We already do on a microclimate scale. Rivers can be diverted and lakes drained; I live in an area dependent on irrigation. The Three Gorges Dam is a monument to human ingenuity. But the big, big stuff is still out of reach.
Heck, you have to drop 100 Hiroshima bombs a minute just to keep the planet’s warmth steady [thanks Gavin].
I cannot see humans doing that for even an hour with all our resources, so whatever way it wants to go, it is just too big [for the sun] to worry about a few gnats on the surface producing a trace of a trace gas for a micro, micro-second.

angech
Trade winds are slower due to global warming (Vecchi).
Trade winds are faster due to global warming (England).
Now tell me again slowly: these are the same?
Paul S
The only thing which matters with regard to consistency between Vecchi 2006 and England 2014 is that the observed trends in equivalent variables during the period of overlap are about the same, which they are.

Are equivalent variables the same thing as gobbledygook?
I am talking about trade winds, not equivalent variables of who knows what, and you suggest I am changing the subject.
When you are wrong, dig a deeper hole.

I upset Mosher and am PNG. I would not dare comment on “you think Mosher believes temperature changes over the past 10-15 years can be entirely explained by CO2 and volcanoes.”
He has written somewhere recently that CO2 and volcanoes may be enough for the last 150 years. He has it that CO2 is going up, so global air temp should go up, and he is spot on. My disagreement is that I can see a host of negative feedback factors, including buffering in the sea and perhaps Spencer’s extra clouds, that mean the temperature rise with the CO2 will be negated.
The natural temperature rise in the last 40 years is what has caused the upswell in CO2. Nature will use the excess CO2 to good effect, and the CO2 levels will stop rising soon, as the earth cools down for the next 40 years, with the usual suspects all converting to Ice Age Alarmists instead.

For Geoff and Karlo: if a large number of cures exist for any one condition, then none of them are valid.

Judith, a small comment on the latest in a long line of excuses for the pause.
In a medical setting we have a condition called plantar fasciitis.
It has possibly over 200 cures, most risible, but all with just a hint of why each might work when the others don’t.
One, for instance, is rolling a golf ball underfoot with the sore foot. There are operations, steroid injections, ultrasound, infrared, physio and chiropractic, to name a few.
Sadly none of them work any better than the others other than for the true believers who happened to accidentally get better at the time of their particular treatment.
The only thing that works for nearly everyone is the passage of time.
The parallels to the climate debate are obvious.
There is a saying sort of equivalent to Occam’s razor here, that is, that if a large number of cures exist for any one condition then none of them are valid.
Hence the more explanations one has to have to explain the pause the more likely that none of them are right.
Which would mean I guess that natural climate and temperature fluctuations are the norm and chaotic enough to be beyond the scope of our current understanding, although we can recognize and predict the recurrent patterns of our daily and yearly cycles.
Worse, the more explanations one has to have to explain the pause, the more of a “turtles all the way down” mentality one has to develop in reverse, as each new argument demolishes the old arguments and sets an even harder benchmark.
I am sure you could work this into a post but unfortunately you will be inundated with people’s medical problems and might miss the important argument being made here.

new

There is no missing heat in that scenario. The only heat that needs to be accounted for is the 0.5 W/m2 imbalance at TOA, because if that number is accurate then the only place it can be going is into the ocean.

A two-part question on energy imbalance.

Can we have a TOA imbalance of -0.5 W/m2? If not, why not?

ANSWER: we are neither a heat source nor a heat sink.

Energy in is energy out. In other words, we cannot have a TOA imbalance, because the TOA is where the energy in equals the energy out.

We can have a warmer atmosphere or ocean without having to violate that principle, but only if the input of energy [Sun] varies due to distance [summer/winter locally, north and south hemispheres, depending on the earth’s elliptical orbit] or due to intensity [solar cycles].
In effect the temperature we have is a balance of the energy in the ocean, land surface and air. In a mathematical model where the air and sea remained fixed, the amount of heating up and the amount of clouds would run like a clock and stay the same from one 24-hour period to the next, apart from the energy input.

In our world of currents and Coriolis forces and winds, erosion, volcanoes etc, where the heat is located varies; but if one area becomes hotter [SOI, PDO, El Nino etc], another becomes colder.

Adding CO2 to the air does not make the total energy in or out change one iota. It does modify where the heat is found, and this should be more in the atmosphere [Gates, Droedge, Mosher etc]. The air should be warmer, Gates, and when it isn’t for 16 years it is indeed a travesty for your argument and the IPCC.

What it implies is that the earth’s atmosphere is a lot more resistant to intemperate changes  than most people here are prepared to realise.

Over the course of a year the average solar radiation arriving at the top of the Earth’s atmosphere at any point in time is roughly 1366 watts per square metre (see solar constant).

The Sun’s rays are attenuated as they pass through the atmosphere, thus reducing the irradiance at the Earth’s surface to approximately 1000 W/m2 for a surface perpendicular to the Sun’s rays at sea level on a clear day.

A sunbeam hitting the ground at a 30° angle spreads the same amount of light over twice as much area.

Ignoring clouds, the daily average irradiance for the Earth is approximately 250 W/m2 (i.e., a daily irradiation of 6 kWh/m2).

The insolation of the sun can also be expressed in Suns, where one Sun equals 1000 W/m2 at the point of arrival, with kWh/m2/day expressed as hours/day.
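A quick sketch tying those numbers together: the factor of four between the ~1366 W/m2 solar constant and the planetary average comes from the ratio of the Earth’s disc to its sphere, the ~250 W/m2 figure is the same geometry applied to the ~1000 W/m2 clear-sky surface value, and the angle effect is just the sine of the Sun’s elevation.

```python
import math

solar_constant = 1366.0      # W/m2 at the top of the atmosphere
surface_zenith = 1000.0      # W/m2 at the surface, Sun overhead, clear sky (from the text)

# A sphere intercepts sunlight over a disc (pi r^2) but has surface area 4 pi r^2,
# hence the factor of 4 in the planetary average.
print(f"top-of-atmosphere average: {solar_constant / 4:.1f} W/m2")
print(f"clear-sky surface average: {surface_zenith / 4:.1f} W/m2")   # ~250, as quoted

# A beam arriving 30 degrees above the horizon is spread over 1/sin(30) = 2x the area
elevation = math.radians(30)
print(f"relative flux at 30 degrees elevation: {math.sin(elevation):.2f}")
```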

  1. Making stuff up here as I go.
    R Gates, R Pielke on ocean heat.
    No one has said what the average heat of the 700-2000 meter level is, but a wild guess would say, in the tropics [sea surface temp up to 25 degrees centigrade] and the Arctic ocean [sea surface temp 3 degrees C], that the deeper level would be about 3-5 degrees centigrade. I.e. not much difference at all at depth.
    There are no deep hot ocean currents, only cold ones and colder ones.
    If the hot currents ever come to the surface they will cool it, not heat it [see D Springer earlier on heat sinks and why you cannot heat a warmer body with a colder body], as they are cooler than the surface air. The second thing is that heat conducts both ways, so at any one time the hot surface water is not only heating the air above but heating the water below, which has a lot more molecules than air for the heat to transfer to; hence the heat diffusion will for most purposes be downwards by large orders of magnitude.
    That is not to say that the surface water might not heat up to 30 degrees C or more in the tropics, but it cannot make the air hotter than the water, and the water will almost always be colder [please leave out objections like hot air over land blowing out to sea / night time etc, which are not relevant to this argument] than the air.

  2. Pekka Pirilä: TOA can be a term for the level where radiative energy out equals radiative energy in. Whether one pumps CO2 into the air or not does not change the amount of energy in from the sun, nor the energy radiated out, which is at an equilibrium.
    Yes, the air at surface levels can be warmer, but not because the CO2 is trapping more heat. If that were the case the earth would get warmer and warmer; that is the AGW argument of climate sensitivity, and you might as well argue a sensitivity of 30 degrees instead of 3 degrees.
    The reflective surface of the earth is a strange combination of solid, liquid and gas. At some point incident radiation is stopped and then emitted back.
    It matters not whether it is a solid metal spacecraft, a meteor or a moon; they all radiate the heat back. If an atmosphere with increasing CO2 is hotter than one without, then somewhere else in the system becomes colder, i.e. if the CO2 radiates more heat back to space then the oceans and land will not heat up as much.
    Hence there is no radiative imbalance, just a poor understanding on our part of the actual way the energy movements occur.

  3. Sea levels rising reminds me I left a tap running overnight last year?
    Surely not.
    If the ocean measurements were reliable, which they are not yet proven to be, then the ocean heat content rising would be true but it would mean CO2 was not the cause as there has been a hiatus. What a conundrum.

The mystery of the melting Antarctic


Actually 3 mysteries in one.

The measurement of the actual amount of ice on land or at sea is extremely difficult. While we can measure the extent of the land cover very accurately, the depth of the ice is very difficult to estimate. Complicating this is the possibility of snow cover, which is not as dense as ice but adds to the difficulty in measuring the actual ice thickness, and the amount of unfrozen water under the ice in rivers and lakes, which does the same.

More complicated is the measurement at sea, due to the difficulty in assessing ice edge boundaries, and even more so when there are ice floes and packs with clear water in between. In the Arctic it is possible to get measurements by submarine and icebreaker to give some idea of depth.

Conventional measurement depends on using multiple yearly measurements of extent and depth from multiple sources and combining the best estimates into a volume of ice, with quite significant margins for error. Most inputs come from Arctic ice measurements and are represented by PIOMAS. A second measurement is done by CryoSat-2. The estimated volumes differ quite markedly at times.

In the Antarctic it is impossible to actually measure the depth accurately, hence a different method, GRACE, has been developed, which works by estimating the gravitational differences detected by 2 satellites to determine the mass of ice above the land contributing to the gravitational field. The volume of ice estimated in this way is potentially extremely inaccurate, though not inexact, as it is very dependent on the coefficient in the formula used to give the volume of ice. A smidge up and there is more ice in Antarctica, a smidge down and there is less ice in Antarctica.
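To give a feel for the sensitivity being described, here is a toy conversion from a GRACE-style mass change to ice volume and sea-level equivalent. The mass-change values are round, hypothetical numbers of mine, not actual GRACE results, and the real processing also involves a glacial isostatic adjustment correction, which is the kind of coefficient choice discussed above; only the ice density (about 917 kg/m3) and the roughly 362 Gt of ice per mm of sea level are standard figures.

```python
ICE_DENSITY = 917.0          # kg per cubic metre of glacial ice
GT_PER_MM_SLR = 362.0        # roughly 362 Gt of ice = 1 mm of global sea level

def ice_volume_km3(mass_gt):
    """Convert an ice mass change in gigatonnes to a volume in cubic kilometres."""
    return mass_gt * 1e12 / ICE_DENSITY / 1e9

# Illustrative only: a hypothetical -100 Gt/yr estimate, bracketed by +/-50 Gt/yr
# to mimic shifting the correction ("coefficient") a smidge either way.
for mass_change in (-150.0, -100.0, -50.0):
    volume = ice_volume_km3(mass_change)
    slr = -mass_change / GT_PER_MM_SLR
    print(f"{mass_change:+.0f} Gt/yr -> {volume:+.0f} km3/yr of ice, {slr:+.2f} mm/yr sea level")
```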

At different times, using the gravitational measurement, there have been suggestions of increasing ice volume in Antarctica, but with further interpretation the GRACE measurements state that Antarctica is losing ice volume.

Hence the mystery. Antarctic sea ice has been increasing in the main for 30 years and is well above the average for the last 30 years. Ipso facto the Antarctic itself should have been definitely colder than normal in recent times. Hence there must be more, not less, ice in Antarctica.

Sea ice extent depends more on the coldness of the water than on the air temperature itself, hence the second mystery. Measurements of the Antarctic water temperature claim that it has been warming over recent years. This should have resulted in less sea ice extent, as predicted by IPCC models.

The third mystery is how the Antarctic has been losing ice volume. This is unexplainable by theory and fact. The Antarctic is too cold for the ice to melt and evaporate from the surface. The glaciers are not getting smaller and shrinking back in from the coast; that would involve less calving from the glaciers, which would in turn be smaller. As demonstrated by the recent Spirit of Mawson expedition, there is more, not less, ice along most of the coast of Antarctica.

If an adequate explanation cannot be given for how the ice is mysteriously disappearing from Antarctica, then attention should be turned to the degree of accuracy of the GRACE measurements, and a readjustment of the coefficient done to correct it to the reality of more ice volume in Antarctica.