The Antarctic is about to set a record for measured sea ice extent in the modern satellite era, which means there is more sea ice around the South Pole than at any time in the 34 years of measurements.
This comes at the same time that CO2 levels have risen from 328 to 401 ppm, a rise of about 2 ppm per year that has been attributed to increased burning of fossil fuels.
CO2 is known to cause a warming effect in the atmosphere through its absorption of infrared radiation, as do other greenhouse gases and water vapour. An increase in CO2 levels is postulated to cause a rise of about 1 degree in temperature for a doubling of the CO2 level from where it stood 35 years ago.
This rise in the earth’s temperature at sea surface level for a doubling of CO2 is referred to as the climate sensitivity [CS]. Climate sensitivity, while easy to define, is in practice impossible to estimate or measure.
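To make the definition concrete, here is a minimal sketch of the arithmetic usually attached to it. The logarithmic forcing approximation (roughly 5.35 × ln(C/C0) W/m2) and the ~3.7 W/m2 per doubling that follows from it are standard textbook values, not figures taken from this article; the sensitivity used below is simply the 1 degree per doubling postulate mentioned above.

    # Minimal sketch of the textbook relation between CO2 and warming.
    # Assumptions (not from this article): forcing F = 5.35 * ln(C/C0) W/m2;
    # climate sensitivity expressed as degrees C per doubling of CO2.
    import math

    def co2_forcing(c_new_ppm, c_old_ppm):
        """Approximate radiative forcing (W/m2) from a CO2 concentration change."""
        return 5.35 * math.log(c_new_ppm / c_old_ppm)

    def warming(c_new_ppm, c_old_ppm, sensitivity_per_doubling=1.0):
        """Equilibrium warming (deg C) implied by a given climate sensitivity."""
        doublings = math.log(c_new_ppm / c_old_ppm) / math.log(2.0)
        return sensitivity_per_doubling * doublings

    # A doubling (e.g. 280 -> 560 ppm) gives about 3.7 W/m2 of forcing and,
    # at a sensitivity of 1 deg C per doubling, about 1 deg C of warming.
    print(round(co2_forcing(560, 280), 2))   # ~3.71
    print(round(warming(560, 280, 1.0), 2))  # 1.0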
One reason is natural variation, which can also be seen as our inability to measure with precision the multiple effects of winds, waves, currents, forests, deserts, cloud and albedo, to mention just some, which is why the weather cannot be fully predicted on daily, weekly or seasonal timescales. Another is the presence of positive and negative feedbacks in the climate system, which are even harder to work out.
Some people have stated that climate sensitivity cannot be negative: there must be some positive increment from a forcing, and any feedbacks must of necessity be smaller than the original input.
A logical measure of the degree of warming is that cold areas should warm up and areas of ice should melt.
Here is the conundrum. The Antarctic sea ice should be melting. At a climate sensitivity of 1, the world temperature should be 0.7 degrees warmer over the last 35 years, and this should show up as a retreating Antarctic sea ice extent.
The fact that the sea ice area is now 1.8 million square kilometres greater than the average over the last 35 years would, on its own, imply a negative climate sensitivity to the CO2 increase.
There are arguments for why this might not be correct.
We might be experiencing a very long-lasting natural variation in temperature that is overriding CS.
Temperature changes differ between the two hemispheres because of their different land mass sizes.
Numerous other explanations have been attempted, and they fall over for two reasons.
The first is that all of them involve mechanisms which are inherently counter-intuitive. An example would be that warmer seas cause more clouds, which cause more snow, which causes more ice build-up. This logic loop is ultimately self-defeating: two plausible ideas are put together, but the outcome fails because of the argument on climate sensitivity above. An input should not usually cause a bigger feedback than the input itself.
Other arguments include trade winds blowing faster (put forward ten years ago) and trade winds blowing slower (put forward last year). Ozone holes are another argument that fails the logic test. So is melting producing fresh water, which is lighter and sits at the surface, allowing colder water to form more ice. Melting glaciers is another.
The second reason is that so many of these counter-intuitive arguments are still around, with few in the close-knit Antarctic community having the gumption to say no, this is wrong. Hence we have nearly ten reasons for why there is more ice in the Antarctic. If even half of them were right, there should be five times as much ice in the Antarctic as there currently is.
This leads to the question of whether there are feedback loops that prevent our climate from changing too drastically, whatever the local input. From palaeontology it is obvious that the earth has had massive eons of life producing the fossil fuels in the first place, possibly over more than a billion years. The earth’s atmosphere may originally have been devoid of free oxygen [see stromatolites]. The upheavals of the earth’s crust have included super-volcanoes and eras in which burning coal would have produced more CO2 than mankind could ever produce. Yet we are here.
CO2 does warm the air, and rising levels with no negative feedbacks should cause a rise in the earth’s temperature, yet one of the biggest and easiest-to-measure objective measurements says very plainly that this is not happening. There may be some transfer of heat to the Northern Hemisphere, not yet understood, that would account for the north-south divide. The most likely answer is that climate sensitivity is a lot lower than most climate scientists are prepared to admit.
“a doubling of CO2 (which amounts to a forcing of 3.7 W/m2) would result in 1 °C global warming”
June 26th, 2014 at 8:20 pm
Zeke, if a record high temperature was recorded in Death Valley, or Texas, or Alaska at one of the 1218 stations in 2010, it is by your own admission no longer a record in your system, because it has had to be adjusted down owing to the dropping out of the warmer stations you mention.
Nick currently denies this over at WUWT even though he knows the record is being adjusted down.
In his eyes and yours, one needs to correct the past records to maintain the purity of the current records under your adjustment system, merely for the sake of mathematical predictions.
The folly of this perfectly correct mathematical approach is that we live in a real-life world, not a maths-and-graphs world.
We cling to the truth of real past records at individual sites, and we do not want your attempt at perfectly correct modelling of US and world temperatures to come at the cost of throwing out the past, which is what it does.
You are building the same giant clockwork device as those who believed the universe goes round the earth rather than the sun, instead of a simple model which incorporates the truth but is still usable for future projection.
The more you put in, the more cumbersome, impractical and divorced from reality it becomes.
June 6th, 2014 at 4:01 am
So, to be clear:
There were 1218 real stations (USHCN) in the late 1980s.
There are now [???] original real stations left; my guess is half, 609.
There are [???] total real stations; my guess, eyeballing it, is 870.
There are 161 new real stations, all at airports or in cities, added to the graph.
There are 348 made-up stations and 161 selected new stations.
The number of the original 1218 has to be kept, like a stock exchange index, to give a mythical representative temperature or temperature anomaly over this number of sites.
Nobody has put up a new thermometer in rural USA in the last 30 years, and nobody has considered using any of the rural thermometers, possibly 3000 of them, among the 5782 discarded cooperative network stations.
And all this is Steve Goddard’s fault.
Can someone confirm whether these figures are accurate and, if so, why any trust should be put in this Michael Mann-like ensemble of real stations, adjusted real stations and computer infilling?
Zeke (Comment #130058)
June 7th, 2014 at 11:45 am
Mosh,
Actually, your explanation of adjusting distant past temperatures as a result of using reference stations is not correct. NCDC uses a common anomaly method, not RFM.
The reason why station values in the distant past end up getting adjusted is due to a choice by NCDC to assume that current values are the “true” values. Each month, as new station data come in, NCDC runs their pairwise homogenization algorithm which looks for non-climatic breakpoints by comparing each station to its surrounding stations. When these breakpoints are detected, they are removed. If a small step change is detected in a 100-year station record in the year 2006, for example, removing that step change will move all the values for that station prior to 2006 up or down by the amount of the breakpoint removed. As long as new data leads to new breakpoint detection, the past station temperatures will be raised or lowered by the size of the breakpoint.
An alternative approach would be to assume that the initial temperature reported by a station when it joins the network is “true”, and remove breakpoints relative to the start of the network rather than the end. It would have no effect at all on the trends over the period, of course, but it would lead to less complaining about distant past temperatures changing at the expense of more present temperatures changing.
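A toy sketch of the two anchoring choices just described, showing only the bookkeeping of where the adjustment lands. This is not NCDC’s pairwise homogenization code; the station values, the break position and the 0.3-degree step are invented for illustration.

    # Toy illustration of removing one detected step change from a station record.
    # Only the bookkeeping is shown; breakpoint detection itself is not implemented.

    def remove_break_anchor_present(series, break_index, step):
        """Treat values at/after the break as 'true': every value BEFORE the
        break is shifted by the size of the step."""
        return [v + step if i < break_index else v for i, v in enumerate(series)]

    def remove_break_anchor_past(series, break_index, step):
        """Treat the earliest values as 'true': every value AT/AFTER the break
        is shifted the other way instead."""
        return [v - step if i >= break_index else v for i, v in enumerate(series)]

    raw = [10.0, 10.1, 9.9, 10.0, 10.3, 10.4, 10.3]  # invented station record
    k, step = 4, 0.3                                 # step detected at index 4

    print(remove_break_anchor_present(raw, k, step))  # earlier values moved up by 0.3
    print(remove_break_anchor_past(raw, k, step))     # later values moved down by 0.3

The two outputs have identical trends and differ only by a constant offset, which is the point being made: the anchoring choice decides whether the past or the present gets rewritten each time a new break is found.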
.
angech,
As I mentioned in the original post, about 300 of the 1218 stations originally assigned to the USHCN in the late 1980s have closed, mostly due to volunteer observers dying or otherwise stopping reporting. No stations have been added to the network to make up for this loss, so there are closer to 900 stations reporting on a monthly basis today.
.
To folks in general,
If you don’t like infilling, don’t use infilled values and create a temperature record only from the 900 stations that are still reporting, or from all the non-infilled stations in each month. As the first graph in the post shows, infilling has no effect on CONUS-average temperatures.
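For anyone who wants to try that, here is a minimal sketch of the averaging step only. The record layout, the boolean infill marker and the unweighted mean are assumptions made for illustration; a proper CONUS average would also grid and area-weight the stations.

    # Minimal sketch: average only stations that actually reported in a month,
    # skipping infilled values. How infilled values are flagged depends on the
    # files you download; here it is just a boolean supplied by the caller.

    def monthly_average(records, month):
        """records: list of dicts like
        {"station": "ID", "month": "2014-03", "anomaly": 1.2, "infilled": False}
        Returns the unweighted mean anomaly of the non-infilled reports."""
        values = [r["anomaly"] for r in records
                  if r["month"] == month and not r["infilled"]]
        return sum(values) / len(values) if values else None

    example = [
        {"station": "A", "month": "2014-03", "anomaly": 1.4, "infilled": False},
        {"station": "B", "month": "2014-03", "anomaly": 0.9, "infilled": False},
        {"station": "C", "month": "2014-03", "anomaly": 1.1, "infilled": True},  # skipped
    ]
    print(round(monthly_average(example, "2014-03"), 2))  # 1.15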
June 8th, 2014 at 5:26 am
Carrack, your link to Moyhu showed Nick Stokes attempting to discredit SG with six diagrams talking about a spike in 2014, but all six graphs only went to 2000. Why the heck is that?
Zeke has a post at SG where he admits that there are only 650 real stations out of 1218. This is a lot less than the roughly 918 he alludes to above. Why would he say 650 to SG (May 12th, 3:00 pm) and then, in #130058 at the Blackboard, say that about 300 of the 1218 stations have closed down?
Can Zeke clarify the number of real stations (raw data) and the number of unreal stations using filled-in data among the 1218 stations?
June 8th, 2014 at 6:00 am
angech (Comment #130074)
“Six diagrams talking about a spike in 2014, but all six graphs only went to 2000. Why the heck is that?”
You’re not very good at reading graphs. The x-axis is marked (by R) in multiples of 20 years. The data shown is up to date.
“Zeke has a post at SG where he admits that there are only 650 real stations out of 1218. This is a lot less than the roughly 918 he alludes to above.”
When I last looked a few weeks ago, the 2014 reporting numbers were Jan 891, Feb 883, Mar 883 and Apr 645. Many stations are staffed by volunteers and some reports are late. So 918 sounds right.
June 8th, 2014 at 6:07 am
Nick, I cannot understand your post. It seems that you split your data into real and infilled subgroups.
There appear to be a large number of these infilled stations: 1218 - 650 = 568, according to Zeke at SG and here.
There are claims that the real stations are not located in the right places to represent those areas, due to differences in latitude and elevation.
The artificial sites, at the best locations, give a “true grid” for the 1218 “stations”.
One knows what the true readings for these artificial sites “should be”, puts them in, and then adjusts the real sites to what the artificial sites say the temperature should be. Zeke says that each month one takes in the infilled data from the unreal stations. I guess it “comes in” from the computer programme, primed with a need to rise as CO2 goes up, otherwise known as Steven’s climate sensitivity factor, which is currently being adjusted downwards from 3.0 to 1.4 due to the satellite pause.
One then has to look for non-climatic breakpoints, a.k.a. real data behaving badly, which have to be removed.
Fortunately, when you do this the difference between the raw R1 data and the final F1 data is almost eliminated, as Nick so elegantly shows. Bravo for the shell trick.
June 12th, 2014 at 4:10 am
Thank you, Zeke, for putting this post up. Hopefully it will result in greater openness and sharing of information; though you may not be feeling it yet, you are trying, which is something a lot of your colleagues do not want to do. The level of vitriol reflects the extreme importance of doing the data collection and modelling openly, so that all sides can feel confident that their arguments are on common ground. As you know, this is not the case at the moment and has not been the case for sceptics for a long time.
Incidentally, Mosher described the principle that if site A is closer to site B than to site C, then site A is more likely to be similar to site B than site C is, as a fundamental theorem of geostatistics, at JC (10.50, 12/6/2014, “asymmetric responses of Arctic and Antarctic”).
My question to you is this: Robert Way has stated at Skeptical Science that this does not hold when calculating the Arctic infilling used in Cowtan and Way, and my understanding is that this faulty principle may now be being used in your current Arctic infilling. Can you tell us whether you use Steven’s fundamental principle or Robert Way’s new improved principle?
June 12th, 2014 at 4:53 am
See “How global warming broke the thermometer records” by Kevin Cowtan at Skeptical Science, 25/4/2014, speaking of his and Robert Way’s finding that the GISTEMP conundrum was due to actual GHCN Arctic data and infilling showing a cooling “bias” when compared with their model-only method.
This supposedly occurred because the assumption that neighbouring regions of the planet’s surface warm at a similar rate was violated.
June 12th, 2014 at 9:18 am
angech,
The problem in the arctic is one of station density; Cowtan and Way actually discovered the problem with GHCN’s adjustments by comparing them to Berkeley’s results, which are more accurate for those stations given the denser network. There is always a challenge in very sparsely sampled areas of misclassifying abrupt changes (in this case an abrupt warming trend) as local biases rather than true regional effects. Larger station networks can help ameliorate this.
.
Will Nitschke,
Some sort of automated homogenization is necessary. We’ve been working on ways to test and ensure that homogenization is not introducing bias. The Williams et al. paper makes a compelling case, for example: ftp://ftp.ncdc.noaa.gov/pub/da…..al2012.pdf
Our recent UHI paper also looks at this by redoing homogenization using only rural stations to detect breakpoints/trend biases.
The reason I suspect that the Amundsen-Scott results are a bug due to the extreme cold is that they are flagged as regional climatology outliers. I’ll suggest that the Berkeley team look into it in more detail next week.
.
JD Ohio,
Using that approach, observations are still within the 95% CI for models, though they are close to the bottom. As I mention in the article you reference, the next few years will be key in seeing if they become clearly inconsistent. I have an updated graph here: http://www.yaleclimatemediafor…..in-review/
.
For other folks: sorry for being slow in responding; other things in life have been pulling me away from blogging, and I’m about to head out on a camping trip with no internet access fo…
July 6th, 2014 at 12:22 pm
The exact number of real stations reporting each month in USHCN version 2 is shown in this figure: http://rankexploits.com/musing…..-Count.png
You can download all the raw station data yourself here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/
I really have no clue why people keep harping on this “exact number of active real stations” question when it’s trivial to answer…
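To make the count itself concrete, here is a minimal sketch of the counting step once the downloaded files have been parsed into simple records. The -9999 missing code and the use of an "E" flag for estimated (infilled) values are my reading of the USHCN documentation and should be checked against the README that ships with the data on the FTP site.

    # Minimal sketch of "download the data, do a count": count distinct stations
    # with a real (non-missing, non-estimated) value in a given month.
    # Parsing of the fixed-width USHCN files is assumed to have been done already.

    MISSING = -9999

    def count_real_stations(records, year, month):
        """records: iterable of dicts like
        {"station": "USH00011084", "year": 2014, "month": 3,
         "value": 512, "flag": " "}   # flag "E" assumed to mark estimated values
        """
        reporting = {
            r["station"] for r in records
            if r["year"] == year and r["month"] == month
            and r["value"] != MISSING and r["flag"] != "E"
        }
        return len(reporting)

    example = [
        {"station": "S1", "year": 2014, "month": 3, "value": 512, "flag": " "},
        {"station": "S2", "year": 2014, "month": 3, "value": MISSING, "flag": " "},
        {"station": "S3", "year": 2014, "month": 3, "value": 498, "flag": "E"},
    ]
    print(count_real_stations(example, 2014, 3))  # 1 (S2 is missing, S3 is infilled)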
Judith, I (and others, I’m sure) would like to do a more formal rebuttal of Zeke’s approach, if allowed, and only if well written and argued.
Mine would focus on three key points.
The first is the adjustment of past temperatures based on current ones.
The second is a possible flaw in TOBS as used.
The third is the number of what are referred to as zombie stations.
2. TOBS and break adjustments are made on stations which do not have data taken at the correct time.
The process is automated in the PHA.
Infilling is done on stations missing data, i.e. data not taken at the correct time. Zombie stations have made-up data, i.e. also not taken at the correct time.
This means that potentially half of the 1218 stations, the zombie stations and the ones missing data, have an automatic cooling of the past applied every day, with the result that the alterations to past temperatures compound.
This should not be allowed to happen.
Once a TOBS change has been made in the past, e.g. 1900 should have been 0.2 degrees warmer, then this altered estimate should stay fixed forever and not be affected by future changes.
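A minimal sketch of this "apply once and freeze" idea, with invented numbers; it is not a description of how NCDC actually applies TOBS adjustments.

    # Minimal sketch: a documented TOBS change gets a fixed offset applied to the
    # affected (earlier) years a single time, and later reprocessing never
    # revisits those values. All numbers are invented.

    def apply_tobs_once(annual_means, change_year, offset):
        """Return a new record with the offset added to every year before the
        change year. Intended to be run once and then stored as final."""
        return {year: (temp + offset if year < change_year else temp)
                for year, temp in annual_means.items()}

    record = {1899: 11.9, 1900: 12.0, 1901: 12.2, 1902: 12.3}  # invented values
    frozen = apply_tobs_once(record, change_year=1901, offset=0.2)
    print(frozen)  # 1899 and 1900 raised by 0.2 once; 1901 onwards never touched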
Going for a bike ride
3. I will comment on Zeke’s and others’ obfuscation on this vital issue when I return.
Some samples; ponder these:
Zeke (Comment #130839) July 6th, 2014 at 12:22 pm
The exact number of real stations reporting each month in USHCN version 2 is shown in this figure: http://rankexploits.com/musing…..-Count.png
[HERE HE SHOWS AN OUT-OF-DATE GRAPH; check it out.]
You can download all the raw station data yourself here: ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/
No, it is not labelled as raw data, and it certainly does not give a number for the raw stations.
I really have no clue why people keep harping on this “exact number of active real stations” question when it’s trivial to answer…
PLEASE, please answer, Zeke.
Trivial to answer, but you refuse point-blank to give an answer.
How many of the original 1218 real stations have been giving data since 1987?
How many new ones have been added?
How many do not report each month?
How many zombie stations were used in March 2014, now that you have all the data in?
Judith, Chief, Fan, Joshua, Climate etc.,
if Zeke is honestly presenting his case, which he is, why will he not answer a trivial question?
July 6th, 2014 at 2:54 am
angech, the TOBS adjustment would be done once;
the adjustment that would/could change on a daily basis is the PHA.
Wait for Monday; the entire process will be explained.
But wait and read what is coming out on Monday.
Then if you don’t like what NCDC does with USHCN, we could just dump all of USHCN, dump all of the US, and the answer wouldn’t change much.
Man, are you dense. Zeke has written a paper describing all this.
Personally, I don’t look at USHCN and I haven’t used it for years.
But I am able to read Zeke’s paper, and I suggest that you read it on Monday.
Seriously, you are descending to Goddardian levels of argument.
(Comment #130800) July 4th, 2014 at 9:48 pm
Still no count given [ever] of the exact number of active real stations, and it is obvious no one will be giving one.
The silence is deafening.
Go to the ftp
download the data
do a count
Personally, in my own work, I don’t look at USHCN. My advice to NOAA is to drop GHCN-M and USHCN and just supply GHCN-Daily.
I am not interested in doing your homework.
You could find a million errors in USHCN and none of it matters to me, because I’m upstream. Get that. Not doing your homework. Don’t care what your issues are. They are moot.
Zeke (Comment #130839) July 6th, 2014 at 12:22 pm
I really have no clue why people keep harping on this “exact number of active real stations” question when it’s trivial to answer…
Steven Mosher (Comment #130831) July 6th, 2014 at 2:54 am
Man, are you dense. Zeke has written a paper describing all this.
Personally, I don’t look at USHCN and I haven’t used it for years.
Still no count given [ever] of the exact number of active real stations, and it is obvious no one will be giving one. The silence is deafening.
Go to the ftp, download the data, do a count.
Personally, in my own work, I don’t look at USHCN. My advice to NOAA is to drop GHCN-M and USHCN and just supply GHCN-Daily.
I am not interested in doing your homework.
You could find a million errors in USHCN and none of it matters to me, because I’m upstream. Get that. Not doing your homework. Don’t care what your issues are. They are moot.
Interesting that you could find a million errors in USHCN and not care, and that you don’t care yet have 51 posts here, and that not one of the triumvirate, Stokes/Zeke/Steven, will give a count, having spent hours running away from it. Says something.