Category Archives: Uncategorized

jch and hubris not that I have any

angech says:


dikranmarsupial says: December 1, 2016 at 3:55 pm
“I don’t see how making inductive inferences is inherently subjective”.

Quoting re induction and its usage,
“Inductive reasoning (as opposed to deductive reasoning ) is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.
the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it.
Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true.”

Subjectivity is built into inductive inferences. It is akin to cherry-picking, in that the inference chosen, even if true in itself, may not be true when a subjective inference is tied to it. It is therefore almost impossible to see the subjective element when one makes an inductive inference.
You have to step back a little.

Broadly speaking,
“there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic.[1][6] According to the subjectivist view, probability quantifies a “personal belief”.
In probability theory and statistics, Bayes’ theorem (alternatively Bayes’ law or Bayes’ rule) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
The sequential use of Bayes’ formula: when more data become available, calculate the posterior distribution using Bayes’ formula; subsequently, the posterior distribution becomes the next prior.

ATTP says rightly “The idea then is that you can combine the different lines of evidence, and use Bayesian inference, to determine a range for the ECS.”
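The sequential use of Bayes’ formula described above can be sketched in a few lines of Python. This is a toy illustration only, not anyone’s actual analysis: the ECS grid, the flat prior, and the two Gaussian “lines of evidence” are all made-up choices.

```python
import math

# Toy sequential Bayesian update for ECS. Everything numeric here is
# illustrative: the grid, the flat prior, and both Gaussian likelihoods.
ecs_grid = [0.5 + i * 0.01 for i in range(951)]    # candidate ECS values, 0.5-10.0 C
prior = [1.0 / len(ecs_grid)] * len(ecs_grid)      # flat prior (an assumption)

def update(prior, mean, sd):
    """Multiply the prior by a Gaussian likelihood and renormalise:
    the posterior from one line of evidence becomes the next prior."""
    post = [p * math.exp(-0.5 * ((e - mean) / sd) ** 2)
            for p, e in zip(prior, ecs_grid)]
    total = sum(post)
    return [p / total for p in post]

posterior = update(prior, 3.0, 1.5)       # one hypothetical line of evidence
posterior = update(posterior, 2.5, 1.0)   # a second, tighter hypothetical line

best = ecs_grid[posterior.index(max(posterior))]
print(round(best, 2))  # the combined estimate lies between the two inputs
```

Note that the combined posterior is narrower than either input alone, which is the point of combining lines of evidence.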

dikranmarsupial says: November 30, 2016 at 4:06 pm
“The real problem with objective Bayes (in this particular case) is that we do have prior knowledge, and we need to include it into our analysis if we want to be objective in the sense of maximizing correspondence to observed reality. However doing this in an objective (in the sense of “not influenced by…”) manner is rather difficult, but that is not a good reason for ignoring the prior knowledge.”
No problem, if one follows the method above.

Thorsten Mauritsen says: November 30, 2016 at 9:18 am
“the less careful reader is easily lead to believe that the study has actually constrained ECS”.
The whole purpose is to develop a method to actually constrain the ECS.

Victor Venema says: November 30, 2016 at 1:09 pm
“Fully agree with James Annan. There is no such thing as an “objective” Bayesian method.”
It is possible, though your view would count against it in an objective Bayesian method.

dikranmarsupial says: November 30, 2016 at 4:17 pm
ATTP: “Why would one not take advantage of that aspect of Bayesian inference?”
“The place where “uninformative” priors are really useful is in expressing the knowledge that there is some quantity that you know you don’t know anything about, so the effect of that uncertainty is properly accounted for in the uncertainty in the conclusions of the analysis”
– A prior event is either informative or uninformative.
If uninformative it is irrelevant: it has no place in any assessment, and it is not an uncertainty.
Prior events, on the other hand, are full of uncertainty and are very important to Bayesian assessment. They have to be included, and as new information comes to light clarifying the uncertainty, a new, more rigorous and tighter ECS range can be established.
Despite ATTP and yourself refusing to consider impossible priors, the Bayesian approach handles them with no fuss. They do not matter.
Take a negative ECS, or a 20 C positive ECS. Apply the concept. No past evidence of either? Scrub them and add in a low positive ECS of 0.1 C and a high ECS of 12 C. Still not possible? Move up into the real ranges. Will it be 3 C?
Who knows, but the method, whether used by Nic or by Stevens, Sherwood, Bony and Webb, does not care about the starting point. Every bit of informative evidence improves the prior towards the right range. As Dikran says, “an objective method that incorporates the laws of physics, observations of paleoclimate and modern observations is needed”.
This is scientific, practical and another bit of the puzzle. Embrace it. Point out to Nic which bits you feel he has missed and incorporate them as well. If you do not like his ratings, that very salient observation often made to me by Mosher springs to mind: do the work yourself.

JCH says:    March 31, 2017 at 10:38 pm    “To me, the very most important thing is… who was the first person to speculate that 2017 has the stuff to become the 4th warmest year in a row?”
A tautology really. A bit like those guys who continually predict that the stock market is going to crash and then are right: heroes once every twenty years, except no-one is happy with them. Let me guess, one of them would be JCH. Please do not send out a prize.
“I do think that we should avoid attacking those who present alternative views;”
“I’m certainly in favor of  criticizing what others say when it’s clear that what they’re presenting is not consistent with the best evidence available today.”
Criticizing is different from attacking, but it is not always perceived the right way by those on the end of the criticism.
I would love to engage with and dissent from JCH on his view that 2017 will be the warmest on record. There are several caveats: which record, and which medium, air or ocean. But I have learned from past experience in a warming world not to be too sanguine. The moment one commits to these notions is the moment one loses. So I will watch the outcome of this year with interest and hope, and remind JCH, if he is wrong, next year, here perhaps? Or appear and get shellacked. My very modest and possibly wrong observation would be that, given the fall in ocean temperature from 2015/2016, there would have to be a marked El Nino for this year to have any hope at all of coming within a cooee of 2016 or 2015.

jch helps

Thanks JCH. It is easier for you, I think, as you have done more work on it, but it is a very complicated scenario when one tries to get into it.
The atmosphere emits 169 to space.
Atmospheric window – a. 40 to space comes straight from the surface.
[Some infrared radiation from the cloud tops and land-sea surface passes directly to space without intermediate absorption and re-emission. A large gap in the absorption spectrum of water vapor, the main greenhouse gas, is most important in the dynamics of the window. Other gases, especially carbon dioxide and ozone, partly block transmission.]
b. Clouds emit 30 to space – the picture shows 30 from clouds and 40 from the surface, an overall window of 70.
Together these add to 239.
The Sun delivers 341; not absorbed – 102; SW delivered for absorption – 239.
I think: SW – 78, LW – 97. Not sure how you get this figure. The sun is very bright, and I would have thought it would be mostly shortwave, though cold tongue edges of the sun plasma might go out far enough to emit some longwave if they cooled sufficiently. SW might reflect more than LW, so it may be the major part of the reflected light.
The 78 [SW/LW] absorbed by the atmosphere at varying levels does some reheating on its way back out. It makes the atmosphere hotter, but technically never reaches the land or ocean.
LW – 97: this seems to be the figure for thermals (17) and latent heat (80). I agree this would be emitted as LW, but these seem to be energy packets that are transported high up before they emit, and hence probably do not contribute much to rewarming. I do not know how you would account for them in terms of the energy going back to the ocean or land. My gut feeling is that if they are emitting high enough they should not be counted as reheating the surface, although they obviously do reheat the air locally on the way out and raise the overall temperature, which probably leads to more uptake and release of energy by CO2 lower down.
Technically not part of the 333 back-radiation effect, though, as said, they contribute to making it happen.
Absorbed – 175 [should be 161 absorbed at the surface].
LW to space – (30) [this is the LW the clouds emit to space; see above]
and the atmospheric window – a. 40 to space from the surface.
Back to surface – 145 [should be 91 if the amended figures are OK].

LW absorbed: 333/145 = 2.3 times [? 333/91 = 3.7]
I will stop here, as I do not think the model is trying to show an energy imbalance at all, just to balance input and output, since no CO2 increase is postulated.
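As a sanity check, the figures quoted in this thread do close: absorbed shortwave matches the three outgoing longwave components. A small sketch, using the rounded global means discussed above (the variable names are mine):

```python
# Rounded global-mean figures (W/m^2) quoted in the thread above,
# roughly the Trenberth/Fasullo/Kiehl energy-budget diagram.
incoming = 341                      # solar arriving at top of atmosphere
reflected = 102                     # shortwave reflected, never absorbed
absorbed = incoming - reflected     # 239 delivered for absorption

atmosphere_to_space = 169           # longwave emitted by the atmosphere
clouds_to_space = 30                # longwave emitted by cloud tops
surface_window = 40                 # surface emission through the window
outgoing = atmosphere_to_space + clouds_to_space + surface_window

print(absorbed, outgoing)  # input and output balance, as postulated
```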
Thank you for the figures, and for trying to work it out as well; I am still struggling with it.


angech says:


Andrew Dodds says:
“angech I quoted direct numbers for the volcanic vs. solar heat fluxes. Then you come back with a direct lie that I said there was ‘no natural heat effect’. Please correct that.”

Did I misinterpret the line that followed?
“If volcanic heating was non-negligable, you’d expect to see convection cells with upwelling over the mid-ocean ridges, where the majority of volcanic heat enters the oceans. You don’t.”

Leto: “But because this tiny quantity of heat is on the ABC side of the ledger”. Yes, there is an ABC side. There are a lot of non-negligible issues which, though small, may add up to concerns.

Andrew raises an interesting point, however, re the TOA radiative imbalance. The amount of energy coming in is supposed to equal the energy going out. An increase in GHGs means there should be a temporary imbalance. Yet if the earth is putting out an extra 47 TW [geothermal heat], should there not be a radiative imbalance the other way?
Incoming solar energy: 173,000 TW. Hence outgoing should equal 173,047 TW. Is this ever taken into account?
I would feel this should make any balancing, as in ECS, much more rapid than people are actually saying it is. Mosher? ATTP?
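For scale, the two figures above can be compared directly. The Earth-surface-area constant is my addition (a round figure, roughly 5.1 × 10^14 m²), not from the thread:

```python
# Compare the quoted geothermal output (47 TW) with solar input (~173,000 TW).
# EARTH_SURFACE_M2 is an assumed round figure for the Earth's surface area.
EARTH_SURFACE_M2 = 5.1e14

geothermal_tw = 47.0
solar_tw = 173_000.0

fraction = geothermal_tw / solar_tw                        # ~0.03% of solar input
geothermal_w_m2 = geothermal_tw * 1e12 / EARTH_SURFACE_M2  # ~0.09 W per m^2

print(fraction, geothermal_w_m2)
```

At roughly 0.09 W m-2, geothermal heat is small next to the ~0.6-1 W m-2 imbalance usually attributed to greenhouse forcing, which may be why energy-budget diagrams typically fold it into the rounding.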


The ‘airborne fraction’ (atmospheric increase in CO2 concentration / fossil fuel emissions): from 1959 to the present, the airborne fraction has averaged 0.55; the terrestrial biosphere and the oceans together have consistently removed 45% of fossil CO2 for the last 45 years.
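The definition in parentheses is just a ratio; a trivial sketch (only the 0.55 long-run average comes from the text; the sample figures are invented for illustration):

```python
# Airborne fraction = atmospheric CO2 increase / fossil-fuel emissions.
# The ~0.55 long-run average is from the text; these yearly values are made up.
def airborne_fraction(atmos_increase_gt, emissions_gt):
    return atmos_increase_gt / emissions_gt

print(airborne_fraction(5.5, 10.0))  # 0.55, i.e. ~45% taken up by oceans and biosphere
```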

“Basically, in equilibrium, the amount of dissolved inorganic carbon (DIC) in the ocean determines the partial pressure of CO2 and, hence, the atmospheric CO2 concentration via Henry’s Law”
This law works both ways.
In other words, the partial CO2 pressure determines the DIC as well.
Which is important for this discussion.

The amount of extra CO2 added to the atmosphere by human activity, while significant, and let’s say cumulative to some degree, is still a small fraction of the total atmospheric CO2 of 720 GT, and of the roughly 190-times-greater DIC [136,800 GT of CO2].
[ we’re dealing with a coupled system, so if you add new material to one of the reservoirs, it will rise in all reservoirs]
Atmospheric CO2 is 720 GT at 400 ppm, about 1/190 of the ocean equivalent.
If you increased it to 560 ppm (1008 GT), a 40% increase, the amount of CO2 in the ocean would have to increase by roughly 4%, or about 5470 GT, applying the factor-of-10 ratio discussed below.
At a 30 GT a year human contribution that would take well over a century, and that is provided the DIC stayed in solution and did not partly precipitate out.
Did a mere 600 GT raise the DIC of the oceans from
120 ppm rise suggests a

However, there is a more formal way to show this. I recently worked through the ocean carbonate chemistry. It turns out that there is a factor called the Revelle factor, which is simply the ratio of the fractional change in atmospheric CO2 to the fractional change in total Dissolved Inorganic Carbon (DIC) in the oceans:

R = \dfrac{\Delta pCO_2/pCO_2}{\Delta DIC/DIC}.

The Revelle factor is about 10, which means that the fractional change in atmospheric CO2 will be about 10 times bigger than the fractional change in DIC. What this tells you straight away is that you can’t change the amount of CO2 in the oceans without also changing the amount in the atmosphere; stabilising emissions will not stabilise concentrations.

The residual airborne fraction increases from about 15% for emissions of 100s of GtC (we’ve already emitted 600 GtC) to almost 30% if we were to emit as much as 5000 GtC.

Now, maybe if the fractional change in DIC is small enough, the fractional change in pCO_2 might also be small enough to essentially stabilise concentrations. However, we know the quantities in the various reservoirs, and we’ve already emitted enough CO2 to change the DIC by 1 – 2%, and – hence – the atmospheric CO2 concentration by 10 – 20%. If we stabilise emissions, we could easily change the DIC by a further 1 – 2%. In fact, we have sufficient fossil fuels to change it by more than 10% and, therefore, enough to change the atmospheric concentration by more than 100% (i.e., to, at least, double atmospheric CO2).

There is, however, something I’m slightly glossing over, so I will try to clarify a little more. The above is based on an equilibrium calculation. In other words, it describes the changes once the system has regained a quasi-steady equilibrium. Our emissions are continually pushing the system out of equilibrium, and so the fractional change in atmospheric CO2 is actually greater than what the Revelle factor would suggest. Given what we’ve already emitted, we would expect about 20% of our emissions to remain in the atmosphere, but it’s currently more like 45%. This is because the timescale for ocean invasion is > 100 years, and so the system hasn’t yet had time to return to equilibrium.
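The Revelle-factor ratio above can be sketched numerically. The only input taken from the text is R ≈ 10; the function name is mine:

```python
# Revelle factor: fractional change in atmospheric pCO2 is about R times the
# fractional change in ocean DIC. R ~ 10 is the value given in the text.
R = 10.0

def pco2_fractional_change(dic_fractional_change, revelle=R):
    """Fractional change in atmospheric CO2 implied by a fractional DIC change."""
    return revelle * dic_fractional_change

# A 1-2% change in DIC implies a 10-20% change in atmospheric CO2:
print(pco2_fractional_change(0.01), pco2_fractional_change(0.02))
```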


hypergeometric says:
“The internal variability effect? Is it buried within the albedo effect, or the partial of outgoing longwave with respect to temperature?”

As you imply, internal variability is due to a multitude of factors: some one-off, some repeatable, some cyclical. If we knew enough about the actual causes to model them correctly, we could remove some of them from the larger natural variability uncertainty range.

Your two examples show why there may not be a paradox for Willard. With two different GHGs both increasing, but only one having purely absorptive/radiative properties, the other, H2O, being unique in that it causes increasing reflectivity with increasing concentration [increased albedo], it is not a simple case of gas absorption-emission physics with positive feedbacks, but a complex system in which the external input falls as the internal energy retention goes up.
It is this dynamic that enables one to propose two climate sensitivities: one with high variability at a low CO2 level, and one with reduced CS when CO2 levels double.

The insistence that the CS plus positive feedback stays relatively the same at all CO2 levels is not universally held; people here have argued for some variability of CS with different conditions as a reason for it being hard to pin down, yet most assume it must be relatively the same at different levels of increasing CO2. Take away this assumption and the paradox disappears.

Thanks for the link to your site, looks interesting.

Length of time

Victor talks about the length of the data at his linked post. One comment is
” With “at least 17 years” Santer does not say that 17 years is enough, only that less is certainly not enough. So also with Santer you can call out people looking at short-term trends longer than 17 years.”
He then states: “Sometimes it is argued that to compute climate trends you need at least 30 years of data, that is not a bad rule of thumb and would avoid a lot of nonsense”
but it is a rule of thumb only.
30-year periods are ideal for claiming global warming: just long enough to see “significant” trends but too long for anyone to ever claim a “pause”.
So, game over. Define a length of time longer than your opposition’s argument and you cannot lose.
Pause, what pause?
But by the same logic one could say we need 100 years; what then of global warming?
And by the same implacable logic [see the anti-hiatus of the last 10 years] those trends are now too short and become statistically insignificant.
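One way to see why record length matters so much: under the simplifying assumption of independent year-to-year noise (real temperature series are autocorrelated, which widens the uncertainty further), the standard error of a fitted linear trend falls rapidly with window length. A hypothetical sketch:

```python
import math

def trend_se(noise_sd, n_years):
    """Standard error of an OLS linear trend fitted to n_years annual values,
    assuming independent noise of standard deviation noise_sd. Autocorrelation
    in real temperature data makes the true uncertainty larger than this."""
    tbar = (n_years - 1) / 2
    sxx = sum((t - tbar) ** 2 for t in range(n_years))
    return noise_sd / math.sqrt(sxx)

# With ~0.1 C of interannual noise, the trend uncertainty more than halves
# going from a 17-year window to a 30-year window:
print(trend_se(0.1, 17), trend_se(0.1, 30))
```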

angech says:


Willard says:
” This is essentially the point. It is paradoxical to argue for high sensitivity to internally-driven warming AND low sensitivity to externally-driven warming” .

Yes. The question, though, is whether climate sensitivity to CO2 is restricted to just the known response to CO2 doubling, or whether other factors, other GHGs such as water vapour, are linked in such a way that the climate sensitivity to CO2 itself is amplified or damped by the effect of the temperature change on the other volatile GHG, water.

The result of that is that one might have a different, high or low, sensitivity to CO2 doubling depending on the amount of water vapor at a specific temperature. This could possibly* lead to large swings in temperature in a low-CO2 [our] world, but a much more damped response at higher levels of CO2 [and temperature].

*unlikely but removes a paradox.


A question on the nature of clouds and albedo. Does all water vapor reflect SW, or does there need to be an aggregate size, with a boundary, to give reflection? The reason for asking is that “clouds” may be a misnomer for water vapor in general; that is, the albedo effect may be directly correlated with the amount of water molecules in the air. Hence there would be a cloud effect even when there are no clouds. What could be more important is just the humidity level itself, which I presume the satellites can work out.
I presume this has been investigated but would value your input.



How To Troubleshoot NBN Issues

by Technical Support, 15-08-2014 12:11 PM – edited 03-08-2016 09:55 AM

Brief Version

  • Power off your T-Gateway modem, wait for 30 seconds and then power it back on. Give it 2 minutes to reconnect to the internet.
  • Power off your NBN NTD by turning it off at the power point on the wall. Wait for 30 seconds and then power it back on. Give it 2 minutes to reconnect to the internet.
  • Reset the T-Gateway to factory default settings. To do this, get a paperclip, or something similar, press it into the reset hole at the back of the T-Gateway, and keep it pressed in for 15 seconds. Give it 2-5 minutes to reconnect to the internet. Alternatively you may prefer to do this via the modem’s interface.
  • Call to report the fault on 1800 TFIBRE (1800 834 273).



The radiative forcing due to clouds and water vapor

From “The radiative forcing due to clouds and water vapor”, V. Ramanathan and Anand Inamdar,
Center for Atmospheric Sciences, Scripps Institution of Oceanography:
“As the previous chapters have noted, the climate system is forced by a number of factors, e.g., solar impact, the greenhouse effect, etc. For the greenhouse effect, clouds,
water vapor, and CO2 are of the utmost importance. … the data needed to understand two fundamental issues in radiative forcing of climate: cloud forcing and atmospheric greenhouse effect. Clouds reduce the absorbed solar radiation by 48 W m-2 while enhancing the greenhouse effect by 30 W m-2, and therefore clouds cool the global surface–atmosphere system by 18 W m-2 on average. The mean value of C is several times the 4 W m-2 heating expected from doubling of CO2 and thus Earth would probably be substantially warmer without clouds.”
I take these authors to be saying that clouds and water vapor should be considered in radiative forcing, not ignored; that they give a negative feedback according to the best science on offer; and that this needs to be taken into account in assessing ECS.
How long something resides in the atmosphere is different from how much of it resides in the atmosphere at any one time, which is the basis on which RF of the atmosphere needs to be assessed.
Just asserting that one can ignore it does not mean one can ignore it.
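The cloud-forcing arithmetic in the quoted passage is worth making explicit (figures as quoted; the sign convention is mine):

```python
# Cloud radiative forcing as quoted from Ramanathan & Inamdar (global means,
# W/m^2). Sign convention: positive warms the surface-atmosphere system.
sw_cloud_forcing = -48.0   # clouds reflect solar radiation: cooling
lw_cloud_forcing = +30.0   # clouds enhance the greenhouse effect: warming

net_cloud_forcing = sw_cloud_forcing + lw_cloud_forcing
print(net_cloud_forcing)   # net cooling, several times the ~4 W/m^2 from doubling CO2
```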

U3A Talk

This is a talk on life as well as on common medical controversies.
People are generally well meaning.
Treat others as we would like ourselves to be treated is a motto that most of us work by.
So where does it go wrong?
Well, I guess we all like to be treated in different ways.

I like to think on the alternatives in life, in the choices we make, why we make them and the consequences both obvious and hidden.

Coeliac disease
Screening tests.
The big ones here are Prostate Cancer, Breast Cancer and Bowel Cancer. Why screen at all?

A silly question which carries a sting in the tail.

To detect cancer, and other diseases, early enough to be able to do something about them.
Diseases exist in our community that develop over time and cause serious health problems.
Tests exist to detect these diseases.
Simple enough.
It has nothing to do with preventing such diseases.
Having a screening test does absolutely nothing to prevent a disease occurring, or, even worse, being present when you do the procedure and being missed.

Who has screening tests? People at risk.

Family history. Population history. Occupational history. Exposure history.

A screening test has to be useful: it has to have a high sensitivity and a high specificity. This means it picks up the disease when it is present [sensitivity] and rarely flags people who do not have it [specificity], early enough to be able to act.

It has to diagnose conditions that are dangerous, reasonably common and hopefully treatable, without causing problems of its own worse than what one is trying to treat; in other words, it has to be safe.

Screening can be done by endoscopy, faecal and urine sampling, blood tests, and radiological procedures such as X-ray, CT, bone scan and MRI, to mention a few.

Pitfalls of testing.

Cost: the equipment is all there, but some tests are very expensive, new tests in particular. Cheapest, but still not cheap, is urine testing for sugar, protein and blood.

Time: some tests, like 24-hour ECGs and bone scans, can take days to do.

Patients often have limited time themselves to do the test in.



Loss of results

Wrong results

Statistical use of the results.

Endoscopy, looking inside the body through a tube, is a good way of detecting cancers. Lung, throat, stomach, bladder and bowel cancer can be detected this way.

Who here volunteers to have a colonoscopy every 6 months? Or a bronchoscopy?


Would you have all of them every 5 years? What would be a good age to start?

Not many. The reasons: discomfort, embarrassment, time, risk.

Having a procedure often involves having an anaesthetic. The risk of dying from an anaesthetic increases with age and other medical conditions, but it can occur in healthy young people as well: somewhere between 1 in 5,000 and 1 in 10,000. Then there is the risk of the procedure itself: perforation of the bowel or bladder, rupture of the oesophagus, damage to the vocal cords, aspiration and pneumonia, urethral stricture with a urethroscopy.

Wrong diagnosis, no diagnosis, missed diagnosis and lost specimens.

The import of the result, and the use of statistics. This is one that I have great difficulty in understanding.

Say that a mammogram will diagnose a breast cancer with 95% accuracy.

You have a patient who comes to you with a positive result from a screening procedure.

What should you tell her? As the patient, if you are told the test is 95% accurate, what does this mean?

Murky waters lie ahead. The false positive paradox is a statistical result where false positive tests are more probable than true positive tests, occurring when the overall population has a low incidence of a condition and the incidence rate is lower than the false positive rate. When the incidence, the proportion of those who have a given condition, is lower than the test’s false positive rate, even tests that have a very low chance of giving a false positive in an individual case will give more false than true positives overall.[2] So, in a society with very few infected people, proportionately fewer than the test gives false positives, there will actually be more who test positive for a disease incorrectly and don’t have it than those who test positive accurately and do. The paradox has surprised many.

Low-incidence population (1000 people tested, 2% infected):

                   Infected               Uninfected              Total
Test positive      20 (true positive)      49 (false positive)      69
Test negative       0 (false negative)    931 (true negative)      931
Total              20                     980                     1000

Now consider the same test applied to population B, in which only 2% is infected. The expected outcome of 1000 tests on population B would be: infected and test indicates disease (true positive): 1000 × 2/100 = 20 people would receive a true positive; uninfected and test indicates disease (false positive): 1000 × (100 − 2)/100 × 0.05 = 49 people would receive a false positive. The remaining 931 tests are correctly negative.

In population B, only 20 of the 69 total people with a positive test result are actually infected. So the probability of actually being infected, after one is told that one is infected, is only 29% (20/(20 + 49)) for a test that otherwise appears to be “95% accurate”.

A tester with experience of group A might find it a paradox that in group B, a result that had usually correctly indicated infection is now usually a false positive. The confusion of the posterior probability of infection with the prior probability of receiving a false positive is a natural error after receiving a life-threatening test result.
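The worked example above is Bayes’ theorem in miniature. A small sketch using the example’s own numbers (2% prevalence, 5% false-positive rate, and the example’s implicit assumption of 100% sensitivity):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """Probability of actually having the disease given a positive test."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

# Population B from the example: 2% infected, 5% false positives, and
# (implicitly) every infected person tests positive.
ppv = positive_predictive_value(0.02, 1.0, 0.05)
print(round(ppv, 2))  # 0.29: only ~29% of positive results are true positives
```

Plugging in a higher prevalence (say 0.5 for a high-risk group) shows why the same test performs so differently in population A.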

As breast cancer in the general female population is a relatively low-incidence disease, the rate of false positives is quite high.

Herein lies another risk: traumatising people with false positives and equivocal results. One does not like to rule out cancer conclusively if some doubt is present, and this leads to up to 15% of women who have a screening mammogram having to undergo further procedures, usually invasive biopsies, with pain, discomfort, bruising and, rarely, infection.

Not to mention the 5% who have a false negative, which is also not as uncommon as it sounds.

Finally, cancers start incredibly small, can be incredibly fast growing, and have usually been present for 6-18 months before getting big enough to be detected. So one may have a bowel or breast cancer present at the time of screening, have it missed due to its size, and present with a large cancer 6 months later.

And removal of the cancer does not guarantee that the cancer has been successfully treated. Cancers can spread [metastasize] before removal; melanoma is an example.

On the other hand, BCC usually grows locally only, but very aggressively; in some poor souls it does metastasize. SCC has a slightly higher spread rate than BCC, which is why it needs removal and follow-up.


Prostate cancer is an enigma, treated in an ageist and sexist manner by most people and the medical profession. It has none of the media appeal of breast cancer: the young mother cut down in her prime with dependent children and a husband.

Instead, an older man has a blood test at the insistence of his wife and gets told his PSA is up. Still the hope of a wrong diagnosis; there are multiple other causes. He may have an infected prostate (a very high reading, 20+), treatable with antibiotics. He may have BPH [benign prostatic hyperplasia], an example of the word benign not really meaning what you think it does. At least it is not cancer; and after 15 hours of agony not being able to pass urine, a catheter is inserted by a first-year resident, hopefully in the right place with no perforation, and one is on the way to a TURP, an onion-peeling of the prostate from the inside, with the risk of loss of sexual function, not that it matters for an old guy anyway.

Next he has a transrectal biopsy, with 12 needles taking samples, and then he is told the bad news.

Bad news? We are doing nothing for you. Most men with prostate cancer are so old they die with it, not because of it. Prostate cancers can be very slow growing [Which type have I got? We don’t know]. Your life span is too short to worry about it. The treatments cause loss of sexual function, baldness, and you will go blind.

Only joking.

The facts are that prostate cancer is treatable. One can have surgery, radiotherapy or a combination.

Prostate cancer does spread rather early so  prevent developing cancer or to
Males generally get cancer later than females.

I would like to present an overview of the screening dilemma.

I will reference my talk to certain illnesses and conditions that are common in the community, and a few that are not. Along the way I will mention several misconceptions about these conditions. I welcome relevant questions, but will deal with most in the breaks or afterwards.

Medicine is concerned with the health of individuals first, and then with the health of populations. It consists of both diagnosis and treatment of medical conditions. Originally these were of the body, but those of the psyche then followed. As new treatments and medications were developed, medicine became an enlarging field with greater expectations of keeping people healthy.

Means of diagnosis improved. At first these were only applied to people with illnesses, but then the ability to look for problems ahead of time became apparent, and screening was born.

Screening is a form of diagnosis that came late to medicine. Tests on urine for protein and blood were possible early on. Blood tests began being used at the start of the 20th century, and X-rays were developed.

Progress was slow, so much so that the routine testing strips we now use, with 10 tests on them, were still a 2-strip novelty 40 years ago.

Chest X-rays for TB detection were among the first general screening projects undertaken.

This dreaded disease, still present today, was reduced a hundredfold with detection, treatment and isolation. Screening was abolished in the 1970s in Australia for 3 reasons which are still valid today: cost, radiation exposure [side effects] and near elimination of the condition. In one funny twist of fate, due to litigation and over-investigation a large percentage of the population still have chest X-rays for other reasons, which equates to a de facto screening program.

The modern health problems are those of living longer: diabetes, cancer and heart disease [stroke]. These conditions all increase with age and all reduce life expectancy greatly. They have an enormous impact at any time, but more so when the sufferers are young.

So, to screening.

The ideal screening test is something non-invasive, very reliable, easy to do, and producing a treatment outcome beneficial to the patient.

Bowel Cancer.

A sample of poo, well 2 or 3 actually a day apart preferably after avoiding meat in the diet an








I did say “Only 37 of 58 sources list raw data”.
“37 of 58 represents the fraction of data sets” –
yes, that is exactly what I said.

” not the fraction of raw data,”
I did not make that claim.

“You have denied for years that there are more than 5000 stations in the world ”

What I have disputed is the number of active stations in the world.
There is a difference. Take the
“International Surface Temperature Initiative (ISTI). This release in its recommended form consists of over 30 000 individual station records, some of which extend over the past 300 years.”
This gives 30,000 stations, which are mostly inactive or extinct.
Station locations existing within the last 300 years with at least 1 month of data are used in GHCN-M version 3 (a).
Station locations during the periods 1871–1900 (b), 1931–1960 (c), 1961–1990 (d), and 1991–2013 (e) are also shown.

Or take the GHCN-M dataset: in 1992, more than 6000 stations; a second version of GHCN-M containing 7280 stations in 1997; and in 2011 a third version of GHCN-M, for which routine updates for about 2000 stations are made on a daily basis.

NASA’s GISS dataset: 6000 stations.
A bit of a tight squeeze, as since version 2 GHCN-M has been a major component of the GISS data set. A bit hard to fit 7280 stations into 6000, but as only 2000 are active I guess you can ignore the rest.

The United Kingdom produced a first release of its CRUTEM product in the late 1980s. Today, a global dataset of over 6000 stations is still maintained in its fourth iteration. Since it also includes GHCN, I guess it might have only 2000 active stations as well.

Mathematically, 2000 active stations in the world seems to confirm my position regarding “you have denied for years that there are more than 5000 stations in the world”.
Today, GHCN-D provides daily maximum and minimum temperature for nearly 30 000 stations. Although more stations exist on the daily scale
Given the historical nature of data creation, sharing, and rescue, there are many cases where a single station exists in multiple data sources.
the duplicate records do not necessarily have identical temperature values for the same station even though they are based upon the same fundamental measurements.
There are 194,367 station records used
Although the preference is to have data as raw as possible, there are times when such data do not exist, or have not been provided to the databank. Therefore, pre-processed data are accepted.
GHCN-D was selected to be the highest priority, or target dataset, and the monthly dataset derived from it is the starting point for the merge process. GHCN-D is regularly reconstructed, usually every weekend, from its 25-plus data sources to ensure it is generally in sync with its growing list of constituent sources
the U.S.-based Cooperative Observer Program (COOP) Summary of the Day data set. These sources provide data for more than 2500 stations worldwide, and they remain the primary sources for updates to version 3.
CLIMAT bulletins transmitted via the Global Telecommunication System (GTS) provide data each month for approximately 1400 GHCN-M stations in more than 125 countries and territories.
Locations of the approximately 2300 GHCN-M stations for which data are routinely available.
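The merge process described above (a highest-priority target dataset taken first, then candidate records from the other sources checked against it for duplicates before being added) can be sketched roughly as follows. This is an illustrative sketch only, not the actual databank algorithm: the station records, field names, and coordinate-matching rule are hypothetical.

```python
# Illustrative sketch of a priority-based station-record merge.
# A target dataset (e.g. the monthly set derived from GHCN-D) is
# taken first; records from lower-priority sources are added only
# if they do not duplicate an existing station. Duplicate records
# may not have identical values, so matching here is by location
# within a small tolerance (a hypothetical rule for illustration).

def is_duplicate(station, merged, tol_deg=0.01):
    """Treat two records as the same station if their coordinates
    agree within tol_deg degrees."""
    return any(abs(station["lat"] - m["lat"]) <= tol_deg and
               abs(station["lon"] - m["lon"]) <= tol_deg
               for m in merged)

def merge_sources(sources):
    """Merge a list of source datasets, ordered by priority
    (the target dataset first)."""
    merged = []
    for source in sources:
        for station in source:
            if not is_duplicate(station, merged):
                merged.append(station)
    return merged

target = [{"id": "T1", "lat": -12.46, "lon": 130.84}]   # target dataset
other = [{"id": "S1", "lat": -12.46, "lon": 130.84},    # duplicate of T1
         {"id": "S2", "lat": -37.81, "lon": 144.96}]    # new station

result = merge_sources([target, other])
print([s["id"] for s in result])  # ['T1', 'S2']
```

Because the target dataset is processed first, its version of any duplicated station always wins, which is the point of designating a highest-priority source.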


Dr Brian Cluney

Stuart Park Surgery, 1/5 Westralia St Stuart Park.

Dear Brian,

I am the older brother of William [Bill] and live in Shepparton, Victoria; I am a semi-retired GP. Bill lives with his partner Janine in an upstairs house at Tong Luck St, Rapid Creek, and attends your practice. He has been having a lot of difficulty with back and leg pain in the last 6 months, which has been causing a lot of concern to him and to his 92-year-old mother, Nancy, who currently lives on the Gold Coast but had been a Darwin resident for 55 years.

Bill has had 3 major accidents in his life. At 17 years old, in Adelaide, he was hit by a car and thrown onto the windscreen. I am not sure if he was knocked out, or of the severity of any other injuries.

His major accident was in 1982, when his father rolled a ute on the way to Broome, WA, throwing Bill out. He was flown to Royal Perth Hospital deeply unconscious and remained in a coma for 6 weeks with head injuries. He slowly recovered but had severe right-sided weakness of his arm and leg, needing prolonged rehabilitation. He was also left with permanent double vision. The main problem, however, was an altered personality [? frontal lobe] and a marked decrease in mental cognition.

After several years he returned to work as a wharfie with his father on the Darwin Wharf, and also as a health and safety officer. Some years after that [??1987] he had his third accident, when he fell 22 feet down a ship’s hold. The result was severe bilateral torn rotator cuffs, which put an end to his wharf career and left him in severe pain for quite a few years. He did see surgeons at Darwin Hospital; I am not sure if he had operations.

He has tried to work spasmodically since then but has basically been unemployed, reliant on a pension set up by his mother from his motor car accident.

Bill is very hard to get a history out of because of his poor memory. He tends to make light of his problems due to a combination of poor memory and possibly a frontal lobe injury effect. He is difficult to treat because he forgets instructions and lacks motivation. Worse, he has a strong belief in alternative and unusual medicine and is immune to reason.

Despite this, he has an easygoing personality and is always willing to help others. I have left him to his own devices in the past, as his mother and sister, Jennifer Lee, had mainly been involved with his supervision.

The current problem is repeated attacks of severe leg and back pain, which have made it very difficult for him to get up and down the stairs at home for the last 6 months. I saw the back CT you arranged, which showed the L5 disc protrusion, possible nerve root compression, and the funny little comment re bubbles in the L3/4 canal.

He went to Darwin Hospital Casualty last night [Sunday] with right upper inner leg pain down to the knee. The doctor on call tried to get him admitted, but there were no neurological symptoms and the consultant opted for referral to physio at the hospital and a non-urgent orthopaedic review, which could take months. He suggested that I contact you to ask if you could also refer him to the orthopaedic unit, as this might help speed up assessment.

A care plan for physiotherapy would also be useful, if it is helping.

I realise the problems inherent in managing back pain; they are Australia-wide. Bill is 8 years younger than me, and Dad had both prostatomegaly and bowel cancer late in life. Bill is only 57. Bill does not normally complain of pain and incapacity without reason, and it is a worry that this has been going on so long. There may be some arthritis from his accidents; there may be a disc compression problem if he has neurological symptoms; and the doctor at Darwin Hospital did say something about some narrowing of the right hip joint.

I wondered if a PSA and a bone scan might help rule out any other causes, if his back pain continues and you feel they are worth doing. I hope to get up to Darwin in May to assess his situation, and hope we can speak then.