Joshua

angech says:

Your comment is awaiting moderation.

Joshua says: April 30, 2017 at 1:39 pm
=={ The odds of it happening once are 100%.}==
What? The odds of a 1,000 year rain happening once in the last few years in the UK is 100%?”
The odds of it happening per year in the preceding 1000 years are 0.1% per year.
Once it happens, in any year, the odds of it happening in that year are 100%.
?? Had to happen? Why did a 1,000 year rain have to happen [a second time] over any given period of 1,000 years?
Again the odds of a second event in the next 1000 years were 0.1% per year, and yes it did happen again so 100%.
A little read about the Hitchhiker’s Guide’s Infinite Improbability Drive explains this paradox.
While the chances are remote, such events will happen at some stage given enough time. The point is that the circumstances actually did happen, thus that particular probability is now 1 [100%].
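As a rough check on that arithmetic, here is a small Python sketch (my own illustration, assuming independent years each with a 0.1% chance, which is not anyone's published method):

```python
# Chance that a "1-in-1000-year" event occurs at least once in N years,
# assuming independent years each with probability p = 0.001.

p = 0.001  # 0.1% chance per year
for n_years in (1, 10, 100, 1000, 3000):
    p_at_least_once = 1 - (1 - p) ** n_years
    print(f"{n_years:5d} years: P(at least one event) = {p_at_least_once:.3f}")

# Over 1000 years this gives about 0.63, not 1.0; only with enough time does it
# approach 100%, and once an event has actually occurred its probability is
# trivially 1 [100%], which is the point being made above.
```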

“Bear in mind, you are dealing with someone whose knowledge of statistics is extremely limited, but I’m hoping you could provide a simple explanation of your thinking.”
I doubt that your knowledge of statistics is limited given your interest in the topic of CC over this number of years. Statistics is funny stuff and is often misused by both sides when it should not be.

See Willard’s excellent summation of April 30, 2017 at 2:42 pm:
“Doc’s point is that there’s a reading whereby attribution is trivial. Split the Earth in an infinity of regions and ask: what are the odds that a unique future event happens in one of its regions by luck. The probability that the unique event eventually happens converges toward 100%. The chances it happens at a specific spot and not elsewhere is almost 0%.”
However he called it a trick, when the trick is the reverse: ascribing random rare events to causation. See Taleb, “Fooled by Randomness”.

CO2 in Seawater: Equilibrium, Kinetics, Isotopes, 1st Edition, by Zeebe and Wolf-Gladrow, has some wonderful stuff on our topic, though it is hard to copy and paste.
It backs up ATTP and his presentation of equations.
The money line is on page 41, section 1.2.4, Total Alkalinity and Charge Balance:
“Sillen [1961] argued that considering the origin of the ocean, we might say that the ocean is the result of a gigantic acid-base titration in which acids that have leached out from the interior of the earth are titrated with bases that have formed from the weathering of primary rock.”

That is the essence of what I have been saying.
In an authoritative text book.

Side tracking on permeability of sediments [undefined] and stating that the seafloor is covered with impermeable clay flies in the face of Broecker: “The high-CaCO3 sediments that drape the oceans’ ridges and plateaus typically have ~90% CaCO3 and a water-free density of 1 g cm^-3.
The amount of CaCO3 available for dissolution in such a sediment is 72 g cm^-2.
This amount could neutralize 6.3×10^17 mol of fossil fuel CO2. This amount exceeds the combined oceanic inventory of dissolved CO3^2- (1.6×10^17 mol) and of dissolved HBO3^- (0.8×10^17 mol). It is comparable to the amount of recoverable fossil fuel carbon.”
In other words the first 80 cm or so of the available usable sediment is not covered by impermeable clay; it is usable now.
It has the capacity to neutralize roughly four times the ocean’s dissolved carbonate-ion inventory without blinking.
Only then would it be used up under an impermeable blanket as Dikran puts it.
It would be available to soak up all the fossil fuel that can theoretically be discovered.
And we still would not have an acid sea of consequence, for the carbonic acid itself, in equilibrium with the available CaCO3, is the buffer of the sea. We may be on the same page at times.
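As a sanity check on the Broecker figures quoted above, a rough Python sketch (my own back-of-envelope arithmetic, using only the numbers as quoted, not Broecker's working):

```python
# Back-of-envelope check of the quoted Broecker numbers.

MOLAR_MASS_CACO3 = 100.0   # g/mol
MOLAR_MASS_C = 12.0        # g/mol

caco3_per_area = 72.0      # g CaCO3 per cm^2 of sediment, as quoted

# At a water-free density of 1 g/cm^3 and ~90% CaCO3, 72 g/cm^2 corresponds to
# roughly the top 80 cm of sediment.
depth_cm = caco3_per_area / (1.0 * 0.9)
print(f"sediment depth holding 72 g/cm^2 of CaCO3: ~{depth_cm:.0f} cm")

neutralisable_co2 = 6.3e17     # mol, as quoted
carbonate_inventory = 1.6e17   # mol dissolved carbonate ion, as quoted
print(f"ratio to the dissolved carbonate inventory: ~{neutralisable_co2 / carbonate_inventory:.1f}x")

# Converted to gigatonnes of carbon, for comparison with fossil fuel reserves:
gtc = neutralisable_co2 * MOLAR_MASS_C / 1e15   # grams -> Gt
print(f"equivalent carbon: ~{gtc:.0f} GtC (recoverable fossil carbon is of order 5000 GtC)")
```

The numbers hang together: the neutralizing capacity works out comparable to recoverable fossil carbon and roughly four times the dissolved carbonate inventory quoted.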
Dikran Marsupial says:    November 8, 2016 at 10:23 am
” Now I am not claiming that there is no transfer of CaCO3 from the ocean floor to the deep ocean,” but my understanding is that there isn’t very much. BTW note that sedimentary rocks such as limestone tend not to form in the deep ocean because of the dissolution process that Marco mentions, so it isn’t clear that there is an abundance of CaCO3 in the rocks forming the ocean floor to begin with (I suspect the Atlantic ocean floor is mostly basalt, i.e. silicate rock).

So what is your evidence for an unlimited supply of CaCO3 from the ocean floor?
Oh, BTW, thanks for putting up the second and third pictures. Your computer skills are way above mine. The second one shows the calcareous ooze right on top of the mid ocean ridges and further out under the red clay.

Confirming my point “There are bucket loads of carbonate in the earth’s crust, every subsurface volcanic eruption exposes more to the ocean” despite your assertion to the contrary.
Dikran Marsupial says: November 9, 2016 at 9:56 am: “I don’t think this is necessarily correct.”

You also said, incorrectly.
“the reason that land based volcanoes give off a lot of CO2 is because they are largely erupting magma formed from subducted crust from shallower oceans that has carbonate sediments.”

The magma is the same: molten basalt under the sea and on land.
Any component of “subducted crust from shallower oceans that has carbonate sediments” can only be present in the ocean ridge volcanoes under the sea. That is where the subduction takes place. Subducted crust is melted, gone; it is crust no more.
Land based volcanoes give off extra CO2 because the magma passes through ordinary crust with layers of chalk, limestone, marble etc, carbonate sediments from 2 billion years of non-subducted crust formed of sediments from shallower oceans.

Finally, you state
“this one states that calcareous oozes are scarce below 5000m (i.e. below the CCD),”
the statement should read, but doesn’t, “recent calcareous oozes”, as in the next sentence
[wonder why Dikran didn’t mention that one]; it says ancient calcareous oozes occur at greater depths if moved there by sea floor spreading.
Going by the accompanying picture, these calcareous oozes are abundant, not scarce, you would agree.
The term scarce applies to CaCO3 deposits on the red clay surface below the CCD.
“That said: I think the hard part isn’t learning d(x^2)/dx = 2x, which kids can memorize just as easily as A = πr^2. It’s understanding “slope” = “rate” and understanding what to do with rates etc.” Lucia, maths, and the need to get a book written.

Plot trinity, James: one of 3 in the 5-series plot involves death in a plane but only being a computer simulation; write whole novel, then unpin/save heroine at end.

dikranmarsupial
“Also the mud in the abyssal ocean has no carbonates because it is below the CCD,”

Mud in the abyssal ocean with no carbonates, because it is below the CCD.

Chemistry and knowledge and assertion.
“Mollusks from many different groups live in the deep sea. Our shell-makers can be found at all depth levels of the ocean bottom; no limit is known on the depths at which they can live. Mollusks have been found in the deepest point of all oceans, the Challenger Deep in the Marianas Trench, at 11,022 m (about 36,000 feet) depth”
I presume the 11,022 is well below the CCD.
I presume the molluscs have shells.
Therefore your contention that the CCD is the last word on the chemistry is wrong.
I point out to you the white cliffs of Dover, possibly thousands of meters deep in CaCO3.
I point out sea shells in the highest Himalayas, just to remind both of us of the vast, interminable aeons over which the earth’s crust has been forming and deforming.
The earth’s crust, even under the sea, is not just some miserable thin layer deposited in the last 10,000 years.
It is a vast amalgam of CaCO3 deposited over 2 billion years, crushed, serpentined, vapourised, frozen, glaciated, vented, heated and pressured into all sorts of minerals and deposits.
It is intermixed with stuff less important for our argument, like basalt etc, which occurs in bigger percentages at greater depths.
The sea sits on the crust. It has input from the earth’s crust not just from the bottom but from every drib and drab where water comes into contact with land as it drains back into the oceans. Wind blows dust particles, some of them CaCO3, into the sea. The volcanic ocean floor rips up this buried crust and periodically exposes great swathes of billion-year-old CaCO3 to the ravages of the abyssal deep waters [despite the mud layer].
Vast undersea rivers carry silt and debris down to the abyssal depths daily. Bacteria and worms live in the silt and mud, yes even at those depths, and burrow and break their way through it, exposing rich veins of CaCO3 to the water.
It might look quiet in a nautilus for a week, but over a decade the floor is a vibrant freeway of activity, not a quilted protective blanket of CaCO3-less mud.

The salient points surely are the supersaturated CaCO3 and the pH of 8.1 of the whole ocean overall. How did it get that way?
The Ca part of the mix did not form from the CO2/H2O acid pathway.
It is there because the ocean sits in and on a pitcher of earth which has a CaCO3 matrix that has formed over billions of years.
The ocean is alkaline because of the earth chemicals dissolved in it and available to it at this particular temperature, earth size and water volume. They cannot dissolve out and leave us with pure water, which would be acidic at the current level of CO2 in the air.
The dissolved salts and CaCO3 are vastly more abundant and available in the earth’s crust than all the CO2 that nature and humanity can produce. The balance is robust, not delicate, and how much CO2 [DIC] is present in the water is much more a feature of the CaCO3 putting it into the atmosphere, or stopping it from being absorbed into the sea, than of a simple current small oversupply.

I am not arguing AGW, or being obstreperous, I am trying to understand the pH conundrum better.
Some of these ideas must make sense

angech says:

Your comment is awaiting moderation.

“During 2013 and 2014, only 4 of 69,406 authors of peer-reviewed articles on global warming, 0.0058% or 1 in 17,352, rejected AGW. Thus, the consensus on AGW among publishing scientists is above 99.99%, verging on unanimity.”
Not surprising, is it?

How many theology texts would repudiate the existence of god?
How many authors actually wrote peer reviewed articles on global cooling?
How many authors wrote climate papers with no consideration of either point of view?

“The U.S. House of Representatives holds 40 times as many global warming rejecters as are found among the authors of scientific articles.”
More impressive is that the number is 160 out of 238. That is over 67% against, compared to 0.0058% of publishing consensus scientists, so roughly 11,600 times as many global warming rejecters as are found among the authors of scientific articles.
Presume the maths is right?

where is the surface/what is the surface.


Jim D | April 19, 2017     “The way to cancel a radiative imbalance is surface warming.”
On fire today, Jim D, and making a very pertinent though unintended point.
The earth does not have a defined surface, unlike say the moon, where with low gravity, virtually no atmosphere and no lakes of water, the TOA is virtually the same as the surface.
The surface of the earth is actually a multi-layer, multi-media surface; 99.99997% of the atmosphere’s mass is below 100 km (62 mi; 330,000 ft), the Kármán line, which by international convention marks the beginning of space.
Personally I think a definition of the surface of a planet/asteroid/black body etc should be those parts that are capable of receiving, reflecting or absorbing radiation.
Such a definition would reduce the surface of the moon to a few mm of depth, whereas on earth the surface would by definition be 100 to 100.1 km thick, as it would also include the depth to which radiation can penetrate the ocean, say 100 meters for 99.99997 percent of the incident energy.
The point I am making is that the surface warming is just not at the surface commonly referred to by Jim D and most here, but includes the atmosphere itself.
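A rough sketch of that mass fraction, using a simple isothermal-atmosphere approximation (my own illustration; the single scale height is an assumption, so it only roughly reproduces the 99.99997% figure):

```python
# Fraction of the atmosphere's mass below a given altitude, using one pressure
# scale height H (isothermal approximation; a rough sketch, not a reference value).
import math

H = 7.6  # km, a typical near-surface scale height (assumed)

for h_km in (10, 30, 100):
    fraction_below = 1.0 - math.exp(-h_km / H)
    print(f"below {h_km:3d} km: ~{100 * fraction_below:.5f}% of atmospheric mass")

# A sunlit ocean layer of ~100 m is small against a ~100 km atmospheric column,
# which is the sense in which the radiating "surface" here is a thick, layered one.
```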

 


jch and hubris not that I have any

angech says:

Your comment is awaiting moderation.

dikranmarsupial says: December 1, 2016 at 3:55 pm
“I don’t see how making inductive inferences is inherently subjective”.

Quoting re induction and its usage,
“Inductive reasoning (as opposed to deductive reasoning ) is reasoning in which the premises are viewed as supplying strong evidence for the truth of the conclusion. While the conclusion of a deductive argument is certain, the truth of the conclusion of an inductive argument is probable, based upon the evidence given.
the premises of an inductive logical argument indicate some degree of support (inductive probability) for the conclusion but do not entail it; that is, they suggest truth but do not ensure it.
Unlike deductive arguments, inductive reasoning allows for the possibility that the conclusion is false, even if all of the premises are true.”

Subjectivity is built in to inductive inferences. It is akin to cherrypicking, in that the inference chosen, even if true in itself, may not be true when a subjective inference is tied to it. It is therefore impossible to see the subjective element when one makes an inductive inference.
You have to step back a little.

Broadly speaking,
“there are two views on Bayesian probability that interpret the probability concept in different ways. According to the objectivist view, the rules of Bayesian statistics can be justified by requirements of rationality and consistency and interpreted as an extension of logic.[1][6] According to the subjectivist view, probability quantifies a “personal belief”.
In probability theory and statistics, Bayes’ theorem (alternatively Bayes’ law or Bayes’ rule) describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
The sequential use of Bayes’ formula: when more data become available, calculate the posterior distribution using Bayes’ formula; subsequently, the posterior distribution becomes the next prior.

ATTP says rightly: “The idea then is that you can combine the different lines of evidence, and use Bayesian inference, to determine a range for the ECS.”
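A minimal sketch of that sequential updating, for concreteness (my own illustration; the flat starting prior and the three "lines of evidence" below are invented numbers, not Nic Lewis’s, Stevens’ or anyone else’s actual values):

```python
# Sequential Bayesian updating on a grid of candidate ECS values (illustrative only).
import numpy as np

ecs = np.linspace(0.1, 10.0, 1000)   # candidate ECS values in C
prior = np.ones_like(ecs)            # flat ("uninformative") starting prior
prior /= np.trapz(prior, ecs)

def likelihood(grid, mean, sd):
    # a Gaussian-shaped constraint, standing in for one line of evidence
    return np.exp(-0.5 * ((grid - mean) / sd) ** 2)

# Hypothetical lines of evidence as (best estimate, uncertainty) pairs:
lines_of_evidence = [(3.0, 1.5),   # e.g. a paleoclimate-style constraint (made up)
                     (2.5, 1.0),   # e.g. a modern observational constraint (made up)
                     (3.2, 0.8)]   # e.g. a process/model constraint (made up)

posterior = prior.copy()
for mean, sd in lines_of_evidence:
    posterior = posterior * likelihood(ecs, mean, sd)
    posterior /= np.trapz(posterior, ecs)   # renormalise; this posterior is the next prior

print(f"posterior mode after all the evidence: ~{ecs[np.argmax(posterior)]:.2f} C")
```

Each informative constraint narrows the range, and with enough evidence the result is largely insensitive to the starting prior, which is roughly the point argued in the comments that follow.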

dikranmarsupial says: November 30, 2016 at 4:06 pm
“The real problem with objective Bayes (in this particular case) is that we do have prior knowledge, and we need to include it into our analysis if we want to be objective in the sense of maximizing correspondence to observed reality. However doing this in an objective (in the sense of “not influenced by…”) manner is rather difficult, but that is not a good reason for ignoring the prior knowledge.”
No problem if following the method above.

Thorsten Mauritsen says: November 30, 2016 at 9:18 am
“the less careful reader is easily led to believe that the study has actually constrained ECS”.
The whole purpose is to develop a method to actually constrain the ECS.

Victor Venema says: November 30, 2016 at 1:09 pm
“Fully agree with James Annan. There is no such thing as an “objective” Bayesian method.”
It is possible, though your view would count against it in an objective Bayesian method.

dikranmarsupial says: November 30, 2016 at 4:17 pm
ATTP: “Why would one not take advantage of that aspect of Bayesian inference?”
“The place where “uninformative” priors are really useful is in expressing the knowledge that there is some quantity that you know you don’t know anything about, so the effect of that uncertainty is properly accounted for in the uncertainty in the conclusions of the analysis.”
A prior is either informative or uninformative.
If uninformative it is irrelevant.
It has no place in any assessment and it is not an uncertainty.
Prior events, on the other hand, are full of uncertainty and are very important to Bayesian assessment. They have to be included, and as new information comes to light clarifying the uncertainty, a new, more rigorous and tighter ECS range can be established.
Despite ATTP and yourself refusing to consider impossible priors, the Bayesian approach handles them with no fuss. They do not matter.
Take a negative ECS or a 20 C positive ECS. Apply the concept. No past evidence of either? Scrub them and add in a low positive ECS of 0.1 C and a high ECS of 12 C. Still not possible? Move up into the real ranges. Will it be 3 C?
Who knows, but the method, if used by Nic or Stevens, Sherwood, Bony and Webb, does not care about the starting point. Every bit of informative evidence improves the prior towards the right range. As Dikran says, “an objective method that incorporates the laws of physics, observations of paleoclimate and modern observations is needed”.
This is scientific, practical and another bit of the puzzle. Embrace it. Point out to Nic which bits you feel he has missed out and incorporate them as well. If you do not like his ratings, that very salient observation often made to me by Mosher springs to mind: do the work yourself.

JCH says:    March 31, 2017 at 10:38 pm    “To me, the very most important thing is… who was the first person to speculate that 2017 has the stuff to become the 4th warmest year in a row?”
A tautology really. A bit like those guys who continually predict that the stock market is going to crash and then are right, heroes once every twenty years, except no-one is happy with them. Let me guess, one of them would be JCH. Please do not send out a prize.
“I do think that we should avoid attacking those who present alternative views;”
“I’m certainly in favor of criticizing what others say when it’s clear that what they’re presenting is not consistent with the best evidence available today.”
Criticizing is different to attacking, but not always perceived the correct way by those on the end of the criticism.
I would love to engage with and dissent from JCH on his view that 2017 will be the warmest on record. There are several caveats, which record, which medium, air or ocean, but I have learned from past experience in a warming world not to be too sanguine. The moment one commits to these notions is the moment one loses. So I will watch with interest and hope the outcome of this year, and remind JCH, if he is wrong, next year here perhaps? Or appear and get shellacked. My very modest and wrong observation would be that given the fall in ocean temperature from 2015/2016 there would have to be a marked El Nino for this year to have any hope at all of coming within a cooee of 2016 or 2015.

jch helps

Thanks JCH. It is easier for you I think as you have done more work on it but it is a very complicated scenario when one tries to get into it.
Atmosphere emits 169 to space
atmospheric window – a. 40 to space is from the surface
[some infrared radiation from the cloud tops and land-sea surface pass directly to space without intermediate absorption and re-emission. A large gap in the absorption spectrum of water vapor, the main greenhouse gas, is most important in the dynamics of the window. Other gases, especially carbon dioxide and ozone, partly block transmission.]
b. clouds emit 30 to space; the picture shows 30 from clouds, 40 from the surface, an overall window of 70.
———————————————
to add to  239
==========================
Sun delivers 341     not absorbed – 102    sw delivered for absorption – 239
===============
I think: SW – 78, LW – 97. Not sure how you get this figure. The sun is very bright and I would have thought it would be mostly shortwave, though the cold tongue edges of the sun’s plasma might go out far enough to emit some longwave if they cooled sufficiently. SW might reflect more than LW, so it may be the major part of reflected light.
The 78 [SW/LW] absorbed by the atmosphere at varying levels does some reheating on its way back out. It makes the atmosphere hotter but technically never reaches the land or ocean.
LW – 97: this seems to be the figure for thermals, 17, and latent heat, 80. I agree this would be emitted as LW, but it would seem that these are energy packets that are transported high up before they emit and hence probably do not contribute much to rewarming. I do not know how you would account for them in terms of the energy going back to the ocean or land. My gut feeling is that if they are emitting high enough they should not be counted as reheating the surface, although they obviously do lead to reheating of the air locally on the way out and raise the overall temperature, which probably leads to more uptake and release of energy by CO2 lower down.
Technically they are not part of the 333 back radiation effect, though as said they contribute to making it happen.
———
absorbed – 175     [Should be 161 absorbed at surface]
LW to space – (30) [This is the LW the clouds emit to space, see above]
and  atmospheric window – a. 40 to space is from the surface
————————
back to surface – 145   [should be 91 if amended figures OK]
================
recycle:

lw absorbed 333/145 = 2.3 times         [? 333/91 = 3.7]
Will stop here as I do not think the model is trying to show an energy imbalance at all, just trying to balance input and output, as no CO2 increase is postulated.
Thank you for the figures and for trying to work it out as well; I am still struggling with it.
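For what it is worth, a rough bookkeeping check in Python, using the standard Trenberth/Fasullo/Kiehl-style figures that these numbers appear to be drawn from (that sourcing is my assumption, and they differ slightly from the amended figures above):

```python
# Energy-budget bookkeeping check (all values W/m^2, standard TFK-style figures).

solar_in = 341
reflected = 102
absorbed_sfc_sw = 161
absorbed_atm_sw = 78

surface_lw = 396      # total LW emitted by the surface
window = 40           # surface LW escaping directly to space
thermals = 17
latent_heat = 80
back_radiation = 333
atm_to_space = 169    # LW emitted to space by the atmosphere
cloud_to_space = 30   # LW emitted to space by cloud tops

# Top of atmosphere: absorbed solar vs outgoing longwave
absorbed_solar = solar_in - reflected
olr = window + cloud_to_space + atm_to_space
print(f"TOA: absorbed {absorbed_solar}, outgoing {olr}")   # 239 vs 239

# Surface: gains (SW + back radiation) vs losses (LW + thermals + latent heat)
print(f"surface: in {absorbed_sfc_sw + back_radiation}, out {surface_lw + thermals + latent_heat}")

# Atmosphere: gains (SW + thermals + latent + absorbed surface LW) vs losses
atm_in = absorbed_atm_sw + thermals + latent_heat + (surface_lw - window)
atm_out = back_radiation + atm_to_space + cloud_to_space
print(f"atmosphere: in {atm_in}, out {atm_out}")
# Each budget closes to within ~1 W/m^2 (the small net imbalance), which fits
# the reading that the diagram is only balancing input and output rather than
# showing a CO2-driven imbalance.
```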

 

angech says:

Your comment is awaiting moderation.

Andrew Dodds says:
“angech I quoted direct numbers for the volcanic vs. solar heat fluxes. Then you come back with a direct lie that I said there was ‘no natural heat effect’. Please correct that.”

Did I misinterpret the line that followed?
“If volcanic heating was non-negligable, you’d expect to see convection cells with upwelling over the mid-ocean ridges, where the majority of volcanic heat enters the oceans. You don’t.”

Leto: “But because this tiny quantity of heat is on the ABC side of the ledger”. Yes, there is an ABC side. There are a lot of non-negligible issues which, though small, may add up to concerns.

Andrew raises an interesting point however re the TOA radiative imbalance. The amount of energy coming in is supposed to equal the energy going out. An increase in GHG means there should be a temporary imbalance. Yet if the earth is putting out an extra 47 TW [geothermal heat], should there not be a radiative imbalance the other way?
Incoming solar energy: 173,000 TW. Hence outgoing should equal 173,047 TW. Is this ever taken into account?
I would feel this should make any balancing, as in ECS, much more rapid than people are actually saying it is. Mosher? ATTP?
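Some quick arithmetic on that question (my own estimate, not anyone's published attribution):

```python
# How big is 47 TW of geothermal heat, per square metre, next to the solar input?

EARTH_AREA = 5.1e14   # m^2
SIGMA = 5.67e-8       # Stefan-Boltzmann constant, W m^-2 K^-4

geothermal_flux = 47e12 / EARTH_AREA        # ~0.09 W/m^2
solar_flux = 173000e12 / EARTH_AREA         # ~340 W/m^2
print(f"geothermal: {geothermal_flux:.3f} W/m^2, solar: {solar_flux:.0f} W/m^2")
print(f"ratio: {geothermal_flux / solar_flux:.4%}")

# Extra surface warming needed to radiate away ~0.09 W/m^2, treating the surface
# as a black body near 288 K (a crude assumption):
T = 288.0
dT = geothermal_flux / (4 * SIGMA * T ** 3)
print(f"roughly {dT:.3f} K of additional surface temperature")
```

So the 47 TW is part of the outgoing budget, but at around 0.03% of the solar term, and a couple of hundredths of a degree, it disappears into the rounding of the larger fluxes, which is presumably why it is rarely shown explicitly.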

airborne

The ‘airborne fraction’ (atmospheric increase in CO2 concentration / fossil fuel emissions): from 1959 to the present the airborne fraction has averaged 0.55; the terrestrial biosphere and the oceans together have consistently removed 45% of fossil CO2 for the last 45 years.

“Basically, in equilibrium, the amount of dissolved inorganic carbon (DIC) in the ocean determines the partial pressure of CO2 and, hence, the atmospheric CO2 concentration via Henry’s Law”
This law works both ways.
In other words the partial CO2 pressure determines the DIC as well, which is important for this discussion.

The amount of extra CO2 added to the atmosphere by human activity, while significant, and let’s say cumulative to some degree, is still a small fraction of the total atmospheric CO2, 720 GT, and the roughly 190 times greater DIC [136,800 GT of CO2].
[ we’re dealing with a coupled system, so if you add new material to one of the reservoirs, it will rise in all reservoirs]
Atmospheric CO2 is 720 GT at 400 ppm, which is about 1/190 of the ocean equivalent.
If you increased it to 560 ppm, 1008 GT, a 40% increase, the amount of CO2 in the ocean would have to increase by about 4%, or roughly 5,500 GT.
At 30 GT a year human contribution that would take nearly 200 years, and that is providing the DIC did stay in solution and not precipitate out in part.
Did a mere 600 GT raise the DIC of the oceans from
120 ppm rise suggests a

However, there is a more formal way to show this. I recently worked through the ocean carbonate chemistry. It turns out that there is a factor called the Revelle factor, which is simply the ratio of the fractional change in atmospheric CO2, to the fractional change in total Dissolved Inorganic Carbon (DIC) in the oceans:

R = \dfrac{\Delta pCO_2/pCO_2}{\Delta DIC/DIC}.

The Revelle factor is about 10, which means that the fractional change in atmospheric CO2 will be about 10 times bigger than the fractional change in DIC. What this tells you straight away is that you can’t change the amount of CO2 in the oceans without also changing the amount in the atmosphere; stabilising emissions will not stabilise concentrations.
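A short worked example of that relation, using the reservoir sizes already used earlier in these notes (my own arithmetic, not ATTP's; the 720 GT and 136,800 GT figures are simply carried over from above):

```python
# Revelle factor relation: R = (d pCO2 / pCO2) / (d DIC / DIC), with R ~ 10.

R = 10.0
pco2_now, pco2_later = 400.0, 560.0   # ppm
atm_gt = 720.0                        # GT of CO2 in the atmosphere, as used above
ocean_dic_gt = 136800.0               # GT of CO2 as ocean DIC, as used above

frac_pco2 = (pco2_later - pco2_now) / pco2_now   # 0.40, a 40% rise
frac_dic = frac_pco2 / R                         # ~0.04, a 4% rise in DIC

print(f"atmosphere: +{frac_pco2:.0%} (~{atm_gt * frac_pco2:.0f} GT)")
print(f"ocean DIC:  +{frac_dic:.0%} (~{ocean_dic_gt * frac_dic:.0f} GT at equilibrium)")

# The ocean still takes up far more mass than the atmosphere gains, but a much
# smaller fraction of its reservoir, which is the content of the factor of ~10.
```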

The residual airborne fraction increases from about 15% for emissions of 100s of GtC (we’ve already emitted 600 GtC) to almost 30% if we were to emit as much as 5000 GtC.

Now, maybe if the fractional change in DIC is small enough, the fractional change in pCO_2 might also be small enough to essentially stabilise concentrations. However, we know the quantities in the various reservoirs, and we’ve already emitted enough CO2 to change the DIC by 1 – 2%, and – hence – the atmospheric CO2 concentration by 10 – 20%. If we stabilise emissions, we could easily change the DIC by a further 1 – 2%. In fact, we have sufficient fossil fuels to change it by more than 10% and, therefore, enough to change the atmospheric concentration by more than 100% (i.e., to, at least, double atmospheric CO2).

There is, however, something I’m slightly glossing over, so will try to clarify a little more. The above is based on an equilibrium calculation. In other words, it is the changes once the system has regained a quasi-steady equilibrium. Our emissions are continually pushing the system out of equilibrium and so the fractional change in atmospheric CO2 is actually greater than what the Revelle factor would suggest. Given what we’ve already emitted, we would expect about 20% of our emissions to remain in the atmosphere, but it’s currently more like 45%. This is because the timescale for ocean invasion is > 100 years, and so the system hasn’t yet had time to return to equilibrium.

 

hypergeometric says:
“The internal variability effect? Is it buried within the albedo effect, or the partial of outgoing longwave with respect to temperature?”

As you imply Internal variability is due to a multitude of factors. Some one off, some repeatable, some cyclical. If we knew enough about the actual causes to model them correctly we could remove some of them from the larger Natural Variability uncertainty range.

Your two examples show why there may not be a paradox for Willard. The atmosphere changes with two different GHGs both increasing, but only one has purely absorptive/radiative properties; the other, H2O, is unique in that it causes increasing reflectivity with increasing concentration [increased albedo]. This means it is not a simple case of gas absorption/emission physics with positive feedbacks but a complex one of reducing external input as internal energy retention goes up.
It is this dynamic that enables one to propose two climate sensitivities: one with high variability at a low CO2 level and one with reduced CS when CO2 levels double.

The insistence that the CS plus positive feedback stays relatively the same at all CO2 levels is not held by most scientists; people here have argued for some variability of CS with different conditions as a reason for it being hard to pin down, but most assume it must be relatively the same at different levels of increasing CO2. Take away this assumption and the paradox disappears.

Thanks for the link to your site, looks interesting.

Length of time

Victor talks about the length of the data at his linked post. One comment is
” With “at least 17 years” Santer does not say that 17 years is enough, only that less is certainly not enough. So also with Santer you can call out people looking at short-term trends longer than 17 years.”
He then states: “Sometimes it is argued that to compute climate trends you need at least 30 years of data, that is not a bad rule of thumb and would avoid a lot of nonsense”,
but it is a rule of thumb only.
30-year periods are ideal for claiming global warming: just long enough to see “significant” trends but too long for anyone to ever claim a “pause”.
So game over. Define a length of time longer than your opposition’s argument and you cannot lose.
Pause, what pause?
But using the same logic one could say we need 100 years; what then of global warming?
By the same implacable logic [see the anti-hiatus of the last 10 years], the trends are now too short and become statistically insignificant.
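To see why those rule-of-thumb lengths matter at all, here is a rough Monte Carlo sketch (my own illustration; the trend and noise values are assumed, the noise is treated as white, and real interannual variability is autocorrelated, which makes short windows even less reliable):

```python
# How often does a real underlying trend pass a crude significance test,
# as a function of record length? (Illustrative numbers only.)
import numpy as np

rng = np.random.default_rng(0)
true_trend = 0.017   # K per year, roughly the recent warming rate (assumed)
noise_sd = 0.10      # K, white-noise stand-in for interannual variability (assumed)
n_sims = 2000

for n_years in (10, 17, 30, 100):
    t = np.arange(n_years)
    detected = 0
    for _ in range(n_sims):
        y = true_trend * t + rng.normal(0.0, noise_sd, n_years)
        slope, intercept = np.polyfit(t, y, 1)
        resid = y - (slope * t + intercept)
        se = np.sqrt(np.sum(resid ** 2) / (n_years - 2) / np.sum((t - t.mean()) ** 2))
        if abs(slope) > 2 * se:   # crude ~95% significance criterion
            detected += 1
    print(f"{n_years:3d} years: trend 'significant' in {detected / n_sims:.0%} of runs")
```

The same arithmetic cuts both ways: windows short enough to show a "pause" are also too short to show anything statistically, which is the point of the Santer 17-year and the 30-year rules of thumb.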

angech says:

Your comment is awaiting moderation.

Willard says:
” This is essentially the point. It is paradoxical to argue for high sensitivity to internally-driven warming AND low sensitivity to externally-driven warming” .

Yes. The question though is whether Climate Sensitivity to CO2 is restricted to just the known response to CO2 doubling or whether other factors, other GHG such as water vapour are linked in such a way that Climate Sensitivity to CO2 itself is amplified or damped by the effect of the temperature change on the other volatile GHG, water.

The result of that is that one might have a different high or low sensitivity to CO2 doubling depending on the amount of water vapor at a specific temperature. This could possibly* lead to large swings in temperature in a low CO2 [our] world but a much more damped response at higher levels of CO2[and temperature].

*unlikely but removes a paradox.

clouds

SoD,
A question on the nature of clouds and albedo. Does all water vapor reflect SW, or does there need to be an aggregate size with a boundary to give reflection? The reason I ask is that clouds may be a misnomer for water vapor in general; that is, the albedo effect may be directly correlated with the amount of water molecules in the air. Hence there would be a cloud effect even when there are no clouds. What could be more important is just the humidity level itself, which I presume can be worked out by the satellites.
I presume this has been investigated but would value your input.


How To Troubleshoot NBN Issues

by Technical Support, 15-08-2014 12:11 PM, edited 03-08-2016 09:55 AM

Brief Version

  • Power off your T-Gateway modem, wait for 30 seconds and then power it back on. Give it 2 minutes to reconnect to the internet.
  • Power off your NBN NTD, by turning it off at the power point on the wall. Wait for 30 seconds and then power it back on. Give it 2 minutes to reconnect to the internet.
  • Reset the T-Gateway to factory default settings. To do this get a paperclip, or something similar, and press and hold it into the reset hole at the back of the T-Gateway; keep it pressed in for 15 seconds. Give it 2-5 minutes to reconnect to the internet. Alternatively you may prefer to do this via the modem’s interface http://10.0.0.138
  • Call to report the fault on 1800 TFIBRE (1800 834 273).

The radiative forcing due to clouds and water vapor

From “The radiative forcing due to clouds and water vapor”, V. Ramanathan and Anand Inamdar,
Center for Atmospheric Sciences, Scripps Institution of Oceanography:
“As the previous chapters have noted, the climate system is forced by a number of factors, e.g., solar impact, the greenhouse effect, etc. For the greenhouse effect, clouds, water vapor, and CO2 are of the utmost importance. ... the data needed to understand two fundamental issues in radiative forcing of climate: cloud forcing and atmospheric greenhouse effect. Clouds reduce the absorbed solar radiation by 48 W m-2 while enhancing the greenhouse effect by 30 W m-2 and therefore clouds cool the global surface–atmosphere system by 18 W m-2 on average. The mean value of C is several times the 4 W m-2 heating expected from doubling of CO2 and thus Earth would probably be substantially warmer without clouds.”
I take these authors to be saying that clouds and water vapor should be considered in radiative forcing, not ignored; that they give a negative feedback according to the best science on offer; and that this needs to be taken into account in assessing ECS.
How long something resides in the atmosphere is different from how much of it is residing at any one time in the atmosphere, which is the basis on which RF of the atmosphere needs to be assessed.
Just asserting one can ignore it does not mean one can ignore it.

U3A Talk

This is a talk on life as well as on common medical controversies.
People are generally well meaning.
Treat others as we would like ourselves to be treated is a motto that most of us work by.
So where does it go wrong?
Well, I guess we all like to be treated in different ways.

I like to think on the alternatives in life, in the choices we make, why we make them and the consequences both obvious and hidden.

Diabetes
Coeliac disease
Vaccinations.
Screening tests.
The big ones here are Prostate Cancer, Breast Cancer and Bowel Cancer. Why screen at all?

A silly question which carries a sting in the tale.

To detect cancer, and other diseases early enough to be able to do something about them.
Diseases exist in our community that develop over time and cause serious health problems.
Tests exist to detect these diseases.
Simple enough.
It has nothing to do with preventing such diseases.
Having a screening test does absolutely nothing to prevent a disease occurring or even worse being present when you do the procedure and being missed.

Who has screening tests? People at risk.

Family History . Population History. Occupational History , Exposure History.

A screening test has to be useful. It has to have a high sensitivity and high specificity. This means that it is able to detect problems early enough [sensitivity], and accurately enough [specificity].

It has to diagnose conditions that are dangerous, reasonably common and hopefully treatable, without causing problems of its own worse than what one is trying to treat; in other words it has to be safe.

Screening can be done by endoscopy, faecal and urine sampling, blood tests, and radiological procedures such as X-ray, CT, bone scan and MRI, to mention a few.

Pitfalls of testing.

Cost: the equipment is all there but some tests are very expensive, new tests in particular. Cheapest, but still not cheap, is urine testing for sugar, protein and blood.

Time: some tests, like 24-hour ECGs and bone scans, can take days to do.

Patients often have limited time themselves to do the test in.

Discomfort

Danger

Loss of results

Wrong results

Statistical use of the results.

Endoscopy, looking inside the body through a tube, is a good way of detecting cancers. Lung, throat, Stomach, bladder  and bowel cancer can be detected this way.

Who here volunteers to have a colonoscopy every 6 months? Or a bronchoscopy?

Questions

Would you have all of them every 5 years? What would be a good age to start?

Not many. Reasons: discomfort, embarrassment, time, risk.

Having a procedure often involves having an anaesthetic. The risk of dying from an anaesthetic increases with age and other medical conditions but can occur in healthy young people as well, somewhere between 1 in 5,000 and 1 in 10,000. There is also the risk of the procedure itself: perforation of the bowel or bladder, rupture of the oesophagus, damage to the vocal cords, aspiration and pneumonia, urethral stricture with a urethroscopy.

wrong diagnosis, no diagnosis, missed diagnosis and lost specimen.

Import of the result and use of statistics. This is one that I have great difficulty in understanding.

Say that a mammogram will diagnose a breast cancer with 95% accuracy.

You have a patient who comes to you with a positive result from a screening procedure.

What should you tell her? As the patient, if you are told it has 95% accuracy, what does this mean?

Murky waters lie ahead. The false positive paradox is a statistical result where false positive tests are more probable than true positive tests, occurring when the overall population has a low incidence of a condition and the incidence rate is lower than the false positive rate. When the incidence, the proportion of those who have a given condition, is lower than the test’s false positive rate, even tests that have a very low chance of giving a false positive in an individual case will give more false than true positives overall.[2] So, in a society with very few infected people—fewer proportionately than the test gives false positives—there will actually be more who test positive for a disease incorrectly and don’t have it than those who test positive accurately and do. The paradox has surprised many

Low-incidence population

Number of people    Infected              Uninfected             Total
Test positive       20 (true positive)    49 (false positive)    69
Test negative       0 (false negative)    931 (true negative)    931
Total               20                    980                    1000

Now consider the same test applied to population B, in which only 2% is infected. The expected outcome of 1000 tests on population B would be:
Infected and test indicates disease (true positive): 1000 × 2/100 = 20 people would receive a true positive.
Uninfected and test indicates disease (false positive): 1000 × (100 - 2)/100 × 0.05 = 49 people would receive a false positive.
The remaining 931 tests are correctly negative.

In population B, only 20 of the 69 total people with a positive test result are actually infected. So, the probability of actually being infected after one is told that one is infected is only 29% (20/(20 + 49)) for a test that otherwise appears to be “95% accurate”.

A tester with experience of group A might find it a paradox that in group B, a result that had usually correctly indicated infection is now usually a false positive. The confusion of the posterior probability of infection with the prior probability of receiving a false positive is a natural error after receiving a life-threatening test result.
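The same calculation written out as a tiny function (a sketch of the standard base-rate arithmetic, using only the figures already quoted: a 2% infected population and a 5% false positive rate; the table above also assumes no missed cases):

```python
# Probability of actually having the condition given a positive test (PPV).

def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

ppv = positive_predictive_value(prevalence=0.02, sensitivity=1.00, specificity=0.95)
print(f"chance a positive result is real: {ppv:.0%}")   # ~29%, matching 20/(20 + 49)

# With 95% sensitivity instead of the table's implicit 100%, the answer barely
# changes (~28%); the low prevalence, not the test accuracy, drives the result.
```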

As breast cancer in the general female population is a relatively low-incidence disease, the chance that a positive result is a false positive is quite high.

Herein lies another risk: traumatising people with false positives and equivocal results. Because one does not like to rule out cancer conclusively if some doubt is present, up to 15% of women who have a screening mammogram undergo further procedures, usually invasive biopsies, with pain, discomfort, bruising and, rarely, infection.

Not to mention the 5% who have a false negative, also not as common as it sounds.

Finally, cancers start incredibly small, are incredibly fast growing and have usually been present for 6-18 months before getting big enough to be detected. So one may have a bowel or breast cancer present at the time of screening, have it missed due to its size, and present with a large cancer 6 months later.

Finally removal of the cancer does not guarantee that the cancer has been successfully treated. Cancers can spread [metastasize] before removal. Melanoma is an example.

On the other hand, BCC usually grows locally only, but very aggressively; in some poor souls it does metastasize. SCC has a slightly higher spread rate than BCC, which is why it needs removal and follow-up.

 

Prostate Cancer is an enigma which is treated in an ageist and sexist manner by most people and the medical profession. It has none of the media appeal of breast cancer,  the young mother cut down in her prime with dependent children and husband.

Instead, an older man has a blood test at the insistence of his wife and gets told his PSA is up. There is still the hope of a wrong diagnosis, as there are multiple other causes. He may have an infected prostate, a very high reading, 20+, treatable with antibiotics. He may have BPH [benign prostatic hyperplasia], an example of the word benign not really meaning what you think it does. At least it is not cancer; and after 15 hours of agony not being able to pass urine, a catheter is inserted by a first-year resident, hopefully in the right place with no perforation, and one is on the way to a TURP, an onion peeling from the inside, with the risk of loss of sexual function, not that it matters for an old guy anyway.

Next he has a transrectal biopsy, with 12 needles taking samples, and then he is told the bad news.

Bad news? We are doing nothing for you. Most men with prostate cancer are so old they die with it, not because of it. Prostate cancers can be very slow growing [Which type have I got? We don’t know]. Your life span is too short to worry about it. The treatments cause loss of sexual function, baldness and you will go blind.

Only joking.

The facts are that prostate cancer is treatable. One can have surgery, radiotherapy or a combination.

Prostate cancer does spread rather early so  prevent developing cancer or to
Males generally get cancer later than females

I would like to present an overview of the screening dilemma.

I will reference my talk to certain illnesses and conditions that are common in the community and a few that are not. Along the way I will mention several misconceptions about these conditions. I welcome relevant questions but will deal with most in the breaks or afterwards.

Medicine is concerned with the health of individuals first and then with the health of populations. It consists of both diagnosis and treatment of medical conditions. Originally these were of the body, but those of the psyche then followed. As new treatments and medications were developed, medicine became an enlarging field with greater expectations of keeping people healthy.

Means of diagnosis improved. At first these were only applied to people with illnesses, but then the ability to look for problems ahead of time became apparent and screening was born.

Screening is a form of diagnosis that came late to medicine. Tests on urine for protein and blood were possible. Blood tests began being used around the start of the 20th century, and X-rays were developed.

Progress was so slow that the testing strips we now use routinely, with 10 tests on them, were still a 2-test novelty 40 years ago.

Chest X-rays (CXRs) for TB detection were among the first general screening projects undertaken.

This dreaded disease, still present today, was reduced a hundredfold with detection, treatment and isolation. Screening was abolished in the 1970s in Australia for 3 reasons which are still valid today: cost, radiation exposure [side effects] and near elimination of the condition. In one funny twist of fate, due to litigation and over-investigation a large percentage of the population still have CXRs for other reasons, which equates to a de facto screening program.

The modern health problems are those of living longer: diabetes, cancer and heart disease [stroke]. These conditions all increase with age and all reduce life expectancy greatly. They have an enormous impact at any time but more so when the sufferers are young.

So to screening.

The ideal screening test is something non-invasive, very reliable, easy to do and producing a treatment outcome beneficial to the patient.

Bowel Cancer.

A sample of poo, well 2 or 3 actually a day apart preferably after avoiding meat in the diet an