You state that «based on this logarithmic relationship (still valid today) Broecker assumes a climate sensitivity of 0.3 °C warming for each 10% increase in CO2 concentration, which amounts to 2.2 °C warming for CO2 doubling.»
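As a quick check of Broecker's arithmetic (our worked example, not part of the quote): a doubling is about ln 2 / ln 1.1 ≈ 7.3 successive 10% increases, so

$$\Delta T_{2\times} \approx 0.3\,^{\circ}\mathrm{C} \times \frac{\ln 2}{\ln 1.1} \approx 0.3 \times 7.27 \approx 2.2\,^{\circ}\mathrm{C}.$$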
That uncertainty is represented in the latest crop of global climate models, which assume a climate sensitivity of anywhere from about 3 to 8 degrees F.
All risk numbers given above assume the climate sensitivity pdfs to be truncated at 10 K, if necessary.
«Assuming a climate sensitivity of 0.7 K/(W/m²), this would contribute less than 0.06 °C of the estimated 0.6 °C mean global warming between the Maunder Minimum and the middle of last century, before significant anthropogenic contributions could be involved.»
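Reading that estimate backwards (our arithmetic, assuming the quoted sensitivity): with ΔT = λΔF and λ = 0.7 K/(W/m²), a contribution below 0.06 °C implies an assumed solar forcing change since the Maunder Minimum of

$$\Delta F < \frac{0.06\ \mathrm{K}}{0.7\ \mathrm{K\,(W/m^2)^{-1}}} \approx 0.09\ \mathrm{W/m^2}.$$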
As noted earlier, our main conclusions are insensitive to the precise details of the forcing estimates used, the volcanic scaling assumptions made, and the precise assumed climate sensitivity.
Furthermore, if the U.S. reduced its CO₂ emissions by 100% it would only avert 0.137 °C of temperature rise by 2100, according to this (assuming a climate sensitivity of 3 °C).
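A minimal sketch of this kind of back-of-envelope estimate (in Python; the 2100 concentration values below are illustrative assumptions, not the linked calculator's internals):

```python
import math

def equilibrium_warming(co2_ppm, co2_preindustrial=280.0, ecs=3.0):
    """Warming (degC) for a given CO2 level, assuming the standard
    logarithmic forcing response and an ECS of 3 degC per doubling."""
    return ecs * math.log(co2_ppm / co2_preindustrial, 2)

# Hypothetical 2100 concentrations: a baseline of 700 ppm versus a
# slightly lower path with one emitter's CO2 removed (values assumed).
baseline_ppm, reduced_ppm = 700.0, 680.0
averted = equilibrium_warming(baseline_ppm) - equilibrium_warming(reduced_ppm)
print(f"averted warming: {averted:.3f} degC")  # ~0.13 degC for these inputs
```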
Occam's razor would have told Hansen from the start that since the observations only showed half as much warming as his models had predicted, his models were obviously wrong (assumed climate sensitivity = too high).
Figure 3 shows the same records, with the addition of the results from the average models from the Forster study, the results that the models were calculated to have on average, and the results if we assume a climate sensitivity of 3.0 °C per doubling of CO2.
«Let's start by assuming climate sensitivity is low; burning coal and petroleum is bad!»
Let me rephrase: paleoclimate implies that a person would be stupid to assume a climate sensitivity of zero.
This has become one of the biggest mysteries and most controversial issues in climate science today, throwing doubt over the assumed climate sensitivity to CO2.
So if the assumed climate sensitivity is high by a factor of two or even three (as it now appears likely), the projected warming by 2100 will only be an imperceptible 0.3 to 1.3 °C, and really nothing to worry about at all.
Michael Mann, a meteorology professor at Penn State who was not involved with the study, said it's «speculative» but «plausible» that global climate models have been underestimating climate sensitivity by assuming too much cloud glaciation.
So in order to constrain the climate sensitivity from the paleo-data, we need to find a period under which our restricted subsystem is stable, i.e. all the boundary conditions are relatively constant, and the climate itself is stable over a long enough period that we can assume that the radiation is pretty much balanced.
Hi, I don't mean to turn this into yet another sceptic thread, but I've read on another site that there apparently are doubts about current models assuming that climate sensitivity is constant.
Our conclusions related to the «over-estimated» response could be dealt with using a 2x decrease in the assumed forcing, or a 2x decrease in the climate sensitivity, or a 2x increase in the low frequency change in the proxies (although not all at once!).
Hansen's model assumed a rather high climate sensitivity of 4.2 °C for a doubling of CO2.
But as far as I can tell, most sceptics don't flat out deny greenhouse gas warming, but they incorporate their «extra» forcing by assuming a lower climate sensitivity.
The real «equilibrium climate sensitivity,» which is the amount of global warming to be expected for a doubling of atmospheric CO2, is likely to be about 1 °C, some three times smaller than most models assumed.
All this discussion of the Schmittner et al paper should not distract from the point that Hansen and others (including RichardC in #40 and William P in #24) try to make: that there seems to be a significant risk that climate sensitivity could be on the higher end of the various ranges, especially if we include the slower feedbacks and take into account that these could kick in faster than generally assumed.
Note that the observational approach needs to assume a constant climate sensitivity between different states, whereas perturbed physics ensembles don't (though you still need to understand what feedback processes are important between different climate states to have confidence in the results).
He did explain clearly to me why climate sensitivity is, as he calculates it, one degree (here's how: twin each CO2 molecule, two for one, with nothing else changing; assuming all else is held constant, he's right; not in the real world, but in theory, correct).
Note that (somewhat confusingly) she *assumes* the attribution is 100% in her papers on estimating climate sensitivity.
Most discussions of the climate sensitivity in the literature implicitly assume that these are fixed.
This kind of forecast doesn't depend too much on the models at all; it is mainly related to the climate sensitivity, which can be constrained independently of the models (i.e. via paleo-climate data), moderated by the thermal inertia of the oceans and assuming the (very likely) continuation of CO2 emissions at present or accelerated rates.
Of course, these evaluations rely on the models being able to mimic the sensitivity of the real climate system and assume that paleoclimatic reconstructions of the temperature do adequately describe the past climate variations.
They perform a probability calculation assuming that any of the climate sensitivities in the IPCC range are equally likely.
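A toy version of that kind of calculation (a sketch only; the range endpoints and forcing level below are assumed for illustration, not taken from the paper):

```python
import random

random.seed(0)
S_LOW, S_HIGH = 1.5, 4.5   # degC per CO2 doubling (illustrative IPCC-style range)
DOUBLINGS = 0.8            # assumed effective CO2 doublings realised by 2100
N = 100_000

# Draw sensitivity uniformly (every value in the range equally likely),
# then count how often the implied warming exceeds 2 degC.
exceed = sum(1 for _ in range(N)
             if random.uniform(S_LOW, S_HIGH) * DOUBLINGS > 2.0)
print(f"P(warming > 2 degC) ~ {exceed / N:.2f}")   # ~0.67 for these inputs
```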
Just to follow up on John Finn's question (#10), if one puts in a rough value for the emissivity of the earth (whatever that might be), so one is no longer assuming it is a perfect blackbody, then does the resulting estimate for climate sensitivity correspond to what one would expect in the absence of any feedback effects?
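For reference, the standard back-of-envelope answer to that question looks like this (a sketch under textbook assumptions; set EPSILON below 1 to drop the blackbody idealisation):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_EFF = 255.0     # effective emission temperature, K
EPSILON = 1.0     # emissivity; 1.0 is the perfect-blackbody case

# Outgoing flux F = epsilon * sigma * T^4, so dF/dT = 4 * epsilon * sigma * T^3.
planck_response = 4 * EPSILON * SIGMA * T_EFF**3   # ~3.8 W/(m^2 K)
lambda_0 = 1.0 / planck_response                   # ~0.27 K/(W/m^2)

# Applying the canonical ~3.7 W/m^2 forcing for doubled CO2:
print(f"no-feedback warming for 2xCO2: {3.7 * lambda_0:.2f} K")  # ~1 K
```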
Additionally, they take the ratio of temperature change to CO2 change in the ice core record and assume that is the sensitivity of climate to CO2, as opposed to the other way around.
But Annan and Hargreaves have argued that this gives too much weight to very high values of climate sensitivity; after all, it makes no sense to assume as a prior that climate sensitivity of 3 deg.
Transient climate sensitivity: the global mean surface-air temperature change achieved when atmospheric CO2 concentrations reach a doubling over pre-industrial CO2 levels, increasing at the assumed rate of one percent per year, compounded.
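A side note on the timescale built into that definition (our arithmetic, not part of the glossary entry): at 1% per year compounded, the doubling is reached after

$$t_{2\times} = \frac{\ln 2}{\ln 1.01} \approx 70\ \text{years}.$$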
I assume you are using a climate sensitivity of 3 degrees, but the amplification values seem a little high.
We should not assume that the climate sensitivity is constant either, unless there are studies suggesting it is.
Assuming a 50-50 chance that climate sensitivity is at or below this value, we thus have a 50-50 chance of holding warming below 2 °C if cumulative emissions are held to a trillion tonnes.
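The carbon-climate response implicit in that statement (our reading of the trillionth-tonne framing, not a figure stated in the comment):

$$\Delta T \approx \mathrm{TCRE} \times E_{\mathrm{cum}},\qquad \mathrm{TCRE} \approx \frac{2\,^{\circ}\mathrm{C}}{1000\ \mathrm{GtC}},$$

i.e. roughly 2 °C per trillion tonnes of carbon emitted.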
If climate sensitivity to CO2 is eventually shown (rather than just assumed) to be close to the sensitivity to solar, I think a case can then be made that the GHG attribution should be equal or higher than the solar attribution, despite the large uncertainty in our knowledge of the increase in solar forcing.
Cox et al.'s calculations of the equilibrium climate sensitivity used a key metric which was derived from the Hasselmann model and assumed a constant heat capacity C.
The study of «climate sensitivity» traditionally assumes no change in ice sheets and no GHG feedbacks.
If a doubling of CO2 resulted in a temperature increase of approximately 1 K before any non-Planck feedbacks (before water vapor, etc.), then assuming the same climate sensitivity to the total GHE, removing the whole GHE would result in about (setting the TOA/tropopause distinction aside, as it is relatively small relative to the 155 W/m² value) 155/3.7 × 1 K ≈ 42 K, which is a bit more than 32 or 33 K, though I'm not surprised by the difference.
Your estimates of climate sensitivity come from the IPCC, which assumes that aerosols will continue to provide a very strong cooling effect that offsets about half of the warming from CO2, but you are talking about time frames in which we have stopped burning fossil fuels, so is it appropriate to continue to assume the presence of cooling aerosols at these future times?
I assume the programmer of a model can adjust parameters and/or code which will at some point produce an output which can be interpreted as adjusted climate sensitivity.
Indeed, this was found to be true for any of several different published volcanic forcing series for the past millennium, regardless of the precise geometric scaling used to estimate radiative forcing from volcanic optical depth, and regardless of the precise climate sensitivity assumed.
[Response: Climate models (GCMs) calculate their climate sensitivities, they don't assume them.]
A good start might be a piece which justifies the high climate sensitivity which is assumed by the authors of the report and also apparently by climate models.
Since 1990, observed sea level has followed the uppermost uncertainty limit of the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (TAR), which was constructed by assuming the highest emission scenario combined with the highest climate sensitivity and adding an ad hoc amount of sea-level rise for «ice sheet uncertainty» (1).
They result in different predicted atmospheric CO2 levels by year 2100, which are then used, together with the assumed 2xCO2 climate sensitivity, to arrive at a net GH warming expected by year 2100.
Assuming a constant external forcing, different models would show different surface temperature changes, and so the climate sensitivity of different models would also be different.
That would be the Geophysical Fluid Dynamics Laboratory, which produced the lower «climate sensitivity» range (Manabe), which was «averaged» with the much higher GISS estimate to produce a high-end estimate that was assumed to be real science, when it was actually an average of WAGs.
However, this method assumes that the observed change in temperature since pre-industrial times is primarily a response to anthropogenic forcings, that all the other anthropogenic forcings are well quantified, and that the climate sensitivity parameter (Section 6.1) predicted by the GCM is correct (Rodhe et al., 2000).