I understand the argument that past projections are based on estimated future forcings which can change, but this amounts to the same thing as tuning hindcasts and declaring that matching a hindcast to observations is a validation of your model.
This is of course one big reason why climate science has focussed on this particular metric, because the models can do a reliable and credible (validated through hindcasting recent and paleo climates) job at it!
Come on, you KNOW the models can hindcast if the right parameters are put in, but they are pretty atrocious at forecasting.
Arnost's link to the Model E hindcast also illustrates how GCMs rely on volcanic aerosols to create inter-annual variability.
Climate model simulations confirm that an Ice Age can indeed be started in this way, while simple conceptual models have been used to successfully "hindcast" the onset of past glaciations based on the orbital changes.
Multiple models that all hindcast well diverge quickly when turned loose on the future, which doesn't give one a warm and fuzzy feeling.
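As a purely illustrative sketch of that divergence (every number and parameter below is invented for the example, not taken from any actual GCM), two toy tunings can be made to match the same historical warming and still separate sharply once the forcing mix stops evolving the way it did over the hindcast period:

```python
import numpy as np

years = np.arange(1900, 2101)
f_ghg = 0.02 * (years - 1900)                      # invented GHG forcing ramp (W m^-2)
f_aer = np.minimum(0.01 * (years - 1900), 1.0)     # invented aerosol forcing, flat after 2000

# Two hypothetical tunings (sensitivity in K per W m^-2, aerosol scaling),
# constructed so both give the same net warming rate over the hindcast period.
tunings = {"high sensitivity, strong aerosols": (1.5, 1.0),
           "low sensitivity, weak aerosols": (0.75, 0.0)}

for name, (lam, a) in tunings.items():
    temp = lam * (f_ghg - a * f_aer)               # toy equilibrium temperature response
    print(f"{name}: warming by 2000 = {temp[years == 2000][0]:.2f} K, "
          f"by 2100 = {temp[years == 2100][0]:.2f} K")
```

Both tunings hindcast the same 1.5 K of warming to 2000, yet their 2100 values differ by 1.5 K once the aerosol offset stops growing.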
If the models show a lack of skill and need tuning with respect to predicting (in hindcast) even the current climate statistics on multi-decadal time scales (much less CHANGES in climate statistics), they are not ready to be used as robust projection tools for the coming decades.
Model estimates of temperatures prior to 2005 are a "hindcast" using known past climate influences, while temperatures projected after 2005 are a "forecast" based on an estimate of how things might change.
doi:10.1007/s00382-012-1313-4, who report quite limited predictive skill in two regions of the oceans on the decadal time period, but no regional skill elsewhere, when they conclude that "A 4-model 12-member ensemble of 10-yr hindcasts has been analysed for skill in SST, 2m temperature and precipitation.
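For readers unfamiliar with how such skill numbers are produced, the sketch below computes one common score, the anomaly correlation between an ensemble-mean hindcast and observations; the data are synthetic and the setup only loosely mimics a 12-member, 10-yr hindcast, so none of it reflects the cited study's actual results.

```python
import numpy as np

def anomaly_correlation(hindcast, observed):
    """Pearson correlation of the two anomaly series, a standard hindcast skill score."""
    h = hindcast - hindcast.mean()
    o = observed - observed.mean()
    return float(np.sum(h * o) / np.sqrt(np.sum(h ** 2) * np.sum(o ** 2)))

rng = np.random.default_rng(0)
obs = np.cumsum(rng.normal(0.0, 0.1, 10))          # synthetic "observed" SST anomalies, 10 yr
ensemble = obs + rng.normal(0.0, 0.15, (12, 10))   # 12 synthetic members: signal plus noise
skill = anomaly_correlation(ensemble.mean(axis=0), obs)
print(f"ensemble-mean anomaly correlation: {skill:.2f}")
```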
Models fail to hindcast past climate on local, regional and global levels: http://joannenova.com.au/2012/05/we-cant-predict-the-climate-on-a-local-regional-or-continental-scale/
Particularly valuable, perhaps, is research related to the difficulty of avoiding confirmatory bias when testing models through hindcasting, since the model builders already have some, perhaps only qualitative, knowledge of the data to be used for testing when they develop the model.
Leif would be right if he could propose an alternative model based on internal solar dynamics alone that performs better than mine in hindcasting the solar patterns.
With respect to confidence in the future based on hindcasts of the past, I would only say that even with The Perfect Model (tm) the hindcasts can only be as good as the data they are given.
And of course, the predictions are based on computer models which cannot forecast or hindcast with any accuracy.
We don't even have the data needed to intelligently initialize the models we have got, and those models almost certainly have a completely inadequate spatiotemporal resolution on an insanely stupid, non-rescalable gridding of a sphere... the ongoing failure of the GCMs to actually predict or hindcast anything at all particularly accurately outside of the reference interval.
You know, all those model "hindcasts" that were validated based on HadCRUT and all the proxy studies that were calibrated on HadCRUT.
The sensitivity of the models is, as I think you are saying, constrained by their parametrizations, which are bounded by observational data on TOA radiation etc. (although not all very tightly constrained); but this is not what is being questioned about the models. Rather, the issue is whether the model hindcasts matching historical temperatures to some degree is evidence that they have correct physics, or is merely a result of modelers making the choices for inputs which will produce a reasonable result.
Obviously, climate models whose hindcasts differ in sign from what is observed (Zhang et al., 2007), or which indicate that human influences are indistinguishable from natural changes (Sarojini et al., 2012), possess no skill in identifying a human-induced climate signal in observed precipitation across the U.S. and therefore should not be used to make future projections.
That said, there does need to be more critical evaluation of models based on the physics they include and the accuracy of their hindcasts/forecasts.
We find a close agreement between the CESM-based hindcasts and the Markov model, indicating that the largest contribution to the predictive skill of soil water on interannual to decadal timescales in CESM can be attributed to the damped persistence, which is partly governed by the evapotranspiration (Delworth and Manabe 1988), the total runoff, and the diffusion of soil moisture into the deeper soil levels as shown in the Eq.
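A minimal sketch of the damped-persistence idea behind that Markov comparison (the persistence value, series length, and lead time below are arbitrary choices for illustration, not CESM diagnostics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate a red-noise (AR(1)) soil-water anomaly series: the simplest Markov model,
# in which an anomaly decays geometrically while being kicked by random forcing.
r_true, n = 0.85, 600
w = np.zeros(n)
for t in range(1, n):
    w[t] = r_true * w[t - 1] + rng.normal(0.0, 1.0)

# Estimate the persistence from the series itself (lag-1 autocorrelation) ...
r_hat = np.corrcoef(w[:-1], w[1:])[0, 1]

# ... and issue a damped-persistence hindcast: the anomaly simply decays as r_hat**lead.
lead = 12
forecast = w[:-lead] * r_hat ** lead
skill = np.corrcoef(forecast, w[lead:])[0, 1]
print(f"fitted persistence {r_hat:.2f}, correlation skill at lead {lead}: {skill:.2f}")
```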
The potential to make skillful forecasts on these timescales, and the ability to do so, is investigated by means of predictability studies and retrospective forecasts (termed hindcasts) using climate models and statistical approaches.
When compared to the Scharf/HAS/7th-grader model (i.e. a simple straight line with a slope defined by the 1979-1988 trend), it beats it in predictive skill, and wallops it in hindcasting, wherein the naive model with a slope of 0.5 C per 30 years predicts that in 1700 the temperature of the planet was 5 C cooler than 2000, and in 1400 the temperature of the planet was 10 C cooler, and yes, keep on going.
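The arithmetic behind that walloping is trivial to reproduce; a minimal sketch, taking the comment's 0.5 C per 30 years slope at face value and extrapolating backwards from 2000:

```python
# Naive straight-line "model": the 1979-1988 trend, quoted above as 0.5 C per 30 years,
# run backwards in time from the year 2000.
slope_per_year = 0.5 / 30.0

for year in (1700, 1400):
    cooling = slope_per_year * (2000 - year)
    print(f"{year}: {cooling:.1f} C cooler than 2000")
# 1700 comes out 5.0 C cooler and 1400 comes out 10.0 C cooler, and it only gets
# worse the further back the line is extended.
```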
On the contrary, global warming is a problem for which the world is rather well equipped to make informed policy, thanks to the IPCC reviewing the best available scientific knowledge, and thanks to ensembles of hindcasting-capable models constrained by (real-world!)
Nic, considering the first part of your comment, let's write the response of a model over the hindcast and forecast periods as something like (A + e, B + d), where A and B are the forced responses over the two intervals (which depend on the parameter choices) and e and d are Gaussian deviates due to internal variability (which depend on random initial conditions).
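To make the notation concrete, here is a minimal sketch with a single fixed parameter choice (the values of A, B and the internal-variability spread are invented for illustration); because e and d are independent draws, selecting runs whose hindcast happens to match an observed value shifts the retained e's but says nothing about d:

```python
import numpy as np

rng = np.random.default_rng(2)

# (A + e, B + d): forced responses A, B over the hindcast and forecast windows,
# plus independent Gaussian deviates e, d from internal variability.
A, B = 0.8, 2.0        # assumed forced warming (K) in the two windows
sigma = 0.2            # assumed internal-variability standard deviation (K)
n = 10_000

e = rng.normal(0.0, sigma, n)
d = rng.normal(0.0, sigma, n)
hindcast, forecast = A + e, B + d

obs = 0.7              # hypothetical observed hindcast-period warming
keep = np.abs(hindcast - obs) < 0.05
print(f"mean e of runs that match the hindcast: {e[keep].mean():+.2f} K")
print(f"their mean forecast: {forecast[keep].mean():.2f} K vs. {forecast.mean():.2f} K for all runs")
```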
If the counter-argument is that these changes are a response to Global Warming, it would be really good to see a graph showing what the models predicted/hindcast on average for the global cloud cover.
e.g., take HALF the data to tune the model, then see how well it forecasts/hindcasts on the other half of the data.
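A minimal sketch of that split-sample test, using a synthetic temperature record and a linear trend as the stand-in "model" to be tuned (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic annual temperature record: a weak trend plus noise.
years = np.arange(1900, 2020)
temps = 0.008 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

# Tune on the first half only, then freeze the fit.
half = years.size // 2
slope, intercept = np.polyfit(years[:half], temps[:half], 1)

# Score the frozen fit on the held-back second half.
pred = slope * years[half:] + intercept
rmse = np.sqrt(np.mean((pred - temps[half:]) ** 2))
print(f"out-of-sample RMSE on the held-back half: {rmse:.3f} C")
```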
Restore academic "skin in the game" by funding based on prediction accuracy. curryja Proposal: Make grant funds contingent on model forecast/hindcast accuracy.