This is required because only the magnitude of the linear trend can be meaningfully compared for such a short time series in the presence of substantial inter-annual variability.
Because the magnitude of the counteracting effects depends on the degree of non-linearity, the rms of the residuals about a linear fit, the length of the trend, and the temporal autocorrelation of the "weather noise", it is difficult to generalize whether the method will result in too many false positives or too many false negatives.
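The role of the temporal autocorrelation can be illustrated with a small Monte Carlo experiment (a sketch with illustrative parameter choices, not the method under discussion): trend-free AR(1) "weather noise" is generated, a linear trend is fitted by least squares, and the trend is tested for significance while the autocorrelation is ignored, so the resulting rejection rate is the false-positive rate of the naive test.

```python
import numpy as np

rng = np.random.default_rng(0)

def false_positive_rate(n=50, r1=0.7, sigma=0.1, trials=2000, z=1.96):
    """Monte Carlo sketch: fraction of trend-free AR(1) series whose
    least-squares trend is declared significant when the lag-one
    autocorrelation r1 is ignored.  All parameters are illustrative."""
    t = np.arange(n)
    sxx = np.sum((t - t.mean()) ** 2)
    hits = 0
    for _ in range(trials):
        # AR(1) noise with stationary std sigma and autocorrelation r1.
        e = rng.normal(0.0, sigma * np.sqrt(1.0 - r1**2), n)
        x = np.empty(n)
        x[0] = rng.normal(0.0, sigma)
        for i in range(1, n):
            x[i] = r1 * x[i - 1] + e[i]
        # Least-squares slope and its white-noise standard error.
        slope = np.sum((t - t.mean()) * (x - x.mean())) / sxx
        resid = x - x.mean() - slope * (t - t.mean())
        se = np.sqrt(np.sum(resid**2) / (n - 2) / sxx)
        if abs(slope) > z * se:
            hits += 1
    return hits / trials
```

With strongly autocorrelated noise the naive test rejects far more often than the nominal 5%, while for white noise (r1 = 0) the rate stays close to 5%.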
If nonlinearities are important, as you suggest, one would expect the result to be... well, crap, not the emergence of a linear trend, and certainly not a linear trend of equal magnitude throughout the observation period.
Two components are harmonic sine waves that run constantly through the record: one with a period of 20 years and a magnitude of 0.1 C, and the other with a 60-year period and a magnitude of 0.2 C. Another component is a linear trend of 0.1 C per century that runs through the whole record, and the last is a linear trend of 0.66 C per century that begins in 1950.
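Assuming an illustrative 1880–2020 span (the record's actual length and phase conventions are not stated above), the synthetic series could be constructed as:

```python
import numpy as np

# Hypothetical year range for the synthetic record (assumption, not given above).
years = np.arange(1880, 2021)
t = years - years[0]

# Two harmonic components running through the whole record:
# 20-year period with 0.1 C magnitude, 60-year period with 0.2 C magnitude.
harmonics = (0.1 * np.sin(2 * np.pi * t / 20)
             + 0.2 * np.sin(2 * np.pi * t / 60))

# Linear trend of 0.1 C per century through the whole record.
background_trend = 0.1 * t / 100.0

# Additional linear trend of 0.66 C per century beginning in 1950.
post1950 = np.where(years >= 1950, 0.66 * (years - 1950) / 100.0, 0.0)

series = harmonics + background_trend + post1950
```

The phase of each sine wave is an arbitrary choice here; any phase convention yields a record with the same stated periods, magnitudes, and trends.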
For a Gaussian time series, the margin of error on a trend of length N_t estimated by linear least-squares regression is a function of the magnitude of the interannual variability (given by the standard deviation σ), the lag-one autocorrelation, and the trend length (Thompson et al. 2015).
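One common way to capture this dependence is to inflate the white-noise standard error of the slope by an effective-sample-size factor based on the lag-one autocorrelation; the sketch below uses that adjustment and should not be read as the exact formula of Thompson et al. (2015):

```python
import numpy as np

def trend_margin_of_error(sigma, r1, n, dt=1.0, z=1.96):
    """Approximate half-width of the confidence interval on a least-squares
    trend, for noise with standard deviation sigma and lag-one
    autocorrelation r1, over n samples spaced dt apart.  Uses the common
    effective-sample-size adjustment n_eff = n (1 - r1) / (1 + r1); this is
    a sketch of the dependence described in the text, not an exact formula.
    """
    n_eff = n * (1.0 - r1) / (1.0 + r1)
    # Sum of squared deviations of the time index about its mean.
    t = dt * np.arange(n)
    sxx = np.sum((t - t.mean()) ** 2)
    # White-noise standard error of the slope, inflated for the reduced
    # number of effectively independent samples.
    se_slope = sigma / np.sqrt(sxx) * np.sqrt(n / n_eff)
    return z * se_slope
```

As expected from the text, the margin of error grows with σ and with the autocorrelation, and shrinks as the trend length increases.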
The magnitude of each relative changepoint is calculated using the most appropriate two-phase regression model (e.g., a jump in mean with no trend in the series, a jump in mean within a general linear trend, etc.).
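A minimal sketch of the two model forms mentioned (a step change with and without a common linear trend), with the changepoint location taken as given rather than estimated, as real changepoint methods would do:

```python
import numpy as np

def jump_magnitude(y, cp, with_trend=True):
    """Estimate the jump in mean at index cp with a two-phase regression:
    either a step change within a common linear trend (with_trend=True) or
    a pure step change in the mean (with_trend=False).  Illustrative sketch
    only; operational methods also select cp and the model form via
    statistical tests."""
    n = len(y)
    step = (np.arange(n) >= cp).astype(float)   # 0 before cp, 1 after
    if with_trend:
        X = np.column_stack([np.ones(n), np.arange(n), step])
    else:
        X = np.column_stack([np.ones(n), step])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[-1]   # coefficient on the step term = jump magnitude
```

On a noise-free series with a known trend and step, the fit recovers the jump exactly, which is a quick sanity check on the design matrix.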