Weighted and geographic clustering of data were taken into account in the data analyses by using a jackknife repeated replications simulation method implemented in a SAS macro (V. 14).
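The SAS macro itself isn't shown, so as a rough illustration only, here is a minimal sketch of jackknife repeated replication (delete one geographic cluster at a time) for a weighted mean; the function name and the choice of a weighted mean as the statistic are assumptions, not the study's actual estimator:

```python
import numpy as np

def jrr_weighted_mean(values, weights, clusters):
    """Delete-one-cluster jackknife repeated replication for a weighted
    mean; `clusters` labels the geographic cluster of each record.
    Returns the full-sample estimate and its jackknife SE."""
    values = np.asarray(values, float)
    weights = np.asarray(weights, float)
    clusters = np.asarray(clusters)
    full = np.average(values, weights=weights)
    labels = np.unique(clusters)
    # replicate estimates, each with one whole cluster deleted
    reps = np.array([np.average(values[clusters != c],
                                weights=weights[clusters != c])
                     for c in labels])
    k = len(labels)
    se = np.sqrt((k - 1) / k * np.sum((reps - reps.mean()) ** 2))
    return full, se
```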
Coefficients are determined by averaging near-neighbor historical track data, with "near" determined optimally by using jackknife out-of-sample validation to maximize the likelihood of the observations.
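The source doesn't specify its model, but tuning "near" by leave-one-out validation could look like this sketch, assuming 1-D positions, a within-radius mean as the predictor, and a Gaussian likelihood (all assumptions):

```python
import numpy as np

def pick_radius(pos, y, radii):
    """Choose the neighbourhood radius by leave-one-out validation:
    predict each point from the mean of its neighbours within r, and
    keep the r whose out-of-sample Gaussian log-likelihood is highest."""
    best_r, best_ll = None, -np.inf
    n = len(y)
    for r in radii:
        pred = np.empty(n)
        ok = True
        for i in range(n):
            near = np.abs(pos - pos[i]) <= r
            near[i] = False                 # hold out observation i
            if not near.any():              # radius too small to predict
                ok = False
                break
            pred[i] = y[near].mean()
        if not ok:
            continue
        sigma2 = np.mean((y - pred) ** 2)   # MLE of residual variance
        ll = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
        if ll > best_ll:
            best_r, best_ll = r, ll
    return best_r
```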
Standard errors for the D-statistic were determined using a jackknife resampling method, by dividing the genome into 1 Mb blocks.
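For the ABBA-BABA D-statistic, the block jackknife is a standard recipe; a minimal sketch with equal-weight 1 Mb blocks follows (real pipelines often weight blocks by the number of informative sites; see the weighted variant further below):

```python
import numpy as np

def d_stat(abba, baba):
    """ABBA-BABA D-statistic from site counts."""
    return (abba - baba) / (abba + baba)

def block_jackknife_se(abba_blocks, baba_blocks):
    """Delete-one-block (e.g. 1 Mb) jackknife SE for D.  Inputs are
    per-block ABBA/BABA counts as 1-D arrays."""
    abba = np.asarray(abba_blocks, float)
    baba = np.asarray(baba_blocks, float)
    n = len(abba)
    d_full = d_stat(abba.sum(), baba.sum())
    # recompute D with each block removed in turn
    d_del = np.array([d_stat(abba.sum() - abba[i], baba.sum() - baba[i])
                      for i in range(n)])
    var = (n - 1) / n * np.sum((d_del - d_del.mean()) ** 2)
    return d_full, np.sqrt(var)
```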
Significance was assessed using a weighted block jackknife procedure for all five analysis types.
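The exact weighting is not given here; one common scheme is the delete-m_j jackknife of Busing et al. (1999), where unequal blocks get unequal weight. A sketch under that assumption:

```python
import numpy as np

def weighted_block_jackknife(theta_full, theta_del, block_sizes):
    """Weighted block jackknife (delete-m_j, Busing et al. 1999):
    theta_del[j] is the estimate with block j removed, block_sizes[j]
    its size.  Returns the bias-corrected estimate, its SE, and a
    Z-score for significance testing."""
    theta_del = np.asarray(theta_del, float)
    m = np.asarray(block_sizes, float)
    n, g = m.sum(), len(m)
    h = n / m                                   # inverse block weights
    theta_j = g * theta_full - np.sum((1 - m / n) * theta_del)
    tau = h * theta_full - (h - 1) * theta_del  # weighted pseudovalues
    var = np.mean((tau - theta_j) ** 2 / (h - 1))
    se = np.sqrt(var)
    return theta_j, se, theta_full / se
```

With equal block sizes this reduces exactly to the ordinary delete-one block jackknife above.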
We used FSTAT to estimate overall FST and assessed statistical significance by jackknifing over loci.
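FSTAT's internals aside, jackknifing over loci for a ratio-of-sums estimator typically works like this sketch (assuming Weir-Cockerham-style per-locus numerator and denominator variance components, which is an assumption about the estimator, not a description of FSTAT's code):

```python
import numpy as np

def fst_jackknife(num, den):
    """Delete-one-locus jackknife for a ratio-of-sums FST estimator:
    num[i], den[i] are the per-locus variance components, and overall
    FST = sum(num) / sum(den).  Returns FST and its jackknife SE."""
    num, den = np.asarray(num, float), np.asarray(den, float)
    L = len(num)
    fst = num.sum() / den.sum()
    # replicate FST values, each with one locus removed
    fst_del = np.array([(num.sum() - num[i]) / (den.sum() - den[i])
                        for i in range(L)])
    se = np.sqrt((L - 1) / L * np.sum((fst_del - fst_del.mean()) ** 2))
    return fst, se
```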
But they don't like this number, and so instead use a "leave-one-out" jackknife procedure, and somehow end up using a 95% CI of 0.40 °C, corresponding to an SE of 0.20 °C, too small by a factor of 10!
I think they may be misunderstanding what the jackknife procedure tells you, though I'm not certain how they used it.
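One way such an error can arise (a conjecture, not a claim about what these authors actually did): taking the raw standard deviation of the leave-one-out replicates instead of applying the (n - 1)/n jackknife scaling understates the SE by roughly (n - 1)/sqrt(n), which is about 10 when n is around 100. A sketch of the correct formula:

```python
import numpy as np

def loo_jackknife_se(x, stat=np.mean):
    """Standard leave-one-out jackknife SE of `stat`.  Note the
    (n - 1)/n scaling: leave-one-out replicates barely move, so their
    raw spread must be inflated, not used directly."""
    x = np.asarray(x, float)
    n = len(x)
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=100)
print(loo_jackknife_se(x))          # ~0.1, matches sd/sqrt(n) for the mean
print(np.std([np.mean(np.delete(x, i)) for i in range(100)], ddof=1))
# raw SD of the replicates: ~0.01, too small by ~(n-1)/sqrt(n) = 10x
```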
The Berkeley data is plotted with uncertainties estimated via randomly subdividing the 179,928 scalpeled stations into 8 smaller sets, calculating global land averages for each of those, and then comparing the results using the "jackknife" statistical method.
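This is not Berkeley Earth's actual code; as a sketch, a delete-one-group jackknife over the 8 subset averages would look like the following (treating each subset's global land average as one replicate unit):

```python
import numpy as np

def eight_way_jackknife(subset_means):
    """Delete-one-group jackknife over g = 8 station subsets:
    subset_means[j] is the global land average computed from the j-th
    random station subset.  Returns the combined mean and jackknife SE."""
    m = np.asarray(subset_means, float)
    g = len(m)                          # 8 here
    # leave-one-subset-out averages of the remaining g - 1 subsets
    loo = (m.sum() - m) / (g - 1)
    var = (g - 1) / g * np.sum((loo - loo.mean()) ** 2)
    return m.mean(), np.sqrt(var)
```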