Mann et al. are very clear that better results are obtained when the data set is first reduced by taking the first M eigenvalues.
First, reducing the data set (in this case, the AVHRR data) to the first M eigenvalues is irrelevant insofar as the choice of infilling algorithm is concerned.
I had a discussion with Steve McIntyre a couple of years ago on the scaling issue, but I also asked about how eigenvalues fit into the topic, i.e. were the eigenvalues from the "noise" PCs smaller than the eigenvalues from the reconstruction.
According to the PFA (on the basis of eigenvalues, the Kaiser criterion, the scree test and interpretability), three aspects could be constructed with 17 statements (Table 1 in Appendix).
The wider the energy range of electronic responses a researcher tries to capture in a system, the more eigenvalues and eigenvectors need to be computed, which also means more computing resources are necessary.
Recently, eigenvalues (S values) and vectors (V values) have been used to infer the genesis of glacial materials, indicating factors such as the rheology of the sediment.
In the PCA, three components with eigenvalues > 1 were extracted from the data set.
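The "eigenvalues > 1" extraction rule used here (the Kaiser criterion) can be sketched in a few lines of Python. The data set below is synthetic and every name is illustrative, not taken from the study; the criterion applies to the correlation matrix, so the variables are standardized first:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data set: 200 observations of 6 variables, built from three
# underlying signals so that roughly three components have eigenvalue > 1.
base = rng.normal(size=(200, 3))
X = np.hstack([base + 0.3 * rng.normal(size=(200, 3)),
               base + 0.3 * rng.normal(size=(200, 3))])

# Standardize, then take the eigenvalues of the correlation matrix
# (sorted descending; eigvalsh returns ascending order).
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals = np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False))[::-1]

# Kaiser criterion: keep components whose eigenvalue exceeds 1.
n_components = int(np.sum(eigvals > 1.0))
```

The eigenvalues of a p-variable correlation matrix sum to p, so "eigenvalue > 1" means "explains more than an average variable's share of the variance".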
c now determine suggested number of EOFs in training
c based on rule N applied to the proxy data alone
c during the interval t > iproxmin (the minimum
c year by which each proxy is required to have started;
c note that default is iproxmin = 1820 if variable
c proxy network is allowed (latest begin date in network)
c
c we seek the n first eigenvectors whose eigenvalues
c exceed 1/nproxy'
c
c nproxy' is the effective climatic spatial degrees of
c freedom spanned by the proxy network (typically an
c appropriate estimate is 20-40)
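The retention rule in that comment block (keep the leading EOFs whose normalized eigenvalues exceed 1/nproxy') can be sketched as follows. The proxy matrix and the nproxy_eff value are placeholders, not the actual MBH98 inputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder proxy matrix: 150 years x 50 proxies (not a real network);
# the first 5 columns share a common trend to mimic a leading EOF.
P = rng.normal(size=(150, 50))
P[:, :5] += np.outer(np.linspace(-1, 1, 150), rng.normal(size=5)) * 3

# Eigenvalues of the covariance matrix, as fractions of total variance.
Pc = P - P.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Pc, rowvar=False))[::-1]
frac = eigvals / eigvals.sum()

# Rule from the comment block: retain EOFs whose variance fraction
# exceeds 1/nproxy', the effective spatial degrees of freedom (the
# comment suggests a value in the range 20-40). Because the spectrum
# is sorted descending, a simple count gives the leading EOFs.
nproxy_eff = 30
n_eofs = int(np.sum(frac > 1.0 / nproxy_eff))
```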
[Response: Something that'll help a bit is to recognize that the basis of PCA is simply an eigenvalue/eigenvector decomposition.
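That point is easy to verify numerically: the eigenvalues of the covariance matrix of centered data equal the squared singular values of the data matrix divided by n − 1, so PCA via eigendecomposition and PCA via SVD give the same spectrum. A minimal sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic correlated data: 100 observations of 4 variables.
X = rng.normal(size=(100, 4)) @ rng.normal(size=(4, 4))
Xc = X - X.mean(axis=0)

# Route 1: eigen-decomposition of the covariance matrix.
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]

# Route 2: singular values of the centered data matrix.
svals = np.linalg.svd(Xc, compute_uv=False)
eigvals_from_svd = svals**2 / (len(Xc) - 1)
```

Both routes produce identical eigenvalues; the SVD route is the numerically preferred one in practice because it never forms the covariance matrix explicitly.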
They are correct in that I implemented the fit to the log eigenvalue spectrum in Fig. S4 incorrectly, but fortunately it makes no difference (as stated above). I have no idea why they didn't let me know when they found it.
Precisely that question was addressed by Mann and coworkers in their response to the rejected MM comment through the use of so-called "Monte Carlo" simulations, which generate an ensemble of realizations of the random process in question (see here) to determine the "null" eigenvalue spectrum that would be expected from simple red noise with the statistical attributes of the North American ITRDB data.
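The Monte Carlo procedure described there — generate an ensemble of red-noise realizations and read off the "null" eigenvalue spectrum — can be sketched as below. The AR(1) model, ρ = 0.5, and the ensemble sizes are illustrative stand-ins, far smaller than the actual ITRDB exercise:

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_series(n, rho, rng):
    """Generate one AR(1) ('red noise') series of length n."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

def null_eigenvalue_spectrum(n_obs, n_series, rho, n_trials, rng):
    """Median normalized eigenvalue spectrum of PCA on red-noise ensembles."""
    spectra = []
    for _ in range(n_trials):
        X = np.column_stack([ar1_series(n_obs, rho, rng)
                             for _ in range(n_series)])
        Xc = X - X.mean(axis=0)
        ev = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]
        spectra.append(ev / ev.sum())
    return np.median(spectra, axis=0)

# Illustrative sizes only.
spectrum = null_eigenvalue_spectrum(n_obs=80, n_series=10, rho=0.5,
                                    n_trials=50, rng=rng)
```

An observed leading eigenvalue well above this null spectrum is then evidence of structure beyond red noise; one at or below it is not.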
From the latter, you can't tell whether something is a trend or a cycle with data short compared to the cycle (the eigenvalues of the discriminating matrix explode, making every observation useless).
I would say looking at the PC1 eigenvalue and its explained variance, and the number of PCs required for a given minimal amount of cumulative explained variance (say 40%), would be very telling.
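The diagnostic suggested here — PC1's explained-variance share plus the number of PCs needed to reach a cumulative threshold — is straightforward to compute. The 8-variable data below are synthetic, and only the 40% threshold is taken from the comment:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic data: 8 variables, the first two carrying most of the variance.
X = rng.normal(size=(300, 8)) * np.array([3.0, 2.0, 1, 1, 1, 1, 1, 1])
Xc = X - X.mean(axis=0)
eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

# Per-PC explained-variance fractions and their running total.
explained = eigvals / eigvals.sum()
cumulative = np.cumsum(explained)

# PC1's share, and the number of PCs needed to reach 40% cumulative.
pc1_share = explained[0]
n_pcs_for_40pct = int(np.searchsorted(cumulative, 0.40) + 1)
```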
From M&M 2005: The loadings on the first eigenvalues were inflated by the MBH98 method.
The "short-centered" leading eigenvalue (EV) magnitude for Mann's tree-ring data is much larger than the corresponding EV magnitudes produced in M&M's "red noise" runs.
So the median eigenvalue for M&M's "centered" leading PCs is about 0.04; the median eigenvalue for the "non-centered" leading PCs is about 0.13.
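The contrast quoted here (median leading-eigenvalue fraction of ~0.04 centered vs ~0.13 short-centered) can be explored with a small simulation. Everything below is an assumption-laden sketch: AR(1) red noise stands in for the proxies, "short" centering subtracts only the mean of the last m observations (mimicking calibration-period centering), and the sizes and ρ are placeholders far smaller than M&M's actual runs:

```python
import numpy as np

rng = np.random.default_rng(5)

def ar1(n, rho, rng):
    """One AR(1) red-noise series of length n."""
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal()
    return x

def leading_ev_fraction(X):
    """Leading eigenvalue as a fraction of total variance (via SVD)."""
    s = np.linalg.svd(X, compute_uv=False)
    return (s[0] ** 2) / np.sum(s ** 2)

def median_leading_ev(n_trials, center, rng, n=300, m=30, p=15, rho=0.9):
    """Median leading-eigenvalue fraction over red-noise ensembles.

    center='full'  -> subtract each column's full-period mean
    center='short' -> subtract only the mean of the last m observations
    """
    out = []
    for _ in range(n_trials):
        X = np.column_stack([ar1(n, rho, rng) for _ in range(p)])
        if center == 'full':
            X = X - X.mean(axis=0)
        else:
            X = X - X[-m:].mean(axis=0)
        out.append(leading_ev_fraction(X))
    return float(np.median(out))

ev_centered = median_leading_ev(30, 'full', rng)
ev_short = median_leading_ev(30, 'short', rng)
```

In runs like this the short-centered fraction tends to be the larger of the two, mirroring the quoted contrast, though the exact values depend heavily on the noise persistence and the centering window.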
It puts relevant parts of mathematics to use, and finds parts of the vast field of mathematics that are useful, such as the Riemannian geometry that Einstein used for general relativity, or the eigenvalues and matrix operators used by various other physicists for quantum mechanics.
"Along with the use of principal component regression there appears to have been a growth in the misconception that the principal components with small eigenvalues will rarely be of any use in a regression."
The eigenvalues produced by the red noise test are an order of magnitude lower than the eigenvalues produced by Mann's (admittedly incorrect) PCA methodology.
Very good: so you are happy with the method on the grounds that the eigenvalues are smaller for red noise.
"The eigenvalues produced by the red noise test are an order of magnitude lower than the eigenvalues produced by..."
But the very meaning of the eigenvalues is to separate those that are more important from the others.
Dismissing eigenvalue analysis as a "trick" that can prove just about anything is just mind-blowingly ignorant!
We found that a good description of the shower shape is obtained when only the two most significant parameters, corresponding to the largest eigenvalues, are kept.
Even if the properly centered PCA is applied to Mann's NOAMER tree-ring data, you get a small number of dominant singular values/eigenvalues.
Our eigenvalue of 32.3 is quite high and is evidence of a robust factor.
The first factor covered more than 64% of the total variance of the readability measures, with an eigenvalue of 32.3, which is more than 23 units greater than the next factor's eigenvalue.
Eigenvalues are quantities derived through linear transformations; larger values correspond to more useful factors.
A cutoff of 0.40 was used for factor loadings, together with an eigenvalue greater than 1, which allows each extracted factor to explain a reasonable proportion of the total variance.
The third component had an initial eigenvalue close to 1 (0.9) and comprised two of the three sexual violence items; otherwise, the structure was identical to the two-component solution and largely mirrored VAWI's physical, psychological and sexual violence subscales.
Decisions on the number of components to extract were based on parallel analysis, Kaiser's eigenvalue-greater-than-one rule, the total proportion of variance explained and Cattell's scree plot.
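Of the criteria listed, parallel analysis is the least standard to hand-roll. A minimal sketch of Horn's procedure — compare the observed correlation-matrix eigenvalues against the mean spectrum from random data of the same shape — might look like this, with synthetic two-factor data as a placeholder:

```python
import numpy as np

def parallel_analysis(X, n_sims=100, rng=None):
    """Horn's parallel analysis: retain the leading components whose
    correlation-matrix eigenvalues exceed the mean eigenvalues obtained
    from random normal data of the same shape."""
    if rng is None:
        rng = np.random.default_rng()
    n, p = X.shape
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.normal(size=(n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    ref = sims.mean(axis=0)
    # Count the leading run of observed eigenvalues above the reference.
    keep = obs > ref
    n_keep = int(np.argmin(keep)) if not keep.all() else p
    return n_keep, obs, ref

rng = np.random.default_rng(6)
# Synthetic data with two genuine components (all sizes are placeholders).
scores = rng.normal(size=(250, 2))
loadings = rng.normal(size=(2, 8))
X = scores @ loadings + 0.7 * rng.normal(size=(250, 8))
n_keep, obs, ref = parallel_analysis(X, rng=rng)
```

Unlike the Kaiser rule's fixed cutoff of 1, the reference spectrum here adapts to the sample size and number of variables, which is why parallel analysis is usually the preferred criterion of the set.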
The cut-off point for factor loadings was 0.40 and for eigenvalues 1.00.
However, the six factors were originally selected by the Kaiser-Guttman rule (eigenvalue > 1), which is not recommended for determining the number of factors [24], for the following reasons: first, this method is recommended for the principal component analysis (PCA) case and not for EFA.
Exploratory factor analysis: using a minimum eigenvalue of 1.0 as the extraction criterion for factors, 3 factors were extracted.
The analysis highlighted 6 factors (the first six eigenvalues were 11.4, 4.2, 2.4, 2.2, 1.7 and 1.6), accounting for 41.2% of the total variance.
Two component solutions were examined: (1) component extraction based on a parallel analysis, the proportion of variance explained, Kaiser's eigenvalue-greater-than-one rule and the examination of Cattell's scree plot, and (2) a three-component solution as originally conceptualised in the VAWI.
An initial component extraction showed that there were five dimensions with eigenvalues greater than one.
Factors with an eigenvalue > 1.0 were selected.
Cases were deleted using listwise deletion, and an eigenvalue of 1 was used to interpret the factor structure.
For each scale, two factors with an eigenvalue > 1.0 were identified.
The number of factors was determined by a minimum eigenvalue of 1.00, followed by a minimum loading of .40 for the items in each factor.
The components that were removed had eigenvalues below 1.25.
Varimax rotation was employed in the factor analysis, and an eigenvalue above one was used as the standard for selecting factors; the results are shown in Table 1.
The criteria used to determine the number of profiles considered meaningful are identical to those used in PCA (i.e., eigenvalue, explained variance and interpretability).
Individual items were retained if they had a loading near or over 0.35, and the number of factors was based upon those with eigenvalues greater than one.[10] A two-factor solution was the clearest at both ages and accounted for over 95% of the total variance in the observed variables (Table 2).
Factor analysis of the 30 remaining items was then conducted; the scree plot indicated a one-factor solution, having an eigenvalue of 13.1 and accounting for 43.5% of the variance.
A principal components factor analysis was then conducted to determine whether the remaining items all loaded on a single factor, based on both the slope of the scree plot and examination of the eigenvalues.
We selected factors using eigenvalue scree plots, and chose a factor-loading threshold of 0.3, taking the higher-loaded variable where there was cross-loading (Table 1).