- You may also want to read the article by Frazier and Youngstrom (2007), published in the journal Intelligence, on the issue of overfactoring in cognitive assessment instruments. Related issues, such as how much variability is attributable to the highest-order dimension (g) and how much remains in the lower-order dimensions (second-stratum factors), also affect the interpretability of the scores.
- A historical increase in the number of factors purportedly measured by commercial tests of cognitive ability may result from four distinct pressures including: increasingly complex models of intelligence, test publishers' desires to provide clinically useful assessment instruments with greater interpretive value, test publishers' desires to include minor factors that may be of interest to researchers (but are not clinically useful), and liberal statistical criteria for determining the factor structure of tests. The present study examined the number of factors measured by several historically relevant and currently employed commercial tests of cognitive abilities using statistical criteria derived from principal components analyses, and exploratory and confirmatory factor analyses. Two infrequently used statistical criteria, that have been shown to accurately recover the number of factors in a data set, Horn's parallel analysis (HPA) and Minimum Average Partial (MAP) analysis, served as gold-standard criteria. As expected, there were significant increases over time in the number of factors purportedly measured by cognitive ability tests (r=.56, p=.030). Results also indicated significant recent increases in the overfactoring of cognitive ability tests. Developers of future cognitive assessment batteries may wish to increase the lengths of the batteries in order to more adequately measure additional factors. Alternatively, clinicians interested in briefer assessment strategies may benefit from short batteries that reliably assess general intellectual ability.
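- For readers who want to see what one of these "gold standard" criteria actually does, here is a minimal sketch of Horn's parallel analysis in Python (my own illustration, not code from the article; the function name and defaults are assumptions): keep the leading components whose eigenvalues exceed those obtained from random data of the same size.
import numpy as np

def n_factors_parallel_analysis(X, n_reps=100, percentile=95, seed=0):
    # X: cases-by-subtests matrix of scores.
    n, p = X.shape
    # Eigenvalues of the observed correlation matrix, largest first.
    obs = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    # Eigenvalues of correlation matrices computed from random normal data of the same shape.
    rng = np.random.default_rng(seed)
    rand = np.empty((n_reps, p))
    for r in range(n_reps):
        R = rng.standard_normal((n, p))
        rand[r] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    threshold = np.percentile(rand, percentile, axis=0)
    # Retain the leading components whose observed eigenvalue beats the random-data threshold.
    k = 0
    while k < p and obs[k] > threshold[k]:
        k += 1
    return k
The MAP criterion works differently (it tracks the average squared partial correlation as successive components are partialled out and stops at the minimum), but both are data-driven alternatives to the familiar eigenvalue-greater-than-one rule.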
- Gary: The paper you reference suggests that cognitive test developers have "overfactored" their data, meaning they have extracted more factors than were really there in the data. It singles out the WJ-R and WJ-III as outliers (i.e., the WJ batteries are REALLY overfactored). I found their conclusions hard to believe, so I played with some simulated data in SPSS to see if I could make a simulated "WJ-III" dataset with a seven-broad-factor structure plus a g-factor uniting all the scores. Each subtest score was computed like this:
Subtest = g + BroadFactor + error
Each subtest was assigned to the broad factor it was designed to load on. Each source of variance was normally distributed.
By systematically changing the variance of the g and broad factors, I was able to look at how the different factor extraction rules performed under several combinations of g and broad-factor sizes.
I found that the presence of even a moderately sized g-factor caused all of the factor extraction rules to underestimate the true number of factors (7 correlated factors in this case).
It seems that under many plausible conditions, WJ-III-like data will have more factors than are detected by popular factor extraction rules. Thus, I think this paper overstates its case.
Here is my SPSS syntax. Create a few thousand cases and then play with the gCoefficient and FactorCoefficient variables (0 to 2 is a good range).
* Strength of the g and broad-factor influences (try values from 0 to 2).
COMPUTE gCoefficient = 1.5 .
COMPUTE FactorCoefficient = 1.0 .
* Latent abilities: a general factor (g) and seven CHC broad factors.
COMPUTE g = RV.NORMAL(0,1) .
COMPUTE Gc = RV.NORMAL(0,1) .
COMPUTE Gf = RV.NORMAL(0,1) .
COMPUTE Gsm = RV.NORMAL(0,1) .
COMPUTE Gs = RV.NORMAL(0,1) .
COMPUTE Ga = RV.NORMAL(0,1) .
COMPUTE Glr = RV.NORMAL(0,1) .
COMPUTE Gv = RV.NORMAL(0,1) .
EXECUTE .
* Observed subtests: each = error + g + its assigned broad factor.
COMPUTE VC = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gc .
COMPUTE GI = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gc .
COMPUTE CF = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gf .
COMPUTE AS = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gf .
COMPUTE P = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gf .
COMPUTE VAL = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Glr .
COMPUTE RF = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Glr .
COMPUTE RPN = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Glr .
COMPUTE NR = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gsm .
COMPUTE MW = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gsm .
COMPUTE AWM = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gsm .
COMPUTE VM = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gs .
COMPUTE DS = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gs .
COMPUTE PC = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gs .
COMPUTE SB = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Ga .
COMPUTE AA = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Ga .
COMPUTE IW = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Ga .
COMPUTE SR = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gv .
COMPUTE PR = RV.NORMAL(0,1) + gCoefficient * g + FactorCoefficient * Gv .
EXECUTE .
* Principal axis factoring, eigenvalue-greater-than-one retention, promax rotation.
FACTOR
/VARIABLES VC GI CF AS P VAL RF RPN NR MW AWM VM DS PC SB AA IW SR PR
/MISSING LISTWISE
/ANALYSIS VC GI CF AS P VAL RF RPN NR MW AWM VM DS PC SB AA IW SR PR
/PRINT INITIAL EXTRACTION ROTATION
/FORMAT SORT BLANK(.10)
/PLOT EIGEN
/CRITERIA MINEIGEN(1) ITERATE(25)
/EXTRACTION PAF
/CRITERIA ITERATE(25)
/ROTATION PROMAX(4)
/METHOD=CORRELATION .
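For those who would rather poke at this outside SPSS, here is a rough Python sketch of the same simulation (an illustrative translation, not the commenter's code; the sample size and the grouping of subtests to broad factors simply mirror the COMPUTE statements above).
import numpy as np

rng = np.random.default_rng(1)
n = 5000                       # "a few thousand cases"
g_coef, f_coef = 1.5, 1.0      # play the same roles as gCoefficient and FactorCoefficient

# Subtests assigned to each of the seven broad factors, mirroring the syntax above
# (Gc=2, Gf=3, Glr=3, Gsm=3, Gs=3, Ga=3, Gv=2), 19 subtests in total.
subtests_per_factor = [2, 3, 3, 3, 3, 3, 2]

g = rng.standard_normal(n)
columns = []
for m in subtests_per_factor:
    broad = rng.standard_normal(n)      # one broad factor
    for _ in range(m):                  # its assigned subtests: error + g + broad factor
        columns.append(rng.standard_normal(n) + g_coef * g + f_coef * broad)
X = np.column_stack(columns)            # n cases by 19 subtests

# Kaiser rule (eigenvalue > 1), the same retention criterion the FACTOR syntax above uses.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
print("Factors retained by the eigenvalue > 1 rule:", int(np.sum(eigenvalues > 1)))
With these coefficient values the second eigenvalue of the correlation matrix sits just below 1, so the eigenvalue rule keeps a single factor even though seven correlated broad factors were built into the data, which is exactly the commenter's point. The parallel analysis sketch earlier in the post can be applied to the same X matrix and, with these settings, it also points to far fewer than seven factors.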
All of this "this factor analysis is better than that factor analysis" talk reminds me of a chapter written a long time ago by Doug Detterman (current and long-standing editor of the journal Intelligence; sorry, I can't recall the reference or the exact quotes, as I'm going from long-term memory on this one) in some book on individual differences and intelligence that I read very early in my psychometric career. It was a chapter dealing with the laws of individual differences research. One of the laws had to do with factor analysis. In my own words: "put two factor analysis methodologists in the same room and an argument will break out. There will be no agreement on the number of factors to extract, the proper rotation method to use, or the interpretation of the factors."
So true! My only current comment is that, having personally learned some of my most important lessons about factor analysis from the likes of John Horn, John "Jack" Carroll, and Jack McArdle, I believe there is as much "art" as there is "science" (specific factor extraction rules) to a proper factor analysis of intelligence tests.
Stay tuned. Dr. John Garruto has just sent me a practitioner's perspective on this article. It will show up as a guest blog post later today or tomorrow.
Let the games begin.
Technorati Tags: psychology, psychometrics, educational psychology, school psychology, neuropsychology, intelligence, IQ, IQ tests, factor analysis, statistics, cognition, WJ-R, WJ III, John Horn, Jack Carroll, Jack McArdle, Doug Detterman