Beyond CHC: ITD—Within-CHC Domain Complexity Optimized Measures
Optimizing Cognitive Complexity of CHC Measures
I have recently begun to recognize the contribution that the Brunswik symmetry-derived Berlin Intelligence Structure (BIS) model can make in applied intelligence research, especially for increasing predictor-criterion relations by matching the predictor and criterion spaces on the dimension of cognitive complexity. What is cognitive complexity? Why is it important? More important, what role should it play in designing intelligence batteries to optimize CHC COG-ACH relations?
Cognitive complexity is often operationalized by inspecting individual test loadings on the first principal component from a principal component analysis (Jensen, 1998). The high-g test rationale is that tests that are more cognitively complex "invoke a wider range of elementary cognitive processes (Jensen, 1998; Stankov, 2000, 2005)" (McGrew, 2010b, p. 452). High g-loading tests are often at the center of MDS (multidimensional scaling) radex models (click here for AP101 Brief Report #15: Cognitive-Aptitude-Achievement Trait Complexes example)—but this isomorphism does not always hold. David Lohman, a student of Richard Snow's, has made extensive use of MDS methods to study intelligence and has one of the best grasps of what cognitive complexity, as represented in the hyperspace of MDS figures, contributes to understanding intelligence and intelligence tests. According to Lohman (2011), tests closer to the center are more cognitively complex due to five possible factors: a larger number of cognitive component processes; accumulation of speed component differences; more important component processes (e.g., inference); increased demands on attentional control and working memory; and/or more demands on adaptive functions (assembly, control, and monitoring). Schneider's (in press) level-of-abstraction description of broad CHC factors is similar to cognitive complexity. He uses the simple example of 100-meter hurdles performance. According to Schneider (in press), one could independently measure 100-meter sprinting speed and, separately, the ability to jump over a hurdle from a standstill (both examples of narrow abilities). However, running a 100-meter hurdles race is not the mere sum of the two narrow abilities; it is a non-additive combination and integration of narrow abilities. This analogy captures the essence of cognitive complexity—which, in the realm of cognitive measures, refers to tasks that invoke more of the five factors listed by Lohman during successful task performance.
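To make this operationalization concrete, the following is a minimal sketch (in Python; the four-test correlation matrix and test names are invented, not real data) of extracting first principal component loadings as a rough complexity proxy:

```python
# A minimal sketch, assuming an illustrative (invented) intercorrelation
# matrix: operationalize cognitive complexity as each test's loading on the
# first principal component, per the Jensen (1998) approach described above.
import numpy as np

# Hypothetical intercorrelations among four tests (symmetric, unit diagonal)
R = np.array([
    [1.00, 0.55, 0.45, 0.30],
    [0.55, 1.00, 0.50, 0.35],
    [0.45, 0.50, 1.00, 0.25],
    [0.30, 0.35, 0.25, 1.00],
])

# Eigendecomposition of the correlation matrix; the first principal
# component is the eigenvector with the largest eigenvalue.
eigvals, eigvecs = np.linalg.eigh(R)
first = np.argmax(eigvals)

# PC1 loadings = eigenvector scaled by sqrt(eigenvalue); a higher loading is
# read as greater cognitive complexity under this operationalization.
loadings = eigvecs[:, first] * np.sqrt(eigvals[first])
loadings *= np.sign(loadings.sum())  # orient so loadings are positive

for test, g in zip(["Test A", "Test B", "Test C", "Test D"], loadings):
    print(f"{test}: PC1 loading = {g:.2f}")
```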
Of critical importance is the recognition that factor or ability domain breadth (i.e., broad or narrow) is not synonymous with cognitive complexity. More important, cognitive complexity (as defined by Brunswik symmetry and the BIS model) has not always been a design concept explicitly incorporated into "intelligent" intelligence test design (ITD). A number of tests have incorporated the notion of cognitive complexity in their design plans, but I believe this type of cognitive complexity is different than the within-CHC domain cognitive complexity discussed here.
For example, according to Kaufman and Kaufman (2004), "in developing the KABC-II, the authors did not strive to develop 'pure' tasks for measuring the five CHC broad abilities. In theory, Gv tasks should exclude Gf or Gs, for example, and tests of other broad abilities, like Gc or Glr, should only measure that ability and none other. In practice, however, the goal of comprehensive tests of cognitive ability like the KABC-II is to measure problem solving in different contexts and under different conditions, with complexity being necessary to assess high-level functioning" (p. 16; italics emphasis added). Although the Kaufmans address the importance of cognitively complex measures in intelligence test batteries, their CHC-grounded description defines complex measures as those that are factorially complex or mixed measures of abilities from more than one broad CHC domain. The Kaufmans also address cognitive complexity from the non-CHC three-block functional Luria neurocognitive model when they indicate that it is important to provide measurement that evaluates the "dynamic integration of the three blocks" (Kaufman & Kaufman, 2004, p. 13).
This emphasis on neurocognitive integration (and thus, complexity) is also an explicit design goal of the latest Wechsler batteries. As stated in the WAIS-IV manual (Wechsler, 2008), "although there are distinct advantages to the assessment and division of more narrow domains of cognitive functioning, several issues deserve note. First, cognitive functions are interrelated, functionally and neurologically, making it difficult to measure a pure domain of cognitive functioning" (p. 2). Furthermore, "measuring psychometrically pure factors of discrete domains may be useful for research, but it does not necessarily result in information that is clinically rich or practical in real world applications (Zachary, 1990)" (Wechsler, 2008, p. 3). Finally, Elliott (2007) similarly argues for the importance of recognizing neurocognitive-based "complex information processing" (p. 15; italics emphasis added) in the design of the DAS-II, which results in tests or composites that measure across CHC-described domains.
The ITD principle explicated and proposed here is that of striving to develop cognitively complex measures within broad CHC domains—that is, not attaining complexity via the blending of abilities across CHC broad domains and not attempting to directly link to neurocognitive network integration.[1] The Brunswik symmetry-based BIS model provides a framework for attaining this goal via the development and analysis of test complexity, paying attention to cognitive content and operations facets.
Figure 12 presents the results of a two-dimensional MDS radex model of most of the key WJ III broad and narrow CHC cognitive and achievement clusters (for all norm subjects from approximately 6 years of age through late adulthood).[2] The current focus of the interpretation of the results in Figure 12 is only on the degree of cognitive complexity (proximity to the center of the figure) of the broad and narrow WJ III clusters within the same domain (interpretations of the content and operations facets are not a focus of this current material). Within a domain, the broadest three-test parent clusters are designated by black circles.[3] Two-test broad clusters are designated by gray circles. Two-test narrow offspring clusters within broad domains are designated by white circles. All clusters within a domain are connected to the broadest parent cluster by lines. The critically important information is the within-domain cognitive complexity of the respective parent and sibling clusters, as represented by their relative distances from the center of the figure. A number of interesting conclusions are apparent.
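To illustrate the kind of analysis behind Figure 12, here is a minimal sketch (Python with scikit-learn; the cluster labels are WJ III names but the correlations are invented for illustration) of a two-dimensional nonmetric MDS in which distance from the configuration's centroid is read as inverse cognitive complexity:

```python
# A minimal sketch, assuming illustrative (invented) cluster intercorrelations:
# 2-D nonmetric MDS on correlation-derived dissimilarities, with distance from
# the centroid read as (inverse) cognitive complexity.
import numpy as np
from sklearn.manifold import MDS

clusters = ["GIA-Ext", "Gf", "Gc", "Gs", "Gs-P", "Gv"]
R = np.array([
    [1.00, 0.80, 0.75, 0.60, 0.65, 0.55],
    [0.80, 1.00, 0.60, 0.50, 0.55, 0.50],
    [0.75, 0.60, 1.00, 0.40, 0.45, 0.35],
    [0.60, 0.50, 0.40, 1.00, 0.85, 0.30],
    [0.65, 0.55, 0.45, 0.85, 1.00, 0.35],
    [0.55, 0.50, 0.35, 0.30, 0.35, 1.00],
])

# Convert correlations to dissimilarities (1 - r is a common choice)
D = 1.0 - R

# Two-dimensional nonmetric MDS on the precomputed dissimilarity matrix
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(D)

# Distance of each cluster from the centroid of the configuration:
# smaller distance = closer to the center of the radex = more complex.
centroid = coords.mean(axis=0)
dist = np.linalg.norm(coords - centroid, axis=1)

for name, d in sorted(zip(clusters, dist), key=lambda pair: pair[1]):
    print(f"{name}: distance from center = {d:.2f}")
```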
First, as expected, the WJ III GIA-Ext cluster is almost perfectly centered in the figure—it is clearly the most cognitively complex WJ III cluster. In comparison, the three WJ III Gv clusters are much weaker in cognitive complexity than all other cognitive clusters, with no particular Gv cluster demonstrating a clear cognitive complexity advantage. As expected, the reading and math achievement clusters are primarily cognitively complex measures. However, the achievement clusters that deal more with basic skills (Math Calculation—MTHCAL; Basic Reading Skills—RDGBS) are less complex than the application clusters (Reading Comprehension—RDGCMP; Math Reasoning—MTHREA).
The most
intriguing findings in Figure 12 are the differential cognitive complexity
patterns within CHC domains (with at least one parent and at least one
offspring cluster). For example, the
narrow Perceptual Speed (Gs-P) offspring cluster is more cognitively complex
than the broad parent Gs cluster. The broad Gs cluster is comprised of the Visual Matching (Gs-P) and Decision Speed (Gs-R9; Glr-NA) tests, which measure different narrow abilities. In contrast, the Perceptual Speed cluster (Gs-P)
is comprised of two tests that are classified as both measuring the same narrow
ability (perceptual speed). This finding
appears, on first blush, counterintuitive as one would expect a cluster
comprised of tests that measure different content and operations (Gs cluster) would
be more complex (as per the above definition and discussion) than one comprised
of two measures of the same narrow ability (Gs-P). However, one must task analyze the two
Perceptual Speed tests to realize that although both are classified as
measuring the same narrow ability (perceptual speed), they differ in both
stimulus content and cognitive operations.
Visual Matching requires processing of numeric stimuli. Cross Out requires the processing of
visual-figural stimuli. These are two
different content facets in the BIS model.
The Cross Out visual-figural stimuli are much more spatially challenging
than the simple numerals in Visual Matching.
Furthermore, the Visual Matching test requires the examinee to quickly scan rows of numbers and mark the two numbers in each row that are identical. In contrast, in the Cross Out test the subject is provided a target visual-figural shape and must then quickly scan a row of complex visual images and mark two that are identical to the target. Interestingly, in other unpublished analyses I have completed, the Visual Matching test often loads on or groups with quantitative achievement tests, while Cross Out has frequently been shown to load on a Gv factor. Thus, task analysis of the content and cognitive
operations of the WJ III Perceptual Speed tests suggests that although both are
classified as narrow indicators of Gs-P, they differ markedly in task
requirements. More important, the
Perceptual Speed cluster tests, when combined, appear to require more
cognitively complex processing than the broad Gs cluster. This finding is consistent with Ackerman, Beier, and Boyle's (2002) research suggesting that perceptual speed has another level of factor breadth via the identification of four subtypes of perceptual speed (i.e., pattern recognition, scanning, memory, and complexity; see McGrew, 2005, and Schneider & McGrew, 2012, for discussion of a hierarchically organized model of speed abilities). Based on Brunswik symmetry/BIS cognitive
complexity principles, one would predict that a Gs-P cluster comprised of two
parallel forms of the same task (e.g., two Visual Matching or two Cross Out
tests) would be less cognitively complex than broad Gs. A hint of the possible correctness of this
hypothesis is present in the inspection of the Gsm-MS-MW domain results.
The WJ III Gsm
cluster is the combination of the Numbers Reversed (MW) and Memory for Words
(MS) tests. In contrast, the WJ III Auditory Memory Span (AUDMS; Gsm-MS) cluster is much less cognitively complex when compared to Gsm (see Figure 12).
Like the Perceptual Speed (Gs-P) cluster described in the context of the
processing speed family of clusters, the Auditory Memory Span cluster is
comprised of two tests with the same memory span (MS) narrow ability
classification (Memory for Words; Memory for Sentences). Why is this narrow cluster less complex than
its broad parent Gsm cluster while the opposite held true for Gs-P and Gs? Task analysis suggests that the two memory
span tests are more alike than the two perceptual speed tests. The Memory for Words and Memory for Sentences
tests require the same cognitive operation—simply repeating back, in order,
words or sentences spoken to the subject.
This differs from the WJ III Perceptual Speed cluster as the similarly
classified narrow Gs-P tests most likely invoke both common and different
cognitive component operations. Also,
the Memory Span cluster tests are comprised of stimuli from the same BIS content
facet (i.e., words and sentences; auditory-linguistic/verbal). In contrast, the Gs-P Visual Matching and
Cross Out tests involve two different content facets (numeric and
visual-figural).
In
contrast, the WJ III Working Memory cluster (Gsm-MW) is more cognitively
complex than the parent Gsm cluster. This
finding is consistent with the prior WJ III Gs/Perceptual Speed and WJ III
Gsm/Auditory Memory Span discussion. The
WJ III Working Memory cluster is comprised of the Numbers Reversed and Auditory
Working Memory tests. Numbers Reversed
requires the processing of stimuli from one BIS content facet—numeric
stimuli. In contrast, Auditory Working Memory requires the processing of stimuli from two BIS content facets (numeric and auditory-linguistic/verbal; numbers and words). The cognitive operations of the two tests
also differ. Both require the holding of
the presented stimuli in active working memory space. Numbers Reversed then requires the simple
reproduction of the numbers in reverse order.
In contrast, the Auditory Working Memory test requires the storage of
the numbers and words in separate chunks, and then the production of the
forward sequence of each respective chunk (numbers or words), one chunk before
the other. Greater reliance on divided
attention is most likely occurring during the Auditory Working Memory test.
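The task-analysis logic used in the preceding paragraphs can be summarized in a simple data structure. The sketch below (my own illustration, not a published scoring method; the facet labels are simplified hand-assignments) codes each test by its BIS content and operation facets and counts the distinct facets a cluster spans as a rough heterogeneity index:

```python
# A minimal sketch, assuming simplified (hand-assigned) BIS facet codings:
# a cluster spanning more distinct content facets and cognitive operations is
# treated as more heterogeneous, and thus plausibly more cognitively complex.
tests = {
    "Visual Matching":         {"content": {"numeric"},         "operation": {"scan and match pairs"}},
    "Cross Out":               {"content": {"visual-figural"},  "operation": {"scan and match to target"}},
    "Memory for Words":        {"content": {"auditory-verbal"}, "operation": {"repeat in order"}},
    "Memory for Sentences":    {"content": {"auditory-verbal"}, "operation": {"repeat in order"}},
    "Numbers Reversed":        {"content": {"numeric"},         "operation": {"hold and reverse"}},
    "Auditory Working Memory": {"content": {"numeric", "auditory-verbal"},
                                "operation": {"hold, chunk, and resequence"}},
}

clusters = {
    "Gs-P (Perceptual Speed)":   ["Visual Matching", "Cross Out"],
    "Gsm-MS (Aud. Memory Span)": ["Memory for Words", "Memory for Sentences"],
    "Gsm-MW (Working Memory)":   ["Numbers Reversed", "Auditory Working Memory"],
}

for name, members in clusters.items():
    contents = set().union(*(tests[t]["content"] for t in members))
    operations = set().union(*(tests[t]["operation"] for t in members))
    print(f"{name}: {len(contents)} content facet(s), {len(operations)} operation(s)")
```

Consistent with the discussion above, the perceptual speed and working memory clusters span two content facets and two operations each, while the auditory memory span cluster spans only one of each.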
In summary, the results presented in Figure 12 suggest that it is possible to develop cluster scores that vary by degree of cognitive complexity within the same broad CHC domain. More important is the finding that the classification of clusters as broad or narrow does not provide information on a measure's cognitive complexity. Cognitive complexity, in the Lohman sense, can be achieved within CHC domains without resorting to mixing abilities across CHC domains. Finally, narrow clusters can be more cognitively complex, and thus likely better predictors of complex school achievement, than broad clusters or other narrow clusters.
Implications for Test Battery Design and Assessment Strategies
The recognition
of cognitive complexity as an important ITD principle suggests that the push to
feature broad CHC clusters in contemporary test batteries, or in the
construction of cross-battery assessments, fails to recognize the importance of
cognitive complexity. I plead guilty to contributing to this focus via my role in the design of the WJ III, which focused extensively on broad CHC domain construct representation—most WJ III narrow CHC clusters require the use of the third WJ III cognitive book (the Diagnostic Supplement; Woodcock, McGrew, Mather & Schrank, 2003). Similarly, I am guilty as charged regarding the dominance of broad CHC factor representation in the development of the original cross-battery assessment principles (Flanagan & McGrew, 1997; McGrew & Flanagan, 1998).
It is also my conclusion that the "narrow is better" conclusion of McGrew and Wendling (2010) may need modification. Revisiting the McGrew and Wendling (2010) results suggests that the narrow CHC clusters that were more predictive of academic achievement may have been so not because they are narrow, but because they are more cognitively complex. I offer the hypothesis that a more correct principle is that "cognitively complex measures" are better.
I welcome new research focused on
testing this principle.
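One simple (entirely hypothetical) way to begin testing this principle would be to rank clusters by their radex distance from center and check whether achievement validity coefficients track complexity. A minimal sketch with invented numbers:

```python
# A minimal sketch, assuming invented distances and validity coefficients:
# the "cognitively complex measures are better" hypothesis predicts a negative
# rank correlation between distance from the radex center (smaller = more
# complex) and predictive validity for achievement.
import numpy as np
from scipy.stats import spearmanr

dist_from_center = np.array([0.10, 0.30, 0.35, 0.45, 0.60])
validity = np.array([0.70, 0.55, 0.50, 0.45, 0.35])  # r with achievement

rho, p = spearmanr(dist_from_center, validity)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```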
In
retrospect, given the universe of WJ III clusters, a broad+narrow hybrid
approach to intelligence battery configuration (or cross-battery assessment) may
be more appropriate. Based exclusively on the results presented in Figure 12, the following clusters appear to be those that might best be featured in the "front end" of the WJ III or a selective-testing constructed assessment—those clusters that examiners should consider first within each CHC broad domain:
Fluid Reasoning (Gf)[4], Comprehension-Knowledge
(Gc), Long-term Retrieval (Glr), Working Memory (Gsm-MW), Phonemic Awareness 3
(Ga-PC), and Perceptual Speed (Gs-P). No
clear winner is apparent for Gv, although the narrow Visualization cluster is
slightly more cognitively complex than the Gv and Gv3 clusters. The above suggests that if broad clusters are
desired for the domains of Gs, Gsm and Gv, then additional testing beyond the “front
end” or featured tests and clusters would require administration of the
necessary Gs (Decision Speed), Gsm (Memory for Words) and Gv (Picture
Recognition) tests.
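A minimal sketch of this broad+narrow "front end" selection logic follows (the distance values are invented; real values would come from an MDS analysis like the one underlying Figure 12):

```python
# A minimal sketch, assuming hypothetical distance-from-center values: within
# each broad CHC domain, feature the cluster closest to the radex center
# (i.e., the most cognitively complex cluster in that domain).
complexity = {
    # (broad domain, cluster): distance from radex center (smaller = more complex)
    ("Gf",  "Fluid Reasoning (Gf)"):         0.10,
    ("Gc",  "Comprehension-Knowledge (Gc)"): 0.15,
    ("Glr", "Long-term Retrieval (Glr)"):    0.25,
    ("Gsm", "Short-Term Memory (Gsm)"):      0.40,
    ("Gsm", "Working Memory (Gsm-MW)"):      0.30,
    ("Gsm", "Aud. Memory Span (Gsm-MS)"):    0.60,
    ("Gs",  "Processing Speed (Gs)"):        0.45,
    ("Gs",  "Perceptual Speed (Gs-P)"):      0.35,
}

featured = {}
for (domain, cluster), dist in complexity.items():
    # Keep, within each domain, the cluster closest to the radex center
    if domain not in featured or dist < featured[domain][1]:
        featured[domain] = (cluster, dist)

for domain, (cluster, dist) in featured.items():
    print(f"{domain}: feature {cluster} (distance from center = {dist:.2f})")
```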
Utilization of the ITD principle of optimizing the within-CHC cognitive complexity of clusters suggests that a different emphasis and configuration of WJ III tests might be more appropriate. It is proposed that the above WJ III cluster complexity priority or feature model would likely allow practitioners to administer the best predictors of school achievement. I further hypothesize that this cognitive complexity-based broad+narrow test design principle most likely applies to other intelligence test batteries that have adhered to the primary focus of featuring tests that are the purest indicators of two or more narrow abilities within the provided broad CHC interpretation scheme. Of course, this is an empirical question that begs research with other batteries. More useful would be similar MDS radex cognitive complexity analyses of cross-battery intelligence data sets.[5]
References (not included in this post. The complete paper will be announced and made available for reading and download in the near future)
[1] This
does not mean that cognitive complexity may not be related to the integrity of
the human connectome or different brain networks. I am excited about contemporary
brain network research (Bressler & Menon, 2010; Cole, Yarkoni, Repovs,
Anticevic & Braver, 2012; Toga, Clark, Thompson, Shattuck, & Van Horn,
2012; van den Heuvel & Sporns, 2011), particularly that which has
demonstrated links between neural network efficiency and working memory,
controlled attention, and clinical disorders such as ADHD (Brewer, Worhunsky,
Gray, Tang, Weber & Kober, 2011; Lutz, Slagter, Dunne, & Davidson,
2008; McVay & Kane, 2012). The Parieto-Frontal Integration Theory (P-FIT)
of intelligence is particularly intriguing as it has been linked to CHC
psychometric measures (Colom, Haier, Head, Álvarez-Linera, Quiroga, Shih, & Jung, 2009; Deary, Penke, & Johnson, 2010; Haier, 2009; Jung & Haier,
2007) and could be linked to CHC cognitively-optimized psychometric measures.
[2] Only reading and math clusters were included, both to simplify the presentation of the results and because, as reported previously, reading and writing measures typically do not differentiate well in multivariate analyses—and thus form the Grw domain in CHC theory.
[3]
GIA-Ext is also represented by a black circle.
[4]
Although the WJ III Fluid Reasoning 3 cluster (Gf3) is slightly closer to the
center of the figure, the difference from Fluid Reasoning (Gf) is not large and
time efficiency would argue for the two-test Gf cluster.
[5] It is important to note that the cognitive complexity analysis and interpretation discussed here is specific to the WJ III battery only. The degree
of cognitive complexity in the WJ III cognitive clusters in comparison to
composite scores from other intelligence batteries can only be ascertained by
cross-battery MDS complexity analysis.