Saturday, November 30, 2024

On making individual tests in #CHC #intelligence test batteries more #cognitivelycomplex: Two approaches



The following information is from a section of the WJ IV technical manual (McGrew, LaForte, & Schrank, 2014) and will again be included in the WJ V technical manual (LaForte, Dailey, & McGrew, Q1, 2025). It was first discussed in McGrew (2012).

On making individual tests in intelligence test batteries more cognitively complex

In the applied intelligence test literature, there are typically two different approaches used to increase the cognitive complexity of individual tests (McGrew et al., 2014). The first approach is to deliberately design factorially complex CHC tests, or tests that deliberately include the influence of two or more narrow CHC abilities. This approach is exemplified by Kaufman and Kaufman (2004a) in the development of the Kaufman Assessment Battery for Children–Second Edition (KABC-II), where:

the authors did not strive to develop “pure” tasks for measuring the five CHC broad abilities. In theory, Gv tasks should exclude Gf or Gs, for example, and tests of other broad abilities, like Gc or Glr, should only measure that ability and no other abilities. In practice, however, the goal of comprehensive tests of cognitive abilities like the KABC-II is to measure problem solving in different contexts and under different conditions, with complexity being necessary to assess high-level functioning. (p. 16)

In this approach to test development, construct-irrelevant variance (Benson, 1998; Messick, 1995) is not deliberately minimized or eliminated. Although tests that measure more than one narrow CHC ability typically have lower construct validity as indicators of specific CHC abilities, they tend to provide support for other types of validity evidence (e.g., higher predictive validity). The WJ V has several new cognitive tests that use this approach to cognitive complexity.

The second approach to enhancing the cognitive complexity of tests is to maintain the CHC factor purity of tests or clusters (as much as possible) while concurrently and deliberately increasing the complexity of the information processing demands of the tests within the specific broad or narrow CHC domain (McGrew, 2012). As described by Lohman and Lakin (2011), the cognitive complexity of the abilities measured by tests can be increased by (a) increasing the number of cognitive component processes, (b) including differences in speed of component processing, (c) increasing the number of more important component processes (e.g., inference), (d) increasing the demands on attentional control and working memory, or (e) increasing the demands on adaptive functions (assembly, control, and monitoring). This second form of cognitive complexity, not to be confused with factorial complexity, is the inclusion of test tasks that place greater demands on cognitive information processing (i.e., cognitive load), that require greater allocation of key cognitive resources (viz., working memory or attentional control), and that involve more cognitive control or executive functions. Per this second form of cognitive complexity, the objective is to design a test that is more cognitively complex within a CHC domain, not to deliberately make it a mixed measure of two or more CHC abilities.

A large number of prior IQ's Corner posts on the topic of cognitive complexity in intelligence testing can be found here.

Benson, J. (1998). Developing a strong program of construct validation: A test anxiety example. Educational Measurement: Issues and Practice, 17(1), 10–22.

Lohman, D. F., & Lakin, J. (2011). Reasoning and intelligence. In R. J. Sternberg & S. B. Kaufman (Eds.), The Cambridge handbook of intelligence (2nd ed., pp. 419–441). New York, NY: Cambridge University Press.

McGrew, K. S. (2012, September). Implications of 20 years of CHC cognitive-achievement research: Back-to-the-future and beyond CHC. Paper presented at the Richard Woodcock Institute, Tufts University, Medford, MA.

Messick, S. (1995). Validity of psychological assessment: Validation of inferences from persons' responses in performances as scientific inquiry into score meaning. American Psychologist, 50, 741–749.