Technology alert.
Based on the recommendation of my work associate (Jeff Evans of Evans Consulting), I recently started using a piece of "virtual office" software called the Groove. Jeff and I have been using it for approximately two weeks.
I must say that this is one of the best pieces of software I've ever tried. KUDOS to Jeff for finding this gem. As many of us move toward the "flat world" (as per the best-selling book The World Is Flat) and begin to work and collaborate with others via "virtual offices," software like the Groove will become critical.
I'm not alone in my excitement for this software (which would be great for anyone doing collaborative research, work, etc. together). WinXPnews today gave it a great review.
If you collaborate with one or more people in different locations and are looking for a more efficient means of engaging in P2P (person-2-person) communication, file sharing, etc., this is the real deal.
I'm in da Groove................
Tuesday, May 31, 2005
Monday, May 30, 2005
Interpretation of Gf tests: Ideas from Wilhelm
In a prior post I summarized a taxonomic lens for analyzing performance on figural/spatial matrix measures of fluid intelligence (Gf). Since then I have had the opportunity to read "Measuring Reasoning Ability" by Oliver Wilhelm (see an earlier blog post on recommended books to read; this chapter is part of the Handbook of Understanding and Measuring Intelligence by Wilhelm and Engle). Below are a few select highlights.
The need for a more systematic framework for understanding Gf measures
As noted by Wilhelm, "there is certainly no lack of reasoning measures" (p. 379). Furthermore, as I learned when classifying tests as per CHC theory with Dr. Dawn Flanagan, the classification of Gf tests as measures of general sequential (deductive) reasoning (RG), inductive reasoning (I), and quantitative reasoning (RQ) is very difficult. Kyllonen and Christal's 1990 observation (presented in the Wilhelm chapter) still rings true: the "development of good tests of reasoning ability has been almost an art form, owing more to empirical trial-and-error than to systematic delineation of the requirements which such tests must satisfy" (p. 446 in Kyllonen and Christal; p. 379 in Wilhelm). It thus follows that the logical classification of Gf tests is often difficult…or, as we used to say when I was in high school, "no sh____, Batman!"
As a result, “scientists and practitioners are left with little advice from test authors as to why a specific test has the form it has. It is easy to find two reasoning tests that are said to measure the same ability but that are vastly different in terms of their features, attributes, and requirements” (p. 379).
Wilhelm’s system for formally classifying reasoning measures
Wilhelm articulates four aspects to consider in the classification of reasoning measures. These are:
- Formal operation task requirements – this is what most CHC assessment professionals have been encouraged to examine via the CHC lens. Is a test a measure of RG, I, RQ, or a mixture of more than one narrow ability?
- Content of tasks – this is where Wilhelm's research group has made one of its many significant contributions during the past decade. Wilhelm et al. have reminded us that even though the Rubik's cube model of intelligence (Guilford's SOI model) was found seriously wanting, the analysis of intelligence tests by operation (see above) and content facets is theoretically and empirically sound. I fear that many psychologists, having been burned by the unfulfilled promise of the SOI interpretative framework, have often thrown out the content facet with the SOI bath water. There is clear evidence (see my prior post that presents evidence for content facets based on a Carroll analysis of 50 CHC-designed measures) that most psychometric tests can be meaningfully classified as per stimulus content – figural, verbal, and quantitative.
- The instantiation of the reasoning tasks/problems – what is the formal underlying structure of the reasoning tasks? Space does not allow a detailed treatment here, but Wilhelm provides a flavor of this feature when he suggests that one must work through a "decision tree" to ascertain whether the problems are concrete vs. abstract. Following the abstract branch, further differentiation might occur vis-à-vis the distinction between "nonsense" and "variable" instantiation. Following the concrete branch, reasoning problems can be differentiated according to whether or not they require prior knowledge. And so on. As noted by Wilhelm, "it is well established that the form of the instantiation has substantial effects on the difficulty of structurally identical reasoning tasks" (p. 380).
- Vulnerability of task to reasoning "strategies" – as all good clinicians know (and have seen), certain examinees often change the underlying nature of a psychometric task via the deployment of unique metacognitive/learning strategies. I often call this the "expansion of a test's specificity by the examinee." According to Wilhelm, "if a subgroup of participants chooses a different approach to work on a given test, the consequence is that the test is measuring different abilities for different subgroups…depending on which strategy is chosen, different items are easy and hard, respectively" (p. 381). Unfortunately, research-based protocols for ascertaining which strategies are used during reasoning task performance are more-or-less non-existent.
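For readers who like to tinker, below is a minimal sketch (my own illustration, not Wilhelm's) of how a Gf test might be tagged on these four facets so that a collection of tests could be queried and compared. The test names and facet values are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GfTestClassification:
    """One row in a hypothetical Wilhelm-style taxonomy of reasoning measures."""
    test_name: str
    operations: List[str]         # CHC narrow abilities tapped, e.g. ["I"] or ["RG", "RQ"]
    content: str                  # "figural", "verbal", or "quantitative"
    instantiation: str            # e.g. "abstract-nonsense", "abstract-variable", "concrete-prior-knowledge"
    strategy_vulnerability: str   # rough rating: "low", "moderate", "high"

# Hypothetical classifications for illustration only -- not published codings.
catalog = [
    GfTestClassification("Figural matrices task", ["I"], "figural", "abstract-variable", "moderate"),
    GfTestClassification("Syllogism task", ["RG"], "verbal", "concrete-prior-knowledge", "high"),
    GfTestClassification("Number series task", ["I", "RQ"], "quantitative", "abstract-variable", "moderate"),
]

# Example query: which tests in the catalog are figural-content induction measures?
figural_induction = [t.test_name for t in catalog
                     if t.content == "figural" and "I" in t.operations]
print(figural_induction)
```

Even a toy catalog like this makes it obvious when two "Gf tests" agree on the operation facet yet differ on everything else.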
Ok…that's enough for this blog post. Readers are encouraged to chew on this taxonomic framework. I do plan (but don't hold me to the promise…it is a benefit of being the benevolent blog dictator) to summarize additional information from this excellent chapter. Wilhelm's taxonomy has obvious implications for those who engage in test development. It suggests a structure from which to systematically design/specify Gf tests as per the four dimensions.
On the flip side (applied practice), Wilhelm's work suggests that our understanding of the abilities measured by existing Gf tests might be facilitated via the classification of different Gf tests as per these dimensions. Work on the "operation" characteristic has been going strong since the mid-1990s as per the CHC narrow ability classification of tests.
Might not a better understanding of Gf measures emerge if those leading the pack on how best to interpret intelligence tests add (to the CHC operation classifications of Gf tests) the analysis of tests as per the content and instantiation dimensions, as well as identify the different types of cognitive strategies that different Gf tests might elicit in different individuals?
I smell a number of nicely focused and potentially important doctoral dissertations based on the administration of a large collection of available practical Gf measures (e.g., Gf tests from WJ III, KAIT, Wechslers, DAS, CAS, SB5, Ravens, and other prominent “nonverbal” Gf measures) to a decent sample, followed by exploratory and/or confirmatory factor analyses and multidimensional scaling (MDS). Heck….doesn’t someone out there have access to that ubiquitous pool of psychology experiment subjects --- viz., undergraduates in introductory psychology classes? This would be a good place to start.
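For any would-be dissertator wondering what the analytic core might look like, here is a bare-bones sketch of the factor-analysis-plus-MDS idea using Python's scikit-learn on simulated scores. Everything here is a placeholder (the data are random and the test count is arbitrary); real work would start from actual battery scores.

```python
# Exploratory factor analysis plus multidimensional scaling on a subject-by-test
# score matrix. Data are simulated for illustration only.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_subjects, n_tests = 300, 8
g = rng.normal(size=(n_subjects, 1))                      # one simulated common factor
loadings = rng.uniform(0.4, 0.8, size=(1, n_tests))
scores = g @ loadings + rng.normal(scale=0.6, size=(n_subjects, n_tests))

fa = FactorAnalysis(n_components=2).fit(scores)
print("Factor loadings:\n", np.round(fa.components_.T, 2))

# MDS on the inter-test correlation matrix, converted to distances (1 - r)
corr = np.corrcoef(scores, rowvar=False)
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(1 - corr)
print("MDS coordinates:\n", np.round(coords, 2))
```

The same score matrix feeds both analyses: factor loadings for the structural side, and a two-dimensional map of inter-test distances for eyeballing possible content and instantiation groupings.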
More later…I hope.
Thursday, May 26, 2005
Do bigger brains = higher intelligence?
I have always found the research relating brain size/volume to intelligence of interest---more as a piece of "cocktail trivia" general knowledge. I've never bothered to read/study the "why" behind this line of research, and the brief article cited below does not address it either. It simply presents a meta-analytic estimate of the population correlation between brain volume and intelligence.
The correlation reported is .33. In practical terms this means that measures of intelligence and brain volume share only about 11% of their variance. A significant finding....but not much in the way of practical implications (IMHO). I would not suggest that applied assessment professionals start carrying tape measures in their test kits.
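For anyone who wants to see where that shared-variance figure comes from, the arithmetic is just the squared correlation:

```python
# Shared variance between two measures is the squared correlation.
r = 0.33
print(f"r = {r}, r^2 = {r ** 2:.3f}")   # 0.109, i.e., roughly 11% common variance
```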
Just FYI "interesting" information.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, In Press, Corrected Proof.
Abstract
- The relationship between brain volume and intelligence has been a topic of a scientific debate since at least the 1830s. To address the debate, a meta-analysis of the relationship between in vivo brain volume and intelligence was conducted. Based on 37 samples across 1530 people, the population correlation was estimated at 0.33. The correlation is higher for females than males. It is also higher for adults than children. For all age and sex groups, it is clear that brain volume is positively correlated with intelligence.
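As a side note for the methodologically curious, a pooled estimate like the 0.33 reported here is built up from the individual sample correlations. Below is a generic sketch of one common approach (sample-size-weighted averaging of Fisher-z transformed r's); McDaniel's actual psychometric meta-analysis applies additional corrections, and the r's and n's in the sketch are invented for illustration.

```python
# Generic pooled-correlation sketch: Fisher-z transform each r, weight by sample
# size, back-transform. The values below are made up, not McDaniel's data.
import numpy as np

rs = np.array([0.41, 0.25, 0.38, 0.30])   # hypothetical sample correlations
ns = np.array([60, 45, 120, 80])          # hypothetical sample sizes

z = np.arctanh(rs)                               # Fisher r-to-z
z_bar = np.sum((ns - 3) * z) / np.sum(ns - 3)    # weight by n - 3 (inverse variance of z)
pooled_r = np.tanh(z_bar)                        # back-transform to r
print(round(float(pooled_r), 3))
```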
If anyone wants to research this literature in greater depth (big and small brain-headed scholars are all welcome - this is an equal brain-size opportunity blog), below are a few references I found in the IAP Reference DataBase:
- Colom, R., Lluis-Font, J. M., & Andres-Pueyo, A. (2005). The generational intelligence gains are caused by decreasing variance in the lower half of the distribution: Supporting evidence for the nutrition hypothesis. Intelligence, 33(1), 83-91.
- Haier, R. J., Chueh, D., Touchette, P., Lott, I. et al. (1995). Brain size and cerebral glucose metabolic rate in nonspecific mental retardation and Down syndrome. Intelligence, 20(2), 191-210.
- Lynn, R., Allik, J., & Must, O. (2000). Sex differences in brain size, stature and intelligence in children and adolescents: some evidence from Estonia. Personality and Individual Differences, 29(3), 555-560.
- Rushton, J. P. (2004). Placing intelligence into an evolutionary framework or how g fits into the r-K matrix of life-history traits including longevity. Intelligence, 32(4), 321-328.
- Rushton, J. P. (1991). "Mongoloid-Caucasoid differences in brain size from military sample": Reply. Intelligence, 15(3), 365-367.
- Rushton, J. P. (1991). Mongoloid-Caucasoid differences in brain size from military samples. Intelligence, 15(3), 351-359.
- Rushton, J. P. (1997). Cranial size and IQ in Asian Americans from birth to age seven. Intelligence, 25(1), 7-20.
- Wickett, J. C., Vernon, P. A., & Lee, D. H. (2000). Relationships between factors of intelligence and brain volume. Personality and Individual Differences, 29(6), 1095-1122.
- Willerman, L. (1991). "Mongoloid-Caucasoid differences in brain size from military samples": Commentary. Intelligence, 15(3), 361-364.
- Willerman, L., Schultz, R., Rutledge, J. N., & Bigler, E. D. (1991). In vivo brain size and intelligence. Intelligence, 15(2), 223-228.
Quote to note - measurement
"Measure what is measurable, and make measurable what is not so"
- Gottlob Frege (1848 - 1925), quoted in H. Weyl, "Mathematics and the Laws of Nature," in I. Gordon and S. Sorkin (Eds.), The Armchair Science Reader, New York: Simon and Schuster, 1959
CHC-grounded neuropsych math study
The following abstract (from a forthcoming article in the Journal of Clinical and Experimental Neuropsychology, 27, 1-11, 2005) somehow found its way into my in-box via the invisible university. It is a study by David Osmon at the University of Wisconsin-Milwaukee. This is all the information I have regarding this study.
- This study evaluated college age adults (N=138) referred for learning problems using a Cattell-Horn-Carroll based intelligence measure (Woodcock Johnson-Revised: WJ-R) and spatial and executive function neuropsychological measures to determine processing abilities underlying math skills. Auditory and visual perceptual (WJ-R Ga and Gv), long- and short-memory (WJ-R Glr and Gsm), crystallized and fluid intellectual (WJ-R Gc and Gf), and spatial and executive function (Judgment of Line Orientation [JLO] and Category Test) measures differentiated those with and without math deficits. Multiple regression revealed selective processing abilities (Gf, JLO, and Category) predicting about 16% of the variance in math skills after variance associated with general intelligence (also about 16%) was removed. Cluster analysis found evidence for a selective spatial deficit group, a selective executive function deficit group and a double deficit (spatial and executive function) group. Results were discussed in relation to a double deficit hypothesis associated with developmental dyscalculia.
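For readers unfamiliar with the "after variance associated with general intelligence was removed" language in the abstract, below is a rough sketch of that incremental-variance logic. The data are simulated placeholders (not Osmon's), and the point is only the mechanics: fit g first, add the specific predictors, and inspect the change in R².

```python
# Hierarchical regression sketch: incremental R^2 of specific predictors beyond g.
# All variables are simulated; only the logic mirrors the abstract's description.
import numpy as np

rng = np.random.default_rng(1)
n = 138                                   # same n as the study; data are simulated
g = rng.normal(size=n)
gf, jlo, category = (0.5 * g + rng.normal(scale=0.8, size=n) for _ in range(3))
math_skill = 0.4 * g + 0.3 * gf + 0.2 * jlo + 0.2 * category + rng.normal(size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y)), *predictors])   # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta).var() / y.var())

r2_g = r_squared([g], math_skill)                          # general intelligence only
r2_full = r_squared([g, gf, jlo, category], math_skill)    # g plus the specific measures
print(f"g alone: R^2 = {r2_g:.2f}; increment for Gf/JLO/Category: {r2_full - r2_g:.2f}")
```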
Labels: achievement, CHC theory, dyscalculia, Ga, Gf, Glr, Gq, Gsm, Gv, math, WJ-R
Tuesday, May 24, 2005
CHC listserv now over 700 members
I just checked the membership of the Cattell-Horn-Carroll (CHC) listserv. I was surprised. When I started this unmoderated list the goal was to reach 500 members. I hadn't checked for months, but all of a sudden that goal was reached and surpassed. As of today, n=716.
Spread the word to others who may be interested in participating in ongoing CHC and assessment related chatter.
Monday, May 23, 2005
Psych Daily - daily posts about psych in the news
I'm on a roll this evening. I've run across two new sources to monitor for useful and understandable psych-related information. See prior post re: Cognitive Daily. The other is Psych Daily.
Check it out and decide for yourself.
Keeping track of new cognitive psychology articles--daily readings
A while back I posted a brief summary of an article that dealt with choking under pressure. I just discovered a more thorough summary at the Cognitive Daily blog.
This blog supposedly posts a "new cognitive psychology article nearly everyday." I'm going to add it to my regular blog reading. Check it out.
Neuroscience e-journal alert blog
I just stumbled across a blog from ScienceDirect that posts references from the latest and greatest neuroscience journals. It is called e-journal alert. I will add it to my links soon.
Cattell-Horn-Carroll assessment article: Two thumbs up!
Below is the abstract (and conclusion statement) from a recent CHC overview article all school psychology trainers and practitioners should read. I wish I could post a pdf copy for viewing without getting in trouble with the copyright police.
Yes...it is about my preferred theory and approach to cognitive assessment - the Cattell-Horn-Carroll (CHC) theory of cognitive abilities. The beauty of Fiorello and Primerano's article is that it provides, IMHO, the most succinct and understandable synthesis of the state of the art of CHC research as it relates to school achievement and the viability of CHC-driven assessments, in the context of the changing role of cognitive assessment in special education.
Kudos to Fiorello and Primerano. I know that Cathy is a regular reader of this blog...maybe she might be willing to post a comment with her email so folks could request copies somehow :) Sorry Cathy....I couldn't resist applying some subtle pressure.
Two thumbs up from me-----ok.....no comments from the anonymous peanut gallery about how I can type with my two thumbs in the air!
Fiorello, C. A., & Primerano, D. (2005). Research into practice: Cattell-Horn-Carroll cognitive assessment in practice: Eligibility and program development issues. Psychology in the Schools, 42(5), 525-536.
Abstract
- In this article we explore the application of Cattell-Horn-Carroll (CHC)-based cognitive assessment to school psychology practice. We review the theoretical literature to address both identification practices, with a focus on learning disabilities and mental retardation eligibility, and program development, with a focus on linking assessment to intervention design. We present case studies that illustrate the application of CHC-based cognitive assessment to identification and intervention development.
Concluding statement
- "School psychology practice should occur in the context of research findings. This places a great burden on practitioners to stay current in the research literature. However, the burden also falls on researchers to ensure that our research addresses issues pertinent to practice in the real world. Based on our current state of knowledge, we recommend that practitioners use CHC theory when interpreting assessment findings. Although learning disabilities identification is in a state of flux, when using a clinical model for evaluation, research findings on the links between cognitive abilities and achievement should always be kept in mind. Evaluations serve two purposes, diagnosis/classification and recommendations for intervention. CHC-based assessments can provide information relevant for both identification and programming."
RTI and NCLB - my thoughts on a CBM measurement system
Although not directly related to intelligence testing and CHC theory, I feel a need to point the readers of this blog to what I believe may be one of the better measurement systems for implementing all the buzz surrounding response-to-instruction (RTI) in special education.
From the CBM systems I've examined, I've been most impressed with the underlying technical characteristics and measurement foundations of AIMSWEB. Check it out. I have no financial or contractual relations with this product. They are located in MN, but that is the only tie.
Saturday, May 21, 2005
Clinical interpretation of figural Gf matrices - a "radical" cross-battery idea
Matrices tests are frequently used as primary markers of fluid intelligence (Gf) and are often included in short/brief IQ screening batteries. And, as we know from the CHC taxonomy, different measures may invoke different types of reasoning, or combinations thereof (viz., induction [I]; general sequential [deductive] reasoning [RG]; quantitative reasoning [RQ]).
Astute clinicians often attempt to ascertain if an individual’s pattern of successes and failures on different item types (within the same Gf test) are systematic and suggestive of differential strengths and weaknesses within Gf.
Two studies (one previously published in Intelligence and one "in press") suggest an interesting framework by which to analyze the task demands of figural Gf items, regardless of their I, RG, or RQ demands. The framework is briefly summarized below. Those interested in the empirical studies, and the successful use of this framework in the development of an automatic figural matrices item generator, are encouraged to read the original articles (the Primi, 2001 citation can be found in Arendasy & Sommer, 2005, listed below).
- Arendasy, M., & Sommer, M. (2005). The effect of different types of perceptual manipulations on the dimensionality of automatically generated figural matrices. Intelligence, In Press, Corrected Proof.
According to Primi, and described by Arendasy and Sommer, the component processes of figural Gf matrices can be dissected as per four main item design features (called “radicals”). These include:
- Number of elements
- Number of rules
- Type of rules
- Perceptual organization.
According to recent research, the first two radicals are associated with the amount of information that has to be stored and processed in working memory (MW), and thus, may contribute to the explanation of the strong, positive, and significant relation between Gf and MW. Embretson (1998, 2002) has also associated the type of rules with working memory capacity--more difficult rules seem to put a heavier demand on the working memory capacity than easier rules. [See prior post for more MW->Gf discussion].
Finally, the perceptual organization features of figural matrices have been studied the least. This feature category involves: (a) perceptual features of the elements of the figural matrices and (b) the impact of the Gestalt perceptual principles of proximity, similarity, common region and continuity (Rock & Palmer, 1990; Wertheimer, 1923).
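To make the "radicals" idea concrete, here is a toy sketch of an item blueprint keyed to the four design features above, with a crude, completely made-up difficulty heuristic in the spirit of the working-memory-load findings just described. It is not Primi's or Arendasy and Sommer's model; the rule names and weights are placeholders.

```python
from dataclasses import dataclass

@dataclass
class MatrixItemSpec:
    """Toy blueprint for a figural matrix item, keyed to the four radicals above."""
    n_elements: int               # number of elements per cell
    n_rules: int                  # how many rules must be induced
    rule_types: list              # e.g. ["constant", "progression", "distribution-of-3"]
    perceptual_organization: str  # e.g. "high-proximity", "low-grouping"

    def crude_difficulty(self) -> float:
        """Made-up heuristic: working memory load grows with elements and rules,
        and 'harder' rule types add more than easier ones."""
        rule_weights = {"constant": 0.5, "progression": 1.0, "distribution-of-3": 1.5}
        return (0.3 * self.n_elements + 0.5 * self.n_rules
                + sum(rule_weights.get(r, 1.0) for r in self.rule_types))

item = MatrixItemSpec(n_elements=3, n_rules=2,
                      rule_types=["progression", "distribution-of-3"],
                      perceptual_organization="high-proximity")
print(round(item.crude_difficulty(), 2))
```

An automatic item generator, at bottom, samples blueprints like this and renders them as figures, which is what makes radical-based difficulty modeling possible in the first place.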
Interesting stuff…don't you all think? To me, from my table in the corner of Barnes & Noble on a Saturday night in St. Cloud, MN, might this suggest yet another set of dimensions by which Gf (figural) matrices tests might be analyzed in order to understand why individuals may perform differently on tests that, on face value, appear to measure the same abilities with the same general class of items?
Might CHC cross-battery principles be extended to yet another level (for this class of Gf tests) via the analysis and classification of figural Gf matrices as per these radicals [don't you love that word?---it brings me back to the '70s]? Inquiring minds (at least one in central MN) want to know. Dawn and Sam…what say you?
So many ideas and data…so little time. I need more caffeine.
Labels: CHC theory, Gf, Gsm, Gv, testing, working memory
Measurement of practical intelligence via video technology
For years (actually, since I finished my dissertation on the topic) I've found Stephen Greenspan's Model of Personal Competence, which includes (depending on which revision of his model one is examining) the broad domains of physical and emotional competence and conceptual, practical, and social intelligence, a useful meta-model for conceptualizing human competence. [Note - For CHC thinkers, CHC abilities would fall under the broad umbrella of Conceptual Intelligence.] However, a major stumbling block in research and applied measurement has been the difficulty of developing valid measures of social and practical intelligence.
Although based on a very small sample of convenience, Greenspan and Yalon-Chamovitz recently published the following encouraging article. The article reports on the promising use of technology (video portrayal of everyday events) in the development of potentially valid measures of the cognitive component of practical intelligence. The article reference and abstract are provided below.
Anyone interested in developing measures of the cognitive component of practical intelligence should give this brief article a quick look. I can envision subjects responding to questions based on their viewing of video clips of everyday events presented via the screen of a handheld PDA (personal digital assistant).
Yalon-Chamovitz, S., & Greenspan, S. (2005). Ability to identify, explain and solve problems in everyday tasks: preliminary validation of a direct video measure of practical intelligence. Research in Developmental Disabilities, 26(3), 219-230.
- Recent developments in the definitional literature on mental retardation emphasize the need to ground the concept of adaptive behavior in an expanded model of intelligence, which includes practical and social intelligence. Development of a direct measure of practical intelligence might increase the likelihood that an assessment of this domain would be included in the diagnostic process of mental retardation. The current paper reports on the preliminary exploration of the validity and utility of using a videotaped portrayal of everyday tasks, with built-in errors, as a measure of practical intelligence. A correlation of .79 was found between the practical intelligence video score and the Vineland domestic and community sub-domains score in 50 adults with mild and moderate mental retardation. This suggests that the instruments are essentially measuring the same domain of human competence. The unexplained variance may be attributed to the fact that the video measure is more directly measuring cognition.
Friday, May 20, 2005
Working memory, domain knowledge and higher-level cognitive performance
Hambrick, D. Z., & Oswald, F. L. (2005). Does domain knowledge moderate involvement of working memory capacity in higher-level cognition? A test of three models. Journal of Memory and Language, 52(3), 377-397.
What makes a person more competent on higher-level cognitive tasks? A larger reservoir of domain-relevant knowledge (software), larger working memory capacity (hardware), or the interaction of the two? Inquiring minds want to know. Contemporary cognitive research has implicated both working memory capacity (Gsm-MW) and domain knowledge (Gk) in higher-level cognition.
In their research article, Hambrick and Oswald investigated the interplay between Gk and MW. According to the authors, this study represents one of only a few studies that have investigated the interplay between MW capacity and depth of Gk. The authors characterize MW and Gk in the following manner:
- “Working memory capacity might be thought of as a stable component of higher-level cognition---a possible ‘hardware' aspect of cognition”
- “Domain knowledge…might be thought of as a modifiable ‘software’ aspect of cognition”
Predictions regarding higher-level cognitive performance in a relatively large sample (n=381…yes…you guessed it…undergraduate students in an introductory psychology course) were based on three competing hypotheses.
- Compensation hypothesis – Gk will attenuate the influence of MW capacity on higher-level cognition. In other words, greater Gk will reduce or eliminate the influence of MW capacity on domain-relevant tasks – there is an interaction effect.
- The rich-get-richer hypothesis – larger MW capacity will enhance the use of Gk on higher level cognitive tasks. “In other words, people with high levels of working memory capacity tend to benefit from domain knowledge more than those with lower levels.”
- The independent influences hypothesis – MW and Gk make independent “additive” contributions to higher-level cognition
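Before looking at the results, it may help to see how the three hypotheses map onto competing regression models: the compensation and rich-get-richer accounts both predict a meaningful MW x Gk interaction term (negative and positive, respectively), while the independent-influences account predicts additive effects only. The sketch below uses simulated data, not Hambrick and Oswald's.

```python
# Seeing the three hypotheses as competing regression models (simulated data).
import numpy as np

rng = np.random.default_rng(2)
n = 381                                   # same n as the study; data are simulated
mw = rng.normal(size=n)                   # working memory capacity ("hardware")
gk = rng.normal(size=n)                   # domain knowledge ("software")
# Simulated under the independent-influences hypothesis: additive effects, no interaction.
performance = 0.5 * mw + 0.5 * gk + rng.normal(size=n)

def r_squared(predictors, y):
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - ((y - X @ beta).var() / y.var())

additive = r_squared([mw, gk], performance)            # independent influences
moderated = r_squared([mw, gk, mw * gk], performance)  # compensation (negative) or
                                                       # rich-get-richer (positive) would
                                                       # show a gain from the mw*gk term
print(f"additive R^2 = {additive:.3f}, with interaction R^2 = {moderated:.3f}")
```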
The results revealed "greater use of baseball knowledge in the baseball task than in the spaceship task. However, even at high levels of baseball knowledge, this knowledge use did not alter the relationship between working memory capacity and task performance. This finding is inconsistent with compensation and rich-get-richer hypotheses. Instead, it suggests that working memory capacity and domain knowledge may operate independently under certain conditions".
Bottom line practical implications
- Hardware (the working memory capacity an individual brings to a specific task) and software (domain-relevant knowledge an individual brings to a specific task) combine to facilitate higher-level cognitive performance. Simply having a larger CPU (MW) is not the answer. Having the most sophisticated software (knowledge base) is insufficient. Both hardware and software make unique contributions to performing at a high level on demanding cognitive tasks. At least in adult populations, both domains need to be assessed when trying to predict/explain a person’s performance.
Thursday, May 19, 2005
Quote to note - Theories and truth
"Professors of every branch of the sciences prefer their own theories to truth: the reason is, that their theories are private property, but truth is common stock"
- Charles Caleb Colton, Lacon (1849)
Gottfredson on general intelligence (g) - Scientific American
While rummaging around the internet this evening I stumbled on an article in Scientific American about g (general intelligence). Linda Gottfredson presents a nice concise synthesis of the majority view on the existence of g.
As for me....I still do not know if g exists, and if it does, what it represents (although I recently posted some empirical data in support of the "working memory may be g" position). Much better minds than mine have been debating the g vs. no-g positions since the days of Spearman. I've personally sat in a small room and witnessed John Horn and the late Jack Carroll strongly argue (to put it in mild terms) both positions. Both made convincing points.
Regardless, as a practical matter, g-based IQ scores are indeed powerful predictors, on average, across a wide variety of domains. However, as reported in a prior post, a number of specific (narrow and/or broad) CHC abilities have been found to provide incremental validity above and beyond g.
MS Windows of the future - a sneak peek from the NY Times
Yes...we all bemoan Microsoft's monopoly with the Windows operating system. A love-hate relationship....one nurtured because we have become addicted to it as an operating-system drug...the most powerful one we all first experimented with (not counting the early DOS, CP/M, etc. days for those of us who go a ways back with PCs).
The NY Times Circuits section just posted a sneak peek at the next generation of Windows (nicknamed Longhorn).
Cognitive assessment of deaf and hard of hearing
Yep...this is information stolen from the National Association of School Psychologists general membership listserv. Thanks to Guy McBride whose post on 5-19-05 brought this resource to my attention. Guy...you da man!
Recommendations regarding the cognitive assessment of individuals who are deaf or hard of hearing can be found at the Gallaudet web page.
Thanks again Guy. Stealing posted information is a compliment in the currency of the internet :)
Accessible Reading Assessments workshop
FYI - Registration for NCEO's National Accessible Reading Assessments Topical Clinic (at CCSSO's annual National Conference on Large Scale Assessment) is now open.
The descriptive blurb (from the NCEO web page) is reproduced below
- "How can we ensure that our state reading assessments accurately reflect what ALL students know and are able to do, even those students with disabilities that affect reading? Accessible reading assessments that produce reading scores for all students are an essential part of inclusive accountability systems. Participants will engage in detective work to build understanding of how to best assess students with disabilities that affect reading. The National Accessible Reading Assessment Projects' definition of reading proficiency will serve as a set of clues for participants. With the assistance of NCEO's crack team of detectives (a.k.a. researchers), participants will be asked to think in and outside of the box to begin to unlock these mysteries!"
Large scale assessments & kids with disabilities
I have made it no secret that I provide consultative services (via service contracts) to the University of Minnesota National Center on Educational Outcomes (NCEO), which is one of two centers (the other being ETS) dealing with the federally funded National Accessible Reading Assessment Project.
Why?....because I believe passionately in the need to support the efforts of groups advocating on behalf of "kids on the margins" during nationally driven (NCLB) educational reform, especially as it relates to measurement/quantoid issues.
That being said, this is an FYI post.
NCEO has recently made the following information available via their web page. Please check the reports/links if you are interested in the issues related to the inclusion of students with disabilities in large scale accountability assessments. These are important issues.
Tuesday, May 17, 2005
Brain Candy and the Flynn Effect
FYI.
Someone recently directed me to Malcolm Gladwell's New Yorker book review, "Brain Candy: Is Pop Culture Dumbing Us Down or Smartening Us Up?" It is a pop review of a pop psychology book that delves into the Flynn Effect.
I must admit that I've not read the book myself.
Monday, May 16, 2005
Stanford-Binet 5 (SB5) post-publication resources: 5-20-13
This is an update of a post made a number of years ago...with new information
Stanford Binet 5 Assessment Service Bulletins (info from Riverside Publishing web page)
- SB5 Assessment Service Bulletin #1: History of the Stanford-Binet Intelligence Scales: Content and Psychometrics
- SB5 Assessment Service Bulletin #2: Accommodations on the Stanford-Binet Intelligence Scales, Fifth Edition
- SB5 Assessment Service Bulletin #3: Use of the Stanford-Binet Intelligence Scales, Fifth Edition in the Assessment of High Abilities
- SB5 Assessment Service Bulletin #4: Special Composite Scores for the Stanford-Binet Intelligence Scales, Fifth Edition
- Quality of Performance and Change-Sensitive Assessment for Cognitive Ability by Gale H. Roid
- Technical Brief - Interpretation of SB5/Early SB5 Factor Index Scores by Gale H. Roid
Sunday, May 15, 2005
Quote to note - Asimov on life and death
"Life is pleasant. Death is peaceful. Its the transition thats troublesome"
- Isaac Asimov, How Easy to See the Future, Natural History, April 1975
Saturday, May 14, 2005
Quote to note - da Vinci on science and mathematics
"No human investigation can be called real science if it cannot be demonstrated mathematically"
- Leonardo da Vinci, Treatise on Painting (1651)
Friday, May 13, 2005
Quote to note - purpose of models
"The purpose of models is not to fit the data but to sharpen the questions"
- Samuel Karlin, 11th R. A. Fisher Memorial Lecture, Royal Society, April 20, 1983
Thursday, May 12, 2005
Working memory - TheVillage - 2 more
Two more working memory abstracts from TheVillage working memory listserv were sent to me today.
- Author Name(s): Nelson Cowan, Emily M. Elliott, J. Scott Saults, Lara D. Nugent, Pinky Bomb, and Anna Hismjatullina
- Contact email: CowanN@missouri.edu
- Title: Rethinking Speed Theories of Cognitive Development: Increasing the Rate of Recall Without Affecting Accuracy
- Journal: Psychological Science
- Abstract: Researchers have suggested that developmental improvements in immediate recall stem from increases in the speed of mental processes. However, that inference has depended on evidence from correlation, regression, and structural equation modeling. We provide counterexamples in two experiments in which the speed of spoken recall is manipulated. In one experiment, second-grade children and adults recalled lists of digits more quickly than usual when the lists were presented at a rapid rate of 2 items per second (items/s). In a second experiment, children received lists at a 1 item/s rate but half of them were successfully trained to respond more quickly than usual, and similar to adults' usual rate. Recall accuracy was completely unaffected by either of these response-speed manipulations. Although response rate is a strong marker of an individual's maturational level, it thus does not appear to determine immediate recall. There are important implications for developmental methodology.
- Author Name(s): Jefferies, E., Frankish, C., Lambon Ralph, M. A.
- Contact email: beth.jefferies@manchester.ac.uk
- Title: Lexical and semantic influences on item and order memory in immediate serial recognition: Evidence from a novel task
- Journal: QJEP (A)
- Abstract: Previous studies have reported that, in contrast to immediate serial recall, lexical/semantic factors have little effect on immediate serial recognition: this has been taken as evidence that linguistic knowledge contributes to verbal short-term memory in a redintegrative process at recall. Contrary to this view, we found that lexicality, frequency and imageability all influenced matching span. The standard matching span task, requiring changes in item order to be detected, was less susceptible to lexical/semantic factors than a novel task involving the detection of phoneme order and hence item identity changes. Therefore, in both immediate recognition and immediate serial recall, lexical/semantic knowledge makes a greater contribution to item identity as opposed to item order memory. Task sensitivity, and not the absence of overt recall, may have underpinned previous failures to show effects of these variables in immediate recognition. We also compared matching span for pure and unpredictable mixed lists of words and nonwords. Lexicality had a larger impact on immediate recognition for pure as opposed to mixed lists, in line with findings for immediate serial recall. List composition affected the detection of phoneme but not item order changes in matching span; similarly, in recall, mixed lists produce more frequent word phoneme migrations but not migrations of entire items. These results point to strong similarities between immediate serial recall and recognition. Lexical/semantic knowledge may contribute to phonological stability in both tasks.
Working memory virtual home
As a follow-up to my last post, click here to visit the home page for TheVillage, a listserv dedicated to the exchange of scholarly information regarding working memory research.
Working memory listserv - TheVillage
Below is an FYI post I received from TheVillage working memory listserv. Interested IQ blogsters may want to consider joining this list.
The directions I received to join are to send an email to:
- majordomo@workingmemory.org
- subscribe thevillage
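If you prefer to script the subscription step rather than use your mail client, a minimal Python sketch is below; the sender address and outgoing SMTP host are placeholders you would replace with your own settings (only the majordomo address and the "subscribe thevillage" command come from the post above).

```python
# Minimal sketch of sending the subscription command with Python's smtplib.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.edu"            # placeholder sender address
msg["To"] = "majordomo@workingmemory.org"  # address given in the post
msg["Subject"] = ""
msg.set_content("subscribe thevillage")    # the command from the post

with smtplib.SMTP("smtp.example.edu") as server:  # placeholder outgoing mail server
    server.send_message(msg)
```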
FYI message reposted from TheVillage listserv
Dear colleagues,
- Below is TheVillage's monthly list of new, in-press papers related to WM received in April. The listings are presented in order of receipt during the month; each listing is separated by 5 asterisks. If you would like a copy of any of the papers listed and abstracted below, please email the contact author off-list.
- If you are an author who would like to announce your own in-press work, please see the instructions at the bottom of this message.
- If you have not yet joined the working memory listserve, but would like to, then please write the list owner, John Towse, at: j.towse@lancaster.ac.uk.
*****
- Author Name(s): Kevin Dent & Mary M Smyth
- Contact email: k.dent@bham.ac.uk
- Title: Capacity limitations and representational shifts in spatial short-term memory capacity
- Journal: Visual Cognition
- Abstract: Performance was examined in a task requiring the reconstruction of spatial locations. Previous research suggests that it may be necessary to differentiate between memory for smaller and larger numbers of locations (Postma and DeHaan, 1996), at least when locations are presented simultaneously (Igel and Harvey, 1991). Detailed analyses of the characteristics of performance showed that such a differentiation may also be required for sequential presentation. Furthermore the slope of the function relating each successive response to accuracy was greater with 3 than with 6, 8, or 10 locations which did not differ. Participants also reconstructed the arrays as being more proximal than in fact they were, however sequential presentation eliminated this distortion when there were 3 but not when there were more than 3 locations. These results support the idea that very small numbers of locations are remembered using a specific form of representation, which is unavailable to larger numbers of locations.
- Author Name(s): Kevin Dent & Mary M Smyth
- Contact email: k.dent@bham.ac.uk
- Title: Verbal coding and the storage of form-position associations in visual-spatial short-term memory
- Journal: Acta Psychologica
- Abstract: Short-term memory for form-position associations was assessed using an object relocation task. Participants attempted to remember the positions of either 3 or 5 Japanese Kanji characters, presented on a computer monitor. Following a short blank interval, participants were presented with 2 alternative Kanji, only 1 of which was present in the initial stimulus, and the set of locations occupied in the initial stimulus. They attempted to select the correct item and relocate it back to its original position. The proportion of correct item selections showed effects of both articulatory suppression and memory load. In contrast the conditional probability of location given a correct item selection showed an effect of load but no effect of suppression. These results are consistent with the proposal that access to visual memory is aided by verbal recoding, but that there is no verbal contribution to memory for the association between form and position.
- Author Name(s): Roy Allen, Peter McGeorge, David G. Pearson, Alan Milne.
- Contact email: roy.allen@abdn.ac.uk
- Title: Multiple-target tracking: A role for working memory?
- Journal: Quarterly Journal of Experimental Psychology Section A
- Abstract: In order to identify the cognitive processes associated with target tracking, a dual task experiment was carried out in which participants undertook a dynamic multiple-object tracking task first alone and then again, concurrently with one of several secondary tasks, in order to investigate the cognitive processes involved. The research suggests that after designated targets within the visual field have attracted preattentive indexes that point to their locations in space, conscious processes, vulnerable to secondary visual and spatial task interference, form deliberate strategies beneficial to the tracking task, before tracking commences. Target tracking itself is realized by central executive processes, which are sensitive to any other cognitive demands. The findings are discussed in the context of integrating dynamic spatial cognition within a working memory framework.
- Author Name(s): Deschuyteneer, M. & Vandierendonck, A.
- Contact email: maud.deschuyteneer@UGent.be
- Title: The role of response selection and input monitoring in solving simple arithmetical products
- Journal: Memory & Cognition
- Abstract: Several studies have already shown that the central executive, as conceptualised in the working memory model of Baddeley and Hitch (1974), is important in simple mental arithmetic. Recently, attempts have been made to define more basic processes that underlie the "central executive". In this vein, monitoring, response selection, updating, mental shifting, and inhibition have been proposed as processes capturing executive control. Previous research has shown that secondary tasks which require a choice decision impair the calculation of simple sums, whereas input monitoring was not found to be a sufficient condition to impair the calculation of the sums (Deschuyteneer & Vandierendonck, in press). In the present paper we report data on the role of input monitoring and response selection in solving simple arithmetical products. In four experiments subjects solved one-digit products (e.g., 5 x 7) in a single-task as well as in dual-task conditions. Just as for solving simple sums, the results show a strong involvement of response selection in calculating simple products, while input monitoring does not seem to impair the calculation of such products. These findings give additional evidence that response selection may be one of the processes needed for solving simple mental arithmetic problems.
AUTHOR INSTRUCTIONS
Once a month, a list of in-press papers and their abstracts will be distributed through the WM Village listserve. If you are an author who would like your paper listed, then please contact me off-list (at mjkane@uncg.edu), as I will be collating the information.
In the subject line of the email, please write:
- "In Press, [Month, Year], [1st Author Name]"
- (where month and year refer to the date your email is sent; this will assist with the job of sorting and tracking messages)
- The email itself should contain the referencing information. The format we'd prefer is given below (using a standard template will help subscribers peruse the listings for the information they desire).
- Author Name(s):
- Contact email:
- Title:
- Journal:
- Abstract:
The value of this preprint information list depends on subscribers sharing preprint information, so we do strongly encourage you to let the list know if you have relevant articles accepted for publication.
A quote to note: Committees
"A committee is a cul-de-sac down which ideas are lured and then quietly strangled"
- Sir Barnett Cocks, in New Scientist, 1973
Wisconsin Card Sorting Test and ADHD
FYI meta-analytic review of sensitivity of Wisconsin Card Sorting Test for ADHD.
- More and more frequently, the presence of executive function deficits appears in the research literature in conjunction with disabilities that affect children. Research has been most directed at the extent to which executive function deficits may be implicated in specific disorders such as attention deficit hyperactivity disorder (ADHD); however, deficits in executive function have been found to be typical of developmental disorders in general. The focus of this paper is to examine the extent to which one frequently used measure of executive function, the Wisconsin Card Sorting Test (WCST), demonstrates sensitivity and specificity for the identification of those executive function deficits associated with ADHD as well as its use with other developmental disorders through meta-analytic methods. Evidence of sensitivity of the WCST to dysfunction of the central nervous system is reviewed. Effect sizes calculated for all studies compared groups of children on differing variables of the WCST. The results of this meta-analysis suggest that across all of the studies, individuals with ADHD fairly consistently exhibit poorer performance as compared to individuals without clinical diagnoses on the WCST as measured by Percent Correct, Number of Categories, Total Errors, and Perseverative Errors. Notably, various other clinical groups performed more poorly than the ADHD groups in a number of studies. Thus, while impaired performance on the WCST may be indicative of an underlying neurological disorder, most likely related to frontal lobe function, poor performance is not sufficient for a diagnosis of ADHD. Implications for further research are presented.
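For readers less familiar with the sensitivity/specificity terminology used in the abstract, here is a short, self-contained Python illustration of how the two indices are computed from a 2 x 2 classification table. The counts are invented for illustration only; they are not values from the Romine et al. (2004) meta-analysis.

```python
# Toy illustration of sensitivity and specificity for a hypothetical WCST cut score.
# All counts are made up for illustration; they are NOT from Romine et al. (2004).

true_positives = 40    # children with ADHD correctly flagged by a "poor" WCST score
false_negatives = 60   # children with ADHD the cut score missed
true_negatives = 85    # non-clinical children correctly classified as unimpaired
false_positives = 15   # non-clinical children incorrectly flagged as impaired

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.2f}")  # proportion of true cases detected
print(f"specificity = {specificity:.2f}")  # proportion of non-cases correctly passed
```

In "thermometer" terms, a test can be reasonably sensitive to general dysfunction yet have poor specificity for any single diagnosis, which is exactly the practical point made above.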
Wednesday, May 11, 2005
The intelligence testing world is being "flattened" - The World is Flat
There is little doubt (IMHO) that the buzz surrounding one of the current best-selling non-fiction books, "The World Is Flat: A Brief History of the Twenty-First Century" by Thomas L. Friedman, is appropriate.
I'm only 70+ pages into the book and I must say that Friedman's thesis regarding the impact of globalization on the world is very interesting. His thoughts on the "ten forces that flattened the world" are very thought-provoking. In his discussion of Flattener #3 (Work Flow Software), he quotes (in the context of the impact of the development of standard internet protocols) Joel Crawley, the head of IBM's strategic planning unit, as stating that:
- "Standards don't eliminate innovation, they just allow you to focus it. They allow you to focus on where the real value lies, which is usually everything you can add above and beyond the standard" (p. 76).
Thus, if my late night extrapolation/generalization has any merit, and if I'm correct in my conclusion/prediction that most test developers are (or will) jump on some variation of the CHC bandwagon in the near future (if not already), we in the field of applied psychometric intelligence testing now have the CHC standard.
If true, the logical prediction follows that as more and more intelligence batteries adopt a CHC framework, the real innovation in intelligence testing will come from those who add "value" and those who produce instruments that "add above and beyond the standard."
You heard it first here...the recognition of the CHC cognitive ability framework may be a key world-flattening event in the world of intelligence test development. Value-added instruments that provide more than CHC construct-valid measures will be what folks ask for and what test developers need to focus on.
Tuesday, May 10, 2005
Fourth Spearman Conference Presentations available
FYI - The Fourth Spearman Conference ("Diagnostics for Education: Theory, Measurement, Applications") was held in Philadelphia, Pennsylvania, USA, on October 20-23, 2004.
By clicking here, blogsters can visit a page and view/download the presentations "as is." Below is a list of the presentations for which material has been posted.
- Aldrich, Clark, Educational Simulations - Simulation; Games; and Pedagogy
- Flanagan, Dawn P., The Role of IQ Tests in LD Identification
- Graesser, Ozuru, Rowe and Floyd, Enhancing the Landscape and Quality of Multiple Choice Questions
- Graf, Steffen, Peterson, Saldivia and Wang, Designing and Revising Quantitative Item Models
- Hunt, Earl, Patterns of Thought
- Matthews, Gerald, Picking a Bone with Spearman: A Contrary View of Mental Energy and Attention
- Mislevy, Robert J., Cognitive Diagnosis as Evidentiary Argument
- Pennebaker, James W., What Our Words Say About Us
- Sabatini, John, Critical Issues in Aligning Psychometrics with Current Reading Models
- Stout, William, The Reparameterized Unified Model Skills Diagnostic System
- Swanson, H. Lee, Memory and Learning Disabilities: Historical Perspective and Current Status
- van der Linden, Wim J., Statistical Models for Diagnosis: A Discussion
- VanLehn, Kurt, The Andes Intelligent Tutoring System: Lessons Learned about Assessment
- von Davier, Matthias and Yamamoto, Kentaro, A Class of Models for Cognitive Diagnosis
- Wiliam, Dylan, From Diagnosis to Action: Toward Instructionally Tractable Assessment
- Wilson, Mark, Cognitive Diagnosis using Explanatory Item Response Models
- Wittmann, Werner W., Brunswik-Symmetry: A Key Concept for Successful Assessment in Education and Elsewhere
Berlin BIS model of intelligence--material to review
I previously suggested that American intelligence scholars need to pay more attention to the Berlin BIS faceted model of intelligence and how it can interface with CHC theory (see March 28, 2005 post). In response to my recent post ("g, working memory, specific CHC abilities and achievement"), Werner Wittmann extends this recommendation and, more importantly, directs IQ blogsters to a number of on-line papers and PowerPoint presentations.
For those who did not notice Werner's comment, below are the links he provided
- 2004 PowerPoint presented at APS in Atlanta - deals with working memory, intelligence and Phil Ackerman's PPIK theory under a Brunswik symmetry perspective. [Editorial note - I'm a big fan of Ackerman's PPIK theory, particularly his work on aptitude-trait complexes, which follows in the footsteps of Richard Snow's work on aptitudes.]
- 2004 presentation at the 4th Spearman conference - relating Brunswik symmetry to education
Sunday, May 08, 2005
g, working memory, specific CHC abilities and achievement
[Another set of "hidden on my hard disk" analyses that now see the light of day.]
Much has been written, both in the theoretical and applied intelligence theory/assessment literature, about
- The definition of g (general intelligence),
- Whether g exists,
- The importance of g in the prediction of a wide variety of outcomes, and
- If g exists, the importance of specific broad and narrow stratum abilities above and beyond g in the prediction of outcomes
By way of background, I have previously summarized the primary issues in “Betwixt, Behind, and Beyond g.” (Editorial note - please read this prior information before going further in this post.) In those prior writings I presented SEM analyses (of the WJ III norm data) that supported the hypothesis that working memory (actually, cognitive efficiency as defined by Gs and working memory) has a strong causal relationship to g (i.e., if you believe g is a workable and sound construct.) I also summarized research suggesting that a number of broad and narrow CHC abilities are significant above and beyond the influence of g.
So... what is the topic of this post? Simple. To combine the working memory --> g information processing causal hypothesized model with the more traditional psychometric g + specific abilities research model.
Using the same norm samples as described in Betwixt, Behind, and Beyond g, I ran SEM causal models that defined g via the broad CHC latent abilities of Gv, Ga, Glr, Gf, and Gc. Additionally, adhering to a general information processing causal CHC model (click here to see prior relevant post), Gs and memory span (MS) are specified to be causally related to working memory, and working memory is in turn causally related to g.
The new twist is the specification of a causal path from g (as defined above) to achievement variables (letter-word identification; word attack) plus the inclusion of any significant paths from the cognitive efficiency variables (Gs and working memory) and/or the CHC abilities (that define the measurement model for g) to the dependent achievement latent variables.
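For readers curious what such a specification looks like in code, below is a minimal sketch of the hypothesized causal structure using the Python semopy package, whose model syntax follows lavaan conventions. The variable names, the data file, and the treatment of the broad CHC abilities as observed composites are my own illustrative assumptions; this is not the actual WJ III model, data, or analysis code.

```python
# Rough sketch of the hypothesized causal structure:
#   Gs and memory span (MS) -> working memory (WM) -> g -> achievement.
# Assumes the semopy package (lavaan-style syntax); all variable names are
# placeholder composites, NOT the actual WJ III indicators or analyses.
import pandas as pd
from semopy import Model

MODEL_DESC = """
g =~ Gv + Ga + Glr + Gf + Gc
WM ~ Gs + MS
g ~ WM
LetterWordIdent ~ g
WordAttack ~ g
"""

data = pd.read_csv("chc_composites.csv")  # hypothetical file: one column per composite

model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```

Any additional direct paths from the cognitive efficiency variables or from the broad CHC abilities to the achievement factors (the "above and beyond g" question) would simply be added as further regression lines in the same model description.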
All plausible models are presented and can be viewed by clicking here (Editorial note – for each age group presented, the models reported are all equally plausible. The model fit statistics did not identify any of the models in the two age groups as being better fits than the others).
Enjoy. I’d like to encourage the readers of this blog to offer interpretations and hypotheses for the various models. My only comment is that if you believe in g, the hypothesis that g is strongly related to working memory (actually, to the broader notion of cognitive efficiency) is supported. Furthermore, some specific CHC abilities are again found to provide potentially important insights above and beyond the effect of g within a CHC-based information processing model.
For all who immediately shout “but what about parsimony?...what about Occam’s Razor?”…I defer to Stankov, Boyle, and Cattell (1995; Models and paradigms in personality and intelligence research. In D. Saklofske & M. Zeidner (Eds.), International Handbook of Personality and Intelligence. New York: Plenum Press), who state, within the context of research on human intelligence, that:
- “While we acknowledge the principle of parsimony and endorse it whenever applicable, the evidence points to relative complexity rather than simplicity. Insistence on parsimony at all costs can lead to bad science.”
Labels: attention, CHC theory, factor analysis, g (gen IQ), Ga, Gc, Gf, Glr, Grw, Gs, Gsm, Gv, SEM, working memory
Thursday, May 05, 2005
"Current Challenges in Educational Testing" conference
FYI - "pass along" post from NCME listserv
ACT and the Center for Advanced Studies in Measurement and Assessment (CASMA) of the University of Iowa are sponsoring a one-day conference on Saturday, November 5, 2005, at ACT's conference facilities in Iowa City.
A Keynote Address will be given by Laura Schwalm, Superintendent, Garden Grove Unified School District, Orange County, CA, winner of the 2004 Broad Prize for Urban Education.
Sessions and speakers include:
- (1) Welcome (Richard Ferguson, CEO, ACT; David Skorton, President, University of Iowa)
- (2) K-12 Testing and NCLB (Robert Linn, University of Colorado and CRESST)
- (3) Computerized Grading of Essays (eRater; KAT; Vantage Learning)
- (4) NAEP 12th Grade Testing (Sharif Shakrani, National Assessment Governing Board; Ted Stilwill, former Director of Iowa Department of Education)
- (5) Non-cognitive Assessment in College Admissions and the Workforce (Robert Sternberg, Yale; Michael Campion, Purdue; Paul Sackett, University of Minnesota).
- (6) Panel Discussion on a "Hot Topic"
- (7) Reception
The registration fee is $50 (includes breakfast, lunch, and reception). The registration deadline is October 7, 2005. To register starting June 6, and to obtain travel and hotel information, go to http://www.act.org/casma.
Contributing sponsors include the College Board, CTB/McGraw Hill, Harcourt Assessment, Measured Progress, National Evaluation Systems, Riverside Publishing Company, and Vantage Learning.
Wednesday, May 04, 2005
Conative abilities and "aptitude"
I've long been a believer that educational/school psychologists need to pay more attention to non-cognitive variables (conative abilities) when discussing a student's aptitude for school learning. A major barrier to attending to these abilities has been a shortage of empirically based conceptual/theoretical models and practical measures.
Although there is still a clear void in the area of practical measurement tools, significant progress has been made the past 10 years in the development of theoretical models of important conative abilities. Two books are suggested for those who want to learn more.
First, and probably the most difficult read, is Remaking the Concept of Aptitude: Extending the Legacy of Richard Snow. This posthumously published book, in honor of Richard Snow, provides (in one place) a summary of the "state of the art" of Snow's work on aptitude...which, in simple terms, requires an integration of cognitive and conative abilities into a coherent theoretical framework. The reader is forewarned---this book shows the signs of being authored by committee. At times it is not coherent, material is not well linked, and it becomes difficult to see the forest for the trees.
The recently published Handbook of Competence and Motivation (Elliot & Dweck, 2005) represents a monumental integration of the empirical/conceptual progress made during the past few decades regarding the most salient conative abilities (e.g., achievement goal orientation, self-theories, self-regulatory learning processes, etc.). Having recently spent the better part of 2 years searching for contemporary research in the diverse domain of conative abilities, I believe that this edited volume provides the best contemporary integration of this vast literature. An expensive book (yes, I know..."tell me something new"), but probably the best single integrative work I've seen to date.
Hopefully these books will stimulate the development of applied measures that can narrow the theory-practice gap.
Tuesday, May 03, 2005
FYI - Used/rare books in psychology-Powells.com
FYI tidbit
If anyone likes to purchase used and/or rare books (as I do), I would suggest subscribing to a service from one of the largest used bookstores I've ever seen - Powell's Books in Portland, Oregon. Book lovers could get lost in this store for days.
I receive via email (every day) a listing of new books posted in three areas - psychology, mathematics, and rare books. I often find a book I want for my collection and then simply order. The service has always been great.
If blogsters want to take a peek, click here and you will be taken to the list of currently featured psychology books. From that page you can navigate around to other topics and find the page where you can subscribe. E-mailed book reviews are another feature I enjoy.
Cognitive Efficiency and achievement
The following article, which is "in press" in Intelligence, provides interesting information regarding the potential importance of measures of cognitive efficiency in predicting/explaining school achievement. The abstract is printed below along with a few highlights from the study.
Luo, D., Thompson, L. A., & Detterman, D. K. (2005). The criterion validity of tasks of basic cognitive processes. Intelligence, In Press, Corrected Proof.
Abstract
- The present study evaluated the criterion validity of the aggregated tasks of basic cognitive processes (TBCP). In age groups from 6 to 19 of the Woodcock-Johnson III Cognitive Abilities and Achievement Tests normative sample, the aggregated TBCP, i.e., the processing speed and working memory clusters, correlate with measures of scholastic achievement as strongly as the conventional indexes of crystallized intelligence and fluid intelligence. These basic processing aggregates also mediate almost exhaustively the correlations between measures of fluid intelligence and achievement, and appear to explain substantially more of the achievement measures than the fluid ability index. The results from the Western Reserve Twin Project sample using TBCP with more rigorous experimental paradigms were similar, suggesting that it may be practically feasible to adopt TBCP with experimental paradigms into the psychometric testing tradition. Results based on the latent factors in structural equation models largely confirmed the findings based on the observed aggregates and composites.
- The measures of TBCP in the present study were taken from two data sources, Woodcock-Johnson III Cognitive Abilities and Achievement Tests (W-J III; Woodcock et al., 2001a, 2001c; Woodcock, McGrew, & Mather, 2001b) normative data and the Western Reserve Twin Project (WRTP) data. The WJ III results will be the focus of this post.
- Luo et al. examined (via multiple regression and SEM) the extent to which measures of what the WJ III authors (myself included – see home page conflict of interest disclosure) call “cognitive efficiency” (CE - Gs and Gsm tests/clusters) add to the prediction of total achievement, above and beyond Gc and Gf.
- These researchers found that CE measures/abilities demonstrated substantial correlations with scholastic performance (WJ III Total Achievement). The CE-Ach correlations were similar to correlations between conventional test composites and scholastic performance. These results suggested that measures of CE provide incremental predictive validity beyond Gc (a minimal sketch of this kind of incremental comparison appears after this list). Collectively, CE+Gc accounted for approximately 60% of the variability in achievement when observed measures were analyzed (multiple regression) and up to 70% or more of the variance when the SEM latent traits were analyzed. The authors concluded that these levels of prediction were "remarkable."
- Gf measures did NOT contribute significantly to the prediction of achievement beyond that already accounted for by the CE measures. [Editorial note – my prior research with the WJ III and WJ-R suggests this result may be due to the authors using “total achievement” as their criterion. My published research has consistently found that Gf is an important predictor/causal variable in the domain of mathematics].
- A potential explanation for the power of the CE measures/variables has previously been published and posted to the web (click here).
- The current results, IMHO, fit nicely within CHC-based information processing model frameworks that have been suggested. Simplified schematic models (based on the work of Woodcock) can be viewed by clicking here.
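As noted in the bullets above, the core analytic move in this kind of study is incremental prediction: does adding cognitive efficiency (CE) improve on Gc (or Gf) alone? Below is a minimal, self-contained Python sketch of that kind of nested-model R-squared comparison. The variable names and simulated values are stand-ins of my own, not the actual WJ III norm data or the authors' analysis.

```python
# Minimal sketch of an incremental (hierarchical) R^2 comparison.
# Simulated stand-in variables, NOT the WJ III norm data.
import numpy as np

rng = np.random.default_rng(0)
n = 500
gc_score = rng.normal(size=n)                      # stand-in for a Gc composite
ce_score = 0.5 * gc_score + rng.normal(size=n)     # stand-in for a CE (Gs + WM) composite
ach = 0.4 * gc_score + 0.4 * ce_score + rng.normal(size=n)  # stand-in for total achievement

def r_squared(y, X):
    """R^2 from an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_gc_only = r_squared(ach, gc_score.reshape(-1, 1))
r2_gc_plus_ce = r_squared(ach, np.column_stack([gc_score, ce_score]))
print(f"Gc alone: R^2 = {r2_gc_only:.2f}")
print(f"Gc + CE:  R^2 = {r2_gc_plus_ce:.2f} (increment = {r2_gc_plus_ce - r2_gc_only:.2f})")
```

With real data the increment would of course be tested formally (e.g., an F test on the change in R-squared, or the SEM-based approach the authors used), but the comparison logic is the same.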