As a professional courtesy, and as a result of a thread on the CHC listserv regarding the importance of Gv abilities in academic performance, I have posted a copy of "Application of SB5 results to the classroom," a poster paper written by Andrew Carson (a Riverside Publishing employee) and Gale Roid (SB5 author) and presented by Carson at APA 2004. Although written with regard to the SB5, the information (on the relevance of CHC broad abilities to classroom performance and activities) could be generalized to other measures of CHC broad ability composites.
All the usual disclaimers apply. In particular, IQ blogster readers should read my 3-25-05 "Airport ruminations: Glass on evaluating scholarly e-pubs" post on the need for readers to serve as their own filters/reviewers of material that has not been reviewed by formal journal boards.
If other readers have papers that are relevant to this blog, and that won't cause copyright problems, contact me and I'll take a look. Maybe this could become a regular feature of this blog: dissemination of prepublication research findings and other "fugitive" literature (with all caveats to the reader).
Tuesday, March 29, 2005
Monday, March 28, 2005
g, CHC theory and genetics: FYI on another blog
FYI post - I just ran across a discussion of g and CHC theory on the Gene Expression blog, a blog dealing with contemporary genetic research and its relationship to other disciplines. According to the Gene Expression home page, the purpose of this blog is as follows:
- "We are a collection of individuals interested in exploring the cutting edge of genetics and its intersection with other disciplines and everyday life (or the inverse!). We are liberal, conservative, libertarian, white, brown, Asian, male, female, Jewish, gentile, religious, nonreligious, American, non-American and many other labels. We are united by a reverence for the exploration of the heterodox in an empirical and analytic fashion drained of excessive emotive enthusiasm or revulsion. We are part of the remnant."
CHC-related book chapter free for download
Guilford Press has made one chapter from the new Flanagan and Harrison "Contemporary Intellectual Assessment" (CIA2) book available for download or online viewing.
The chapter (Impact of Cattell-Horn-Carroll Theory on Test Development and Interpretation of Cognitive and Academic Abilities) is by Alfonso, Flanagan, and Radwan. Click on the chapter title and you should be taken directly to the chapter.
Carroll Analyses of 50 CHC tests: Rnd 4: Two hypothesized higher-stratum models
Time to move to a higher level!
I will be attending the 2005 NASP conference this week. Although I don’t have time to interpret/discuss the results in detail, I decided that I would post the 2nd- and 3rd-order factor structures identified in the Carroll Analyses of 50 CHC-designed tests (see 3-17, 3-18, & 3-26-05 posts) with little comment—folks can chew on the results, and fellow NASPites might want to corner and chat with me about the results in Atlanta.
Click here and you can view/download/print the entire pdf file…which includes the descriptions of the tests used, the first-order results (already discussed) and two equally plausible higher-order structural models. These are then followed by two pages that provide background information regarding the terms (based on the dual cognitive processing model of information processing--with major distinctions between controlled vs automatic processing and process vs product-dominant abilities) suggested for one of the plausible models.
The second plausible structure labels the factors as per the Berlin Model of Intelligence Structure (BIS), a model I believe American psychologists need to become more familiar with. The Berlin BIS is a faceted model of intelligence (see Süß & Beauducel, 2005). Briefly, a facet model, which has its roots in the work of Guttman and the radex models, attempts to describe/classify cognitive tasks with regard to several characteristics (typically mental operations and content—test format/stimuli). In contrast to the more-or-less discredited Guilford SOI model, none of the 12 BIS cells (4 operations x 3 content facets) has the status of an ability factor; instead, the cells are used to classify performance on measures.
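To make the facet idea concrete, here is a minimal Python sketch of how a 4-operations-by-3-contents BIS-style grid classifies measures into 12 cells. The operation/content labels and the example measures below are illustrative placeholders of mine, not the official BIS task list.

```python
# Hypothetical sketch of a BIS-style facet classification: each measure
# is tagged with one operation and one content facet, yielding a 4 x 3
# grid of 12 cells used to classify (not factor-analyze) performance.
from itertools import product

operations = ["processing speed", "memory", "creativity", "processing capacity"]
contents = ["figural", "verbal", "numerical"]

# Every operation-content pairing defines one classification cell.
cells = {(op, c): [] for op, c in product(operations, contents)}

# Classify a few hypothetical measures into cells.
measures = {
    "digit series recall": ("memory", "numerical"),
    "figure rotation": ("processing speed", "figural"),
    "word fluency": ("creativity", "verbal"),
}
for name, cell in measures.items():
    cells[cell].append(name)

assert len(cells) == 12  # 4 operations x 3 content facets
```

The key design point is that the cells are a cross-classification scheme, not a list of latent factors.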
As you can see, the second (BIS) alternative higher-order structure suggests that CHC theorists may benefit from incorporating information regarding test content (figural, verbal, numerical) in possible "tweaks" to the CHC model framework. Collectively, both higher-order structures presented in these analyses suggest a possible intermediate stratum that should be entertained in CHC-driven research, particularly if the results can be replicated in additional data sets.
Enjoy. I hope that some of the regular IQ blogsters will take a stab at commenting on these two plausible higher-order CHC frameworks. Much to ponder.
Sunday, March 27, 2005
New blog link: Myomancy
I've added a link to the Myomancy blog in the link section of my page. Although it does not deal with specific intelligence theories and test topics, this blog presents information of potential interest to the IQs Corner blog readership. The blog describes itself as "Independent news and reviews of alternative therapies relating to Dyslexia, ADHD and the Autistic spectrum."
I make no endorsement of their material or statements. This is an FYI courtesy link.
Saturday, March 26, 2005
"A beautiful mind" - High Gv and MindManager
One of my favorite scenes from the movie "A beautiful mind" is when Professor John Nash has a wall covered with various newspaper and magazine clippings that are interconnected in complex ways via strings…it looks like a complex spider web.
Whenever I reflect on that scene, I think about the powerful creativity-enhancing piece of software I stumbled across a few years ago (note: no commercial interest or conflict of interest for this plug) that has revolutionized how I, a person high in Gv thinking abilities (please don't ask what I'm low in), organize almost all my professional thinking. By clicking here, you can visit one such map as posted on the web, and then click your way through the topics and subtopics.
The program is called MindManager and is published by MindJet. High Gv thinkers, especially those who like to draw mind or concept maps, should take a peek at this tool. I couldn't live without it.
Carroll Analyses of 50 CHC tests: Rnd 3: Speed
An interesting finding in the Carroll Analyses of 50 CHC-designed tests (see 3-17 & 3-18-05 posts) is the emergence of separate cognitive (Gsc) and achievement (Gsa) factors, a distinction I have not found (although it may be out there somewhere) in the human cognitive abilities literature. Does this distinction make sense? I think so.
First, one potential distinction between the Gsc and Gsa factors/abilities may be a content facet dimension. The two strongest Gsc tests (Pair Cancellation and Visual Matching), as well as three of the four highest loading Gsc tests, require individuals to process visual/figural stimuli (e.g., numbers, pictures, shapes). In contrast, the two strongest Gsa tests (Reading Fluency and Writing Fluency) primarily require the processing of language or semantic material (words, sentences). Could the Gsc/Gsa factors reflect differential efficiency in the processing of information along a visual/spatial/figural—auditory/linguistic dimension?
Second, Newland’s classic process- vs. product-dominant distinction might also help explain the emergence of these two separate factors. The Gsa factor could be considered the fluency/rate indicator for the processing of more product-dominant (acquired knowledge) information and stimuli, while Gsc may be associated with more process-dominant information processing. More specifically, the Gsa tests require searching efficiently through an individual's store of acquired knowledge (e.g., achievements) while, in contrast, the Gsc stimuli are more novel and most likely require little in the way of searches of acquired knowledge domains.
It is clear that much additional research is needed within the domain of human cognitive speed. A recent review of literature (published since Carroll’s 1993 treatise) has suggested a possible hierarchical speed model (click here) with at least three different strata, and possibly four or more. In that proposed hierarchy, even Perceptual Speed (P – an ability classified at stratum I – a narrow ability) has recently been suggested (by Ackerman et al) to have a substructure of yet narrower abilities (Pattern Recognition, Scanning, Memory, Complex).
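To illustrate the kind of structure being proposed, here is a small sketch representing a three-level speed hierarchy as a nested Python dict. Only the Perceptual Speed branch and the Ackerman et al. sub-ability names come from the description above; the top-level label is a placeholder of mine, and the other branches of the proposed model are omitted.

```python
# A minimal sketch of a hypothesized hierarchical speed model: a broad
# stratum, a narrow (stratum I) ability, and suggested sub-abilities.
speed_hierarchy = {
    "Broad cognitive speed": {                  # higher stratum (placeholder label)
        "Perceptual Speed (P)": [               # stratum I (narrow) ability
            "Pattern Recognition",
            "Scanning",
            "Memory",
            "Complex",
        ],
    },
}

def walk(node, depth=0):
    """Flatten the hierarchy into (depth, ability-name) pairs."""
    items = []
    if isinstance(node, dict):
        for name, child in node.items():
            items.append((depth, name))
            items.extend(walk(child, depth + 1))
    else:  # a list of leaf abilities
        items.extend((depth, leaf) for leaf in node)
    return items

abilities = walk(speed_hierarchy)
```

Representing the model this way makes the "at least three strata" claim explicit: depth 0, 1, and 2 correspond to broad, narrow, and sub-narrow levels.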
Clearly, the broad domain of human cognitive speed is not yet well defined, and much is yet to be learned. If there are any doctoral students looking for dissertation topics, this CHC domain is still ripe for unique scholarly contributions…just hurry up and do the research (Gs.phd – PhD processing speed).
Do others have hypotheses to explain and define the Gsa and Gsc factors identified in these analyses? The IQ blogsters are all ears.
Thursday, March 24, 2005
Dynamic assessment references: Free IAP searchable on-line specialized reference database
On the CHC listserv today, a question was asked about the status of dynamic assessment. As per Sergeant Schultz on "Hogan's Heroes"..."I know nothinggggggggggggg......" I've not kept up with the literature, but I have continued to track recent references related to the approach. Below is what I found.
Readers who have questions about available literature related to intelligence testing are encouraged to check out the FREE IAP searchable on-line database that I've nurtured for over 10 years. I've tracked references specifically related to intelligence testing, cognition, measurement, etc., and have put them in a nice neat place for people to find. It is only a reference search database...you still need to find the articles on your own. Information regarding this reference database can be found at the following link (click here).
Enjoy
Brown, A. L., Haywood, H. C., & Wingenfeld, S. (1990). Dynamic approaches to psychoeducational assessment. School Psychology Review, 19(4), 411-422.
Embretson, S. E., & Prenovost, L. K. (2000). Dynamic cognitive testing: What kind of information is gained by measuring response time and modifiability? Educational and Psychological Measurement, 60(6), 837-863.
Grigorenko, E. L., & Sternberg, R. J. (1998). Dynamic testing. Psychological Bulletin, 124(1), 75-111.
Guthke, J., & Stein, H. (1996). Are learning tests the better version of intelligence tests? European Journal of Psychological Assessment, 12(1), 1-13.
Haney, M. R., & Evans, J. G. (1999). National survey of school psychologists regarding use of dynamic assessment and other nontraditional assessment techniques. Psychology in the Schools, 36(4), 295-304.
Kaderavek, J. N., & Justice, L. M. (2004). Embedded-explicit emergent literacy intervention II: Goal selection and implementation in the early childhood classroom. Language Speech and Hearing Services in Schools, 35(3), 212-228.
Lauchlan, F., & Elliott, J. (2001). The psychological assessment of learning potential. British Journal of Educational Psychology, 71, 647-665.
Laughon, P. (1990). The dynamic assessment of intelligence: A review of three approaches. School Psychology Review, 19(4), 459-470.
Lidz, C. S. (1991). Practitioner's Guide To Dynamic Assessment. New York: The Guilford Press.
Reynolds, C. R., & Kamphaus, R. W. (1990). Handbook of psychological & educational assessment of children: Intelligence & achievement. New York: Guilford Press.
Skuy, M., Gewer, A., Osrin, Y., Khunou, D., Fridjhon, P., & Rushton, J. P. (2002). Effects of mediated learning experience on Raven's matrices scores of African and non-African university students in South Africa. Intelligence, 30(3), 221-232.
Sternberg, R. J. (1994). Encyclopedia of human intelligence. New York: Macmillan.
Sternberg, R. J., Grigorenko, E. L., Ngorosho, D., Tantufuye, E., Mbise, A., Nokes, C., Jukes, M., & Bundy, D. A. (2002). Assessing intellectual potential in rural Tanzanian school children. Intelligence, 30(2), 141-162.
Taylor, T. R. (1994). A review of three approaches to cognitive assessment, and a proposed integrated approach based on a unifying theoretical framework. South African Journal of Psychology, 24(4), 183-193.
Tzuriel, D. (2000). Dynamic assessment of young children: Educational and intervention perspectives. Educational Psychology Review, 12(4), 385-435.
Tuesday, March 22, 2005
Carroll Analyses of 50 CHC tests: Rnd 2: Gf and the executive controlled model of working memory
The Gf findings
One of the most interesting and unexpected findings to emerge from the Carroll Analyses of 50 CHC-designed tests (see 3-17 & 3-18-05 posts) is the composition of the Gf (fluid reasoning) factor. The presence of tests of inductive (Concept Formation) and deductive (Analysis-Synthesis) reasoning is consistent with decades of research—induction and deduction are considered the hallmarks of Gf. But how does one explain the significant loadings for a test that requires following an increasingly complex series of oral directions (WJ III Understanding Directions) and an oral auditory analysis test that requires individuals to apply different processes (rhyming, substitution, deletion, and reversal; Sound Awareness)?
First, by way of additional background, I have found (yes, in a series of other unpublished "languishing or banished to forgotten sectors of my hard drive" analyses of the WJ III+DS tests) that the WJ III Understanding Directions and Sound Awareness tests are two of the:
- (a) best WJ III test-level predictors of academic achievement across language arts (reading and writing) and math, (b) highest g-loading tests on the first extracted principal component, and (c) most cognitively complex tests as defined by Guttman’s radex model (using multidimensional scaling). Clearly the WJ III UD and SA tests are measures of complex cognitive processes associated with higher-level abstract reasoning (Gf).
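For readers unfamiliar with criterion (b), here is a minimal sketch of computing loadings on the first extracted principal component from a test intercorrelation matrix. The 3 x 3 matrix and test names below are invented purely for illustration and are not the WJ III data.

```python
# Loadings on the first principal component of a correlation matrix R:
# sqrt(largest eigenvalue) times the corresponding eigenvector elements.
import numpy as np

R = np.array([
    [1.00, 0.60, 0.50],
    [0.60, 1.00, 0.55],
    [0.50, 0.55, 1.00],
])

eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
v1 = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
if v1.sum() < 0:                       # fix the arbitrary sign convention
    v1 = -v1
g_loadings = np.sqrt(eigvals[-1]) * v1  # loadings on the first component

# Tests can then be ranked by their loading on the first component.
ranked = sorted(zip(["Test A", "Test B", "Test C"], g_loadings),
                key=lambda t: -t[1])
```

With all-positive intercorrelations the first-component loadings come out positive, which is what makes "highest g-loading test" a meaningful ranking.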
It is my current working hypothesis that the WJ III UD and SA tests require much in the way of working memory and executive/controlled attention, abilities that have been repeatedly linked in contemporary research on working memory and Gf (particularly as articulated by Engle & Kane and colleagues, and as recently summarized by Horn and Blankson [2005] in the CIA2 book). This view states that Gf abilities involve the process of:
- "(1) gaining awareness of information (attention) and (2) holding different aspects of information in the span of awareness (working memory), both of which are dependent on (3) a capacity for maintaining concentration" (Horn & Blankson, 2005, p. 55)
In other words, the capacity to focus and maintain attention is now being seen as a critical component for successful performance on tasks that require higher-level fluid reasoning (Gf). According to Horn and Blankson, “focusing and maintaining attention appears to be an aspect of the capacity for apprehension that Spearman described as a major feature of g” (p. 54; emphasis added by me).
From my readings, Engle and colleagues appear to have developed the most systematic program of research and theoretical explanation for the "controlled-attention" (or executive attention) view of working memory, its role in many real-world phenomena, and, more importantly, the hypothesis that "executive attentional abilities are some way related to general fluid intelligence" (Heitz, Unsworth & Engle, 2005, p. 74). Executive or controlled attention enhances performance in situations that require selective or controlled attention, the ability to switch between plans and strategies, and the inhibition of task-irrelevant information (intrusions) in working memory, abilities that collectively support solving complex reasoning problems in working memory (e.g., WJ III Concept Formation, a measure of Induction).
A caveat
Of course, the current factor-analytic results still do not answer the question of whether the executive/controlled attention component of working memory is a component of Gf or a causal mechanism for it. The latter, causal view is just as probable. Empirical evidence for the causal hypothesis, based on these same data, can be viewed in a summary of the working memory/Gf causal research in my CIA2 chapter, or by viewing a pre-publication version of that chapter (click here).
Concluding comments
The Gf factor reported in the Carroll Analyses of 50 CHC-tests is consistent with contemporary research that suggests a significant link between fluid reasoning (Gf) and the executive/controlled attention model of working memory.
From a practical perspective, these findings suggest that assessment professionals who are evaluating the ability of a person to perform complex cognitive reasoning and problem-solving tasks (Gf), if using the WJ III battery, should attend (aside from the WJ III Gf tests) to an individual's performance on the WJ III Understanding Directions and Sound Awareness tests (and possibly WJ III Story Recall as well). Performance on these tests may provide insights into a person's ability to sustain focused executive control during the active manipulation of multidimensional pieces of information in working memory--an ability that may be important when tasks require complex fluid reasoning ability (Gf).
Monday, March 21, 2005
Airport ruminations: Glass on evaluating scholarly e-pubs
Thoughts from the Mpls, MN airport…three hours to kill.
Why should anyone trust the data analyses and interpretations I have posted to date regarding my Carroll Analyses of 50 CHC-designed tests (3-17-05 and 3-18-05)? I’ve asked this question of myself many times and had to resolve it to my satisfaction before launching this blog. The tipping point came when I recently read an interview in the Educational Researcher (Vol. 33, No. 3, April 2004) with one of the leading and most respected educational researchers of our time—Dr. Gene Glass.
First: who is Gene Glass, and why do his thoughts merit attention?
As stated in this interview article:
- Gene V. Glass is a Regents’ Professor of both Educational Leadership & Policy Studies and Psychology in Education at Arizona State University. He has won the Palmer O. Johnson Award for best article in the American Educational Research Journal (AERJ) not once, but twice (1968 and 1970). He has served as President of the American Educational Research Association (AERA) in 1975, co-editor of AERJ (1984–1986), editor of Review of Educational Research (1968–1970) and Psychological Bulletin (1978–1980), is executive editor of the International Journal of Education and the Arts (since 2000), and serves as editor of Education Policy Analysis Archives (since 1993) and Education Review (since 2000).
- Dr. Glass has also served on the editorial boards of 13 journals and has published approximately 200 books, chapters, articles, and reviews. Dr. Glass is perhaps best known for his role in the development of the quantitative research synthesis technique known as meta-analysis. Not shabby! And, Dr. Glass has been a vocal advocate of on-line e-journal publications and, more recently, the publication and posting of non-peer-reviewed manuscripts (he has written) for dissemination to other scholars.
So…what did he say and why should it make you consider the results and interpretations I post as credible? Or, as stated by Glass in his ER interview (note – all italics are emphasis added by me):
According to Glass (in the ER interview), and consistent with my belief in the need for more timely dissemination of scholarly insights:
Furthermore, my beliefs and experiences coincide with those of Dr. Glass:
I resonate with Dr. Glass’s observed paradox regarding the importance of refereed publications. According to Glass:
However, if someone is going to play in my professional sandbox (i.e., present research regarding CHC theory and measurement), I believe I am best served by being my own filter of all available information. As stated by Glass:
It is my belief that the readers of this blog must serve as their own filters, judges, and reviewers of any data-based results I present. Those knowledgeable in the areas where I present data and/or hypotheses are in the best position to evaluate the integrity of the information presented. I hope that the blog's "comments" feature will provide an outlet for those who want to question or critique what is presented. In the pursuit of knowledge that may ultimately impact the lives of individuals (e.g., individuals referred for comprehensive psychoeducational assessments), I believe in open access and the timely dissemination of results to the widest audience possible. Hence this blog, which will be judged by its readers and users.
Finally, like Dr. Glass, I still believe in, and continue to pursue, the publication of scholarly results via the mechanism of peer-reviewed journals. There is more than one means by which to contribute to our respective professional domains of knowledge.
Ok….within the next few days I will return to interpretation of the Carroll Analysis of 50-CHC designed tests…starting first with the most interesting Gf factor. And, in the spirit of accountability for the results I present, when done, I will post the correlation matrix analyzed for independent analysis by others.
Why should anyone trust the data analyses and interpretation I have posted to date regarding my Carroll Analyses of 50 CHC-designed tests (3-17-05 and 3-18-05 )? I’ve asked this question of myself many times and had to resolve it to my satisfaction before launching this blog. The tipping point came when I recently read an interview in the Educational Researcher (Vol 33, No 3, April 2004 ) with one of the leading and respected educational researchers of our times—Dr. Gene Glass.
First---who is Gene Glass, and why do his thoughts merit attention?
As stated in this interview article:
- Gene V. Glass is a Regents’ Professor of both Educational Leadership & Policy Studies and Psychology in Education at Arizona State University. He has won the Palmer O. Johnson Award for best article in the American Educational Research Journal (AERJ) not once, but twice (1968 and 1970). He has served as President of the American Educational Research Association (AERA) in 1975, co-editor of AERJ (1984–1986), editor of Review of Educational Research (1968–1970) and Psychological Bulletin (1978–1980), is executive editor of the International Journal of Education and the Arts (since 2000), and serves as editor of Education Policy Analysis Archives (since 1993) and Education Review (since 2000).
- Dr. Glass has also served on the editorial boards of 13 journals and has published approximately 200 books, chapters, articles, and reviews. Dr. Glass is perhaps best known for his role in the development of the quantitative research synthesis technique known as meta-analysis. Not shabby! And, Dr. Glass has been a vocal advocate of on-line e-journal publications and, more recently, of posting non-peer-reviewed manuscripts he has written for dissemination to other scholars.
So…what did he say and why should it make you consider the results and interpretations I post as credible? Or, as stated by Glass in his ER interview (note – all italics are emphasis added by me):
- "Now, the questions in most people’s mind are: Won’t there be chaos if everybody 'publishes' anything they want? How will we be able to separate wheat from chaff, truth from error, pure gold from garbage? There are answers, of course, and more questions."
According to Glass (in the ER interview), and consistent with my belief in the need for more timely dissemination of scholarly insights:
- “Some papers that I have made public through my own Website are downloaded a half dozen times a day, often from places that have no access to the traditional journals. That’s reaching a wider, larger audience than paper journals reach.”
Furthermore, my beliefs and experiences coincide with those of Dr. Glass:
- "Let’s get down to brass tacks. Scholarly communications are not essentially about paper vs. Internet “packets”; they are about commercialization vs. open access to knowledge. My experiences with publishing e-journals over the last decade have taught me that many more people than we ever dreamed want access to educational research: parents, teachers, professionals of many types, students and scholars far from the United States who cannot afford our books and journals.
I resonate with Dr. Glass's observed paradox regarding the importance of refereed publications. According to Glass:
- "There’s an irony hiding in this business of “refereed publication.” If I am vaguely interested in a topic that isn’t at the center of my work, then I’m prone to pick up a peer-reviewed journal that rejects 90% of everything sent to it. I simply don’t have the background to plow through tons of stuff and judge it for myself; trust the experts."
However, if someone is going to play in my professional sandbox (i.e., present research regarding CHC theory and measurement), I believe I am best served by being my own filter of all available information. As stated by Glass:
- "But if you are talking about things that come to the core of my own research (let’s say, the re-segregating effects of school choice policies), then please don’t filter out anything for me; I want to see whatever anyone is writing on the subject and I want it right away. I’ll judge it for myself. When we interviewed a sample of “hard scientists” about their reading habits, they said exactly that."
It is my belief that the readers of this blog must serve as their own filters, judges, and reviewers of any data-based results I present. Those knowledgeable in the areas where I present data and/or hypotheses are in the best position to evaluate the integrity of the information presented. I hope that the blog's "comments" feature will provide an outlet for those who want to question or critique what is presented. In the pursuit of knowledge that may (ultimately) impact the lives of individuals (e.g., individuals referred for comprehensive psycho-educational assessments), I believe in open access and the timely dissemination of results to the widest possible audience. Thus, this blog, which will be judged by its readers and users.
Finally, like Dr. Glass, I still believe in, and continue to pursue, the publication of scholarly results via the mechanism of peer-reviewed journals. There is more than one means by which to contribute to our respective professional domains of knowledge.
Ok….within the next few days I will return to interpretation of the Carroll Analysis of 50-CHC designed tests…starting first with the most interesting Gf factor. And, in the spirit of accountability for the results I present, when done, I will post the correlation matrix analyzed for independent analysis by others.
Labels:
blogging,
CHC theory,
Glass
On the road again: Blogging gains mainstream (CNN's) attention
I'm sitting in a hotel room in Chicago reading the McDonald's of newspapers, USA Today. In the Life section I ran across the following article--"It's prime time for blogs on CNN's 'Inside Politics'."
Many of you have asked me questions about what Blogs are, as they have not been on your internet radar screen. In a prior post (Blink and Blog) I briefly discuss how influential "blogging" and the "blogosphere" have become in politics and opinion formation related to political issues. Today's article, which indicates that CNN will now routinely have a piece on "what the blogs are saying (in politics)" is clearly a sign that mainstream media is recognizing the emerging role and importance of blogs in opinion formation.
In the article, Judy Woodruff states that
- "not being a child of the internet, I confess I was skeptical when Jon first suggested the segment. I viewed blogs as pure opinion, no reporting. But I've come to see the segment as a tool for getting at a new, unpredictable and increasingly influential place on the political landscape."
Of course, a blog on intelligence theories and tests is by no means going to set the world on fire. It is just my humble opinion, and the reason I am giving this medium a try, that blogging (and whatever it may morph into next on the internet) is a potentially important outlet for providing information, analyses, and comment on a diverse array of topics. Just my 2 cents.
[PS - those who really know me probably recognize that the real motivation for my Blog is to hopefully gather enough attention so I can finally "make it big"---yep----being interviewed by Katie Couric on the Today show:) ]
Friday, March 18, 2005
Carroll CHC analyses of 50 tests - Rnd 1
Ok. Time for the blogmaster to buck up and comment on the first-order factor analyses results described and posted in the “Carroll” Analysis of 50 CHC-defined tests. If you haven’t viewed (and printed) the results posted online (click here to view and print), this post will make no sense at all---skip it.
I’m not going to comment on everything, but just a few things for now.
The obvious
- Although not in need of additional validation, the results presented continue to provide structural validity evidence for the narrow CHC ability factors of Memory Span (MS), Naming Facility (NA), Associative Memory (MA), Math Achievement (KM), and Quantitative Reasoning (RQ). Support is also found for the broad cognitive abilities of Fluid Reasoning (Gf), Visual-Spatial Abilities (Gv), Auditory Processing (Ga), and Crystallized or Comprehension-Knowledge (Gc) abilities. Let's hear a big round of applause for these abilities.
- The presence of distinct KM and RQ factors is nice to see and clarifies that the store of acquired mathematical knowledge (KM) is distinct from quantitative reasoning (RQ). I know that in earlier writings there was often confusion over the so-called Gq factor. As per the CHC framework I advocate (click here), the KM narrow ability would fall under Gq, while the RQ narrow factor would fall under Gf.
- The NA factor has been one of the most robust post-WJ III findings I have seen. The WJ III Retrieval Fluency and Rapid Picture Naming tests “hang together” across structural methodologies (EFA, CFA, cluster analyses, MDS). I believe the historical term “naming facility” does not do this narrow factor justice. The underlying common denominator (in my opinion) seems to be the fluency/automaticity of retrieval of vocabulary/words. I would hypothesize that this factor may be analogous to the currently in-vogue term “rapid automatic naming (RAN)” and/or Perfetti’s “speed of lexical access.” Maybe this factor needs a makeover in name and definition to emphasize its critical features---namely, speed of lexical/semantic access/processing.
Some of the most intriguing findings, IMHO, which will be discussed in future posts, are:
- The composition of the Fluid Reasoning (Gf) factor. Why do the Understanding Directions and Sound Awareness tests load together with more classic Gf measures of Induction (I—Concept Formation) and Deduction (RG-General Sequential Reasoning—Analysis-Synthesis)? I have some interesting hypotheses. Does anyone want to speculate? Hints---think of some of the research of Kyllonen and Engle and associates (for those who have their new copies of the CIA2 book, read the section in Horn & Blankson’s chapter on vulnerable abilities). This is so exciting…isn’t it?
- Two different Gs abilities (Gsa and Gsc)? Why? Do they make sense? What are some interpretations and definitions?
- Two separate factors (PG and WA/V) in the Grw domain. How should they best be defined and why is it difficult to use Carroll’s seminal treatise to classify these Grw factors?
That’s it for now. Comments, hypotheses, ideas, etc. are all welcome. Due to travel plans, it may be a number of days before I revisit this topic.
Labels:
CHC theory,
factor analysis,
g (gen IQ),
Ga,
Gc,
Gf,
Glr,
Gq,
Grw,
Gs,
Gsm,
Jack Carroll,
WJ III
New blog link: Eide Neurolearning
Based on a feedback comment to this blog, I've added a link to the Eide Neurolearning Blog in the links section of my page. Although not dealing with specific intelligence theories and test topics, this blog does appear to present information of potential interest to IQs Corner blog readership. I make no endorsement of their material or statements. This is an FYI link courtesy.
CIA2 books being shipped
"The CIA2 books are here! The CIA2 books are here! The CIA2 books are here!" (If you are a Steve Martin fan, you should be chuckling, as this is a take-off on his "the phone books are here!" line from the movie The Jerk.) Seriously, I received my copy of Flanagan and Harrison's Contemporary Intellectual Assessment-Second Edition yesterday. It looks great. So many books and journal articles---so little time. If you have ordered your copy it should arrive soon...they apparently are being shipped.
Where have all the quantoids gone?
Hmmmmm. Almost 24 hours since I posted the Carroll Analysis of 50 CHC-Designed Tests: Part 1, and only one comment (on the CHC listserv--none on this blog). Where have all the quantoids and CHC adherents gone? Inquiring minds want to know. Are they all too busy running stats on their March Madness (NCAA B-Ball tourney) gambling sheets?
Regarding the one email, the answer is "yes".....I will eventually post the correlation matrix so folks can run their own special favorite flavor of EFA/CFA analyses. But I'll hold that back until I've posted all the results and hopefully received some comment/discussion feedback.
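For readers who want to try this once the matrix is posted: one common way to extract first-order loadings directly from a correlation matrix is principal-axis factoring. The sketch below is my own minimal illustration in Python/numpy (not the software used for the posted analyses, and the `principal_axis` helper and example loadings are hypothetical):

```python
import numpy as np

def principal_axis(R, n_factors, n_iter=100, tol=1e-6):
    """Principal-axis factoring of a correlation matrix R.

    Iteratively re-estimates communalities on the diagonal of the
    reduced correlation matrix until they stabilize, then returns
    the unrotated loadings for the n_factors largest factors.
    """
    R = np.asarray(R, dtype=float)
    # Start with squared multiple correlations as communality estimates
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rr = R.copy()
        np.fill_diagonal(Rr, h2)          # reduced correlation matrix
        vals, vecs = np.linalg.eigh(Rr)   # eigh sorts eigenvalues ascending
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0.0, None))
        new_h2 = (loadings ** 2).sum(axis=1)
        if np.max(np.abs(new_h2 - h2)) < tol:
            h2 = new_h2
            break
        h2 = new_h2
    return loadings

# Hypothetical one-factor correlation matrix built from loadings [.8, .7, .6]
l = np.array([0.8, 0.7, 0.6])
R = np.outer(l, l)
np.fill_diagonal(R, 1.0)
loadings = principal_axis(R, n_factors=1)
```

With an exactly one-factor matrix like this, the procedure recovers the generating loadings (up to sign); real batteries, of course, require rotation and judgment on top of raw extraction.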
Thursday, March 17, 2005
"Carroll" Analysis of 50 CHC-designed tests: Part 1
As promised in my March 14 post (So much data…so little time: In honor of Jack Carroll), this is a first attempt to present previously unpublished research results for dissemination and, hopefully, some comment-driven discussion on this blog. This activity will be spread across multiple posts, so be patient. I don’t want anyone exceeding their RDA (recommended daily allowance) of data.
The Analysis
Following my personal Fairbanks, Alaska tutoring session with Jack Carroll on his self-written suite of DOS-based Schmid-Leiman (SL) EFA software, I ran an analysis (in Dec 2003 – note to self: publish soon, or the results will perish/languish on your hard drive) as per the steps described in Chapter 3 of his seminal treatise (Human Cognitive Abilities). I ran the analyses on 50 individual test variables from the WJ III and WJ III Diagnostic Supplement, a battery of tests designed as per CHC theory. The analysis was run on all norm subjects from ages 6 through adulthood (click here to go to the page where you can download ASB2, a technical abstract that describes the WJ III norm sample).
As I’ve written elsewhere (CHC Theory: Past, Present and Future), Jack, during his later years, had clearly moved beyond sole reliance on his SL-EFA procedures and had embraced confirmatory factor analysis (CFA) methods. Jack had blended the two methodologies, as can be seen in his last published book chapter (The Higher-stratum Structure of Cognitive Abilities: Current Evidence Supports g and About Ten Broad Factors in Nyborg's The Scientific Study of General Intelligence [2003]). As an aside, while visiting Jack in Fairbanks, I found his computer disks were full of unpublished EFA+CFA analyses that he had graciously completed for other researchers or that represented his analyses of correlation matrices included in manuscripts he had been asked to review for a number of journals. His factor-analytic approach had evolved to one of first obtaining results from his SL-EFA procedures and then using those results as the starting point for CFA refinement and model testing.
Following Jack’s lead, after running his SL-EFA procedures on the 50 variable WJIII/DS dataset, I used the final SL-EFA solution as the initial starting point for further model “tweaking” via CFA methods. I believe this use of CFA methods is often referred to as model generation CFA, to keep it distinct from model confirmation CFA. Yes, the inferential statistics are shot to hell when doing this.
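For readers unfamiliar with the SL step itself, the core arithmetic is simple: a test's loading on g is the product of its group-factor loading and that factor's second-order loading, and the group-factor loadings are then residualized by the second-order uniquenesses. Here is a minimal numpy sketch (my own illustration, not Jack's DOS programs; all loading values are hypothetical):

```python
import numpy as np

def schmid_leiman(first_order, second_order):
    """Schmid-Leiman orthogonalization of a two-level factor solution.

    first_order  -- (tests x group factors) loading matrix
    second_order -- loadings of each group factor on the general factor
    Returns (g_loadings, residualized_group_loadings).
    """
    second_order = np.asarray(second_order, dtype=float)
    # Each test's g loading: group loading times the factor's g loading
    g_loadings = first_order @ second_order
    # Group-factor loadings with the g variance removed
    resid = first_order * np.sqrt(1.0 - second_order ** 2)
    return g_loadings, resid

# Hypothetical solution: three tests, two group factors, each loading .5 on g
L1 = np.array([[0.8, 0.0],
               [0.7, 0.0],
               [0.0, 0.6]])
L2 = np.array([0.5, 0.5])
g, group = schmid_leiman(L1, L2)
```

The orthogonalized solution is then a natural starting model for the CFA "tweaking" described above: g plus residualized group factors, with near-zero loadings fixed to zero.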
The Results: Phase I
I’ve now posted a portion of the results (yes…I’m going to meter these results out over time). By clicking here, you will be taken to a pdf file that includes: (a) a listing and description of the WJ III and DS tests included in the analyses (you will need this as your key to the results) and (b) a summary, and my interpretation, of the first-order factor results. Please note, as frequently pointed out by Dr. Carroll, that there is a difference between the order and stratum of a factor. I’ve made my interpretations as per his logic regarding the breadth of the factors.
For now, I’m just posting these first-order findings “as is”….so all you quantoids can download/print the results, run your fingers through the numbers, and generate hypotheses to your heart’s content. My factor interpretations are represented by the CHC factor codes and names (along the top and down the left-hand column). If you are unfamiliar with the CHC factor codes and terms, you should visit (and print for reference) the CHC Definitions at the IAP CHC Definition Project page. Burn the names and codes into your brain. It is critical you learn CHC-SL (CHC as a second language).
Note that in the summary of the results I’ve had to propose some new narrow ability factors to account for some of the findings (comments, including possible definitions for these “newbies,” are very much welcomed). Also, if you are new to CHC theory, it is recommended that you first read CHC Theory: Past, Present and Future to secure the necessary background information to interpret and understand the findings and the discussion to follow.
Comments
- Please note that my posts (at least initially) are going to focus on the structure of the CHC theory and not so much on the WJ III instrument and tests --- but this does not preclude other folks from taking a test validity approach to interpretation. Isn’t this fun?
- Please note that I’m holding back what I think are the most exciting results—the second- and third-order factor results. Yes, the final solution is a hierarchical solution with three levels. The higher-order results, IMHO, although speculative, are very exciting.
- I’ll post comments and interpretations over the next few days (weeks?). Consider this an on-line instructional exercise.
- Let the games begin.
Labels:
CHC theory,
factor analysis,
Gc,
Gf,
Glr,
Gq,
Grw,
Gs,
Gsm,
Gv,
Jack Carroll,
WJ III
Wednesday, March 16, 2005
ISIR 2005 Conference Announcement
The location and date of the International Society for Intelligence Research (ISIR) 2005 annual conference have been announced. The 2005 ISIR conference will be held Dec 1-3 at the Hyatt Regency in Albuquerque, NM.
ISIR was founded in 2000 as a "scientific society for researchers in human intelligence." A benefit worth the membership fee is that every member receives a subscription to the journal, Intelligence. As stated at the ISIR web page, the annual "conference offers an opportunity for those interested in intelligence to meet, present their research, and discuss current issues." If you want to see the "who's who" in intelligence research and theory, this is the conference to attend. It is a relatively small and cozy conference (when compared to the likes of APA, AERA, NASP), but the size provides many opportunities to exchange ideas with leading scholars in intelligence. The program agenda and abstracts from prior conferences are available for viewing (click here).
Tuesday, March 15, 2005
Blink and Blog: Two one-thumb up books
Blink
By now, I’m sure that most folks have heard the buzz created by the best-selling nonfiction book Blink (The Power of Thinking without Thinking) by Malcolm Gladwell. I recently finished the book. I give it one thumb up (out of two…when I use both my hands, since I’m currently a solo act). It is a quick read and interesting, although for me, the book died approximately 2/3 of the way in.
Although much of the buzz surrounding Blink is that it supposedly tells us how accurate first impressions, subconscious thinking, and our intuition can be, I found that many of the examples seem to allude to contemporary findings in cognitive psychology (CP) and information processing (IP) theories. That is, CP/IP research has revealed much about the development of automaticity of cognitive processes and how quickly/fluently experts (from the expert-novice research) can recognize complex arrays of information via pattern recognition mechanisms. To me this is not intuition. Does anyone remember Anderson’s original classic work (and ACT-R theory) on cognitive production systems?
Many of the examples of intuition and subconscious thinking (which has a Freudian connotation to many laypeople), IMHO, are actually good examples of the development of expertise and the automatization of cognitive processes, processes explained by contemporary CP/IP research. If anything, the examples used by Gladwell might serve as good “real world” starting points for instructional purposes in a class/seminar/lecture on information processing models and research.
I personally believe that Gladwell’s first book (The Tipping Point) was much better and should be required reading for anyone who wants to understand the development and spread of new social phenomena. I must admit that I am jealous. I wish I could take contemporary research findings and write in such a style to make the best sellers list…and make the beaucoup (thanks to my sister Kris for the proper spelling of beaucoup) bucks…it is an art.
Blog
Finally, I just finished Hugh Hewitt’s Blog (Understanding the information reformation that’s changing your world). Hewitt is one of the major league political bloggers. The book is worth the price, if for no other reason, to secure his list of recommended blogs that are changing the world of politics and the functioning of mainstream media. I must confess that my reading of Blog was my personal “tipping point” to start my blog.
Those from the left should be warned. Hewitt is a JND (just noticeable difference) to the right of the political center, and some of his comments are a bit over the top. Yet I think he captures the essence of the potential revolution that blogs (and the blogosphere) are driving in opinion formation. The book is a very quick read, largely because I found myself skipping many sections that were simply very long quotes from other blogs. I got the feeling that Blog was a hastily assembled book….with the goal of being one of the first books on blogs in the bookstores (say “books on blogs in the bookstores” 10 times fast). Yet…I give Hewitt credit for capturing the emerging influence of blogs in all fields and endeavors.
I also give Blog one thumb up…appropriately from my right hand, given his political persuasion.
Monday, March 14, 2005
Calling all relevant Psych/Ed-blogs
I'm looking to find, so I can provide links, other psychology and/or education blogs that cover topics relevant to this blog. If you know of a relevant blog, please drop me a note via the "comment" option. Thanks.
So much data...so little time: In honor of Jack Carroll
As a quantoid with access to many large databases, I, like many other quantoids, have a "pleasant problem." I often run late night analyses to investigate questions or hypotheses that my mind generates or which are posed by others on listservs or at conferences. I always have the best of intentions to write up the results either as an IAP research report (to be posted to my web page) or to be submitted to a journal for possible publication. Unfortunately, the demands of work call me away or new analyses capture my attention, and the interesting results, insights, and discoveries of yesterday end up being buried in my hard drive forever (although I do integrate many of the results into my PowerPoint presentations when on the “road”).
One reason for creating this blog is to provide a quicker and more efficient means by which to share some of these unpublished delights. Of course, they will need to be taken with a grain of salt as the results will not have benefited from peer review (I hope to address this issue in a subsequent post) and will not have extensive explanatory text regarding methodology, literature reviews, etc. But heck….the results and conclusions are provided free of charge.
My first planned set of postings will be in honor of John (Jack) Carroll. I was fortunate to have been the last professional to visit Jack at his daughter’s house in Alaska one month prior to his passing away. He personally instructed me on the use and interpretation of his custom (self-written) suite of DOS-based programs for conducting exploratory factor analyses as per the procedures used in his seminal treatise (Human Cognitive Abilities: A Survey of Factor-Analytic Studies, 1993). Following this trip I applied what I had learned to the complete set of WJ III (including the Diagnostic Supplement) test variables. The results were extremely interesting and have been simmering on my hard drive since September 2003.
My plan is to post a summary of the results and then, possibly in a series of posts (strung across a period of time) highlight and discuss the key findings and insights. By making this public post I hope to place enough pressure on myself to get this done. These data need to see the light of day and will not if I continue to wait for the right time to pump out a nice APA-style research report.
Wednesday, March 09, 2005
IQ scores, NCLB & Forrest Gump: Run Forrest..Run
As a professional who has written about intelligence theories and tests and who is a coauthor of a frequently used individual measure of intelligence (WJ III), I often find other professionals and educators shocked when I highlight the less than perfect predictive capability of IQ tests. Although test manuals and research reports often report high correlations between IQ and achievement tests (e.g., .60-.70's), I occasionally hear statements suggesting that some educators and assessment professionals fail to recognize that such high correlations (although some of the highest in all of psychology) are evidence of the fallibility of intelligence tests--they are less than perfect predictors.
Correlations of this magnitude tell us that IQ tests, on their best days, predict roughly 36-49% of school achievement (Applied Psychometrics 101 – square the correlation and multiply by 100 to get the percent of variance explained). This is very good. Yet….half or more of a person’s school achievement is still related to factors “beyond IQ!”
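For readers who want to see the Applied Psychometrics 101 arithmetic worked out, here is a minimal sketch (my own illustration, not from any test manual) of turning an IQ-achievement correlation into percent of variance explained:

```python
def variance_explained(r):
    """Square the correlation and multiply by 100 to get the
    percent of achievement variance an IQ score accounts for."""
    return r * r * 100

# Correlations in the .60-.70 range, as typically reported in
# test manuals, leave half or more of achievement "beyond IQ".
for r in (0.60, 0.65, 0.70):
    print(f"r = {r:.2f}  ->  {variance_explained(r):.0f}% explained, "
          f"{100 - variance_explained(r):.0f}% beyond IQ")
```

Even at the top of the typical range (r = .70), less than half of school achievement is accounted for by the IQ score.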
An unfortunate unintended negative side effect of the success of IQ tests can be the implicit or explicit use of global IQ scores to form substandard or low expectations for individuals. How often have we all heard someone state, after hearing a child's general IQ score and after using some norms or formula to generate an "expected" achievement score, that teachers and parents should expect the child to achieve "at or below" these already below average expectations? It is often not recognized that for any given level of IQ score, half of the individuals with that score will achieve at or above predicted levels of expected achievement (based on that score). In the context of NCLB (No Child Left Behind), there is a real fear that IQ test scores may seduce educators and other education-related professionals into the “soft bigotry of low expectations” (it was either G. W. Bush or his then Education Secretary who coined this phrase).
The National Center on Educational Outcomes (NCEO) has recently published a report dealing specifically with this issue in the context of NCLB. Expectations for Students with Cognitive Disabilities: Is the Cup Half Empty or Half Full? Can the Cup Flow Over? (McGrew & Evans) is a report that addresses this issue. This report is published on the NCEO web page (click here). [If you go to the following page (click here) and right click on PDF (after the report title) you can download a pdf copy to your hard drive.] --- isn't technology wonderful?
As stated in the report introduction,
This report…includes an analysis of nationally representative cognitive and achievement data to illustrate the dangers in making blanket assumptions about appropriate achievement expectations for individuals based on their cognitive ability or diagnostic label. In addition, a review of research on the achievement patterns of students with cognitive disabilities and literature on the effects of teacher expectations is included. The literature raises numerous issues that are directly relevant to today’s educational context for students with disabilities in which both the Individuals with Disabilities Education Act (IDEA) and the No Child Left Behind (NCLB) Act of 2001 are requiring improved performance. Particularly for those students with cognitive disabilities, the information on expectancy effects should cause us much concern. Is it possible that expectancy effects have been holding students back in the past? Are we under the influence of silently shifting standards, especially for students with cognitive disabilities? It is anticipated that the information in this report will help guide decisions about appropriately high and realistic academic expectations for students with cognitive disabilities.
The fictitious story of Forrest Gump is used in the report to illustrate the potential danger of IQ-score-based generalized low expectations for students with disabilities---food for thought for educators, parents, and professionals involved in the education of students with disabilities during the current wave of NCLB-driven education reform.
After reading the report readers should feel compelled to yell “run Forrest run….from the potential negative impact of the soft bigotry of low expectations.”
3-14-05 note. Please note, in the spirit of full disclosure and potential conflicts of interest, that I was a coauthor of the NCEO report. I received $$ for writing the report but receive nothing for the number of copies that will be printed or distributed. Sorry for not pointing this out in the first post. I'm new at this....baby steps.
Labels: achievement, Beyond IQ, conative, EdPsych, education, g (gen IQ), IQ scores, IQ tests, NCLB, school psych, SpecEd
If Oprah's Book Club would focus on intelligence--2 recommended books
I'm frequently asked for recommendations regarding books on intellectual theories and assessments. All of those interested in intelligence theory and assessment should now be very pleased....you can start your X-mas gift shopping early! Two new books (one new and one a revision) have just been, or will soon be, published.
First, Flanagan and Harrison's second edition of Contemporary Intellectual Assessment (aka the "CIA" book) is a must read. Like the first edition, it is "the" must read on the application of contemporary intelligence theories to the applied science of intelligence testing. It is due out in March. This book is clearly one Oprah would have on her book club if she wanted to help the field of applied intellectual assessment move forward. Kudos to Flanagan and Harrison! (Potential conflict of interest note - I was paid 200 bucks for one chapter in the book).
The second is probably the best up-to-date book that delves deeper into contemporary research and theory. Handbook of Understanding and Measuring Intelligence (Wilhelm and Engle), IMHO, appears to be the best single integrative source that deals with the hard research side of theories of intelligence.
Together these books are very complementary. Wilhelm and Engle's edited text provides the research and theoretical background upon which Flanagan and Harrison's edited CIA book then builds (not on all parts of Wilhelm and Engle's book...but many). So, if one could only read two books to get up-to-speed on the contemporary research and theory regarding human intelligence and then the translation of that research into the applied practice of intelligence testing, these are the two books I would recommend.
Oprah. Are you listening?
Tuesday, March 08, 2005
Baby steps--Why an IQ blog?
My intentions are good...be patient with me. As per the therapy advocated in one of my favorite Bill Murray movies ("What about Bob?"), this endeavor will be in the form of "baby steps." :)
The idea of this blog is to provide an up-to-date bridge between intelligence theory and the applied practice of intelligence testing.
First, conflict(s) of interest need to be divulged. I, Kevin McGrew, have a commercial interest in the Woodcock-Johnson Battery--Third Edition (see www.iapsych.com for more information), a battery that includes both cognitive (intelligence) and achievement components.
However, my primary interest is in the Cattell-Horn-Carroll (CHC) Theory of Human Cognitive Abilities and how it can be measured with stand-alone batteries (e.g., WJ III; SB5; K-ABC II) or with CHC-designed Cross-Battery methods. To get up-to-speed you are encouraged to read CHC Theory: Past, Present and Future (click to visit and read). The goal is to focus primarily on the CHC theory as the center from which all types of CHC assessments emerge.
Stay tuned. I make no promises. This one post may be it. We shall see.