Friday, December 20, 2024
Research Byte: #Cognitive Factors Underlying #Mathematical Skills: A Systematic Review and #MetaAnalysis - relevant for #schoolpsychology
Thursday, December 19, 2024
Notable open access Journal of #Intelligence articles by topics: #Gf #Gwm #criticalthinking #creativity etc
Notable Papers in the Field of /Fluid Intelligence/
https://www.mdpi.com/journal/jintelligence/announcements/9960
Notable Papers in the Field of /Creativity/
https://www.mdpi.com/journal/jintelligence/announcements/9340
Notable Papers in the Field of /Working Memory/
https://www.mdpi.com/journal/jintelligence/announcements/8927
Notable Papers in the Field of /Critical Thinking/
https://www.mdpi.com/journal/jintelligence/announcements/8240
https://www.mdpi.com/journal/jintelligence/announcements/7166
Notable Papers in the Field of /Metacognition/
https://www.mdpi.com/journal/jintelligence/announcements/6450
Notable Papers in the Field of /Personality/
https://www.mdpi.com/journal/jintelligence/announcements/6202
Editor's Choice Papers
https://www.mdpi.com/journal/jintelligence/editors_choice
We would be honored if you could keep an eye on the publications in our journal. All papers can be downloaded freely.
Wednesday, December 18, 2024
FYI: A guide to the #WAISV ancillary index scores for #schoolpsychology and other #intelligence assessment professionals
Monday, December 16, 2024
“Be and see” the #WISC-V correlation matrix: Unpublished analyses of the WISC-V #intelligence test
Sunday, December 15, 2024
Research byte: Good article with summary of the major #cognitive #neuroscience models of human #intelligence
Aron K. Barbey. Open access available for download
Quote2note: Niels Bohr on experts and mistakes
Niels Bohr, recalled on his death, November 18, 1962
Friday, December 13, 2024
The #Cattell-Horn-Carroll (#CHC) periodic table of #cognitive elements: Just in time for the holidays and your favorite #schoolpsychology #intelligence testing friend
Thursday, December 12, 2024
Research byte: Prediction of human #intelligence (#g #Gf #Gc) from #brain (#network) #connectivity - #CHC
Choosing explanation over performance: Insights from machine learning-based prediction of human intelligence from brain connectivity
Abstract
A growing body of research predicts individual cognitive ability levels from brain characteristics including functional brain connectivity. The majority of this research achieves statistically significant prediction performance but provides limited insight into neurobiological processes underlying the predicted concepts. The insufficient identification of predictive brain characteristics may present an important factor critically contributing to this constraint. Here, we encourage researchers to design predictive modeling studies with an emphasis on interpretability to enhance our conceptual understanding of human cognition. As an example, we investigated in a preregistered study which functional brain connections successfully predict general, crystallized, and fluid intelligence in a sample of 806 healthy adults (replication: N = 322). The choice of the predicted intelligence component as well as the task during which connectivity was measured proved crucial for better understanding intelligence at the neural level. Further, intelligence could be predicted not solely from one specific set of brain connections, but from various combinations of connections with system-wide locations. Such partially redundant, brain-wide functional connectivity characteristics complement intelligence-relevant connectivity of brain regions proposed by established intelligence theories. In sum, our study showcases how future prediction studies on human cognition can enhance explanatory value by prioritizing a systematic evaluation of predictive brain characteristics over maximizing prediction performance (emphasis added).
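The abstract's core idea (predict a cognitive score from many functional connections, then ask *which* connections carry the signal rather than only how well the model predicts) can be sketched with simulated data. This is a minimal illustration, not the study's actual pipeline: the sample sizes, the number of connections, the ridge penalty, and the split-half validation are all my own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: 300 "participants", 200 functional connections.
# Only the first 10 connections carry signal for the cognitive score,
# mimicking the question of WHICH connections are predictive.
n, p, k = 300, 200, 10
X = rng.standard_normal((n, p))
true_w = np.zeros(p)
true_w[:k] = rng.uniform(0.5, 1.0, k)
y = X @ true_w + rng.standard_normal(n)

def ridge_fit(X, y, lam=10.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Split-half cross-validation: fit on one half, predict the other.
half = n // 2
w = ridge_fit(X[:half], y[:half])
pred = X[half:] @ w
r = np.corrcoef(pred, y[half:])[0, 1]

# Interpretability step: rank connections by absolute weight and check
# whether the truly predictive ones surface at the top.
top = np.argsort(np.abs(w))[::-1][:k]
recovered = np.intersect1d(top, np.arange(k)).size
print(f"out-of-sample r = {r:.2f}; signal connections in top {k}: {recovered}/{k}")
```

The point of the last step is the abstract's argument: a significant out-of-sample correlation alone says little; inspecting which features drive the prediction is what yields explanatory value.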
#Intelligence (#IQ) #cognitive testing in perspective: An #ecological systems brief video explanation—useful for #schoolpsychology
I first posted the video in 2015 (9 years ago!), so be gentle…I'm much better with these videos now :) Thus, some of my COI statements/disclaimers/affiliations are no longer accurate (an updated version can be found at theMindHub.com, under About IAP: The Director: Disclosures & Bio).
If all works well, just click the start arrow on the video screen and tap the enlarge icon in the lower right corner. The video is now hosted on YouTube, so you may first encounter 1-2 very brief ads that you can skip within the first few seconds. You might also be asked to "sign in" to show you are not a bot (this seems to vary every time). All you need to do is press the message, or if images of multiple videos appear, press the first one. If you only get the message, you may need to back up and try the link again (no signing in required). I hate having lost control of how these work by using YouTube 9 years ago, as the startup now has these mild annoyances, but that is the price of a free service. Be aware that some of the first 4-5 slides may have minimal or no narration, and you can skip ahead to where the narration begins; the first slide is shown immediately below, before the video. Given the caveats above, the video might not deploy exactly as I describe; the platform seems to be a bit temperamental, at least for me. Enjoy.
Wednesday, December 11, 2024
Applied #psychometrics 101: Strong programs of #constructvalidity—the #theory - #measurement framework with emphasis on #substantive & #structural validity - #WJIV #WJV #schoolpsychology #psychology
The validity of psychological tests “is an overall judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores or other modes of assessment” (Messick, 1995, p. 741).
The ability to draw valid inferences regarding theoretical constructs from observable or manifest measures (e.g., test or composite scores) is a function of the extent to which the underlying program of validity research attends to both the theoretical and measurement domains of the focal constructs (Benson, 1998; Benson & Hagtvet, 1996; Cronbach, 1971; Cronbach & Meehl, 1955; Loevinger, 1957; Messick, 1995; Nunnally, 1978).
The theoretical—measurement domain framework that has driven the revisions of the WJ test batteries, particularly from the WJ-R to the forthcoming WJ V cognitive and achievement test batteries (Q1, 2025; COI disclosure: I am a coauthor of the current WJ IV and forthcoming WJ V), is represented in the figures below.
The goal of this post is to provide visual-graphic (Gv) images that hopefully, if properly studied by the reader (and if I did a decent job), provide the basic concepts of what constitutes the substantive component (and to a more limited extent the structural component) of a strong program of construct validity—in particular, the theoretical-measurement domain mapping framework used in the WJ-R to the forthcoming WJ V. The external stage of construct validity is not highlighted in this current post. The goal is for conceptual understanding…thus the absence of empirical data, etc.
For those who want written background information, the most succinct conceptual overview of a "strong program of construct validation" is Benson (1998; click to download and read).
Otherwise…sit back and enjoy the Gv presentation…where five images equal at least one or more chapters in a technical manual :).
Be sure to click on each image to enlarge (and make readable)
This figure below was first published in a book on CHC theoretical (then known as Gf-Gc) interpretation of the Wechsler intelligence test batteries (Flanagan, McGrew, & Ortiz, 2000).
Monday, December 09, 2024
#quote2note: Louis Agassiz on stages of #scientific truth
"Every great scientific truth goes through three stages. First, people say it conflicts with the Bible. Next they say it had been discovered before. Lastly they say they always believed it."
- Louis Agassiz
Sunday, December 08, 2024
Thursday, December 05, 2024
Tuesday, December 03, 2024
Research Byte: The structure of adult thinking. A #network approach to #metacognitive processing —#cognition #executivefunction
Click here to access copy of article
Abstract
Educational relevance statement
Sunday, December 01, 2024
Research Byte: Past reflections, present insights: A systematic #review and new empirical research into the #workingmemory capacity (WMC)-#fluidintelligence (#Gf) relationship
Past reflections, present insights: A systematic review and new empirical research into the working memory capacity (WMC)-fluid intelligence (Gf) relationship
Click here to go to journal
Abstract
According to the capacity account, working memory capacity (WMC) is a causal factor of fluid intelligence (Gf) in that it enables simultaneous activation of multiple pieces of relevant information in the service of reasoning. Consequently, the correlation between WMC and Gf should increase as a function of the capacity demands of reasoning tasks. Here we systematically review the existing literature on the connection between WMC and Gf. The review reveals conceptual incongruities, a diverse range of analytical approaches, and mixed evidence. While some studies have found a link (e.g., Little et al., 2014), the majority of others did not observe a significant increase in correlation (e.g., Burgoyne et al., 2019; Salthouse, 1993; Unsworth, 2014; Unsworth & Engle, 2005; Wiley et al., 2011). We then test the capacity hypothesis on a much larger, non-Anglo-Saxon culture sample (N = 543). Our WMC measures encompassed the Operation, Reading, and Symmetry Span tasks, whereas Gf was based on items from Raven's Advanced Progressive Matrices (Raven). We could not confirm the capacity hypothesis either when we employed the analytical approach based on Raven's item difficulty or when the number of rule tokens required to solve a Raven's item was used. Finally, even the use of structural equation modeling (SEM) and its variant, latent growth curve modeling (LGCM), which provide more “process-pure” latent measures of constructs, as well as an opportunity to control for all relevant interrelations among variables, could not produce support for the capacity account. Consequently, we discuss the limitations of the capacity hypothesis in explaining the WMC-Gf relationship, highlighting both theoretical and methodological challenges, particularly the shortcomings of information processing models in accounting for human cognitive abilities.
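The item-difficulty analysis the abstract describes (does the WMC-accuracy correlation grow as Raven items get harder?) can be sketched with simulated data. Everything below is illustrative and assumed by me, not taken from the study: the sample size, the latent WMC-Gf correlation, and the simple logistic item model. Notably, a generative model with no capacity mechanism need not produce an increasing correlation, which is one way a null result like the study's can arise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated participants with correlated latent WMC and Gf (values illustrative).
n_sub, n_items = 500, 36
wmc = rng.standard_normal(n_sub)
gf = 0.6 * wmc + 0.8 * rng.standard_normal(n_sub)

# Raven-like items ordered from easy to hard; probability correct follows
# a simple Rasch-style logistic of (ability - difficulty).
difficulty = np.linspace(-2, 2, n_items)
logits = gf[:, None] - difficulty[None, :]
correct = rng.random((n_sub, n_items)) < 1 / (1 + np.exp(-logits))

# The capacity hypothesis predicts a larger WMC correlation for the hard half.
easy = correct[:, : n_items // 2].mean(axis=1)
hard = correct[:, n_items // 2 :].mean(axis=1)
r_easy = np.corrcoef(wmc, easy)[0, 1]
r_hard = np.corrcoef(wmc, hard)[0, 1]
print(f"WMC-accuracy r: easy items {r_easy:.2f}, hard items {r_hard:.2f}")
```

In practice the study used span-task composites and latent-variable (SEM/LGCM) models rather than raw accuracy bins; this sketch only shows the shape of the difficulty-moderation question.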
Saturday, November 30, 2024
On making individual tests in #CHC #intelligence test batteries more #cognitivelycomplex: Two approaches
The following information is from a section of the WJ IV technical manual (McGrew, LaForte & Schrank, 2014) and will again be included in the WJ V technical manual (LaForte, Dailey & McGrew, Q1 2025). It was first discussed in McGrew (2012).
On making individual tests in intelligence test batteries more cognitively complex
In the applied intelligence test literature, there are typically two different approaches used to increase the cognitive complexity of individual tests (McGrew et al., 2014). The first approach is to deliberately design factorially complex CHC tests, or tests that deliberately include the influence of two or more narrow CHC abilities. This approach is exemplified by Kaufman and Kaufman (2004a) in the development of the Kaufman Assessment Battery for Children–Second Edition (KABC-II), where:
the authors did not strive to develop “pure” tasks for measuring the five CHC broad abilities. In theory, Gv tasks should exclude Gf or Gs, for example, and tests of other broad abilities, like Gc or Glr, should only measure that ability and no other abilities. In practice, however, the goal of comprehensive tests of cognitive abilities like the KABC-II is to measure problem solving in different contexts and under different conditions, with complexity being necessary to assess high-level functioning. (p. 16)
In this approach to test development, construct-irrelevant variance (Benson, 1998; Messick, 1995) is not deliberately minimized or eliminated. Although tests that measure more than one narrow CHC ability typically have lower validity as indicators of CHC abilities, they tend to lend support to other types of validity evidence (e.g., higher predictive validity). The WJ V has several new cognitive tests that use this approach to cognitive complexity.
The second approach to enhancing the cognitive complexity of tests is to maintain the CHC factor purity of tests or clusters (as much as possible) while concurrently and deliberately increasing the complexity of information processing demands of the tests within the specific broad or narrow CHC domain (McGrew, 2012). As described by Lohman and Lakin (2011), the cognitive complexity of the abilities measured by tests can be increased by (a) increasing the number of cognitive component processes, (b) including differences in speed of component processing, (c) increasing the number of more important component processes (e.g., inference), (d) increasing the demands of attentional control and working memory, or (e) increasing the demands on adaptive functions (assembly, control, and monitoring). This second form of cognitive complexity, not to be confused with factorial complexity, is the inclusion of test tasks that place greater demands on cognitive information processing (i.e., cognitive load), that require greater allocation of key cognitive resources (viz., working memory or attentional control), and that invoke the involvement of more cognitive control or executive functions. Per this second form of cognitive complexity, the objective is to design a test that is more cognitively complex within a CHC domain, not to deliberately make it a mixed measure of two or more CHC abilities.
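The distinction between the two approaches (a factorially mixed test vs. a factor-pure but cognitively demanding test) can be made concrete with a small simulation. All loadings and variable names below are illustrative assumptions of mine, not values from any WJ analysis: a "mixed" test loads on both Gv and Gf, while a "pure" test carries a stronger loading on Gv alone.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Two correlated latent CHC abilities (factor correlation 0.5, illustrative).
gv = rng.standard_normal(n)
gf = 0.5 * gv + np.sqrt(1 - 0.25) * rng.standard_normal(n)

# Approach 1: factorially complex test -- deliberately loads on Gv AND Gf.
mixed = 0.5 * gv + 0.5 * gf + 0.6 * rng.standard_normal(n)

# Approach 2: cognitively complex but factor-pure -- a stronger Gv loading
# (heavier processing demands within the Gv domain), no direct Gf loading.
pure = 0.8 * gv + 0.6 * rng.standard_normal(n)

r_mixed_gf = np.corrcoef(mixed, gf)[0, 1]
r_pure_gv = np.corrcoef(pure, gv)[0, 1]
r_pure_gf = np.corrcoef(pure, gf)[0, 1]
print(f"mixed test:   r with Gf = {r_mixed_gf:.2f}")
print(f"pure Gv test: r with Gv = {r_pure_gv:.2f}, r with Gf = {r_pure_gf:.2f}")
```

The mixed test correlates substantially with Gf by design (construct-irrelevant variance from a Gv-indicator standpoint), whereas the pure test's Gf correlation arises only through the Gv-Gf factor correlation; this mirrors the trade-off the text describes between factor purity and broader predictive reach.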