A Systematic Review of Working Memory Applications for Children with Learning Difficulties: Transfer Outcomes and Design Principles
Wednesday, November 20, 2024
Research Byte: A Systematic #Review of #WorkingMemory (#Gwm) Applications for #Children with #LearningDifficulties (#LD): Transfer Outcomes and Design Principles
#AI resource for #educators and #psychologists: Lockwood Educational and Psychological Consulting
I just connected (via LinkedIn) with Lockwood Educational and Psychological Consulting. The group describes itself below. Given the considerable interest in AI in education and psychology, I would suggest checking out their web page. I’ve not yet conducted a deep dive into the website, but it appears to be a solid AI-related resource. I plan to take a closer look.
Is your district, organization, or practice considering implementing AI but concerned about the ethical and practical implications? You've come to the right place. With expertise in AI, education, and psychology, I provide guidance to navigate these complex waters, ensuring ethical, effective, and confidence-inspiring AI integration in educational and psychological practice settings.
Research Byte: Domain-specific and domain-general skills as predictors of #arithmetic #fluency development—New #WJV will have similar measure—#MagnitudeComparison test
Domain-specific and domain-general skills as predictors of arithmetic fluency development
A PDF appears to be available at the journal page (direct link below).
As an FYI, the forthcoming WJ V (Q1 2025) has a new test (Magnitude Comparison) that measures abilities similar to the symbolic magnitude processing ability measure used in this study (COI: I’m a coauthor of the WJ V).
https://www.sciencedirect.com/science/article/pii/S104160802400178X
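For readers unfamiliar with the paradigm behind such measures, below is a toy sketch of a generic symbolic magnitude comparison trial (pick the larger of two Arabic numerals, with accuracy and speed as the indices of interest). This is only an illustration of the general paradigm, not the WJ V Magnitude Comparison test or its administration procedures; all names and trial parameters are hypothetical.

```python
# Toy sketch of a generic symbolic magnitude comparison trial.
# Illustrative paradigm only; NOT the WJ V Magnitude Comparison test.
import random
import time

def run_trial() -> tuple[bool, float]:
    """Present one digit pair; return (correct, response time in seconds)."""
    a, b = random.sample(range(1, 10), 2)   # two distinct single digits
    start = time.monotonic()
    answer = input(f"Which is larger: {a} or {b}? ")
    rt = time.monotonic() - start
    return answer.strip() == str(max(a, b)), rt

if __name__ == "__main__":
    results = [run_trial() for _ in range(5)]
    accuracy = sum(correct for correct, _ in results) / len(results)
    print(f"Accuracy: {accuracy:.0%}")
```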
Tuesday, November 19, 2024
Research Byte: Revisiting Baddeley and Hitch’s #workingmemory (#Gwm) model 50 years later—relevance to #children and #developmental models.
EXPRESS: Revisiting Working Memory Fifty Years after Baddeley and Hitch: A Review of Field-specific Conceptualizations, Use and Misuse, and Paths Forward for Studying Children
As trained educational and developmental psychologists who study the role of working memory in educational outcomes, we know the various assumptions made about definitions and measurements of this cognitive ability. Considering the popularity of the Baddeley and Hitch working memory model (1974) in these fields, we raise challenges related to measurement, overlap with executive function, and adopting working memory measurement approaches from adult models. We propose that researchers consider how working memory tasks might tap multiple other abilities. This is problematic in the context of child cognitive development and in understanding which factors explain educational outcomes in children. We recommend giving greater attention to the central executive, acknowledging the overlap between the central executive and executive function in study design, and investigating a developmental model in the context of the broader abilities evoked in measurement. These recommendations may provide a fuller understanding of working memory's mechanistic role in children's learning and development and assist in developing reasonable adjustments for specific aspects of working memory for children who struggle.
Occam’s razor and human #intelligence (and #cognitive ability tests)…yes…but sometimes no…food for thought for #schoolpsychologists
Monday, November 18, 2024
#ITC #ATP #Guidelines for #technology-based psychological and educational #assessments
The new #WAIS-V and forthcoming #WJ V tests—time for a reminder: The Joint Test #Standards, the publisher and you.
It is often not understood that these standards, which are of critical importance to those who author and sell psychological and educational tests (i.e., publishers), also include important user standards…that is, assessment professionals who select, evaluate, administer, and interpret such tests have certain standards to which they should adhere.
As stated in the Joint Test Standards, “validation is the joint responsibility of the test developer and the test user. The test developer is responsible for furnishing relevant evidence and a rationale in support of any test score interpretations for specified uses intended by the developer. The test user is ultimately responsible for evaluating the evidence in the particular setting in which the test is to be used. When a test user proposes an interpretation or use of test scores that differs from those supported by the test developer, the responsibility for providing validity evidence in support of that interpretation for the specified use is the responsibility of the user. It should be noted that important contributions to the validity evidence may be made as other researchers report findings of investigations that are related to the meaning of test scores” (p. 13; emphasis added).
The publication of a technical manual is the start…but not the end. A test’s technical manual provides the starting point or foundation (per the Joint Test Standards) for an ongoing “never ending story” of additional validity evidence that accumulates post-publication through additional research studies. A thorough and well-written technical manual is a “must” at publication of a new test, adhering as closely as reasonably possible to the relevant Joint Test Standards.
“It is commonly observed that the validation process never ends, as there is always additional information that can be gathered to more fully understand a test and the inferences that can be drawn from it” (p. 21).
If the reader has access to the latest edition of Flanagan and McDonough’s Contemporary Intellectual Assessment (2018), the Montgomery, Torres, and Eiseman chapter (“Using the Joint Test Standards to Evaluate the Validity Evidence for Intelligence Tests”) is highly recommended as a concise summary of the ins and outs of the Joint Test Standards.
Psychological and educational assessment tools provide information that often results in crucial and, at times, life-altering decisions for individuals, especially school-age students. All parties (authors, publishers, users) must take the Joint Test Standards seriously.
Saturday, November 16, 2024
Research Byte: Wait, Where’s the #FlynnEffect on the #WAIS-5?
Friday, November 15, 2024
#WJIV Geometric-Quantoid (#geoquant) #intelligence art: A geoquant interpretation of #cognitive tests is worth a thousand words—some similar “art parts” will be in the #WJV technical manual
I frequently complete data analyses that never see the light of day in a journal article. The results are all I need (at the time) to answer questions that intrigue me, and I then move on…or use them to tantalize psychologists during a workshop or conference presentation. Thus, this is non-peer-reviewed information. Below is one of my geoquant figures from a series of 2016 analyses (later updated in 2020) I completed on a portion of the WJ IV norm data. To interpret it, you should have knowledge of the WJ IV tests so you can understand the test variable abbreviation names. This MDS figure includes numerous interesting cognitive psychology constructs and theoretical principles based on multiple methodological lenses and supporting theory/research. This was completed before I was introduced to psychometric network analysis methods as yet another visual means to understand intelligence test data. You can play “Where’s Waldo” and look for the following (a toy sketch of the general MDS approach appears after the list):
- CHC broad cognitive factors
- Cognitive complexity information re WJ IV tests
- Kahneman’s two systems of cognition (System I/II thinking)
- Berlin BIS ability x content facet framework
- Two of Ackerman’s intelligence dimensions as per PPIK theory (intelligence-as-process; intelligence-as-knowledge)
- Cattell’s general fluid (gf) and general crystallized (gc) abilities, the two major domains in his five-domain triadic theory of intelligence…the lower-case gf/gc notation is deliberate and indicates more “general” capacities (akin in breadth to Spearman’s g; Spearman was Cattell’s mentor) and not the Horn- and Carroll-like broad Gf and Gc
- Newland’s process- and product-dominant distinction of cognitive abilities
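For readers curious how a figure like this is produced, here is a minimal, hypothetical sketch of the general MDS approach in Python: convert a test intercorrelation matrix into dissimilarities and project the tests into two dimensions. The test abbreviations and correlation values below are placeholders; this is not my actual analysis code or the WJ IV norm data.

```python
# Minimal sketch of the general MDS approach (placeholder data; not the
# actual WJ IV analyses). Assumes scikit-learn is available.
import numpy as np
from sklearn.manifold import MDS

tests = ["ORLVOC", "NUMSER", "VRBATN", "LETPAT"]  # hypothetical abbreviations
R = np.array([                                    # placeholder correlations
    [1.00, 0.55, 0.48, 0.40],
    [0.55, 1.00, 0.42, 0.38],
    [0.48, 0.42, 1.00, 0.60],
    [0.40, 0.38, 0.60, 1.00],
])

# Highly correlated tests should plot close together, so dissimilarity = 1 - r.
D = 1.0 - R

# Two-dimensional MDS on the precomputed dissimilarity matrix.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

for name, (x, y) in zip(tests, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```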
Thursday, November 14, 2024
Stay tuned!!!! #WJV g and non-g multiple #CHC theoretical models to be presented in the forthcoming (2025) technical manual: Senior author’s (McGrew) position re the #psychometric #g factor and #bifactor g models.
(c) Copyright, Dr. Kevin S. McGrew, Institute for Applied Psychometrics (11-14-24)
Warning: this may be TL;DR for many. :) Also, I will be rereading it multiple times and may tweak minor (not substantive) errors and post updates…hey, blogging has an earthy quality to it. :)
The classic hierarchical g model “places a psychometric g stratum III ability at the apex over multiple broad stratum II CHC abilities” (McGrew et al., 2023, p. 2). This model is most often associated with Carroll (1993, 2003) and is called (in panel A in the above figure) the Carroll hierarchical g broad CHC model. In this model, the shared variance of subsets of moderately to highly correlated tests is first specified as 10 CHC broad ability factors (i.e., the measurement model; Gf, Gc, Gv, etc.). Next, the covariances (latent factor correlations) among the broad CHC factors are specified as being the direct result of a higher-order psychometric g factor (i.e., the structural model).
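To make the measurement-model/structural-model distinction concrete, here is a toy higher-order g specification in lavaan-style syntax, expressed via the Python semopy package. This is only a sketch of the general modeling approach under stated assumptions (three broad factors with three hypothetical indicators each), not the actual WJ V model, test names, or data; the real model specifies 10 broad CHC factors.

```python
# Toy higher-order (hierarchical) g model in lavaan-style syntax via semopy.
# Hypothetical indicator names; NOT the WJ V model, tests, or data.
import pandas as pd
from semopy import Model

HIGHER_ORDER_G = """
# Measurement model: broad CHC factors defined by correlated tests
Gf =~ gf1 + gf2 + gf3
Gc =~ gc1 + gc2 + gc3
Gv =~ gv1 + gv2 + gv3
# Structural model: broad-factor covariances explained by higher-order g
g =~ Gf + Gc + Gv
"""

def fit_higher_order(data: pd.DataFrame) -> pd.DataFrame:
    """Fit the toy model to a data frame with columns gf1..gv3."""
    model = Model(HIGHER_ORDER_G)
    model.fit(data)
    return model.inspect()  # table of parameter estimates
```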
Because…specifying and evaluating bifactor g models with 60 cognitive and achievement tests proved extremely complex and fraught with statistical convergence issues. Trust me…I tried hard and long to run bifactor g models for the WJ V norm data. It was possible to run bifactor g models separately on the cognitive and achievement sets of WJ V tests, but that does not allow direct comparison with the other three structural models, which utilized all 60 cognitive and achievement tests in single CFA models. Instead, as of the time the WJ V technical manual analyses were being completed (and are now being summarized), the Riverside Insights (RI) internal psychometric research team was tackling the complex issues involved in completing WJ V bifactor g models, first in the separate sets of cognitive and achievement tests. Stay tuned for future professional conference paper presentations, white papers, or journal article submissions by the RI research team.
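For contrast with the higher-order sketch above, here is the same toy example respecified as a bifactor g model (again hypothetical, and not the RI team’s models): every test loads directly on g and on one group factor, with all factors constrained to be orthogonal. With 60 tests, that doubling of loadings plus the orthogonality constraints is part of what makes estimation and convergence so difficult. The sketch assumes semopy accepts lavaan-style fixed (0*) covariances; it could be fit with the same kind of helper shown earlier.

```python
# Toy bifactor g model for contrast with the higher-order sketch above.
# Hypothetical; NOT the WJ V bifactor models being developed at RI.
# Assumes semopy accepts lavaan-style fixed (0*) covariances.
BIFACTOR_G = """
# Every test loads directly on the general factor...
g =~ gf1 + gf2 + gf3 + gc1 + gc2 + gc3 + gv1 + gv2 + gv3
# ...and on exactly one narrower group factor
Gf =~ gf1 + gf2 + gf3
Gc =~ gc1 + gc2 + gc3
Gv =~ gv1 + gv2 + gv3
# All factors orthogonal: g and the group factors do not covary
g ~~ 0*Gf
g ~~ 0*Gc
g ~~ 0*Gv
Gf ~~ 0*Gc
Gf ~~ 0*Gv
Gc ~~ 0*Gv
"""
```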