Wednesday, November 20, 2024

Research Byte: A Systematic #Review of #WorkingMemory (#Gwm) Applications for #Children with #LearningDifficulties (#LD): Transfer Outcomes and Design Principles

 A Systematic Review of Working Memory Applications for Children with Learning Difficulties: Transfer Outcomes and Design Principles 

by Adel Shaban 1,*, Victor Chang 2, Onikepo D. Amodu 1, Mohamed Ramadan Attia 3 and Gomaa Said Mohamed Abdelhamid 4,5

1 Middlesbrough College, University Centre Middlesbrough, Middlesbrough TS2 1AD, UK
2 Aston Business School, Aston University, Birmingham B4 7UP, UK
3 Department of Educational Technology, Faculty of Specific Education, Fayoum University, Fayoum 63514, Egypt
4 Department of Educational Psychology, Faculty of Education, Fayoum University, Fayoum 63514, Egypt
5 Department of Psychology, College of Education, Sultan Qaboos University, Muscat 123, Oman
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(11), 1260; https://doi.org/10.3390/educsci14111260

Visit the article page, where a PDF of the article can be downloaded

Abstract

Working memory (WM) is a crucial cognitive function, and a deficit in this function is a critical factor in learning difficulties (LDs). As a result, there is growing interest in exploring different approaches to training WM to support students with LDs. Following the PRISMA 2020 guidelines, this systematic review aims to identify current computer-based WM training applications and their theoretical foundations, explore their effects on improving WM capacity and other cognitive/academic abilities, and extract design principles for creating an effective WM application for children with LDs. The 22 studies selected for this review provide strong evidence that children with LDs have low WM capacity and that their WM functions can be trained. The findings revealed four commercial WM training applications—COGMED, Jungle, BrainWare Safari, and N-back—that were utilized in 16 studies. However, these studies focused on suggesting different types of WM tasks and examining their effects rather than making those tasks user-friendly or providing practical guidelines for the end-user. To address this gap, the principles of the Human–Computer Interaction, with a focus on usability and user experience as well as relevant cognitive theories, and the design recommendations from the selected studies have been reviewed to extract a set of proposed guidelines. A total of 15 guidelines have been extracted that can be utilized to design WM training programs specifically for children with LDs. 


https://www.mdpi.com/2227-7102/14/11/1260#

#AI resource for #educators and #psychologists: Lockwood Educational and Psychological Consulting

I just connected (via LinkedIn) with Lockwood Educational and Psychological Consulting.  The group describes itself below.  Given the considerable interest in AI in education and psychology, I would suggest checking out their web page. I’ve not yet conducted a deep dive into the website, but it appears to be a solid AI-related resource.  I plan to take a closer look.


Is your district, organization, or practice considering implementing AI but concerned about the ethical and practical implications? You've come to the right place. With expertise in AI, education, and psychology, I provide guidance to navigate these complex waters, ensuring ethical, effective, and confidence-inspiring AI integration in educational and psychological practice settings.


 

Research Byte: Domain-specific and domain-general skills as predictors of #arithmetic #fluency development—New #WJV will have similar measure—#MagnitudeComparison test

 Domain-specific and domain-general skills as predictors of arithmetic fluency development

Link to PDF appears to be available at the journal page (click here to go directly to the PDF)

Abstract

We investigated Norwegian children's (n = 262) development in arithmetic fluency from first to third grade. Children's arithmetic fluency was measured at four time points, domain-specific (i.e., symbolic magnitude processing and number sequences) and domain-general skills (i.e., working memory, rapid naming, non-verbal reasoning, and sustained attention) once in the first grade. Based on a series of growth mixture models, one developmental trajectory best described the data. Multigroup latent growth curve models showed that girls and boys developed similarly in their arithmetic fluency over time. Symbolic magnitude processing and number sequence skills predicted both initial level and growth in arithmetic fluency, and working memory predicted only initial level, similarly for boys and girls. Mother's education level predicted the initial level of arithmetic fluency for boys, and rapid naming predicted growth for girls. Our findings highlight the role of domain-specific skills in the development of arithmetic fluency.
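For readers wondering what "predicted both initial level and growth" means operationally, here is a generic linear latent growth curve sketch. The notation and predictor labels are mine (a simplification, not the authors' exact specification): the latent intercept and slope factors are regressed on the domain-specific and domain-general predictors measured in first grade.

```latex
% Generic linear latent growth curve sketch (illustrative; not the authors' exact model)
% y_{it} = arithmetic fluency for child i at measurement occasion t (t = 0, 1, 2, 3)
\[
\begin{aligned}
y_{it}    &= \eta_{0i} + \eta_{1i}\, t + \varepsilon_{it} \\
\eta_{0i} &= \alpha_0 + \gamma_{01}\,\mathrm{SymMag}_i + \gamma_{02}\,\mathrm{NumSeq}_i
             + \gamma_{03}\,\mathrm{WM}_i + \zeta_{0i}
  && \text{(initial level)} \\
\eta_{1i} &= \alpha_1 + \gamma_{11}\,\mathrm{SymMag}_i + \gamma_{12}\,\mathrm{NumSeq}_i + \zeta_{1i}
  && \text{(growth)}
\end{aligned}
\]
```

In this notation the abstract's findings map to the gamma coefficients: symbolic magnitude processing and number sequences predict both the intercept and the slope, while working memory appears only in the intercept (initial level) equation.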

As an FYI, the forthcoming WJ V (Q1 2025) has a new test (Magnitude Comparison) that measures abilities similar to the symbolic magnitude processing ability measure used in this study (COI - I'm a coauthor of the WJ V).


https://www.sciencedirect.com/science/article/pii/S104160802400178X

Tuesday, November 19, 2024

Research Byte: Revisiting Baddeley and Hitch's #workingmemory (#Gwm) 50 years later—relevance to #children and #developmental models.

 EXPRESS: Revisiting Working Memory Fifty Years after Baddeley and Hitch: A Review of Field-specific Conceptualizations, Use and Misuse, and Paths Forward for Studying Children


As trained educational and developmental psychologists who study the role of working memory in educational outcomes, we know the various assumptions made about definitions and measurements of this cognitive ability. Considering the popularity of the Baddeley and Hitch working memory model (1974) in these fields, we raise challenges related to measurement, overlap with executive function, and adopting working memory measurement approaches from adult models. We propose that researchers consider how working memory tasks might tap multiple other abilities. This is problematic in the context of child cognitive development and in understanding which factors explain educational outcomes in children. We recommend giving greater attention to the central executive, acknowledging the overlap between the central executive and executive function in study design, and investigating a developmental model in the context of the broader abilities evoked in measurement. These recommendations may provide a fuller understanding of working memory's mechanistic role in children's learning and development and assist in developing reasonable adjustments for specific aspects of working memory for children who struggle.

Occam’s razor and human #intelligence (and #cognitive ability tests)….yes…but sometimes no…food for thought for #schoolpsychologists

 


“Occam’s razor (also spelled Ockham’s razor or Ocham’s razor; Latin: novacula Occami) is the problem-solving principle that recommends searching for explanations constructed with the smallest possible set of elements. It is also known as the principle of parsimony or the law of parsimony (Latin: lex parsimoniae)”

In the context of fitting structural CFA models to intelligence test data, it can be summarized as “given two models with similar fit to the data, the simpler model is preferred” (Kline, 2011, p. 102). The law of parsimony is frequently invoked in research articles when an investigator is faced with competing factor models regarding the underlying structure of a cognitive ability test battery. However, when complex human behavior is involved, especially something as complex as human intelligence and the brain, it is possible that Occam’s razor might interfere with a thorough understanding of human intelligence and test batteries designed to measure intelligence. The following quote has stuck with me as an important reminder that when faced with alternative and more complex statistical CFA models, these models should not be summarily dismissed based only on the parsimony principle. As stated by Stankov, Boyle, and Cattell (1995):


“while we acknowledge the principle of parsimony and endorse it whenever applicable, the evidence points to relative complexity rather than simplicity…the insistence on parsimony at all costs can lead to bad science” (p. 16).


Stankov, L., Boyle, G. J., & Cattell, R. B. (1995). Models and paradigms in personality and intelligence research. In D. Saklofske & M. Zeidner (Eds.), International handbook of personality and intelligence (pp. 15–43). New York, NY: Plenum Press.
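To make the parsimony trade-off concrete, here is a minimal sketch (in Python, with made-up numbers, not values from any real model comparison) of how information criteria penalize model complexity. A more complex CFA can achieve a better raw likelihood yet still lose on AIC/BIC, which is the formal version of Kline's "prefer the simpler model" heuristic; as Stankov et al. caution, such penalties are a guide, not a verdict.

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: -2*lnL + 2*k (k = number of free parameters)."""
    return -2.0 * log_likelihood + 2.0 * k

def bic(log_likelihood, k, n):
    """Bayesian information criterion: -2*lnL + k*ln(n) (n = sample size)."""
    return -2.0 * log_likelihood + k * math.log(n)

# Hypothetical comparison (illustrative numbers only): the more complex CFA fits
# slightly better in raw likelihood but pays a parameter penalty, so the
# parsimony-adjusted indices favor the simpler model.
n = 2000
models = {
    "simpler model":      {"logL": -15210.0, "k": 60},
    "more complex model": {"logL": -15195.0, "k": 85},
}
for name, m in models.items():
    print(f"{name}: AIC = {aic(m['logL'], m['k']):.1f}, "
          f"BIC = {bic(m['logL'], m['k'], n):.1f}")
```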

Monday, November 18, 2024

#ITC #ATP #Guidelines for #technology-based psychological and educational #assessments


With the ever-increasing trend towards computer- and tablet-based psychological and educational tests, authors, publishers, and users need to be familiar with the International Test Commission (ITC) and Association of Test Publishers (ATP) recent (2022) Guidelines for Technology-Based Assessment (click here to download a free PDF copy).  Also see the post earlier today regarding the related (and very important) Joint Test Standards for test authors, publishers, and users (especially to evaluate tests).




The new #WAIS-V and forthcoming #WJ V tests—time for a reminder: The Joint Test #Standards, the publisher and you.

With the recent publication of a new WAIS-V and the forthcoming publication (projected Q1 2025) of the WJ V (COI - I’m a coauthor of the WJ V), intelligence assessment professionals are excited.  With these new revised test batteries, it is time to (again) remind users of the critical importance of the Standards for Educational and Psychological Testing (2014; aka the Joint [AERA, APA, NCME] Test Standards).  Click here for a FREE PDF copy to add to your “must read” list.  Yes…you can download and read for free…just in time for the holidays!!!!!!


It is often not understood that these standards, which are of critical importance to those who author and sell psychological and educational tests (i.e., publishers), also include important user standards…that is, assessment professionals who select, evaluate, administer, and interpret such tests have certain standards they should adhere to.

As stated in the Joint Test Standards, “validation is the joint responsibility of the test developer and the test user. The test developer is responsible for furnishing relevant evidence and a rationale in support of any test score interpretations for specified uses intended by the developer. The test user is ultimately responsible for evaluating the evidence in the particular setting in which the test is to be used. When a test user proposes an interpretation or use of test scores that differs from those supported by the test developer, the responsibility for providing validity evidence in support of that interpretation for the specified use is the responsibility of the user. It should be noted that important contributions to the validity evidence may be made as other researchers report findings of investigations that are related to the meaning of test scores” (p. 13; emphasis added).

The publication of a technical manual is the start…but not the end.  A test’s technical manual provides the starting point or foundation (as per the Joint Test Standards) for an ongoing “never ending story” of additional validity evidence that accumulates post-publication through additional research studies.  A thorough and well-written technical manual is a “must” at publication of a new test, adhering as closely as reasonably possible to the relevant Joint Test Standards.


“It is commonly observed that the validation process never ends, as there is always additional information that can be gathered to more fully understand a test and the inferences that can be drawn from it” (p. 21).

If the reader has access to the latest edition of Flanagan and McDonough’s Contemporary Intellectual Assessment (2018) book, the Montgomery, Torres, and Eiseman chapter (“Using the Joint Test Standards to Evaluate the Validity Evidence for Intelligence Tests”) is highly recommended as a concise summary of the ins and outs of the Joint Test Standards.



Psychological and educational assessment tools provide information that often results in crucial and, at times, life-altering decisions for individuals, especially school-age students.  All parties (authors, publishers, users) must take these Joint Test Standards seriously.  

Saturday, November 16, 2024

Research Byte: Wait, Where’s the #FlynnEffect on the #WAIS-5?

 

Emily L. Winter, Sierra M. Trudel, and Alan S. Kaufman
J. Intell. 2024, 12(11), 118; https://doi.org/10.3390/jintelligence12110118

Click on the link to download an open access copy of the PDF article.
The recent release of the WAIS-5, a decade and a half after its predecessor, the WAIS-IV, raises immediate questions about the Flynn effect (FE). Does the traditional FE of 3 points per decade in the U.S. for children and adults, identified for the Full Scale IQs of all Wechsler scales and for other global IQ scores as well, persist into the 2020s? The WAIS-5 Technical and Interpretive Manual provides two counterbalanced validity studies that address the Flynn effect directly—N = 186 adolescents and adults (16–90 years, mean age = 47.8) tested on the WAIS-IV and WAIS-5; and N = 98 16-year-olds tested on the WISC-V and WAIS-5. The FE is incorporated into the diagnostic criteria for intellectual disabilities by the American Association on Intellectual and Developmental Disabilities (AAIDD), by DSM-5-TR, and in capital punishment cases. The unexpected result of the two counterbalanced studies was a reduction in the Flynn effect from the expected value of 3 IQ points to 1.2 points. These findings raise interesting questions regarding whether the three-point adjustment to FSIQs should be continued for intellectual disability diagnosis and whether the federal courts should rethink their guidelines for capital punishment cases and other instances of high-stakes decision-making. Limitations include a lack of generalization to children, the impact of practice effects, and a small sample size.
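As a concrete illustration of why the points-per-decade figure matters for high-stakes decisions, here is a minimal sketch of the conventional Flynn-effect correction sometimes applied in intellectual disability and capital cases. The function name and example values are mine, and the 1.2-point rate is simply the estimate reported in the abstract above, not an endorsed replacement for the traditional 3-point adjustment.

```python
def flynn_adjusted_iq(obtained_fsiq, years_since_norming, points_per_decade=3.0):
    """Conventional Flynn-effect correction: subtract the assumed norm
    'inflation' (default 3 points per decade, i.e., 0.3 points per year
    since the test's norming) from the obtained FSIQ."""
    return obtained_fsiq - points_per_decade * years_since_norming / 10.0

# Example: an FSIQ of 75 obtained on a test normed 12 years earlier
print(flynn_adjusted_iq(75, 12))         # 71.4 with the traditional 3-point rate
print(flynn_adjusted_iq(75, 12, 1.2))    # 73.56 with the 1.2-point estimate above
```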

Friday, November 15, 2024

#WJIV Geometric-Quantoid (#geoquant) #intelligence art: A geoquant interpretation of #cognitive tests is worth 1,000 words—some similar “art parts” will be in the #WJV technical manual


(You will need to click on image to enlarge figure to read)

I frequently complete data analyses that never see the light of day in a journal article. The results are all I need (at the time) to answer intriguing questions for me, and I then move on…or tantalize psychologists during a workshop or conference presentation.  Thus, this is non-peer-reviewed information.  Below is one of my geoquant figures from a series of 2016 analyses (later updated in 2020) I completed on a portion of the WJ IV norm data.  To interpret it you should have knowledge of the WJ IV tests—so you can understand the test variable abbreviation names.  This MDS figure includes numerous interesting cognitive psychology constructs and theoretical principles based on multiple methodological lenses and supporting theory/research.  This was completed before I was introduced to psychometric network analysis methods as yet another visual means to understand intelligence test data.  You can play “where’s Waldo” and look for the following:

  • CHC broad cognitive factors
  • Cognitive complexity information re WJ IV tests
  • Kahneman’s two systems of cognition (System I/II thinking)
  • Berlin BIS ability x content facet framework
  • Two of Ackerman’s intelligence dimensions as per PPIK theory (intelligence-as-process; intelligence-as-knowledge)
  • Cattell’s general fluid (gf) and general crystallized (gc) abilities, the two major domains in his five-domain triadic theory of intelligence…lower-case gf/gc notation is deliberate and indicates more “general” capacities (akin, in breadth, to Spearman’s g; Spearman was Cattell’s mentor) and not the Horn- and Carroll-like broad Gf and Gc
  • Newland’s process-dominant and product-dominant distinction of cognitive abilities.
Enjoy.  MDS analyses and figures will also be in the forthcoming (Q1 2025) WJ V technical manual (LaForte, Dailey, & McGrew, 2025, in preparation), but not in the form of these multiple method/theory synthesis grand figures….stay tuned.  I may create such beautiful geoquant WJ V masterpieces once the WJ V is launched in Q1 2025.  We shall see.  I find these grand synthesis figures particularly useful when interpreting test results…all critical information in one single figure…wouldn’t you?
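For the curious, here is a minimal sketch of the core MDS step behind figures like this, assuming scikit-learn and a hypothetical test intercorrelation matrix. The correlation-to-distance transform and the two-dimensional solution are my simplifying assumptions, not the settings used in the actual WJ IV/WJ V analyses.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical intercorrelations among five cognitive tests
# (illustrative values only, NOT WJ IV or WJ V norm data)
tests = ["Test A", "Test B", "Test C", "Test D", "Test E"]
R = np.array([[1.00, 0.62, 0.48, 0.35, 0.30],
              [0.62, 1.00, 0.51, 0.33, 0.28],
              [0.48, 0.51, 1.00, 0.40, 0.37],
              [0.35, 0.33, 0.40, 1.00, 0.55],
              [0.30, 0.28, 0.37, 0.55, 1.00]])

# One common choice: convert correlations to distances so that highly
# correlated tests plot close together
D = np.sqrt(2.0 * (1.0 - R))
np.fill_diagonal(D, 0.0)

# Two-dimensional (metric) scaling of the precomputed distances
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)

for name, (x, y) in zip(tests, coords):
    print(f"{name}: ({x:.2f}, {y:.2f})")
```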

Thursday, November 14, 2024

Stay tuned!!!! #WJV g and non-g multiple #CHC theoretical models to be presented in the forthcoming (2025) technical manual: Senior author’s (McGrew) position re the #psychometric #g factor and #bifactorg models.

(c) Copyright, Dr. Kevin S. McGrew, Institute for Applied Psychometrics (11-14-24)

Warning: may be TL;DR for many. :)  Also, I will be rereading this post multiple times and may tweak minor (not substantive) errors and post updates…hey…blogging has an earthy quality to it :)

        In a recent publication, Scott Decker, Joel Schneider, Okan Bulut and I (McGrew, 2023; click here to download and read) presented structural analyses of the WJ IV norm data using contemporary psychometric network analysis (PNA) methods.  As noted in a clip from the article below, we recommended that intelligence test researchers, and particularly authors and publishers of the respective technical manuals for cognitive test batteries, needed to broaden the psychometric structural analysis of a test battery beyond the traditional (and almost exclusive) reliance on “common cause” factor analysis (EFA and CFA) methods to include PNA…to complement, not supplant, factor-based analyses.

(Click on image to enlarge for easier reading)


         Our (McGrew et al., 2023) recommendation is consistent with some critics of intelligence test structural research (e.g., see Dombrowski et al., 2018, 2019; Farmer et al., 2020) who have cogently argued that most intelligence test technical manuals typically present only one of the major classes of possible structural models of cognitive ability test batteries.  Interestingly, many school psychology scholars who conduct and report independent structural analyses of a test battery do something similar…they often present only one form of structural analysis—namely, bifactor g analyses.  
        In McGrew et al. (2023) we recommended that future cognitive ability test technical manuals embrace a more ecumenical multiple-method approach and include, when possible, most, if not all, of the major classes of factor analysis models, as well as PNA. A multiple-methods research approach in test manuals (and in journal publications by independent researchers) can better inform users of the strengths and limitations of IQ test interpretations based on whatever conceptualization of psychometric general intelligence (including models with no such construct) underlies each type of dimensional analysis. Leaving PNA methods aside for now, the figure below presents the four major families of traditional CHC theoretical structural models.  These figures are conceptual and are not intended to represent all nuances of factor models. 



(Click on image for a larger image to view)


         Briefly, the four major families of traditional “common cause” CHC CFA structural models (Carroll, 2003; McGrew et al., 2023) vary primarily in the specification (or lack thereof) of a psychometric g factor. The different families of CHC models are conceptually represented in the figure above. In these conceptual representations the rectangles represent individual (sub)tests, the circles represent latent ability factors at different levels of breadth or generality (stratum levels as per Carroll, 1993), the path arrows represent the direction of influence (the effect) of the latent CHC ability factors on the tests or lower-order factors, and the single double-headed arrow represents all possible correlations among the broad CHC factors (in the Horn no-g model in panel D).  
        The classic hierarchical g model “places a psychometric g stratum III ability at the apex over multiple broad stratum II CHC abilities” (McGrew et al., 2023, p. 2). This model is most often associated with Carroll (1993, 2003) and is called (in panel A in the above figure) the Carroll hierarchical g broad CHC model. In this model the shared variance of subsets of moderately to highly correlated tests is first specified as 10 CHC broad ability factors (i.e., the measurement model; Gf, Gc, Gv, etc.). Next, the covariances (latent factor correlations) among the broad CHC factors are specified as being the direct result of a higher-order psychometric g factor (i.e., the structural model). 
        A sub-model under the Carroll hierarchical g broad CHC model includes three levels of factors—several first-order narrow (stratum I) factors, 10 second-order broad (stratum II) CHC factors, and the psychometric g factor (stratum III). This is called the Carroll hierarchical g broad+narrow CHC model in panel B in the figure above. In the above example, two first-order narrow CHC factors are specified: auditory short-term storage (Wa) and auditory working memory capacity (Wc). The latter, in simple terms, is a factor defined by auditory short-term memory tasks that also require heavy attentional control-based (AC, as per Schneider & McGrew, 2018) active manipulation of stimuli—the essence of Gwm or working memory.  For illustrative purposes, a narrow naming facility (NA) first-order factor, which has higher-order effects or influences from broad Gs and Gr, is also specified for evaluation.  Wouldn’t you like to see the results of this hierarchical broad+narrow CHC model?  Well……..stay tuned for the forthcoming WJ V technical manual (Q1 2025; LaForte, Dailey, & McGrew, 2025, in preparation) and your dream will come true.
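As a compact algebraic sketch of the two Carroll hierarchical variants just described (my notation, not the technical manual's): in panel A the tests load on broad CHC factors that are in turn explained by a higher-order g, and panel B inserts a narrow (stratum I) layer between the tests and the broad factors.

```latex
% Panel A: Carroll hierarchical g broad CHC model (illustrative notation)
\[
\begin{aligned}
y_j &= \lambda_j\, F_{k(j)} + \varepsilon_j
  && \text{(measurement model: test } j \text{ loads on its broad CHC factor)} \\
F_k &= \gamma_k\, g + \zeta_k
  && \text{(structural model: broad factors explained by higher-order } g \text{)}
\end{aligned}
\]

% Panel B: the broad+narrow variant adds a narrow (stratum I) layer
\[
y_j = \lambda_j\, N_{m(j)} + \varepsilon_j, \qquad
N_m = \delta_m\, F_{k(m)} + \upsilon_m, \qquad
F_k = \gamma_k\, g + \zeta_k
\]
```

With standardized factors, the model-implied correlation between two tests assigned to different broad factors in panel A is carried entirely through g (i.e., the product of their loadings and the broad factors' g loadings), which is what "common cause" means in this context.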
        The third model is the Horn no-g model (McGrew et al., 2023).  John Horn long argued that psychometric g was nothing more than a statistical abstraction or artifact (Horn, 1998; Horn & Noll, 1997; McArdle, 2007; McArdle & Hofner, 2014; Ortiz, 2015) and did not represent a brain- or biologically based real cognitive ability. This is represented by the Horn no-g broad CHC model in panel D. The Horn no-g broad CHC model is like the Carroll hierarchical g broad CHC model, but the 10 broad CHC factor intercorrelations are retained instead of specifying a higher- or second-order psychometric g factor. In other words, the measurement models are the same but the structural models are different. In some respects the Horn no-g broad CHC model is like contemporary no-g psychometric network analysis models (see McGrew, 2023) that eschew the notion of a higher-order latent psychometric g factor to explain the positive manifold of correlations among individual tests (or among first-order latent factors in the case of the Horn no-g model) in an intelligence battery (Burgoyne et al., 2022; Conway & Kovacs, 2015; Euler et al., 2023; Fried, 2020; Kan et al., 2019; Kievit et al., 2016; Kovacs & Conway, 2016, 2019; McGrew, 2023; McGrew et al., 2023; Protzko & Colom, 2021a, 2021b; van der Maas et al., 2006, 2014, 2019).  Over the past decade I’ve become more aligned with no-g psychometric network CHC models (e.g., process overlap theory or POT) or Horn’s no-g CHC model, and have, tongue-in-cheek, referred to the elusive psychometric g ability (not the psychometric g factor) as the “Loch Ness Monster of Psychology” (McGrew, 2021, 2022).
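To give a feel for what a no-g psychometric network model estimates, here is a minimal, library-free sketch of the unregularized core computation: pairwise partial correlations among tests, each controlling for all of the others. Published PNA work (e.g., the regularized EBICglasso-type models common in this literature) adds sparsity and model selection on top of this; the correlation values below are purely illustrative.

```python
import numpy as np

def partial_correlation_network(R):
    """Given a test intercorrelation matrix R, return the matrix of partial
    correlations (each pair of tests controlling for all remaining tests).
    This is the unregularized core of many psychometric network analyses."""
    P = np.linalg.inv(R)                 # precision (inverse correlation) matrix
    d = np.sqrt(np.diag(P))
    pcorr = -P / np.outer(d, d)          # standardize and flip sign
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Toy example: a four-test positive manifold (illustrative values only)
R = np.array([[1.00, 0.55, 0.40, 0.35],
              [0.55, 1.00, 0.45, 0.30],
              [0.40, 0.45, 1.00, 0.50],
              [0.35, 0.30, 0.50, 1.00]])
print(np.round(partial_correlation_network(R), 2))
```

The nonzero off-diagonal entries are the "edges" of the network; no latent g is specified anywhere in the computation.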



        Three of these common cause CHC structural models (viz., the Carroll hierarchical g broad CHC model, the Carroll hierarchical g broad+narrow CHC model, and the Horn no-g broad CHC model), as well as Dr. Hudson Golino and colleagues’ hierarchical exploratory graph analysis psychometric network analysis models (that topic is saved for another day), are to be presented in the structural analysis section of the forthcoming WJ V technical manual validity chapter.  Stay tuned for some interesting analyses and interpretations in the “must read” WJ V technical manual. Yes….assessment professionals, a well-written and thorough technical manual can be your BFF!
        Finally, the fourth family of models, which McGrew et al. (2023) called g-centric models, is commonly known as bifactor g models. In the bifactor g broad CHC model (panel C in the figure) the variance associated with a dominant psychometric g factor is first extracted from all individual tests. The residual (remaining) variance is modeled as 10 uncorrelated (orthogonal) CHC broad factors. The bifactor model was excluded from the WJ V structural analysis. Why…..after I (McGrew et al., 2023) recommended that all four classes of traditional CHC structural analysis models should be presented in a test battery’s technical manual????
        Because…specifying and evaluating bifactor g models with 60 cognitive and achievement tests proved to be extremely complex and fraught with statistical convergence issues.  Trust me…I tried hard and long to run bifactor g models for the WJ V norm data.  It was possible to run bifactor g models separately on the cognitive and achievement sets of WJ V tests, but that does not allow for direct comparison with the other three structural models, which utilized all 60 cognitive and achievement tests in single CFA models.  Instead, as of the time the WJ V technical manual analyses were being completed (and are now being summarized), the Riverside Insights (RI) internal psychometric research team was tackling the complex issues involved in completing WJ V bifactor g models, first in the separate sets of cognitive and achievement tests.  Stay tuned for future professional conference paper presentations, white papers, or journal article submissions by the RI research team.
        Furthermore, the decision to not include bifactor g models does not suggest that the evaluation of WJ V bifactor g-centric CHC models is not important. As noted by Reynolds and Keith (2017), “bifactor models may serve as a useful mathematical convenience for partitioning variance in test scores” (p. 45; emphasis added). The bifactor g model pre-ordains “that the statistically significant lion’s share of IQ battery test variance must be of the form of a dominant psychometric g factor (Decker et al., 2021)” (McGrew et al., 2023, p. 3). Of the four families of CHC structural models, the bifactor g model is the conceptual and statistical model that supports the importance of general intelligence (psychometric g) and the preeminence of the full-scale or global IQ score over broad CHC test scores (e.g., see Dombrowski et al., 2021; Farmer et al., 2021a, 2021b; McGrew et al., 2023)—a theoretical position inconsistent with the position of the WJ V senior author (yours truly) and with Dr. Richard Woodcock’s legacy (see additional footnote comments at the end). It is important to note that there is a growing body of research that has questioned the preference for bifactor g cognitive models based only on statistical fit indices, as structural model fit statistics frequently are biased in favor of bifactor solutions. Per Bonifay et al. (2017), “the superior performance of the bifactor model may be a symptom of ‘overfitting’—that is, modeling not only the important trends in data but also capturing unwanted noise” (pp. 184–185). For more on this, see Decker (2021), Dueber and Toland (2021), Eid et al. (2018), Greene et al. (2022), and Murray and Johnson (2013). See Dombrowski et al. (2020) for a response to some of these bifactor g criticisms.
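For completeness, here is the algebra that distinguishes the bifactor g specification from the hierarchical one sketched earlier (again, my notation and a simplification of the actual models):

```latex
% Bifactor g broad CHC model (illustrative notation)
\[
y_j = \lambda_{jg}\, g + \lambda_{jk}\, F_{k(j)} + \varepsilon_j,
\qquad \operatorname{Cov}(g, F_k) = 0,
\qquad \operatorname{Cov}(F_k, F_{k'}) = 0 \ \ (k \neq k')
\]
```

Every test receives its own direct g loading, and the broad CHC factors are reduced to orthogonal residual ("group") factors. Because the hierarchical model can be viewed as a bifactor model with proportionality constraints on these loadings, the unconstrained bifactor model will rarely fit worse, which is one reason fit statistics alone can favor it (the overfitting concern raised by Bonifay et al. above).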
        Recognizing the wisdom of Box’s (1976) well-known axiom that “all models are wrong, but some are useful,” the WJ V technical manual authors (LaForte, Dailey, & McGrew, 2025, in preparation) encourage independent researchers to use the WJ V norm data to evaluate and compare bifactor g CHC models with the models presented in the forthcoming WJ V technical manual, as well as alternative models (e.g., PASS, process overlap theory, Cattell’s triadic Gf-Gc theory) suggested in the technical manual.


Footnote:  Woodcock’s original (and enduring) position (Woodcock, 1978, 1997, 2002) regarding the validity and purpose of a composite IQ-type g score is at odds with the bifactor g CHC model. With the publication of the original WJ battery, Woodcock (1978) acknowledged the pragmatic predictive value of statistically partitioning cognitive ability test score variance into a single psychometric g factor, with the manifest total IQ score serving as a proxy for psychometric g. Woodcock stated “it is frequently convenient to use some single index of cognitive ability that will predict the quality of cognitive behavior, on the average, across a wide variety of real-life situations. This is the [pragmatic] rationale for using a single score from a broad-based test of intelligence” (p. 126). However, Woodcock further stated that “one of the most common misconceptions about the nature of cognitive ability (particularly in discussions characterized by such labels as ‘IQ’ and ‘intelligence’) is that it is a single quality or trait held in varying degrees by individuals, something like [mental] height” (p. 126). In several publications Woodcock’s position regarding the importance of an overall general intelligence or IQ score was clear—“The primary purpose for cognitive testing should be to find out more about the problem, not to obtain an IQ” (Woodcock, 2002, p. 6; also see Woodcock, 1997, p. 235). Two of the primary WJ III, WJ IV, and WJ V authors have conducted research or published articles (see Mather & Schneider, 2023; McGrew, 2023; McGrew et al., 2023) consistent with Woodcock’s position and have advocated for a Horn no-g or emergent property no-g CHC network model. Additionally, based on the failure to identify a brain-based biological g (i.e., neuro-g; Haier et al., 2024) in well over a century of research since Spearman first proposed g in the early 1900s, McGrew (2020, 2021) has suggested that g may be the “Loch Ness Monster of psychology.” This does not imply that psychometric g is unrelated to combinations of different neurocognitive mechanisms, such as brain-wide neural efficiency and the ability of the whole-brain network, which is composed of various brain subnetworks and connections via white matter tracts, to efficiently and adaptively reconfigure the global network in response to changing cognitive demands (see Ng et al., 2024 for recent compelling research linking psychometric g to multiple brain network mechanisms and various contemporary neurocognitive theories of intelligence; NOTE…click the link to download a PDF of the article and read sufficiently to impress your psychologist friends!!!!).