Friday, January 03, 2025

The "how old are you" #schoolpsychologist object recognition test


Click on image to enlarge for better viewing

If you recognize these objects (especially the metal train engine [1], dogs, thimbles, and black shoes), you are likely one of the older school psychologists around and are very likely retired :)


Thursday, January 02, 2025

Quote2note: E. L. Thorndike on importance of psychological #measurement


Whatever exists at all exists in some amount. To know it thoroughly involves knowing its quantity as well as its quality.


E. L. Thorndike

Research byte: #NIH #toolbox for assessment of #neurocognitive, #motor and #emotional-behavioral function in childhood: A systematic review


NIH Toolbox for assessment of neurocognitive, motor and emotional-behavioral function in childhood: A systematic review

Click here to visit journal page.  Click here to visit NIH Toolbox web page. 

Abstract

The NIH Toolbox is used extensively in various research settings, including clinical trials, observational studies, and longitudinal studies. Its validity and reliability have been systematically appraised only in adults. The current study systematically evaluated the validity and reliability of the NIH Toolbox for assessing neurocognitive, motor and emotional-behavioral functioning in children. Based on 22 studies including over 60,000 participants, sufficient evidence was found for the validity and reliability of most tests in the Cognition Battery and Motor Battery. However, there was insufficient evidence to assess the validity and reliability of the Emotion Battery. Thus, this review supports the use of the NIH Toolbox Cognition and Motor Batteries in assessing neurocognitive functioning in 3–17-year-olds.

#Psychological folk #theories create an illusion of explanatory depth (#IOED)—#cognitivebias for understanding #intelligence theories in #schoolpsychology


Thanks to my colleague and friend Dr. Andrew Conway for drawing my attention to this 2002 article on problems with folk theories of psychology and the illusion of explanatory depth. The cartoon above is one of my favorites regarding this cognitive bias (click on image to enlarge for easier reading).


Selected text

The incompleteness of everyday theories should not surprise most scientists. We frequently discover that a theory that seems crystal clear and complete in our head suddenly develops gaping holes and inconsistencies when we try to set it down on paper.

Folk theories, we claim, are even more fragmentary and skeletal, but laypeople, unlike some scientists, usually remain unaware of the incompleteness of their theories (Ahn & Kalish, 2000; Dunbar, 1995; diSessa, 1983). Laypeople rarely have to offer full explanations for most of the phenomena that they think they understand. Unlike many teachers, writers, and other professional “explainers,” laypeople rarely have cause to doubt their naïve intuitions. They believe that they can explain the world they live in fairly well. They are novices in two respects. First, they are novice “scientists”—their knowledge of most phenomena is not very deep. Second, they are novice epistemologists—their sense of the properties of knowledge itself (including how it is stored) is poor and potentially misleading.

We argue here that people's limited knowledge and their misleading intuitive epistemology combine to create an illusion of explanatory depth (IOED). Most people feel they understand the world with far greater detail, coherence, and depth than they really do. The illusion for explanatory knowledge–knowledge that involves complex causal patterns—is separate from, and additive with, people's general overconfidence about their knowledge and skills. We therefore propose that knowledge of complex causal relations is particularly susceptible to illusions of understanding.

Wednesday, January 01, 2025

Research Byte: Cognition about #cognition: Do scales from different fields assess #metacognition alike?—A general M factor?

 Cognition about cognition: Do scales from different fields assess metacognition alike?

PDF copy of article available by clicking here.

Abstract

Metacognition is a construct of long-lasting interest in multiple fields of research. Yet, exchange between fields has been limited, leaving it an open question to what extent this construct can be conceptualized as a general cognitive entity. We thus implemented a cross-disciplinary analysis investigating if self-report scales from four fields tap into the same underlying construct and give rise to a general factor of metacognition (M). In a preregistered online study (N = 661) and utilizing an analytical approach to mitigate overfitting, a systematic model comparison showed that a bifactor model including a general factor of metacognition performed best. This general factor explained 61 % of the systematic variance, suggesting that there exists an important general component of metacognition. We will discuss how the different subscales of the four scales relate to one another and to M, elaborate on a potential jingle-fallacy in metacognition research, and give recommendations on which subscales to use to best tap into M. In sum, our integrative approach contributes to a better understanding of metacognition and how to best measure it.
Comment: The finding of a factor-analysis-based M factor reflects only a general statistical factor in the collection of measures. It does not reflect a real ability…just a summary index of shared variance. Likewise, the general factor of intelligence (g) does not reflect a real brain-based ability…it is just a statistical index.
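
For readers who want to see this statistical point concretely, below is a minimal simulation sketch (in Python, with hypothetical sample sizes and scale counts that are not taken from the article). It generates scores on several positively correlated "scales" from many small, independent processes, with no single real general ability in the generating model, and shows that a dominant first factor still emerges simply as a summary of shared variance (the same logic as Thomson's sampling interpretation of g).

```python
import numpy as np

# Hypothetical illustration (not the article's data or analysis): simulate
# scores on several "metacognition" scales that each draw on a random subset
# of many small, independent processes. No single general ability exists
# in this generating model.
rng = np.random.default_rng(2025)
n_people, n_scales, n_processes = 1000, 6, 12

# Each scale loads on roughly half of the independent processes,
# with modest random loading strengths.
loadings = (rng.random((n_scales, n_processes)) < 0.5) * \
           rng.uniform(0.3, 0.7, (n_scales, n_processes))
processes = rng.standard_normal((n_people, n_processes))
noise = rng.standard_normal((n_people, n_scales))
scores = processes @ loadings.T + noise

# Correlate the scales and inspect the largest eigenvalue of the correlation
# matrix: the share of total variance a single "general factor" would summarize.
R = np.corrcoef(scores, rowvar=False)
eigenvalues = np.linalg.eigvalsh(R)          # returned in ascending order
general_share = eigenvalues[-1] / n_scales
print(f"Variance summarized by a single general factor: {general_share:.0%}")
```

The point of the sketch is only that a large first factor is a property of the correlation matrix, not evidence that one underlying ability produced it; whether M (or g) is "real" is a substantive question the factor analysis itself cannot settle.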