Friday, October 27, 2006

RTI and cognitive assessment--Guest post by John Garruto

The following is a guest post by John Garruto, school psychologist with the Oswego School District and member of the IQs Corner Virtual Community of Scholars. John reviewed the following article and has provided his comments below. [Blog dictator note - John's review is presented "as is" with only a few minor copy edits and the insertion of some URL links]

Hale, J. B., Kaufman, A., Naglieri, J. A., & Kavale, K. A. (2006). Implementation of IDEA: Integrating response to intervention and cognitive assessment methods. Psychology in the Schools, 43(7), 753-770. (click here to view)

This article (and the entire set of articles in this special issue) has articulated much of what I have been saying and thinking for a long time. Hale and colleagues open by discussing the RTI (response-to-intervention) and cognitive assessment “factions”. Although I had nothing to do with this article, I chuckled at the similarity to a PowerPoint I did for graduate study in July of 2005 (click here). I joked about these factions as having a paradigm analogous to “Star Wars”. I likened school psychologists who espoused both RTI and cognitive assessment as necessary requirements for the identification of SLD (Specific Learning Disability) to “a rebel alliance”…primarily because it seemed we were advocating such a balanced approach. Clearly, this Psychology in the Schools special issue suggests there is an increasing number of professionals who advocate this approach.

Before offering a general summary and my overall impressions, it is important to acknowledge the obvious potential conflict of interest among some of the dissenting authors in the special issue: both Kaufman (KABC-II) and Naglieri (CAS) are intelligence test authors. That said, it is important to note that two of the other authors are not test authors. In fact, Kavale (a.k.a. the intervention effect-size guru) is frequently cited by many RTI-only proponents. It is therefore very unlikely that the significance of this article ends at a conflict of interest.

  • The Hale et al. article begins by acknowledging that there seem to be two factions in school psychology assessment circles--those who believe in response-to-intervention as the way to determine eligibility for SLD, and those who espouse the need for cognitive assessment. The article does not diminish the importance of RTI or the problem-solving model. In fact, it supports many of the changes noted in the regulations (e.g., the importance of looking at RTI as part of the process for determining eligibility for learning disabilities). It emphasizes the use of empirically based instruction and interventions, and it highlights the significance of formative assessment and ongoing progress monitoring; such practices illustrate the effect of interventions.
  • After supporting the importance of RTI, the authors contend that at Tier III a responsible individualized assessment (including cognitive assessment) needs to occur. Clearly, jumping to conclusions about a neurologically based deficit based only on a failure to respond to intervention would lead to a significant number of false positives (Type I errors). The authors do an exemplary job of identifying, within the problem-solving literature, the importance of cognitive processing deficits related to SLD. This approach does not embrace the much-maligned ability-achievement discrepancy procedure for LD identification, but instead endorses examining which processes (if any) are leading to the negative outcomes. The authors conclude with a case study describing a child who seemed to have one problem on the surface but, via cognitive assessment, was discovered to have an underlying latent problem (i.e., one that was not observably manifest). The authors contended that this discovery, made via appropriately designed cognitive assessment methods, facilitated the problem-solving model by allowing the team to implement new interventions. The beauty of this example is that the focus was not on eligibility as the end result, but instead on using individualized assessment to help piece the puzzle together.
  • I’ve spoken quite a bit about the authors and a possible conflict of interest. One thing I do want to mention is that I continue to be a school-based practitioner. This framework is one I have been endorsing (as a practitioner) for a long time (my presentation noted above had been online for many months before this article went to press). I’ve had many spirited debates with teachers, arguing that formative assessment and research-based interventions have a very positive research history and that we are remiss not to use these methods first. However, for those kids who are not responding, I can often complete a solid individualized assessment that provides logical reasons as to why they are not responding, and then continue to provide interventions that address dynamics and skills that are not readily manifest. There is absolutely no doubt in my mind that combining both approaches will allow us to look beyond “eligibility” to determining what a child needs.
  • Another of my thoughts is that much of the criticism that cognitive assessment does not lead to intervention rests on the lack of research establishing ATIs (aptitude-treatment interactions). However, the fact that an individualized intervention designed around the needs of the child might not have a large history of published research does not mean we should throw it out. Many RTI-only proponents argue that we might as well go right to special education and simply intensify the research-based interventions that could be delivered within a special education paradigm. I argue that doing flash cards to aid sight-reading might have an empirical support base, but doing flash cards all day long (one-on-one) with a blind student isn’t going to do a thing. However, designing an intervention around the varied needs and interests of the child could lead (and has led) to positive results.
  • Finally, my other concern with the RTI-only paradigm is that it seems “stuck” on reading…and only on three of the big five components identified by the National Reading Panel (phonemic awareness, phonics, and fluency). There is little research on using CBM for math reasoning or written expression (beyond spelling and perhaps writing fluency). I had hoped that the most recent issue of School Psychology Review, 35(3), which focused on CBM for reading, writing, and math, might provide practice-based school psychologists with the research we need. Quite the contrary: nearly all of the articles dealt with math calculation and fluency, and with spelling, mechanics, and writing fluency. Clearly, CBM/RTI research on higher-level reasoning processes, vocabulary, induction, deduction, inferential reasoning, and writing organization was lacking from this issue. Until RTI-only advocates start providing research and guidance in these areas, we would be remiss to discard relevant assessment techniques that provide insights into these important skills and abilities.
