Abstract
This article summarizes and discusses major challenges and pitfalls in factor analysis as observed in psychological assessment research, and offers recommendations in each of these areas. Specifically, we discuss the need for greater care regarding item distribution properties, given their potential impact on model estimation, and strongly caution against item parceling in the evaluation of psychological test instruments. We then consider the important issue of estimation, with particular emphasis on selecting the estimator most appropriate to the scaling properties of test item indicators. Next, we turn to model fit and the comparison of alternative models, strongly recommending that theoretical guidance take precedence over model fit indices. In addition, because most models in psychological assessment research involve multidimensional items that often do not map neatly onto a priori confirmatory models, we provide recommendations on model respecification. Finally, we close with a discussion of two alternative forms of model specification that have recently become particularly popular: exploratory structural equation modeling (ESEM) and bifactor modeling. We discuss important considerations for the applied use of these model specifications and conclude that, whereas ESEM can offer a useful avenue for evaluating the internal structure of test items, researchers should be very careful about using bifactor models for this purpose. Instead, we highlight other, more appropriate applications of such models. (PsycINFO Database Record (c) 2019 APA, all rights reserved)