Thursday, November 07, 2013

Fwd: Web of Knowledge Alert - EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT


 Web of Knowledge Table of Contents Alert

 Journal Name:   EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT (ISSN: 0013-1644)
 Issue:          Vol. 73 No. 6, 2013
 IDS#:           239RV
 Alert Expires:  10 JAN 2014
 Number of Articles in Issue:  8 (8 included in this e-mail)
 Organization ID:  c4f3d919329a46768459d3e35b8102e6
========================================================================


*Pages: 913-934 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000001

Title:
Sample Size Requirements for Structural Equation Models: An Evaluation of Power, Bias, and Solution Propriety

Authors:
Wolf, EJ; Harrington, KM; Clark, SL; Miller, MW

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):913-934; DEC 2013

Abstract:
Determining sample size requirements for structural equation modeling
(SEM) is a challenge often faced by investigators, peer reviewers, and
grant writers. Recent years have seen a large increase in SEMs in the
behavioral science literature, but consideration of sample size
requirements for applied SEMs often relies on outdated rules of thumb.
This study used Monte Carlo data simulation techniques to evaluate
sample size requirements for common applied SEMs. Across a series of
simulations, we systematically varied key model properties, including
number of indicators and factors, magnitude of factor loadings and path
coefficients, and amount of missing data. We investigated how changes in
these parameters affected sample size requirements with respect to
statistical power, bias in the parameter estimates, and overall solution
propriety. Results revealed a range of sample size requirements (from
30 to 460 cases), meaningful patterns of association between parameters
and sample size, and highlighted the limitations of commonly cited
rules of thumb. The broad lessons learned for determining SEM sample
size requirements are discussed.

========================================================================


*Pages: 935-955 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000002

Title:
Piecewise Linear-Linear Latent Growth Mixture Models With Unknown Knots

Authors:
Kohli, N; Harring, JR; Hancock, GR

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):935-955; DEC 2013

Abstract:
Latent growth curve models with piecewise functions are flexible and
useful analytic models for investigating individual behaviors that
exhibit distinct phases of development in observed variables. As an
extension of this framework, this study considers a piecewise
linear-linear latent growth mixture model (LGMM) for describing
segmented change of individual behavior over time where the data come
from a mixture of two or more unobserved subpopulations (i.e., latent
classes). Thus, the focus of this article is to illustrate the practical
utility of piecewise linear-linear LGMM and then to demonstrate how this
model could be fit as one of many alternatives, including the more
conventional LGMMs with functions such as linear and quadratic. To carry
out this study, data (N = 214) obtained from a procedural learning task
research were used to fit the three alternative LGMMs: (a) a two-class
LGMM using a linear function, (b) a two-class LGMM using a quadratic
function, and (c) a two-class LGMM using a piecewise linear-linear
function, where the time of transition from one phase to another (i.e.,
knot) is not known a priori, and thus is a parameter to be estimated.
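
One common way to write such a piecewise linear-linear trajectory (a
sketch of the general form; the article's exact parameterization may
differ) is

    y_{ti} = \beta_{0i} + \beta_{1i}\,\min(t, \gamma)
             + \beta_{2i}\,\max(0,\, t - \gamma) + \varepsilon_{ti},

where \beta_{1i} is person i's slope before the knot \gamma,
\beta_{2i} the slope after it, and \gamma is itself a free parameter;
the mixture then lets these growth parameters differ across the latent
classes.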

========================================================================


*Pages: 956-972 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000003

Title:
Coefficient Omega Bootstrap Confidence Intervals: Nonnormal Distributions

Authors:
Padilla, MA; Divers, J

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):956-972; DEC 2013

Abstract:
The performance of the normal theory bootstrap (NTB), the percentile
bootstrap (PB), and the bias-corrected and accelerated (BCa) bootstrap
confidence intervals (CIs) for coefficient omega was assessed through a
Monte Carlo simulation under conditions not previously investigated. Of
particular interest were nonnormal Likert-type and binary items. The
results show a clear order in performance. The NTB CI had the best
performance in that it had more consistent acceptable coverage under the
simulation conditions investigated. The results suggest that the NTB CI
can be used for sample sizes larger than 50. The NTB CI is still a good
choice for a sample size of 50 so long as there are more than 5 items.
If one does not wish to make the normality assumption about coefficient
omega, then the PB CI for sample sizes of 100 or more, or the BCa CI
for sample sizes of 150 or more, are good choices.
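
The resampling logic of the percentile bootstrap is easy to sketch in
Python. In this fragment omega is approximated from first principal
component loadings rather than the maximum likelihood factor model
typically used, so the estimator and all names are illustrative; only
the resampling scheme mirrors the PB CI studied here.

    import numpy as np

    def omega(data):
        # One-factor coefficient omega; loadings approximated by the
        # first principal component of the correlation matrix (a rough
        # stand-in for an ML factor analysis).
        r = np.corrcoef(data, rowvar=False)
        vals, vecs = np.linalg.eigh(r)                 # ascending order
        lam = np.sqrt(vals[-1]) * np.abs(vecs[:, -1])  # approx. loadings
        return lam.sum()**2 / (lam.sum()**2 + (1 - lam**2).sum())

    def percentile_ci(data, reps=2000, seed=0):
        # Percentile bootstrap: resample examinees with replacement,
        # recompute omega, take the 2.5th and 97.5th percentiles.
        rng = np.random.default_rng(seed)
        n = data.shape[0]
        boots = [omega(data[rng.integers(0, n, n)]) for _ in range(reps)]
        return np.percentile(boots, [2.5, 97.5])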

========================================================================


*Pages: 973-993 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000004

Title:
Investigation of Specific Learning Disability and Testing Accommodations Based Differential Item Functioning Using a Multilevel Multidimensional Mixture Item Response Theory Model

Authors:
Finch, WH; Finch, ME

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):973-993; DEC 2013

Abstract:
The assessment of test data for the presence of differential item
functioning (DIF) is a key component of instrument development and
validation. Among the many methods that have been used successfully in
such analyses is the mixture modeling approach. Using this approach to
identify the presence of DIF has been touted as potentially superior for
gaining insights into the etiology of DIF, as compared to using intact
groups. Recently, researchers have expanded on this work to incorporate
multilevel mixture modeling, for cases in which examinees are nested
within schools. The current study further expands on this multilevel
mixture modeling for DIF detection by using a multidimensional
multilevel mixture model that incorporates multiple measured dimensions,
as well as the presence of multiple subgroups in the population. This
model was applied to a national sample of third-grade students who
completed math and language tests. Results of the analysis demonstrate
that the multidimensional model provides more complete information
regarding the nature of DIF than do separate unidimensional models.
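
The core idea of mixture-based DIF detection is that item parameters
are indexed by an unobserved class g rather than a manifest group. In a
unidimensional 2PL sketch (the article's model adds multiple dimensions
and school-level nesting),

    P(X_{ij} = 1 \mid \theta_i, g)
        = \frac{\exp\{a_{jg}(\theta_i - b_{jg})\}}
               {1 + \exp\{a_{jg}(\theta_i - b_{jg})\}},

so an item exhibits DIF whenever a_{jg} or b_{jg} differs across the
latent classes.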

========================================================================


*Pages: 994-1016 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000005

Title:
l(z) Person-Fit Index to Identify Misfit Students With Achievement Test Data

Authors:
Seo, DG; Weiss, DJ

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):994-1016; DEC 2013

Abstract:
The usefulness of the l(z) person-fit index was investigated with
achievement test data from 20 exams given to more than 3,200 college
students. Results for three ability estimation methods showed that the
distributions of l(z) were not consistent with its theoretical
distribution, resulting in general overfit to the item response theory
model and underidentification of potentially nonfitting response
vectors. The distributions of l(z) were not improved for the Bayesian
estimation method. A follow-up Monte Carlo simulation study using item
parameters estimated from real data resulted in mean l(z) approximating
the theoretical value of 0.0 for one of three estimation methods, but
all standard deviations were substantially below the theoretical value
of 1.0. Use of the l(z) distributions from these simulations resulted in
levels of identification of significant misfit consistent with the
nominal error rates. The reasons for the nonstandardized distributions
of l(z) observed in both these data sets were investigated in additional
Monte Carlo simulations. These simulations showed that the distribution
of item difficulties was primarily responsible for the nonstandardized
distributions, with smaller effects for item discrimination and
guessing. It is recommended that with real tests, identification of
significantly nonfitting examinees be based on empirical distributions
of l(z) generated from Monte Carlo simulations using item parameters
estimated from real data.
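
The statistic itself has a closed form (Drasgow, Levine, & Williams,
1985). A minimal Python sketch under a 3PL model, with item and person
parameters assumed already estimated and all names illustrative; the
last function implements the article's recommendation of an empirical
reference distribution in place of the theoretical N(0, 1).

    import numpy as np

    def p_3pl(theta, a, b, c):
        # 3PL response probabilities for one examinee
        return c + (1 - c) / (1 + np.exp(-a * (theta - b)))

    def lz(u, theta, a, b, c):
        # Standardized log-likelihood person-fit statistic l_z
        p = p_3pl(theta, a, b, c)
        l0 = (u * np.log(p) + (1 - u) * np.log(1 - p)).sum()
        ev = (p * np.log(p) + (1 - p) * np.log(1 - p)).sum()
        var = (p * (1 - p) * np.log(p / (1 - p))**2).sum()
        return (l0 - ev) / np.sqrt(var)

    def empirical_null(theta, a, b, c, reps=5000, seed=0):
        # Reference distribution of l_z built by simulating response
        # vectors from the estimated item parameters.
        rng = np.random.default_rng(seed)
        p = p_3pl(theta, a, b, c)
        return np.array([lz(rng.random(p.size) < p, theta, a, b, c)
                         for _ in range(reps)])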

========================================================================


*Pages: 1017-1035 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000006

Title:
Mutual Information Item Selection Method in Cognitive Diagnostic Computerized Adaptive Testing With Short Test Length

Authors:
Wang, C

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):1017-1035; DEC 2013

Abstract:
Cognitive diagnostic computerized adaptive testing (CD-CAT) purports to
combine the strengths of both CAT and cognitive diagnosis. Cognitive
diagnosis models aim at classifying examinees into the correct mastery
profile group so as to pinpoint the strengths and weaknesses of each
examinee, whereas CAT algorithms choose items to determine those
strengths and weaknesses as efficiently as possible. Most of the existing
CD-CAT item selection algorithms are evaluated when test length is
relatively long whereas several applications of CD-CAT, such as in
interim assessment, require an item selection algorithm that is able to
accurately recover examinees' mastery profile with short test length. In
this article, we introduce the mutual information item selection method
in the context of CD-CAT and then provide a computationally simpler
formula that makes the method more tractable in real time. Mutual
information is then evaluated against common item selection methods,
such as Kullback-Leibler information, posterior weighted
Kullback-Leibler information, and Shannon entropy. Based on our
simulations, mutual information yields nearly the highest attribute
and pattern recovery rates in more than half of the
conditions. We conclude by discussing how the number of attributes,
Q-matrix structure, correlations among the attributes, and item quality
affect estimation accuracy.
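
For dichotomous items the quantity being maximized is the mutual
information between the next item response and the attribute profile,
given the current posterior. A Python sketch of the direct computation
(not the computationally simpler formula the article derives; all names
illustrative):

    import numpy as np

    def mutual_information(posterior, p_correct):
        # posterior  : (K,) posterior over the K attribute profiles
        # p_correct  : (K,) P(correct | profile k) for a candidate item
        p = np.stack([1 - p_correct, p_correct], axis=1)  # (K, 2)
        marg = posterior @ p                              # marginal P(x)
        return np.sum(posterior[:, None] * p * np.log(p / marg))

    def select_item(posterior, bank_p_correct, administered):
        # Choose the unadministered item with the largest MI.
        scores = [(-np.inf if j in administered else
                   mutual_information(posterior, bank_p_correct[j]))
                  for j in range(len(bank_p_correct))]
        return int(np.argmax(scores))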

========================================================================


*Pages: 1036-1053 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000007

Title:
A Method for Imputing Response Options for Missing Data on Multiple-Choice Assessments

Authors:
Wolkowitz, AA; Skorupski, WP

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):1036-1053; DEC 2013

Abstract:
When missing values are present in item response data, there are a
number of ways one might impute a correct or incorrect response to a
multiple-choice item. There are significantly fewer methods for imputing
the actual response option an examinee may have provided if he or she
had not omitted the item either purposely or accidentally. This article
applies the multiple-choice model, a multiparameter logistic model that
allows for in-depth distractor analyses, to impute response options for
missing data in multiple-choice items. Following a general introduction
of the issues involved with missing data, the article describes the
details of the multiple-choice model and demonstrates its use for
multiple imputation of missing item responses. A simple simulation
example is provided to demonstrate the accuracy of the imputation method
by comparing true item difficulties (p values) and item-total
correlations (r values) to those estimated after imputation. Missing
data are simulated according to three different types of missing
mechanisms: missing completely at random, missing at random, and missing
not at random.
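
Once the model is fit, the imputation step reduces to sampling an
option from the model-implied option probabilities at the examinee's
estimated trait level. The sketch below uses a plain nominal-response
softmax as a stand-in for the full multiple-choice model (which
additionally carries a latent "don't know" category); parameters and
names are illustrative.

    import numpy as np

    def option_probs(theta, a, c):
        # Option probabilities P(k | theta) proportional to
        # exp(a_k * theta + c_k) -- a simplified nominal-model form.
        z = a * theta + c
        z -= z.max()                   # numerical stability
        e = np.exp(z)
        return e / e.sum()

    def impute_option(theta, a, c, rng):
        # Draw one plausible option for an omitted item; repeating the
        # draw yields the M completed data sets of multiple imputation.
        return rng.choice(a.size, p=option_probs(theta, a, c))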

========================================================================


*Pages: 1054-1068 (Article)
*View Full Record: http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcAuth=Alerting&SrcApp=Alerting&DestApp=CCC&DestLinkType=FullRecord;KeyUT=CCC:000326044000008

Title:
A Commentary on the Relationship Between Model Fit and Saturated Path Models in Structural Equation Modeling Applications

Authors:
Raykov, T; Lee, CL; Marcoulides, GA; Chang, C

Source:
*EDUCATIONAL AND PSYCHOLOGICAL MEASUREMENT*, 73 (6):1054-1068; DEC 2013

Abstract:
The relationship between saturated path-analysis models and their fit to
data is revisited. It is demonstrated that a saturated model need not
fit a given data set perfectly, or even well, when its fit to the raw
data is examined, a criterion frequently overlooked by researchers
utilizing path analysis modeling techniques. The potential of individual
case residuals for saturated model fit assessment is revealed by showing
how they can be used to examine local fit, as opposed to overall fit, to
sense possible model deficiencies or misspecifications, and to suggest
model improvements when needed. The discussion is illustrated with
several numerical examples.
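
The individual case residuals at issue are case-level prediction errors
from each structural equation, computed against the raw data rather
than the summary covariances. A minimal sketch for a single equation
(names illustrative; the article works with the full path system):

    import numpy as np

    def case_residuals(X, y):
        # Case-level residuals for one equation of a (saturated) path
        # model: observed outcome minus model-implied prediction, one
        # value per case, for local rather than overall fit inspection.
        X1 = np.column_stack([np.ones(len(y)), X])      # add intercept
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS estimates
        return y - X1 @ beta

Large standardized residuals flag cases, or regions of the predictor
space, that the model describes poorly even when the overall,
covariance-based fit is perfect.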