Journal of Educational Measurement

Papers
(The TQCC of the Journal of Educational Measurement is 3. The table below lists the papers above that threshold, based on CrossRef citation counts (max. 250 papers). It covers publications from the past four years, i.e., from 2021-08-01 to 2025-08-01.)
Article | Citations
NCME Presidential Address 2022: Turning the Page to the Next Chapter of Educational Measurement | 21
The Automated Test Assembly and Routing Rule for Multistage Adaptive Testing with Multidimensional Item Response Theory | 16
A Statistical Test for the Detection of Item Compromise Combining Responses and Response Times | 15
How Many Plausible Values? | 14
Assessing Differential Bundle Functioning Using Meta‐Analysis | 13
Measuring the Uncertainty of Imputed Scores | 12
A Note on the Use of Categorical Subscores | 10
Optimal Calibration of Items for Multidimensional Achievement Tests | 10
Editorial for JEM Issue 58‐3 | 9
Issue Information | 8
Linking Error on Achievement Levels Accounting for Dependencies and Complex Sampling | 8
Using Linkage Sets to Improve Connectedness in Rater Response Model Estimation | 7
A Deterministic Gated Lognormal Response Time Model to Identify Examinees with Item Preknowledge | 7
Two IRT Characteristic Curve Linking Methods Weighted by Information | 7
Briggs, Derek C., Historical and Conceptual Foundations of Measurement in the Human Sciences: Credos and Controversies | 6
Using Item Parameter Predictions for Reducing Calibration Sample Requirements—A Case Study Based on a High‐Stakes Admission Test | 6
Historical Perspectives on Score Comparability Issues Raised by Innovations in Testing | 6
An Exponentially Weighted Moving Average Procedure for Detecting Back Random Responding Behavior | 5
Validity Arguments for AI‐Based Automated Scores: Essay Scoring as an Illustration | 5
Model Selection Posterior Predictive Model Checking via Limited‐Information Indices for Bayesian Diagnostic Classification Modeling | 5
Differential and Functional Response Time Item Analysis: An Application to Understanding Paper versus Digital Reading Processes | 4
Information Functions of Rank‐2PL Models for Forced‐Choice Questionnaires | 4
On the Positive Correlation between DIF and Difficulty: A New Theory on the Correlation as Methodological Artifact | 4
Gender Bias in Test Item Formats: Evidence from PISA 2009, 2012, and 2015 Math and Reading Tests | 4
Likelihood‐Based Estimation of Model‐Derived Oral Reading Fluency | 4
Parametric Bootstrap Mantel‐Haenszel Statistic for Aggregated Testlet Effects | 4
An Item Response Tree Model for Items with Multiple‐Choice and Constructed‐Response Parts | 3
DIF Detection for Multiple Groups: Comparing Three‐Level GLMMs and Multiple‐Group IRT Models | 3
A Generalized Objective Function for Computer Adaptive Item Selection | 3
Addressing Bias in Spoken Language Systems Used in the Development and Implementation of Automated Child Language‐Based Assessment | 3
Utilizing Response Time for Item Selection in On‐the‐Fly Multistage Adaptive Testing for PISA Assessment | 3
Exploring the Impact of Random Guessing in Distractor Analysis | 3
Using Response Time in Multidimensional Computerized Adaptive Testing | 3
Using Eye‐Tracking Data as Part of the Validity Argument for Multiple‐Choice Questions: A Demonstration | 3
Score Comparability between Online Proctored and In‐Person Credentialing Exams | 3
Issue Information | 3
Controlling the Speededness of Assembled Test Forms: A Generalization to the Three‐Parameter Lognormal Response Time Model | 3
Detecting Group Collaboration Using Multiple Correspondence Analysis | 3
Sensemaking of Process Data from Evaluation Studies of Educational Games: An Application of Cross‐Classified Item Response Theory Modeling | 3