Journal of Educational Measurement

Papers
(The TQCC of Journal of Educational Measurement is 2. The table below lists the papers above that threshold based on CrossRef citation counts [max. 250 papers]. It covers publications from the past four years, i.e., from 2020-05-01 to 2024-05-01.)
Article | Citations
Using Retest Data to Evaluate and Improve Effort‐Moderated Scoring | 27
Model‐Based Treatment of Rapid Guessing | 24
A Response Time Process Model for Not‐Reached and Omitted Items | 15
Random Responders in the TIMSS 2015 Student Questionnaire: A Threat to Validity? | 9
Optimizing Implementation of Artificial‐Intelligence‐Based Automated Scoring: An Evidence Centered Design Approach for Designing Assessments for AI‐based Scoring | 8
Variation in Respondent Speed and its Implications: Evidence from an Adaptive Testing Scenario | 7
A Residual‐Based Differential Item Functioning Detection Framework in Item Response Theory | 6
Using Eye‐Tracking Data as Part of the Validity Argument for Multiple‐Choice Questions: A Demonstration | 6
Score Comparability between Online Proctored and In‐Person Credentialing Exams | 5
Psychometric Methods to Evaluate Measurement and Algorithmic Bias in Automated Scoring | 5
Using Item Scores and Distractors in Person‐Fit Assessment | 5
Examining the Impacts of Ignoring Rater Effects in Mixed‐Format Tests | 5
Linking and Comparability across Conditions of Measurement: Established Frameworks and Proposed Updates | 5
An Unsupervised‐Learning‐Based Approach to Compromised Items Detection | 4
Exploring the Impact of Random Guessing in Distractor Analysis | 4
The Impact of Cheating on Score Comparability via Pool‐Based IRT Pre‐equating | 4
Validity Arguments for AI‐Based Automated Scores: Essay Scoring as an Illustration | 4
Multiple‐Group Joint Modeling of Item Responses, Response Times, and Action Counts with the Conway‐Maxwell‐Poisson Distribution | 4
Score Comparability Issues with At‐Home Testing and How to Address Them | 4
Detecting Differential Item Functioning Using Posterior Predictive Model Checking: A Comparison of Discrepancy Statistics | 4
Generating Models for Item Preknowledge | 4
Toward Argument‐Based Fairness with an Application to AI‐Enhanced Educational Assessments | 4
On the Positive Correlation between DIF and Difficulty: A New Theory on the Correlation as Methodological Artifact | 3
NCME Presidential Address 2022: Turning the Page to the Next Chapter of Educational Measurement | 3
On Joining a Signal Detection Choice Model with Response Time Models | 3
Standard Errors of Variance Components, Measurement Errors and Generalizability Coefficients for Crossed Designs | 3
A Unified Comparison of IRT‐Based Effect Sizes for DIF Investigations | 3
Robust Estimation for Response Time Modeling | 3
Validity Arguments Meet Artificial Intelligence in Innovative Educational Assessment | 2
A Statistical Test for the Detection of Item Compromise Combining Responses and Response Times | 2
Anchoring Validity Evidence for Automated Essay Scoring | 2
Historical Perspectives on Score Comparability Issues Raised by Innovations in Testing | 2
The Automated Test Assembly and Routing Rule for Multistage Adaptive Testing with Multidimensional Item Response Theory | 2
Latent Space Model for Process Data | 2
A Recursion‐Based Analytical Approach to Evaluate the Performance of MST | 2