Advances in Methods and Practices in Psychological Science

Papers
(The median citation count for Advances in Methods and Practices in Psychological Science is 5. The table below lists the papers above that threshold, based on CrossRef citation counts [max. 250 papers]. Only publications from the past four years, i.e., from 2020-05-01 to 2024-05-01, are included; a minimal sketch of this selection rule follows the table.)
Article | Citations
Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them | 288
Your Coefficient Alpha Is Probably Wrong, but Which Coefficient Omega Is Right? A Tutorial on Using R to Obtain Better Reliability Estimates | 204
Power Analysis for Parameter Estimation in Structural Equation Modeling: A Discussion and Tutorial | 202
Simulation-Based Power Analysis for Factorial Analysis of Variance Designs | 175
An Introduction to Linear Mixed-Effects Modeling in R | 149
Visualization of Brain Statistics With R Packages ggseg and ggseg3d | 142
A Conceptual Introduction to Bayesian Model Averaging | 129
An Excess of Positive Results: Comparing the Standard Psychology Literature With Registered Reports | 121
Cross-Validation: A Method Every Psychologist Should Know | 85
That’s a Lot to Process! Pitfalls of Popular Path Models | 61
Statistical Control Requires Causal Justification | 61
Analysis of Open Data and Computational Reproducibility in Registered Reports in Psychology | 57
A Traveler’s Guide to the Multiverse: Promises, Pitfalls, and a Framework for the Evaluation of Analytic Decisions | 47
Persons as Effect Sizes | 46
Selection of the Number of Participants in Intensive Longitudinal Studies: A User-Friendly Shiny App and Tutorial for Performing Power Analysis in Multilevel Regression Models That Account for Tempora | 45
Making the Black Box Transparent: A Template and Tutorial for Registration of Studies Using Experience-Sampling Methods | 45
Why the Cross-Lagged Panel Model Is Almost Never the Right Choice | 45
Understanding Mixed-Effects Models Through Data Simulation | 43
Many Labs 5: Testing Pre-Data-Collection Peer Review as an Intervention to Increase Replicability | 39
The Failings of Conventional Mediation Analysis and a Design-Based Alternative | 36
Crud (Re)Defined | 36
Registered Replication Report on Fischer, Castel, Dodd, and Pratt (2003) | 30
The Percentile Bootstrap: A Primer With Step-by-Step Instructions in R | 30
A Primer on Bayesian Model-Averaged Meta-Analysis | 29
Justify Your Alpha: A Primer on Two Practical Approaches | 28
A Causal Framework for Cross-Cultural Generalizability | 28
Adjusting for Publication Bias in JASP and R: Selection Models, PET-PEESE, and Robust Bayesian Meta-Analysis | 27
Laypeople Can Predict Which Social-Science Studies Will Be Replicated Successfully | 23
Putting Psychology to the Test: Rethinking Model Evaluation Through Benchmarking and Prediction | 22
A Multilab Study of Bilingual Infants: Exploring the Preference for Infant-Directed Speech | 22
These Are Not the Effects You Are Looking for: Causality and the Within-/Between-Persons Distinction in Longitudinal Data Analysis | 19
Making Sense of Model Generalizability: A Tutorial on Cross-Validation in R and Shiny | 19
How Many Participants Do I Need to Test an Interaction? Conducting an Appropriate Power Analysis and Achieving Sufficient Power to Detect an Interaction | 19
ManyClasses 1: Assessing the Generalizable Effect of Immediate Feedback Versus Delayed Feedback Across Many College Classes | 18
A Conceptual Framework for Investigating and Mitigating Machine-Learning Measurement Bias (MLMB) in Psychological Assessment | 18
Precise Answers to Vague Questions: Issues With Interactions | 18
Average Power: A Cautionary Note | 16
Psychologists Should Use Brunner-Munzel’s Instead of Mann-Whitney’s U Test as the Default Nonparametric Procedure | 15
Multilab Direct Replication of Flavell, Beach, and Chinsky (1966): Spontaneous Verbal Rehearsal in a Memory Task as a Function of Age | 15
Corrigendum: Evaluating Effect Size in Psychological Research: Sense and Nonsense | 14
Summary Plots With Adjusted Error Bars: The superb Framework With an Implementation in R | 14
Citation Patterns Following a Strongly Contradictory Replication Result: Four Case Studies From Psychology | 13
Caution, Preprint! Brief Explanations Allow Nonscientists to Differentiate Between Preprints and Peer-Reviewed Journal Articles | 13
Assessing Change in Intervention Research: The Benefits of Composite Outcomes | 12
Bayesian Repeated-Measures Analysis of Variance: An Updated Methodology Implemented in JASP | 11
A Guide to Posting and Managing Preprints | 11
Simulation Studies as a Tool to Understand Bayes Factors | 10
Improving Transparency, Falsifiability, and Rigor by Making Hypothesis Tests Machine-Readable | 10
Many Labs 5: Registered Replication of Vohs and Schooler (2008), Experiment 1 | 10
Commentary on Hussey and Hughes (2020): Hidden Invalidity Among 15 Commonly Used Measures in Social and Personality Psychology | 9
Hybrid Experimental Designs for Intervention Development: What, Why, and How | 9
Analyzing GPS Data for Psychological Research: A Tutorial | 9
Data Visualization Using R for Researchers Who Do Not Use R | 9
SampleSizePlanner: A Tool to Estimate and Justify Sample Size for Two-Group Studies | 9
Doing Better Data Visualization | 8
Analyzing Individual Differences in Intervention-Related Changes | 8
Rock the MIC: The Matrix of Implied Causation, a Tool for Experimental Design and Model Checking | 8
Best Practices in Supervised Machine Learning: A Tutorial for Psychologists | 8
A Guide for Calculating Study-Level Statistical Power for Meta-Analyses | 8
A Cautionary Note on Estimating Effect Size | 7
Getting Started Creating Data Dictionaries: How to Create a Shareable Data Set | 7
Many Labs 5: Registered Replication of Payne, Burkley, and Stokes (2008), Study 4 | 6
Australian and Italian Psychologists’ View of Replication | 6
Experiment-Wise Type I Error Control: A Focus on 2 × 2 Designs | 6
A Primer on Structural Equation Model Diagrams and Directed Acyclic Graphs: When and How to Use Each in Psychological and Epidemiological Research | 6
Leveraging Containers for Reproducible Psychological Research | 6
The Unbearable Lightness of Attentional Cuing by Symbolic Magnitude: Commentary on the Registered Replication Report by Colling et al. | 6
Boundary Conditions for the Practical Importance of Small Effects in Long Runs: A Comment on Funder and Ozer (2019) | 6
Evaluating Response Shift in Statistical Mediation Analysis | 5
Many Labs 5: Registered Replication of Albarracín et al. (2008), Experiment 5 | 5
Why Bayesian “Evidence for H1” in One Condition and Bayesian “Evidence for H0” in Another Condition Does Not Mean Good-Enough Bayesian Evidence for a Difference Bet | 5
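For illustration, here is a minimal Python sketch of the selection rule described in the note above (median citation threshold, 250-paper cap, descending sort). The paper records below are hypothetical placeholders; obtaining the real per-paper counts from CrossRef is assumed to have been done elsewhere, and whether the actual comparison is strict ("above") or inclusive is not specified by the source.

```python
from statistics import median

# Hypothetical (title, citations) records for papers published in the journal
# between 2020-05-01 and 2024-05-01. In practice these counts would come from
# CrossRef; placeholder entries are used here.
papers = [
    ("Measurement Schmeasurement: Questionable Measurement Practices and How to Avoid Them", 288),
    ("A Guide to Posting and Managing Preprints", 11),
    ("Hypothetical paper at the median", 5),
    ("Hypothetical paper below the median", 2),
]

# Median citation count across all papers in the window
# (5 for this journal, per the note above).
threshold = median(count for _, count in papers)

# Keep papers above the threshold (the table itself also shows entries at the
# median, so the real rule may be inclusive), sort by citations descending,
# and cap the listing at 250 entries.
listed = sorted(
    (p for p in papers if p[1] > threshold),
    key=lambda p: p[1],
    reverse=True,
)[:250]

for title, count in listed:
    print(f"{title} | {count}")
```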