This study aims to evaluate the feasibility and accuracy of using an AI grading tool to score medical student narrative exam responses and to compare its performance with faculty grading using a standard rubric. It will also assess student and faculty perceptions of the accuracy, fairness, and acceptability of AI grading before and after exposure to AI-generated feedback. Through these analyses, the study seeks to determine how closely AI-generated scores align with those of human evaluators, identify the types of questions the tool grades most and least accurately, and understand the factors that shape willingness to use AI in assessment. The findings will help inform whether and how AI-assisted grading could be integrated into medical education.
Thank you for your interest, but this study is recruiting by invitation only.
Location: North Carolina (Statewide)
Principal Investigator: Christina Shenvi
Department: Emergency Medicine
Study Category: Behavioral or Social
Study Type: Observational
Eligible Population: Healthy Volunteer or General Population
IRB Number: 25-3265