
Evaluation of bias and gender/racial concordance based on sentiment analysis of narrative evaluations of clinical clerkships using natural language processing

  • Additional Information
    • Publication Data:
      eScholarship, University of California
    • Subject:
      2024
    • Collection:
      University of California: eScholarship
    • Abstract:
      There is increasing interest in understanding potential bias in medical education. We used natural language processing (NLP) to evaluate potential bias in clinical clerkship evaluations. Data were drawn from medical evaluations and administrative databases for medical students enrolled in third-year clinical clerkship rotations across two academic years. We collected demographic information on students and faculty evaluators to determine gender/racial concordance (i.e., whether the student and faculty identified with the same demographic). We fit a multinomial log-linear model for final clerkship grades, with predictors including numerical evaluation scores, gender/racial concordance, and sentiment scores of narrative evaluations computed with the SentimentIntensityAnalyzer tool in Python. A total of 2037 evaluations from 198 students were analyzed; statistical significance was defined as P < 0.05. Sentiment scores for evaluations did not vary significantly by student gender, race, or ethnicity (P = 0.88, 0.64, and 0.06, respectively). Word choices were similar across faculty and student demographic groups. Modeling showed that narrative evaluation sentiment scores were not predictive of an honors grade (odds ratio [OR] 1.23, P = 0.58). Numerical evaluation average (OR 1.45, P < 0.001) and gender concordance between faculty and student (OR 1.32, P = 0.049) were significant predictors of receiving honors. The lack of disparities in narrative text in our study contrasts with prior findings from other institutions. Ongoing efforts include comparative analyses with other institutions to understand what institutional factors may contribute to bias. NLP enables a systematic approach for investigating bias. The insights gained from the lack of association between word choices, sentiment scores, and final grades show potential opportunities to improve feedback processes for students. (See the sentiment-scoring sketch after this record.)
    • File Description:
      application/pdf
    • Relation:
      qt58n0p8vs; https://escholarship.org/uc/item/58n0p8vs
    • Rights:
      public
    • Accession Number:
      edsbas.16A6EC21
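
The abstract describes scoring narrative clerkship evaluations with Python's SentimentIntensityAnalyzer and feeding those scores, along with numerical evaluation averages and gender/racial concordance, into a multinomial log-linear model of final grades. Below is a minimal sketch of the sentiment-scoring step only, assuming the NLTK implementation of VADER's SentimentIntensityAnalyzer (the record does not name the specific library) and using hypothetical placeholder comments rather than study data.

```python
# Minimal sketch: score narrative evaluation comments with VADER's
# SentimentIntensityAnalyzer, assuming the NLTK implementation.
# The comments below are hypothetical placeholders, not study data.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER

narrative_comments = [
    "The student was well prepared and communicated clearly with patients.",
    "Needs improvement in differential diagnosis; presentations were disorganized.",
]

sia = SentimentIntensityAnalyzer()
for comment in narrative_comments:
    scores = sia.polarity_scores(comment)
    # 'compound' is a normalized score in [-1, 1]; 'neg'/'neu'/'pos' are proportions
    print(f"{scores['compound']:+.3f}  {comment}")
```

In the study's workflow, per-evaluation sentiment scores of this kind would then enter the multinomial log-linear grade model alongside numerical scores and concordance indicators; that modeling step is not reproduced here.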