
Evaluating ChatGPT-4 in medical education: an assessment of subject exam performance reveals limitations in clinical curriculum support for students

  • Additional information
    • Publication:
      Springer, 2024.
    • Subject:
      2024
    • Collection:
      LCC:Computational linguistics. Natural language processing
      LCC:Electronic computers. Computer science
    • Abstract:
      This study evaluates the proficiency of ChatGPT-4 across various medical specialties and assesses its potential as a study tool for medical students preparing for the United States Medical Licensing Examination (USMLE) Step 2 and related clinical subject exams. ChatGPT-4 answered board-level questions with 89% accuracy, but showed significant discrepancies in performance across specialties. Although it excelled in psychiatry, neurology, and obstetrics and gynecology, it underperformed in pediatrics, emergency medicine, and family medicine. These variations may be attributed to the depth and recency of training data as well as the scope of the specialties assessed. Specialties with significant interdisciplinary overlap had lower performance, suggesting that complex clinical scenarios pose a challenge to the AI. Overall, the efficacy of ChatGPT-4 indicates a promising supplemental role in medical education, but performance inconsistencies across specialties in the current version lead us to recommend that medical students use AI with caution.
    • File Description:
      electronic resource
    • ISSN:
      2731-0809
    • Relation:
      https://doaj.org/toc/2731-0809
    • Identifier:
      10.1007/s44163-024-00135-2
    • Identifier:
      edsdoj.4a0d385712a414ea98c065c02225c8c