Document detail
ID card

oai:pubmedcentral.nih.gov:1062...

Subject
Original Research
Authors
Chen, Tse Chiang; Multala, Evan; Kearns, Patrick; Delashaw, Johnny; Dumont, Aaron; Maraganore, Demetrius; Wang, Arthur
Language
en
Publisher

BMJ Publishing Group

Category

BMJ Neurology Open

Year

2023

Listing date

13-12-2023

Keywords
questions neurology question performance
Metrics

Description

BACKGROUND AND OBJECTIVES: ChatGPT has shown promise in healthcare.

To assess the utility of this novel tool in healthcare education, we evaluated ChatGPT’s performance in answering neurology board exam questions.

METHODS: Neurology board-style examination questions were accessed from BoardVitals, a commercial neurology question bank.

ChatGPT was provided a full question prompt and multiple answer choices.

ChatGPT was given up to three attempts to select the correct answer.

A total of 560 questions (14 blocks of 40 questions) were used; image-based questions were excluded because ChatGPT cannot process visual input.

The artificial intelligence (AI) answers were then compared with human user data provided by the question bank to gauge its performance.
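The multi-attempt scoring protocol described above can be sketched as a short loop. This is a hypothetical illustration, not the authors' actual code: `ask_model` is a stand-in for a model query (here mocked so the sketch runs standalone), and the question format is assumed.

```python
# Hedged sketch of a multiple-choice evaluation with up to three attempts
# per question, as in the study's methods. `ask_model` is a hypothetical
# stand-in for an API call to the model under test.

def score_questions(questions, ask_model, max_attempts=3):
    """Return (correct on first attempt, correct within max_attempts)."""
    first, eventual = 0, 0
    for q in questions:
        tried = []  # choices already rejected on earlier attempts
        for attempt in range(max_attempts):
            answer = ask_model(q, excluded=tried)
            if answer == q["correct"]:
                if attempt == 0:
                    first += 1
                eventual += 1
                break
            tried.append(answer)
    return first, eventual

# Mock model for demonstration: tries "A", then "B", then "C" in order.
def mock_model(q, excluded):
    for choice in ["A", "B", "C", "D"]:
        if choice not in excluded:
            return choice

questions = [{"correct": "A"}, {"correct": "B"}, {"correct": "D"}]
first, eventual = score_questions(questions, mock_model)
print(first, eventual)  # 1 question right first try, 2 within three tries
```

From these counts, first-attempt and three-attempt accuracies follow directly (e.g. 335/509 ≈ 65.8% and 383/509 ≈ 75.3% in the study).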

RESULTS: Out of 509 eligible questions over 14 question blocks, ChatGPT correctly answered 335 questions (65.8%) on the first attempt/iteration and 383 (75.3%) over three attempts/iterations, scoring at approximately the 26th and 50th percentiles, respectively.

The highest performing subjects were pain (100%), epilepsy & seizures (85%) and genetics (82%), while the lowest performing subjects were imaging/diagnostic studies (27%), critical care (41%) and cranial nerves (48%).

DISCUSSION: This study found that ChatGPT performed similarly to its human counterparts.

The accuracy of the AI increased with multiple attempts and performance fell within the expected range of neurology resident learners.

This study demonstrates ChatGPT’s potential in processing specialised medical information.

Future studies should better define the scope to which AI can be integrated into medical decision-making.

Chen, Tse Chiang; Multala, Evan; Kearns, Patrick; Delashaw, Johnny; Dumont, Aaron; Maraganore, Demetrius; Wang, Arthur (2023). Assessment of ChatGPT’s performance on neurology written board examination questions. BMJ Publishing Group.
