SciELO - Scientific Electronic Library Online

 

Colombian Journal of Anesthesiology

Print version ISSN 0120-3347 · Online version ISSN 2256-2087

Abstract

CRUZ, Gustavo; PEDROZA, Santiago  and  ARIZA, Fredy. ChatGPT's learning and reasoning capacity in anesthesiology. Rev. colomb. anestesiol. [online]. 2024, vol.52, n.1, 3.  Epub Dec 22, 2023. ISSN 0120-3347.  https://doi.org/10.5554/22562087.e1092.

Introduction:

Over the past few months, ChatGPT has attracted considerable interest given its ability to perform complex tasks through natural language and conversation. However, its use in clinical decision-making is limited, and its applicability in the field of anesthesiology is unknown.

Objective:

To assess ChatGPT's basic and clinical reasoning and its learning ability in a performance test on general and specific anesthesia topics.

Methods:

A three-phase assessment was conducted. Basic knowledge of anesthesia was assessed in the first phase, followed by a review of difficult airway management and, finally, measurement of decision-making ability in ten clinical cases. The second and third phases were conducted before and after feeding ChatGPT the 2022 American Society of Anesthesiologists guidelines on difficult airway management.

Results:

On average, ChatGPT succeeded 65% of the time in the first phase and 48% of the time in the second phase. Agreement in the clinical cases was 20%, with 90% relevance and a 10% error rate. After learning, ChatGPT improved in the second phase, answering correctly 59% of the time, and agreement in the clinical cases increased to 40%.

Conclusions:

ChatGPT showed acceptable accuracy in the basic knowledge test, high relevance in the management of specific difficult airway clinical cases, and the ability to improve after learning.

Keywords: ChatGPT; Artificial intelligence; Anesthesiology; Difficult airway; Learning; Reasoning; Decision-making.
