Use this identifier to reference this record:
http://hdl.handle.net/10071/32611
Authors: | Teixeira, B. N.; Leitão, A.; Nascimento, G.; Campos-Fernandes, A.; Cercas, F. |
Date: | 2024 |
Title: | Can ChatGPT support clinical coding using the ICD-10-CM/PCS? |
Journal title: | Informatics |
Volume: | 11 |
Issue: | 4 |
Citation: | Teixeira, B. N., Leitão, A., Nascimento, G., Campos-Fernandes, A., & Cercas, F. (2024). Can ChatGPT support clinical coding using the ICD-10-CM/PCS? Informatics, 11(4), Article 84. https://doi.org/10.3390/informatics11040084 |
ISSN: | 2227-9709 |
DOI (Digital Object Identifier): | 10.3390/informatics11040084 |
Keywords: | ChatGPT; Artificial intelligence; ICD-10-CM/PCS; Clinical coding |
Abstract: | Introduction: With the growing development and adoption of artificial intelligence in healthcare and across other sectors of society, various user-friendly and engaging tools to support research have emerged, such as chatbots, notably ChatGPT. Objective: To investigate the performance of ChatGPT as an assistant to medical coders using the ICD-10-CM/PCS. Methodology: We conducted a prospective exploratory study over six months between 2023 and 2024. A total of 150 clinical cases coded using the ICD-10-CM/PCS, extracted from technical coding books, were systematically randomized. All cases were translated into Portuguese (the native language of the authors) and English (the native language of the ICD-10-CM/PCS). The clinical cases varied in complexity, both in the number of diagnoses and procedures and in the nature of the clinical information. Each case was entered into the free 2023 version of ChatGPT. The codes returned by ChatGPT were analyzed by a senior medical auditor/coder and compared with the expected results. Results: ChatGPT's accuracy was approximately 29 percentage points higher for diagnosis codes than for procedure codes, showing greater proficiency with diagnoses. The accuracy rate was similar across the two languages, at 31.0% and 31.9%. The error rate for procedure codes was almost four times higher than for diagnostic codes. Missing information was slightly more than twice as frequent for diagnoses as for procedures. Additionally, there was a statistically significant excess of codes unrelated to the clinical information, which was higher for procedures and nearly identical in both languages under study. Conclusion: Given the ease of access to these tools, this investigation serves as a cautionary note: ChatGPT can assist the medical coder in directed research, but it does not replace the coder's technical validation in this process. Further development of this tool is therefore needed to increase the quality and reliability of its results. |
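The evaluation described in the abstract amounts to comparing, case by case, the codes suggested by ChatGPT against the gold-standard codes from the technical coding books and tallying correct, missing, and extra codes. The sketch below illustrates that comparison under stated assumptions; the function names, data structures, and example codes are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of the per-case comparison implied by the abstract:
# tally correct, missing, and extra ICD-10-CM/PCS codes suggested by ChatGPT
# against the gold standard. All names and example codes are illustrative.

def score_case(gold_codes, chatgpt_codes):
    """Return counts of correct, missing, and extra codes for one clinical case."""
    gold = set(gold_codes)
    suggested = set(chatgpt_codes)
    return {
        "correct": len(gold & suggested),   # codes ChatGPT got right
        "missing": len(gold - suggested),   # expected codes ChatGPT omitted
        "extra": len(suggested - gold),     # codes unrelated to the clinical information
        "gold_total": len(gold),
    }

def accuracy_rate(results):
    """Overall accuracy = correct codes / expected codes, aggregated over all cases."""
    correct = sum(r["correct"] for r in results)
    expected = sum(r["gold_total"] for r in results)
    return correct / expected if expected else 0.0

# Example with made-up diagnosis codes for a single case.
gold = ["I10", "E11.9", "N18.3"]
suggested = ["I10", "E11.65", "N18.3", "Z79.4"]
print(score_case(gold, suggested))                    # {'correct': 2, 'missing': 1, 'extra': 2, 'gold_total': 3}
print(accuracy_rate([score_case(gold, suggested)]))   # ~0.667
```

In the study, such tallies would be computed separately for diagnosis and procedure codes and for the Portuguese and English versions of each case, yielding the accuracy, error, and omission rates reported in the results.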
Peer-reviewed: | yes |
Access: | Open Access |
Appears in collections: | DRHCO-RI - Articles in international peer-reviewed journals; IT-RI - Articles in international peer-reviewed scientific journals |
Files in this record:
File | Size | Format | |
---|---|---|---|
article_106391.pdf | 446.35 kB | Adobe PDF | View/Open |
All records in the repository are protected by copyright law, with all rights reserved.