ISSN: 1899-0967
Polish Journal of Radiology
Technology and contrast media
Original paper

Will ChatGPT pass the Polish specialty exam in radiology and diagnostic imaging? Insights into strengths and limitations

Jakub Kufel 1, Iga Paszkiewicz 2, Michał Bielówka 3, Wiktoria Bartnikowska 4, Michał Janik 3, Magdalena Stencel 3, Łukasz Czogalik 3, Katarzyna Gruszczyńska 5, Sylwia Mielcarska 6

1. Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Zabrze, Poland
2. Tytus Chałubiński Hospital, Zakopane, Poland
3. Professor Zbigniew Religa Student Scientific Association at the Department of Biophysics, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Poland
4. Faculty of Medical Sciences in Katowice, Medical University of Silesia, Katowice, Poland
5. Department of Radiology and Nuclear Medicine, Faculty of Medical Sciences in Katowice, Medical University of Silesia, Katowice, Poland
6. Department of Medical and Molecular Biology, Faculty of Medical Sciences in Zabrze, Medical University of Silesia, Zabrze, Poland
© Pol J Radiol 2023; 88: e430-e434
Online publish date: 2023/09/18
Purpose:
The rapid development of artificial intelligence has aroused curiosity about its potential applications in the medical field. The purpose of this article was to present the performance of ChatGPT, a state-of-the-art large language model, on the national specialty examination (PES) in radiology and diagnostic imaging within the Polish education system. Additionally, the study aimed to identify the strengths and limitations of the model through a detailed analysis of the issues raised by the exam questions.

Material and methods:
The present study used a PES exam consisting of 120 questions, provided by the Medical Examinations Center in Lodz. The questions were administered through the openai.com platform, which grants free access to the GPT-3.5 model. All questions were categorized according to Bloom's taxonomy to assess their complexity and difficulty. After answering each exam question, ChatGPT was asked to rate its confidence on a scale of 1 to 5 in order to evaluate the accuracy of its responses.
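For illustration, the sketch below shows how such a question-plus-confidence prompt could be automated with the OpenAI Python client. Note that the study itself used the free web interface at openai.com; the model name, prompt wording, and placeholder question here are assumptions for demonstration, not the exact protocol of the study.

```python
# Illustrative sketch only: the study used the free openai.com chat interface,
# not the API. The model name, prompt wording, and placeholder question below
# are assumptions for demonstration, not the exact protocol of the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_exam_question(question: str, options: dict[str, str]) -> str:
    """Send one multiple-choice question and request an answer together
    with a self-rated confidence score on a scale of 1 to 5."""
    option_text = "\n".join(f"{key}. {value}" for key, value in options.items())
    prompt = (
        f"{question}\n{option_text}\n\n"
        "Choose one answer (A-E). Then rate your confidence in that answer "
        "on a scale of 1 to 5."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Example usage with a hypothetical, placeholder question
print(ask_exam_question(
    "Which modality is the method of choice for suspected acute pulmonary embolism?",
    {"A": "Chest radiography", "B": "CT pulmonary angiography",
     "C": "MRI", "D": "Ultrasound", "E": "Ventilation-perfusion scintigraphy"},
))
```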

Results:
ChatGPT did not reach the pass threshold of the PES exam (52%); however, it was close in certain question categories. No significant differences were observed in the percentage of correct answers across question types and sub-types.
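The abstract does not name the statistical test behind this comparison; the sketch below shows one way such a comparison of correct-answer proportions across question categories could be carried out, with placeholder counts rather than data from the study.

```python
# A minimal sketch, assuming a chi-square test of independence; the abstract
# does not specify the test used, and the counts below are placeholders,
# not data from the study.
from scipy.stats import chi2_contingency

# Rows: hypothetical question categories; columns: [correct, incorrect] counts.
contingency = [
    [20, 20],  # e.g. lower-order (remembering/understanding) questions
    [18, 22],  # e.g. mid-order (applying/analysing) questions
    [15, 25],  # e.g. higher-order (evaluating/creating) questions
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
# A p-value >= 0.05 would be consistent with no significant difference
# in the percentage of correct answers across categories.
```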

Conclusions:
Whether the ChatGPT model can pass the PES exam in radiology and diagnostic imaging in Poland is yet to be determined, and this requires further research on improved versions of ChatGPT.

Keywords:

ChatGPT, deep learning, large language model, artificial intelligence



