
1/2022, vol. 87: Urogenital radiology
Original paper
Differentiation of carcinosarcoma from endometrial carcinoma on magnetic resonance imaging using deep learning
Tsukasa Saida 1, Kensaku Mori 1, Sodai Hoshiai 1, Masafumi Sakai 1, Aiko Urushibara 2, Toshitaka Ishiguro 1, Toyomi Satoh 3, Takahito Nakajima 1
1. Department of Radiology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
2. Department of Radiology, University of Tsukuba Hospital, Tsukuba, Japan
3. Department of Obstetrics and Gynecology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
© Pol J Radiol 2022; 87: e521-e529
Online publish date: 2022/09/21
Abstract
Introduction
To verify whether deep learning can be used to differentiate between carcinosarcomas (CSs) and endometrial carcinomas (ECs) using several magnetic resonance imaging (MRI) sequences.
Material and methods
This retrospective study included 52 patients with CS and 279 patients with EC. A deep-learning model based on convolutional neural networks (CNN) was trained with 572 T2-weighted images (T2WI) from 42 patients, 488 apparent diffusion coefficient of water maps from 33 patients, and 539 fat-saturated contrast-enhanced T1-weighted images from 40 patients with CS, as well as 1612 images from 223 patients with EC for each sequence. The models were tested with 9-10 images from 9-10 patients with CS and 56 images from 56 patients with EC for each sequence. Three experienced radiologists independently interpreted the same test images. The sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC) for each sequence were compared between the CNN models and the radiologists.
Results
The CNN model for each sequence achieved a sensitivity of 0.89-0.93, specificity of 0.44-0.70, accuracy of 0.83-0.89, and AUC of 0.80-0.94, showing diagnostic performance equivalent to or better than that of the 3 readers (sensitivity 0.43-0.91, specificity 0.30-0.78, accuracy 0.45-0.88, and AUC 0.49-0.92). The CNN model performed best on T2WI (sensitivity 0.93, specificity 0.70, accuracy 0.89, and AUC 0.94).
Conclusions
Deep learning provided diagnostic performance comparable to or better than that of experienced radiologists when distinguishing between CS and EC on MRI.
keywords:
artificial intelligence, magnetic resonance imaging, uterus, carcinosarcoma, malignant mixed Müllerian tumours
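
The article page provides no code; purely as a hypothetical illustration of the per-sequence evaluation described in the abstract, the sketch below computes sensitivity, specificity, accuracy, and AUC for a binary CS-vs-EC image classifier from per-image labels and predicted probabilities. It assumes NumPy and scikit-learn, and the function name evaluate_sequence and the simulated data are inventions for this sketch, not the authors' implementation.

# Hypothetical sketch (not the authors' code): evaluating a binary
# CS-vs-EC image classifier on one MRI sequence, given per-image
# ground-truth labels (1 = positive class) and predicted probabilities.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate_sequence(y_true, y_prob, threshold=0.5):
    """Return sensitivity, specificity, accuracy, and AUC for one sequence."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)         # true-positive rate
    specificity = tn / (tn + fp)         # true-negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    auc = roc_auc_score(y_true, y_prob)  # threshold-independent ranking metric
    return sensitivity, specificity, accuracy, auc

# Simulated 66-image test set; the class sizes mirror the abstract's
# test-set sizes only, not its reported results.
rng = np.random.default_rng(0)
y_true = np.array([1] * 10 + [0] * 56)
y_prob = np.clip(0.3 + 0.4 * y_true + rng.normal(0.0, 0.2, y_true.size), 0.0, 1.0)
print(evaluate_sequence(y_true, y_prob))

In the study, the same metrics were also computed for the three radiologists' readings, which is what allows the per-sequence comparison between the CNN models and the readers reported above.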