Machine Learning and Artificial Intelligence
Carlos Velasco, PhD
Research Associate in the School of Biomedical Engineering and Imaging Sciences
King's College London, England, United Kingdom
Roman Jakubicek, PhD
Assistant Professor
Brno University of Technology, Czech Republic
Anastasia Fotaki, MD
Clinical Fellow in MRI for Congenital Heart Disease
King's College London
London, United Kingdom
Alina Hua, MD
Doctor
Guy's and St Thomas' NHS Foundation Trust, England, United Kingdom
Camila Munoz, PhD
Research Associate
King's College London
London, England, United Kingdom
Claudia Prieto, PhD
Professor
King's College London
London, United Kingdom
René M. Botnar, PhD
Professor
King's College London
London, England, United Kingdom
T1 and T2 mapping play an important role in the evaluation of many cardiovascular diseases1,2. In recent years, novel approaches have demonstrated the feasibility of producing co-registered 3D whole-heart T1 and T2 maps within a single free-breathing acquisition3,4. These 3D whole-heart multiparametric approaches promise a richer assessment of myocardial tissue alterations. However, the large amount of data obtained from a single 3D whole-heart multiparametric scan (up to ~40 slices per parametric map) considerably increases the time required to segment and analyze the quantitative maps. An automated segmentation tool is therefore desirable to perform this otherwise prohibitively laborious task. Here, we show the potential of an attention-based fully convolutional network to perform fast, automated segmentation of 3D whole-heart simultaneous T1 and T2 maps.
Methods:
The proposed DL approach consists of a U-Net-based network with attention modules (Fig.1). Input images (T1 maps, T2 maps and T1/T2-weighted images) are resampled to 1x1mm2 2D slices and intensity normalized. The network was pre-trained on 2D multi-slice CINE data and trained on a manually annotated dataset consisting of a) ~1250 slices from 1x1x2mm3 3D whole-heart joint T1/T2 maps and T1/T2 contrast-weighted images from N=34 subjects and b) ~500 slices of 1.6x1.6x8mm3 T1-MOLLI and T2-bSSFP maps from subjects with suspected CVD. The training dataset was further enriched with annotated T1 and T2 maps from the publicly available Emidec4 (N=700 slices) and MyoPS4 (N=100 slices) datasets. Data were split into training and validation sets in an 8:2 ratio. The testing dataset consisted of a total of 72 slices from 3D whole-heart joint T1/T2 mapping scans from 8 subjects; manual segmentations of these 72 slices were performed by an experienced clinical reader for comparison. All scans included in datasets a) and b) were acquired on a 1.5T scanner (MAGNETOM Aera, Siemens Healthcare, Erlangen, Germany). Dice scores, Hausdorff distance (HD) and Bland-Altman plots were computed on the testing dataset to compare manual vs. predicted segmentation masks.
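To illustrate the attention mechanism referred to above, the following is a minimal sketch of an additive attention gate of the kind used in attention U-Net architectures, written in PyTorch. The module, class and parameter names (AttentionGate, skip_channels, gating_channels, inter_channels) are illustrative assumptions and do not reproduce the authors' implementation or training pipeline.

```python
# Sketch of an additive attention gate for a U-Net skip connection.
# All names and hyperparameters are illustrative, not the authors' code.
import torch
import torch.nn as nn


class AttentionGate(nn.Module):
    """Re-weights skip-connection features using a coarser gating signal."""

    def __init__(self, skip_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, skip: torch.Tensor, gating: torch.Tensor) -> torch.Tensor:
        # Project encoder (skip) and decoder (gating) features to a common channel size.
        theta_x = self.theta(skip)
        phi_g = self.phi(gating)
        # Upsample the gating signal to the spatial resolution of the skip features.
        phi_g = nn.functional.interpolate(
            phi_g, size=theta_x.shape[2:], mode="bilinear", align_corners=False
        )
        # Additive attention: sigmoid over a 1x1 projection of the combined features.
        alpha = torch.sigmoid(self.psi(torch.relu(theta_x + phi_g)))
        # Suppress irrelevant regions in the skip features before decoder concatenation.
        return skip * alpha


if __name__ == "__main__":
    skip = torch.randn(1, 64, 192, 192)    # encoder features at one resolution level
    gating = torch.randn(1, 128, 96, 96)   # coarser decoder features
    gate = AttentionGate(skip_channels=64, gating_channels=128, inter_channels=32)
    print(gate(skip, gating).shape)        # torch.Size([1, 64, 192, 192])
```

In this scheme, the attention coefficients alpha emphasize myocardium-relevant regions of the encoder features while down-weighting background, which is the behavior the attention modules in Fig.1 are intended to provide.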
Results:
The predicted masks yielded an average Dice score of 0.788 (0.783, 0.784 and 0.798 at apical, mid and basal levels, respectively), while the average HD was 1.16 pixels (1.11, 1.16 and 1.20 at apical, mid and basal levels). Bland-Altman plots comparing T1 and T2 values for predicted vs. manually delineated ROIs showed biases of -14.7±46.8ms and -3.0±6.9ms, respectively (Fig.2). Importantly, segmentation time was ~45s per slice for manual segmentation vs. ~0.5s per slice for the proposed approach.
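For reference, the reported metrics can be computed along the following lines. This is a hedged sketch under the assumption of per-slice binary myocardial masks; the function names and the use of SciPy's directed_hausdorff are illustrative choices, not the authors' evaluation code.

```python
# Sketch of the evaluation metrics named above (Dice, Hausdorff distance in
# pixels, Bland-Altman bias). Illustrative only; not the authors' code.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0


def hausdorff_pixels(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel sets, in pixels."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])


def bland_altman_bias(manual: np.ndarray, predicted: np.ndarray) -> tuple[float, float]:
    """Mean difference (bias) and SD of paired ROI values, e.g. T1 or T2 in ms."""
    diff = np.asarray(predicted, float) - np.asarray(manual, float)
    return float(diff.mean()), float(diff.std(ddof=1))
```

The bias±SD figures quoted above (e.g. -14.7±46.8ms for T1) correspond to the mean and standard deviation of such paired differences across the testing slices.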
Conclusion:
We demonstrate the feasibility of an attention neural network for automated myocardial segmentation of 3D joint T1/T2 maps, shortening segmentation and analysis time by ~100x compared to manual segmentation. Predicted masks were visually comparable to manual segmentations, and the resulting T1 and T2 values were not significantly different from those obtained with manual segmentation.