Congenital Heart Disease
Einar Heiberg, PhD
Associate Professor
Lund University
Lund, Skåne Län, Sweden
Matilda Dahlström, MSc
Lund University, Sweden
Petru Liuba, MD, PhD
Associate Professor
Lund University, Skåne University Hospital, Lund, Sweden
Background:
3D printed models of congenital heart disease (CHD) are useful for interventional planning, multidisciplinary conferences, education, and communication with the child and parents. However, creation of high-quality 3D models is prohibitively time consuming, which hinders broader clinical adoption. Therefore, the purpose of this study was to investigate the feasibility of using deep learning to automate the tedious creation of the 3D model.
Methods:
Twenty-seven subjects, aged 5 days to 17 years, with complex CHD who had a CT scan as part of clinical routine (n=20) or who were referred for creation of 3D printed models (n=7) were included. All subjects were anonymized except for anatomy, age, and sex (Table 1).
Six subjects were set aside for testing. Image volumes were resampled to isotropic resolution. A single 2D U-net was trained to perform multi-class semantic segmentation into four classes: “Background”, “Bone”, “Blood”, and “Tissue” (approximately all voxels inside the pericardial sac that were not “Blood”).
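The four-class ground truth implied above, with “Tissue” defined as voxels inside the pericardial sac that are not “Blood”, could be assembled from binary masks roughly as in the following Python sketch; the class indices and mask names are assumptions, since the text does not specify them:

```python
import numpy as np

# assumed class indices (not specified in the text)
BACKGROUND, BONE, BLOOD, TISSUE = 0, 1, 2, 3

def build_label_volume(bone, blood, pericardium):
    # assemble a four-class label volume from binary masks;
    # "Tissue" = inside the pericardial sac but not "Blood"
    labels = np.full(bone.shape, BACKGROUND, dtype=np.uint8)
    labels[bone] = BONE
    labels[pericardium & ~blood] = TISSUE
    labels[blood] = BLOOD
    return labels
```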
The model was trained by randomly extracting 256 x 256 patches from the image volumes using Matlab R2019a. The patches were extracted in the transversal, sagittal, and coronal directions, and completely blank patches were excluded. Approximately 65 000 patches were extracted from the training subjects during each training loop (epoch). Augmentation was performed anew for each epoch, so the network was never trained on the same patch twice. Image augmentations are listed in Table 2. Hyperparameters were based on experience from previous projects and manually tuned for this project using the first 10 subjects: 100 epochs, initial learning rate 1e-3, learning rate drop period 4, learning rate drop factor 0.8, and L2 regularization 1e-4.
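The study used Matlab R2019a; purely as an illustration, the patch sampling and the stepwise learning rate schedule described above can be sketched in Python. Function names and the axis-to-orientation mapping are assumptions, and real code would also pad undersized slices rather than skip them:

```python
import numpy as np

def lr_at_epoch(epoch, initial_lr=1e-3, drop_period=4, drop_factor=0.8):
    # piecewise-constant schedule: multiply the learning rate
    # by drop_factor once every drop_period epochs
    return initial_lr * drop_factor ** (epoch // drop_period)

def sample_patches(volume, patch_size=256, n_patches=64, rng=None):
    # randomly sample 2D patches from slices along all three directions
    # (transversal, sagittal, coronal), excluding completely blank patches
    rng = rng or np.random.default_rng(0)
    patches = []
    while len(patches) < n_patches:
        axis = int(rng.integers(3))                 # slicing direction
        idx = int(rng.integers(volume.shape[axis]))
        slice2d = np.take(volume, idx, axis=axis)
        h, w = slice2d.shape
        if h < patch_size or w < patch_size:
            continue                                # pad in practice; skipped here
        y = int(rng.integers(h - patch_size + 1))
        x = int(rng.integers(w - patch_size + 1))
        patch = slice2d[y:y + patch_size, x:x + patch_size]
        if patch.any():                             # exclude completely blank patches
            patches.append(patch)
    return patches
```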
The final algorithm performs segmentation in all three directions, and the result is based on voxel-wise voting. The final model is created by hollowing the blood pool and combining it with the tissue class, followed by a smoothing operation to create a smooth outer surface. The only user interactions are to select the region of interest and the vessel thickness. The method was integrated into the clinically available software Segment 3DPrint (Medviso AB, Lund, Sweden).
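A minimal sketch of the voxel-wise voting and the hollowing step, assuming per-direction label volumes and a binary blood pool mask (the actual implementation in Segment 3DPrint is not described in this detail, and a real pipeline would erode by the user-chosen vessel thickness and then smooth the surface):

```python
import numpy as np

def majority_vote(preds, n_classes=4):
    # voxel-wise voting over per-direction label volumes stacked on axis 0
    # (shape (3, Z, Y, X)); the class predicted by most directions wins
    votes = np.stack([(preds == c).sum(axis=0) for c in range(n_classes)])
    return votes.argmax(axis=0).astype(np.uint8)

def hollow(mask):
    # turn a solid binary blood pool into a one-voxel-thick shell by
    # subtracting a 6-connected erosion; np.roll wraps at the borders,
    # which is harmless for masks that do not touch the volume edge
    eroded = mask.copy()
    for axis in range(mask.ndim):
        eroded &= np.roll(mask, 1, axis=axis) & np.roll(mask, -1, axis=axis)
    return mask & ~eroded
```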
Results:
A sample segmentation is shown in Figure 1. Typical computation time was around 40 seconds using an NVIDIA GeForce 3050 GPU. The Dice score over the 6 test cases was 0.93±0.03. One test case had a severely stenosed vessel where the segmentation failed and the vessel was incorrectly classified as bone. The user needs to review the blood pool segmentation to ensure a correct result, especially that separate vessels and chambers do not bleed together, and if required perform manual corrections. Quality review and manual correction typically take less than 5 minutes per case.
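For reference, the Dice score reported above is the standard overlap metric between a predicted and a ground-truth binary mask; a minimal Python sketch:

```python
import numpy as np

def dice_score(pred, truth):
    # Dice similarity coefficient: 2 * |A intersect B| / (|A| + |B|);
    # defined as 1.0 when both masks are empty
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * (pred & truth).sum() / denom if denom else 1.0
```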
Conclusion:
An automated deep learning algorithm to generate 3D models from CT images was implemented. The algorithm drastically reduces the time required to generate high-quality anatomical models for 3D printing or virtual reality.