Binary segmentation of medical images using implicit spline representations and deep learning
We propose a novel approach to image segmentation based on combining implicit
spline representations with deep convolutional neural networks. This is done by
predicting the control points of a bivariate spline function whose zero-set
represents the segmentation boundary. We adapt several existing neural network
architectures and design novel loss functions that are tailored towards
providing implicit spline curve approximations. The method is evaluated on a
congenital heart disease computed tomography medical imaging dataset.
Experiments are carried out by measuring performance in various standard
metrics for different networks and loss functions. We determine which spline
bidegree and coefficient resolution perform best for the CT image resolution
used. For our best network, we achieve an average volumetric test Dice score
of almost 92%, which reaches the state of the art for this congenital heart
disease dataset.
Comment: 17 pages, 5 figures
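As a rough illustration of the zero-set idea described above, the sketch below evaluates a coarse grid of predicted spline coefficients on the full pixel grid and thresholds it at zero to obtain a binary mask. This is not the authors' code: the bidegree (1,1) (bilinear) spline, the grid sizes, and the function names are assumptions made for illustration.

```python
# Minimal sketch of the final step of implicit-spline segmentation:
# a CNN predicts a coarse grid of spline coefficients, the spline is
# evaluated on the full pixel grid, and its sign yields the binary mask
# (the zero level set being the segmentation boundary).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def spline_zero_set_mask(coeffs, out_shape):
    """coeffs: (H_c, W_c) spline coefficients predicted by the network.
    For a bidegree (1,1) spline on a uniform knot grid, this bilinear
    evaluation is exact; higher bidegrees would need a B-spline basis."""
    h_c, w_c = coeffs.shape
    interp = RegularGridInterpolator(
        (np.linspace(0.0, 1.0, h_c), np.linspace(0.0, 1.0, w_c)),
        coeffs, method="linear")
    ys = np.linspace(0.0, 1.0, out_shape[0])
    xs = np.linspace(0.0, 1.0, out_shape[1])
    grid = np.stack(np.meshgrid(ys, xs, indexing="ij"), axis=-1)
    values = interp(grid.reshape(-1, 2)).reshape(out_shape)
    # Pixels where the implicit spline is positive are labelled foreground.
    return (values > 0).astype(np.uint8)

# Example: a 16x16 coefficient grid rendered to a 512x512 mask (sizes assumed).
mask = spline_zero_set_mask(np.random.randn(16, 16), (512, 512))
```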
Automatic segmentation of human knee anatomy by a convolutional neural network applying a 3D MRI protocol
Background: To study deep learning segmentation of knee anatomy with 13 anatomical classes using a magnetic resonance (MR) protocol of four three-dimensional (3D) pulse sequences, and to evaluate possible clinical usefulness.
Methods: The sample comprised 40 healthy right knee volumes from adult participants. Further, a single, recently injured left knee with a previously known ACL reconstruction was included as a test subject. The MR protocol consisted of the following 3D pulse sequences: T1 TSE, PD TSE, PD FS TSE, and Angio GE. The DenseVNet neural network was used for these experiments. Five input combinations of sequences, (i) T1, (ii) T1 and FS, (iii) PD and FS, (iv) T1, PD, and FS, and (v) T1, PD, FS, and Angio, were trained using the deep learning algorithm. The Dice similarity coefficient (DSC), Jaccard index, and Hausdorff distance were used to compare the performance of the networks.
Results: Combining all sequences performed significantly better than the other alternatives. The following DSCs (± standard deviation) were obtained for the test dataset: bone medulla 0.997 (±0.002), PCL 0.973 (±0.015), ACL 0.964 (±0.022), muscle 0.998 (±0.001), cartilage 0.966 (±0.018), bone cortex 0.980 (±0.010), arteries 0.943 (±0.038), collateral ligaments 0.919 (±0.069), tendons 0.982 (±0.005), meniscus 0.955 (±0.032), adipose tissue 0.998 (±0.001), veins 0.980 (±0.010), and nerves 0.921 (±0.071). The deep learning network correctly identified the anterior cruciate ligament (ACL) tear in the left knee, indicating a potential future aid to orthopaedics.
Conclusions: The convolutional neural network proves highly capable of correctly labeling all anatomical structures of the knee joint when applied to 3D MR sequences. We have demonstrated that this deep learning model can perform automated segmentation that may provide 3D models and detect pathology, both useful for preoperative evaluation.
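The sketch below shows how the three reported metrics (DSC, Jaccard index, Hausdorff distance) can be computed for a pair of binary masks. It is a generic illustration rather than the paper's evaluation code; the voxel-based Hausdorff computation, mask names, and shapes are assumptions.

```python
# Dice, Jaccard, and symmetric Hausdorff distance between a predicted
# and a reference binary segmentation mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def jaccard(pred, ref):
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return inter / union

def hausdorff(pred, ref):
    # Symmetric Hausdorff distance between the voxel coordinates of the
    # two masks (a surface-based variant is also common).
    p = np.argwhere(pred).astype(float)
    r = np.argwhere(ref).astype(float)
    return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])

# Toy example with two overlapping cubes (shapes assumed).
pred = np.zeros((64, 64, 64), dtype=bool); pred[20:40, 20:40, 20:40] = True
ref  = np.zeros((64, 64, 64), dtype=bool); ref[22:42, 20:40, 20:40] = True
print(dice(pred, ref), jaccard(pred, ref), hausdorff(pred, ref))
```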
Advances in TEE-Centric Intraprocedural Multimodal Image Guidance for Congenital and Structural Heart Disease
Percutaneous interventions are gaining rapid acceptance in cardiology and revolutionizing the treatment of structural heart disease (SHD). As new percutaneous procedures for SHD are being developed, their associated complexity and anatomical variability demand a high-resolution spatial understanding for intraprocedural image guidance. During the last decade, three-dimensional (3D) transesophageal echocardiography (TEE) has become one of the most widely used imaging methods for structural interventions. Although 3D-TEE can assess cardiac structures and functions in real time, its limitations (e.g., limited field of view, image quality at large depth) must be addressed for its universal adoption, as well as to improve the quality of its imaging and interventions. This review aims to present the role of TEE in the intraprocedural guidance of percutaneous structural interventions. We also focus on the current and future developments required in a multimodal image integration process when using TEE to enhance the management of congenital and structural heart disease treatments.
Automated algorithm for medical data structuring, and segmentation using artificial intelligence within secured environment for dataset creation
Objective: Routinely collected electronic health records, processed by artificial intelligence (AI)-based systems, bring enormous benefits for patients, healthcare centers, and the healthcare industry. Artificial intelligence models can be used to structure a wide variety of unstructured data.
Methods: We present a semi-automatic workflow for medical dataset management, including data structuring, research extraction, AI-based ground-truth creation, and updates. The algorithm creates directories based on keywords in new file names.
Results: Our work focuses on organizing computed tomography (CT) and magnetic resonance (MR) images, patient clinical data, and segmentation annotations. In addition, an AI model is used to generate initial labels that can be edited manually to create ground-truth labels. The manually verified ground-truth labels are later added to the structured dataset by an automated algorithm for future research.
Conclusion: The workflow uses an AI model trained on local hospital medical data, with output adapted to the users and their preferences. The automated algorithms and AI model could be implemented inside a secure environment within the hospital to produce inferences.
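The sketch below illustrates the keyword-based structuring step described above, in which directories are created from keywords found in new file names. The keyword map, paths, and function name are hypothetical and not the authors' implementation.

```python
# Route incoming files into directories named after keywords found in
# their file names; unmatched files go to an 'unsorted' folder.
import shutil
from pathlib import Path

KEYWORD_DIRS = {           # assumed keyword -> directory mapping
    "ct": "CT",
    "mr": "MR",
    "seg": "segmentations",
    "clinical": "clinical_data",
}

def structure_incoming(incoming_dir: str, dataset_root: str) -> None:
    root = Path(dataset_root)
    for f in Path(incoming_dir).iterdir():
        if not f.is_file():
            continue
        name = f.name.lower()
        # The first matching keyword decides the target directory.
        target = next((d for k, d in KEYWORD_DIRS.items() if k in name),
                      "unsorted")
        dest = root / target
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dest / f.name))

# Example (paths are hypothetical):
# structure_incoming("/data/incoming", "/data/structured_dataset")
```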