183 research outputs found
Label-Set Loss Functions for Partial Supervision: Application to Fetal Brain 3D MRI Parcellation
Deep neural networks have increased the accuracy of automatic segmentation; however, their accuracy depends on the availability of a large number of fully segmented images. Methods to train deep neural networks using images for which some, but not all, regions of interest are segmented are necessary to make better use of partially annotated datasets. In this paper, we propose the first axiomatic definition of label-set loss functions, which are the loss functions that can handle partially segmented images. We prove that there is one and only one method to convert a classical loss function for fully segmented images into a proper label-set loss function. Our theory also allows us to define the leaf-Dice loss, a label-set generalisation of the Dice loss particularly suited for partial supervision with only missing labels. Using the leaf-Dice loss, we set a new state of the art in partially supervised learning for fetal brain 3D MRI segmentation. We achieve a deep neural network able to segment white matter, ventricles, cerebellum, extra-ventricular CSF, cortical gray matter, deep gray matter, brainstem, and corpus callosum from 3D MRI of fetal brains that are anatomically normal or affected by open spina bifida. Our implementation of the proposed label-set loss functions is available at https://github.com/LucasFidon/label-set-loss-functions
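The conversion described in the abstract marginalises the network's predicted probabilities over each voxel's admissible label set. The following is a minimal illustrative sketch using a toy cross-entropy variant, not the leaf-Dice implementation released in the linked repository:

```python
import math

def label_set_cross_entropy(probs, label_sets):
    """Cross-entropy for voxels annotated with a *set* of admissible
    labels rather than a single label.  The predicted probability of a
    label set is the sum of the probabilities of its members (the
    marginalisation idea described in the abstract)."""
    total = 0.0
    for p, s in zip(probs, label_sets):
        p_set = sum(p[k] for k in s)  # marginalised probability of the set
        total -= math.log(max(p_set, 1e-12))
    return total / len(probs)

# Toy example: 3 "voxels", 3 fine-grained labels.
probs = [
    [0.7, 0.2, 0.1],   # fully annotated voxel, true label {0}
    [0.1, 0.5, 0.4],   # partial annotation: true label is 1 or 2
    [0.3, 0.3, 0.4],   # unlabelled voxel: any label admissible
]
label_sets = [{0}, {1, 2}, {0, 1, 2}]
loss = label_set_cross_entropy(probs, label_sets)
```

Note how the unlabelled voxel contributes zero loss, since its label set covers all labels and the marginalised probability is 1.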
Bruker2nifti: Magnetic Resonance Images converter from Bruker ParaVision to Nifti format
In clinical and pre-clinical research involving medical images, the first step following acquisition of a Magnetic Resonance Imaging dataset usually entails the conversion of image data from the native scanner format to a format suitable for the intended analysis. The proprietary [Bruker ParaVision](https://www.bruker.com/products/mr/preclinical-mri/software/service-support.html) software currently does not provide tools for converting the data to open formats suitable for research, such as NIfTI [@cox2004sort], for which most of the available tools for medical image analysis are implemented. For this purpose we have designed and developed [bruker2nifti](https://github.com/SebastianoF/bruker2nifti), a pip-installable Python tool with a Graphical User Interface that converts from the native Bruker MRI format to the NIfTI format, without any intermediate step through the DICOM standard formats [@Mildenberger2002]. Bruker2nifti accesses the data structure and parses all parameter files of the Bruker ParaVision format into Python dictionaries, from which it selects the relevant information to fill the NIfTI header and data volume. Lastly, it is meant to be a starting point for integrating variations introduced by future Bruker hardware and ParaVision software releases
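The dictionary-based parsing step described above can be sketched as follows. The parameter text shown is a hypothetical, simplified stand-in: real Bruker JCAMP-DX-style files also carry array values, comments, and multi-line entries, which bruker2nifti handles and this sketch does not:

```python
def parse_bruker_params(text):
    """Parse a simplified Bruker ParaVision parameter file into a dict.
    Only scalar '##$NAME=value' lines are handled here, to illustrate
    the dictionary-based approach; '$$' comment lines are skipped."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("##$") and "=" in line:
            key, _, value = line[3:].partition("=")
            params[key.strip()] = value.strip()
    return params

# Hypothetical snippet in the simplified format handled above.
sample = """\
##$ACQ_dim=3
##$PVM_SpatResol=( 3 )
$$ comment line, ignored
##$SUBJECT_id=<mouse_01>
"""
params = parse_bruker_params(sample)
```

The resulting dictionary is then the natural place from which to pick the fields that populate a NIfTI header (dimensions, spatial resolution, subject metadata).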
Medical-grade sterilizable target for fluid-immersed fetoscope optical distortion calibration
We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. The target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community
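For intuition, the radial distortion that such a calibration estimates is commonly modelled with polynomial coefficients. Below is a minimal single-coefficient sketch (OpenCV's model uses several radial and tangential terms), showing both the forward model and the fixed-point inversion used for compensation:

```python
def distort(x, y, k1):
    """Apply the simplest one-coefficient radial distortion model to a
    point (x, y) in normalised image coordinates."""
    r2 = x * x + y * y
    s = 1.0 + k1 * r2
    return x * s, y * s

def undistort(xd, yd, k1, iters=10):
    """Invert the radial model by fixed-point iteration, the usual
    strategy for distortion compensation (converges quickly when the
    distortion is moderate)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        s = 1.0 + k1 * r2
        x, y = xd / s, yd / s
    return x, y
```

Calibration amounts to estimating `k1` (and its higher-order siblings) from images of a known pattern, such as the laser-etched target described above.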
Automatic C-Plane Detection in Pelvic Floor Transperineal Volumetric Ultrasound
Transperineal volumetric ultrasound (US) imaging has become routine practice for diagnosing pelvic floor disease (PFD). Hereto, clinical guidelines stipulate to make measurements in an anatomically defined 2D plane within a 3D volume, the so-called C-plane. This task is currently performed manually in clinical practice, which is labour-intensive and requires expert knowledge of pelvic floor anatomy, as no computer-aided C-plane method exists. To automate this process, we propose a novel, guideline-driven approach for automatic detection of the C-plane. The method uses a convolutional neural network (CNN) to identify extreme coordinates of the symphysis pubis and levator ani muscle (which define the C-plane) directly via landmark regression. The C-plane is identified in a postprocessing step. When evaluated on 100 US volumes, our best performing method (multi-task regression with UNet) achieved a mean error of 6.05 mm and 4.81 ∘ and took 20 s. Two experts blindly evaluated the quality of the automatically detected planes and manually defined the (gold standard) C-plane in terms of their clinical diagnostic quality. We show that the proposed method performs comparably to the manual definition. The automatic method reduces the average time to detect the C-plane by 100 s and reduces the need for high-level expertise in PFD US assessment
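Once the landmark coordinates are regressed, a plane can be recovered from them in closed form. A minimal sketch, assuming the C-plane is determined by three landmark points (the paper's exact post-processing step may differ):

```python
def plane_from_landmarks(p1, p2, p3):
    """Return the plane through three 3-D landmark points as a unit
    normal n and offset d, with n . x = d for points x on the plane."""
    def sub(a, b):
        return tuple(ai - bi for ai, bi in zip(a, b))

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    n = cross(sub(p2, p1), sub(p3, p1))   # normal of the spanned plane
    norm = sum(c * c for c in n) ** 0.5
    n = tuple(c / norm for c in n)        # normalise to unit length
    d = sum(ni * pi for ni, pi in zip(n, p1))
    return n, d
```

The extracted plane can then be resampled from the ultrasound volume to produce the 2D C-plane image used for clinical measurements.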
Dynamically balanced online random forests for interactive scribble-based segmentation
Interactive scribble-and-learning-based segmentation is attractive for its good performance and reduced amount of user interaction. Scribbles for foreground and background are often imbalanced, and with the arrival of new scribbles the imbalance ratio may change considerably. Failing to deal with imbalanced training data and a changing imbalance ratio may lead to decreased sensitivity and accuracy of the segmentation. We propose a generic Dynamically Balanced Online Random Forest (DyBa ORF) to deal with these problems, combining a dynamically balanced online bagging method with a tree growing and shrinking strategy to update the random forests. We validated DyBa ORF on UCI machine learning data sets and applied it to two different clinical applications: 2D segmentation of the placenta from fetal MRI and of adult lungs from radiographic images. Experiments show that it outperforms a traditional ORF in dealing with imbalanced data with a changing imbalance ratio, while maintaining comparable accuracy and higher efficiency than its offline counterpart. Our results demonstrate that DyBa ORF is more suitable than existing ORFs for learning-based interactive image segmentation
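The dynamically balanced online bagging idea can be sketched by scaling the Poisson replication rate of Oza-style online bagging by the inverse of the current class frequency, so the minority class is oversampled as the imbalance ratio drifts. This is an illustrative sketch, not the DyBa ORF implementation:

```python
import math
import random

def class_lambda(label, counts):
    """Poisson rate for balanced online bagging: inversely proportional
    to the current frequency of the sample's class (rate 1 when the
    classes are perfectly balanced)."""
    total = sum(counts.values())
    return total / (len(counts) * counts[label])

def poisson(lam, rng):
    """Knuth's Poisson sampler: the draw decides how many times a new
    sample is replicated in one tree's online bootstrap."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1
```

With counts `{background: 90, foreground: 10}`, a new foreground scribble pixel gets rate 5 while a background one gets about 0.56, and the rates adapt automatically as new scribbles change the counts.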
Automatic Extraction of Hiatal Dimensions in 3-D Transperineal Pelvic Ultrasound Recordings
The aims of this work were to create a robust automatic software tool for measurement of the levator hiatal area on transperineal ultrasound (TPUS) volumes and to measure the potential reduction in variability and time taken for analysis in a clinical setting. The proposed tool automatically detects the C-plane (i.e., the plane of minimal hiatal dimensions) from a 3-D TPUS volume and subsequently uses the extracted plane to automatically segment the levator hiatus, using a convolutional neural network. The automatic pipeline was tested using 73 representative TPUS volumes. Reference hiatal outlines were obtained manually by two experts and compared with the pipeline's automated outlines. The Hausdorff distance, area, a clinical quality score, C-plane angle and C-plane Euclidean distance were used to evaluate C-plane detection and quantify levator hiatus segmentation accuracy. A visual Turing test was created to compare the performance of the software with that of the expert, based on the visual assessment of C-plane and hiatal segmentation quality. The overall time taken to extract the hiatal area with both measurement methods (i.e., manual and automatic) was measured. Each metric was calculated both for computer–observer differences and for inter- and intra-observer differences. The automatic method gave results similar to those of the expert when determining the hiatal outline from a TPUS volume. Indeed, the hiatal area measured by the algorithm and by an expert were within the intra-observer variability. Similarly, the method identified the C-plane with an accuracy of 5.76 ± 5.06° and 6.46 ± 5.18 mm in comparison to the inter-observer variability of 9.39 ± 6.21° and 8.48 ± 6.62 mm. The visual Turing test suggested that the automatic method identified the C-plane position within the TPUS volume visually as well as the expert.
The average time taken to identify the C-plane and segment the hiatal area manually was 2 min and 35 ± 17 s, compared with 35 ± 4 s for the automatic result. This study presents a method for automatically measuring the levator hiatal area using artificial intelligence-based methodologies whereby the C-plane within a TPUS volume is detected and subsequently traced for the levator hiatal outline. The proposed solution was determined to be accurate, relatively quick, robust and reliable and, importantly, to reduce time and expertise required for pelvic floor disorder assessment
SARS-CoV2 (COVID-19) infection: is fetal surgery in times of national disasters reasonable?
Even though the global COVID-19 pandemic may affect how medical care is delivered in general, most countries try to maintain steady access for women to routine pregnancy care, including fetal anomaly screening. This means that, also during this pandemic, fetal anomalies will be detected, and discussions regarding invasive genetic testing and possibly fetal therapy will need to take place. For patients, concerns about severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) will add to the anxiety caused by the diagnosis of a serious fetal anomaly. Yet the situation also becomes more complex for fetal medicine teams, as they must weigh up the risks and benefits to the fetus as well as the mother, while managing a changing evidence base and logistical challenges in their healthcare system
A spatio-temporal atlas of the developing fetal brain with spina bifida aperta
Background: Spina bifida aperta (SBA) is a birth defect associated with severe anatomical changes in the developing fetal brain. Brain magnetic resonance imaging (MRI) atlases are popular tools for studying neuropathology in the brain anatomy, but previous fetal brain MRI atlases have focused on the normal fetal brain. We aimed to develop a spatio-temporal fetal brain MRI atlas for SBA.
Methods: We developed a semi-automatic computational method to compute the first spatio-temporal fetal brain MRI atlas for SBA. We used 90 MRIs of fetuses with SBA with gestational ages ranging from 21 to 35 weeks. Isotropic and motion-free 3D reconstructed MRIs were obtained for all the examinations. We propose a protocol for the annotation of anatomical landmarks in brain 3D MRI of fetuses with SBA with the aim of making spatial alignment of abnormal fetal brain MRIs more robust. In addition, we propose a weighted generalized Procrustes method based on the anatomical landmarks for the initialization of the atlas. The proposed weighted generalized Procrustes method can handle temporal regularization and missing annotations. After initialization, the atlas is refined iteratively using non-linear image registration based on the image intensity and the anatomical landmarks. A semi-automatic method is used to obtain a parcellation of our fetal brain atlas into eight tissue types: white matter, ventricular system, cerebellum, extra-axial cerebrospinal fluid, cortical gray matter, deep gray matter, brainstem, and corpus callosum.
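One alignment step of a weighted Procrustes initialisation can be sketched with the Kabsch algorithm, in which a zero weight simply drops a missing landmark annotation. This is a simplified sketch of a single pairwise rigid step, not the paper's temporally regularised generalised Procrustes:

```python
import numpy as np

def weighted_procrustes(src, ref, w):
    """Weighted rigid (rotation + translation) alignment of landmark
    array `src` (N x 3) onto `ref` (N x 3).  A zero weight in `w`
    removes a landmark from the fit, which is how missing annotations
    can be tolerated.  Returns (R, t) with ref ~ src @ R.T + t."""
    w = np.asarray(w, float)[:, None]
    mu_s = (w * src).sum(0) / w.sum()      # weighted centroids
    mu_r = (w * ref).sum(0) / w.sum()
    A, B = src - mu_s, ref - mu_r
    H = (w * A).T @ B                      # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                     # proper rotation (Kabsch)
    t = mu_r - R @ mu_s
    return R, t
```

A generalised (group-wise) version iterates such steps against an evolving mean shape, with weights also encoding temporal regularisation across gestational ages.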
Results: An intra-rater variability analysis suggests that the seven anatomical landmarks are sufficiently reliable. We find that the proposed atlas outperforms a normal fetal brain atlas for the automatic segmentation of brain 3D MRI of fetuses with SBA.
Conclusions: We make publicly available a spatio-temporal fetal brain MRI atlas for SBA, available here: https://doi.org/10.7303/syn25887675. This atlas can support future research on automatic segmentation methods for brain 3D MRI of fetuses with SBA
Deep learning-based fetoscopic mosaicking for field-of-view expansion
PURPOSE: Fetoscopic laser photocoagulation is a minimally invasive surgical procedure used to treat twin-to-twin transfusion syndrome (TTTS), which involves localization and ablation of abnormal vascular connections on the placenta to regulate the blood flow in both fetuses. This procedure is particularly challenging due to the limited field of view, poor visibility, occasional bleeding, and poor image quality. Fetoscopic mosaicking can help in creating an image with an expanded field of view, which could assist clinicians during the TTTS procedure. METHODS: We propose a deep learning-based mosaicking framework for diverse fetoscopic videos captured from different settings such as simulation, phantoms, ex vivo, and in vivo environments. The proposed mosaicking framework extends an existing deep image homography model to handle video data by introducing the controlled data generation and consistent homography estimation modules. Training is performed on a small subset of fetoscopic images which are independent of the testing videos. RESULTS: We perform both quantitative and qualitative evaluations on 5 diverse fetoscopic videos (2400 frames) capturing different environments. To demonstrate the robustness of the proposed framework, a comparison is performed with the existing feature-based and deep image homography methods. CONCLUSION: The proposed mosaicking framework outperformed existing methods and generated meaningful mosaics, while reducing the accumulated drift, even in the presence of visual challenges such as specular highlights, reflection, texture paucity, and low video resolution
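In any sequential mosaicking pipeline, per-frame relative homographies are chained into absolute ones that place each frame into the mosaic of the first frame; this chaining is also where drift accumulates. A minimal sketch of the chaining step (not the proposed deep homography model itself):

```python
import numpy as np

def accumulate_homographies(rel_homs):
    """Chain relative homographies H_{i -> i-1} into absolute ones
    H_{i -> 0}.  Each estimation error propagates through the product,
    which is the accumulated drift the abstract refers to."""
    H = np.eye(3)
    absolute = [H.copy()]
    for Hr in rel_homs:
        H = H @ Hr
        H /= H[2, 2]          # keep the homography normalised
        absolute.append(H.copy())
    return absolute
```

Each frame is then warped into the mosaic canvas with its absolute homography; consistency constraints across nearby frames, as in the proposed framework, help keep this product from drifting.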