
    Role of deep learning in infant brain MRI analysis

    Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges, such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely small data sizes, class imbalance, and the lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues, as well as why generative models appear to be a particularly strong contender for addressing them.

    Multi-atlas patch-based approaches for the segmentation and synthesis of magnetic resonance images of brain tumors

    This thesis focuses on the development of automatic methods for the segmentation and synthesis of brain tumor magnetic resonance images. The main clinical motivation for glioma segmentation is monitoring growth velocity for patient therapy management. To this end, the thesis builds on the formalization of multi-atlas patch-based segmentation with probabilistic graphical models. A first probabilistic model extends classical multi-atlas approaches, used for the segmentation of healthy brain structures, to the automatic segmentation of pathological cerebral regions. An approximation of the marginalization step replaces the concept of local search windows with a stratification with respect to both atlases and labels. A glioma detection model based on a spatially varying prior, together with patch pre-selection criteria, yields competitive running times despite patch matching being non-local. This work is validated and compared to state-of-the-art algorithms on publicly available datasets. A second probabilistic model mirrors the segmentation model in order to synthesize realistic MRI of pathological cases from a single label map. A heuristic method solves for the maximum a posteriori estimate and quantifies the uncertainty of the image synthesis model. Iterating the patch matching reinforces the spatial coherence of the synthetic images. The realism of the synthetic images is assessed against real MRI and against the output of the state-of-the-art method. Coupling a tumor growth model with the proposed synthesis approach allows the generation of databases of annotated synthetic cases.
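    As an illustration of the patch-based label fusion that underlies the first model, the sketch below fuses centre-voxel labels of candidate atlas patches with non-local-means-style similarity weights. The function name, the Gaussian weighting with bandwidth `h`, and the input shapes are illustrative assumptions, not the thesis' exact formulation:

```python
import numpy as np

def patch_based_label_fusion(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Fuse centre-voxel labels of atlas patches into one label estimate."""
    # Similarity weight per atlas patch (non-local-means style):
    # small intensity distance -> large weight.
    dists = np.array([np.sum((target_patch - p) ** 2) for p in atlas_patches])
    weights = np.exp(-dists / (h ** 2 * target_patch.size))
    weights /= weights.sum()
    # Probabilistic label estimate: sum the weights per label.
    probs = {int(l): float(weights[atlas_labels == l].sum())
             for l in np.unique(atlas_labels)}
    return max(probs, key=probs.get), probs
```

    In the thesis, the candidate set is stratified by atlas and label rather than gathered from a local search window; the sketch simply takes the candidate patches as given.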

    Brain Tumor Segmentation with Deep Neural Networks

    In this paper, we present a fully automatic brain tumor segmentation method based on Deep Neural Networks (DNNs). The proposed networks are tailored to glioblastomas (both low and high grade) pictured in MR images. By their very nature, these tumors can appear anywhere in the brain and have almost any shape, size, and contrast. These reasons motivate our exploration of a machine learning solution that exploits a flexible, high-capacity DNN while being extremely efficient. Here, we describe the different model choices that we found necessary for obtaining competitive performance. We explore in particular different architectures based on Convolutional Neural Networks (CNNs), i.e. DNNs specifically adapted to image data. We present a novel CNN architecture which differs from those traditionally used in computer vision. Our CNN exploits both local features and more global contextual features simultaneously. Also, unlike most traditional uses of CNNs, our networks use a final layer that is a convolutional implementation of a fully connected layer, which allows a 40-fold speed-up. We also describe a 2-phase training procedure that allows us to tackle difficulties related to the imbalance of tumor labels. Finally, we explore a cascade architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN. Results reported on the 2013 BRATS test dataset reveal that our architecture improves over the currently published state-of-the-art while being over 30 times faster.
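    The speed-up from implementing the fully connected layer convolutionally can be shown in a minimal sketch: a dense layer trained on k x k patches, applied as a sliding kernel, produces one prediction per location in a single pass instead of re-running the network patch by patch. The function name and shapes below are illustrative, not the paper's code:

```python
import numpy as np

def dense_fc_as_conv(image, weights, bias, k):
    """Apply an FC layer trained on k x k patches as a sliding convolution."""
    H, W = image.shape
    n_out = weights.shape[0]  # weights: (n_out, k * k), bias: (n_out,)
    out = np.empty((H - k + 1, W - k + 1, n_out))
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            # The FC output on the patch at (i, j) equals one position of
            # the equivalent convolution; in a real CNN the intermediate
            # feature maps are additionally shared between locations.
            out[i, j] = weights @ image[i:i + k, j:j + k].ravel() + bias
    return out
```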

    Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning

    Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or semi-automatically, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. This automation of distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education. In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the accuracy of the results. After finding a baseline method and enlarging the dataset, we set out to eliminate the most prevalent types of error. To this end, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth of additional pixels in check. Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans, in an end-to-end trained manner, quickly enough to be used in interactive software. Our algorithm has been included in our group's virtual reality medical image visualisation software SpectoVR, with the plan to be used as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
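    The BEM merging step can be sketched as follows. The abstract does not specify the exact fusion rule, so the rule below (trust the binary network on the bone/non-bone decision, and use the multi-class network only to name the bone) is an assumption for illustration:

```python
import numpy as np

def bem_inference(multiclass_logits, binary_logits):
    """Merge a binary bone/non-bone prediction with a multi-class one.

    multiclass_logits: (C, H, W), class 0 = background.
    binary_logits:     (2, H, W), channel 1 = "bone".
    """
    bone_mask = binary_logits.argmax(axis=0) == 1
    labels = multiclass_logits.argmax(axis=0)
    # Where the binary net says "bone" but the multi-class net says
    # background, fall back to the best non-background class.
    fg_labels = multiclass_logits[1:].argmax(axis=0) + 1
    labels = np.where(bone_mask & (labels == 0), fg_labels, labels)
    # Where the binary net says "non-bone", force background.
    labels = np.where(~bone_mask, 0, labels)
    return labels
```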

    Extended Modality Propagation: Image Synthesis of Pathological Cases

    This paper describes a novel generative model for the synthesis of multi-modal medical images of pathological cases based on a single label map. Our model builds upon i) a generative model commonly used for label fusion and multi-atlas patch-based segmentation of healthy anatomical structures, and ii) the Modality Propagation iterative strategy used for a spatially coherent synthesis of subject-specific scans of desired image modalities. The expression Extended Modality Propagation is coined to refer to the extension of Modality Propagation to the synthesis of images of pathological cases. Moreover, image synthesis uncertainty is estimated. An application to Magnetic Resonance Imaging synthesis of glioma-bearing brains is i) validated on the training dataset of a Multimodal Brain Tumor Image Segmentation challenge, ii) compared to the state-of-the-art in glioma image synthesis, and iii) illustrated using the output of two different tumor growth models. Such a generative model allows the generation of a large dataset of synthetic cases, which could prove useful for the training, validation, or benchmarking of image processing algorithms.
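    A minimal, one-dimensional sketch of a Modality Propagation-style iterative synthesis: each voxel is matched to atlas voxels sharing its label, and repeated re-matching against the current local context makes the synthetic intensities spatially coherent. The function name, the simple 3-voxel local-mean context, and the single-candidate selection are illustrative assumptions, not the paper's model:

```python
import numpy as np

def iterative_synthesis(label_map, atlas_labels, atlas_intensities, n_iters=3):
    """Synthesize a 1-D intensity signal from a label map by patch matching."""
    # Initialise each voxel with the mean atlas intensity of its label.
    synth = np.array([atlas_intensities[atlas_labels == l].mean()
                      for l in label_map])
    for _ in range(n_iters):
        # Local context of the current synthesis (3-voxel mean).
        padded = np.pad(synth, 1, mode="edge")
        local = (padded[:-2] + padded[1:-1] + padded[2:]) / 3
        # Re-match each voxel to the same-label atlas voxel whose
        # intensity best fits the local context; iterating this step
        # propagates coherent intensities across neighbours.
        for i, l in enumerate(label_map):
            cand = atlas_intensities[atlas_labels == l]
            synth[i] = cand[np.argmin(np.abs(cand - local[i]))]
    return synth
```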

    Patch individual filter layers in CNNs to harness the spatial homogeneity of neuroimaging data

    Convolutional neural networks (CNNs), as a type of deep learning, have been specifically designed for highly heterogeneous data, such as natural images. Neuroimaging data, however, is comparably homogeneous due to (1) the uniform structure of the brain and (2) additional efforts to spatially normalize the data to a standard template using linear and non-linear transformations. To harness the spatial homogeneity of neuroimaging data, we suggest here a new CNN architecture that combines the idea of hierarchical abstraction in CNNs with a prior on the spatial homogeneity of neuroimaging data. Whereas early layers are trained globally using standard convolutional layers, we introduce patch individual filters (PIF) for higher, more abstract layers. By learning filters in individual latent space patches without sharing weights, PIF layers can learn abstract features faster and specific to regions. We thoroughly evaluated PIF layers on three different tasks and data sets, namely sex classification on UK Biobank data, Alzheimer's disease detection on ADNI data, and multiple sclerosis detection on private hospital data, and compared them with two baseline models, a standard CNN and a patch-based CNN. We obtained two main results. First, CNNs using PIF layers converge consistently faster than both baseline models, measured both in run time (seconds) and in number of iterations. Second, both the standard CNN and the PIF model outperformed the patch-based CNN in terms of balanced accuracy and receiver operating characteristic area under the curve (ROC AUC), with a maximal balanced accuracy (ROC AUC) of 94.21% (99.10%) for the sex classification task (PIF model), and 81.24% and 80.48% (88.89% and 87.35%) respectively for the Alzheimer's disease and multiple sclerosis detection tasks (standard CNN model). In conclusion, we demonstrated that CNNs using PIF layers converge faster while obtaining the same predictive performance as a standard CNN. To the best of our knowledge, this is the first study that introduces a prior, in the form of an inductive bias, to harness the spatial homogeneity of neuroimaging data.
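    The core idea of a PIF layer, learning an individual filter per spatial patch without weight sharing, can be sketched as follows. The grid partition, single-channel input, and plain 'valid' cross-correlation are illustrative simplifications of the proposed layer:

```python
import numpy as np

def pif_layer(feature_map, patch_filters, patch_size):
    """Convolve each spatial patch with its own, unshared filter."""
    H, W = feature_map.shape
    p = patch_size
    out = np.zeros_like(feature_map, dtype=float)
    for bi in range(H // p):
        for bj in range(W // p):
            patch = feature_map[bi * p:(bi + 1) * p, bj * p:(bj + 1) * p]
            k = patch_filters[bi][bj]  # region-specific kernel
            kh, kw = k.shape
            # 'valid' sliding window restricted to this patch only, so
            # no weights are shared across patches.
            for i in range(p - kh + 1):
                for j in range(p - kw + 1):
                    out[bi * p + i, bj * p + j] = np.sum(
                        patch[i:i + kh, j:j + kw] * k)
    return out
```

    In the actual architecture such a layer sits on latent feature maps after standard, globally trained convolutional layers; here it is applied to a raw 2-D array purely for illustration.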