634 research outputs found

    A Generative Shape Compositional Framework to Synthesise Populations of Virtual Chimaeras

    Generating virtual populations of anatomy that capture sufficient variability while remaining plausible is essential for conducting in-silico trials of medical devices. However, not all anatomical shapes of interest are available for every individual in a population, so anatomical information is often missing or only partially overlapping across individuals. We introduce a generative shape model for complex anatomical structures that is learnable from unpaired datasets. The proposed model can synthesise complete multi-part shape assemblies, coined virtual chimaeras by analogy with natural human chimaeras. We applied this framework to build virtual chimaeras from databases of whole-heart shape assemblies, each of which contributes samples for individual heart substructures. Specifically, the proposed generative shape compositional framework comprises two components: a part-aware generative shape model, which captures the variability in shape observed for each structure of interest in the training population; and a spatial composition network, which assembles the structures synthesised by the former into multi-part shape assemblies (viz. virtual chimaeras). We also propose a novel self-supervised learning scheme that enables the spatial composition network to be trained with partially overlapping data and weak labels. We trained and validated our approach using shapes of cardiac structures derived from cardiac magnetic resonance images available in the UK Biobank. Our approach significantly outperforms a PCA-based shape model (trained with complete data) in terms of generalisability and specificity, demonstrating that the synthesised cardiac virtual populations are more plausible and capture a greater degree of shape variability than those generated by the PCA-based shape model.
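The generalisability and specificity metrics used here to compare shape models can be illustrated with a small sketch. This is a toy example on synthetic point-set shapes, not the authors' code; the landmark count, number of retained modes, and noise level are arbitrary assumptions:

```python
# Toy evaluation of a PCA-based statistical shape model using the two
# metrics named in the abstract: generalisability (reconstruction error on
# held-out shapes) and specificity (distance from model samples to the
# training set). All sizes and scales here are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: 40 "shapes", each 10 landmarks in 2D, flattened.
base = rng.normal(size=20)
shapes = base + 0.1 * rng.normal(size=(40, 20))
train, test = shapes[:30], shapes[30:]

# Fit PCA on the training shapes.
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
k = 5                                 # retained shape modes
components = Vt[:k]
stdevs = S[:k] / np.sqrt(len(train) - 1)

def reconstruct(x):
    # Project onto the retained modes and back.
    b = components @ (x - mean)
    return mean + components.T @ b

# Generalisability: mean reconstruction error on unseen shapes (lower = better).
generalisability = np.mean([np.linalg.norm(x - reconstruct(x)) for x in test])

# Specificity: sample synthetic shapes from the model and measure the distance
# to the closest real training shape (lower = more plausible samples).
samples = mean + (rng.normal(size=(100, k)) * stdevs) @ components
specificity = np.mean([np.min(np.linalg.norm(train - s, axis=1)) for s in samples])

print(generalisability, specificity)
```

Both metrics are averages of Euclidean distances in landmark space; a model that is both general and specific drives both numbers down simultaneously.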

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed. Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201

    Cortical enhanced tissue segmentation of neonatal brain MR images acquired by a dedicated phased array coil

    The acquisition of high-quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas-based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter to generate a cortical GM prior. This prior is then combined with our neonatal population atlas to form a cortical enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population-atlas-based segmentation methods. Results show that the proposed method segments the neonatal brain with the highest accuracy of the three methods.
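The Hessian-based enhancement of sheet-like structures described above can be sketched as follows. This is a generic illustration on a synthetic volume, not the paper's implementation; the sheetness measure and the smoothing scales are simplified assumptions:

```python
# Enhancing sheet-like structures via the eigenvalues of the image Hessian.
# A bright sheet has one strongly negative eigenvalue (across the sheet) and
# two near-zero eigenvalues (within the sheet); the measure below exploits that.
import numpy as np
from scipy import ndimage

# Toy volume containing a bright sheet (one slab of high intensity).
vol = np.zeros((32, 32, 32))
vol[15:17, :, :] = 1.0
vol = ndimage.gaussian_filter(vol, sigma=1.5)

def hessian_eigvals(img, sigma=1.0):
    # Hessian components from finite differences of a smoothed image.
    grads = np.gradient(ndimage.gaussian_filter(img, sigma))
    H = np.empty(img.shape + (3, 3))
    for i in range(3):
        second = np.gradient(grads[i])
        for j in range(3):
            H[..., i, j] = second[j]
    return np.linalg.eigvalsh(H)  # eigenvalues sorted ascending per voxel

ev = hessian_eigvals(vol)
# Simple sheetness: large |most negative eigenvalue|, small middle eigenvalue.
sheetness = np.clip(np.abs(ev[..., 0]) - np.abs(ev[..., 1]), 0, None)

# The response should peak on the synthetic sheet (rows 15-16).
peak = np.unravel_index(np.argmax(sheetness), sheetness.shape)
print(peak)
```

In the paper's setting the enhanced response serves as a spatial prior for cortical grey matter rather than as a segmentation by itself.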

    An End-to-End Deep Learning Generative Framework for Refinable Shape Matching and Generation

    Generative modelling for shapes is a prerequisite for In-Silico Clinical Trials (ISCTs), which aim to cost-effectively validate medical device interventions using synthetic anatomical shapes, often represented as 3D surface meshes. However, constructing AI models that generate shapes closely resembling the real mesh samples is challenging due to variable vertex counts and connectivity, and the lack of dense vertex-wise correspondences across the training data. Employing graph representations for meshes, we develop a novel unsupervised geometric deep-learning model to establish refinable shape correspondences in a latent space, construct a population-derived atlas and generate realistic synthetic shapes. We additionally extend our proposed base model to a joint shape generative-clustering multi-atlas framework to incorporate further variability and preserve more detail in the generated shapes. Experimental results using liver and left-ventricular models demonstrate the approach's applicability to computational medicine, highlighting its suitability for ISCTs through a comparative analysis.
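The graph representation of a surface mesh that this framework builds on can be sketched as vertices plus edges derived from faces. This toy example uses a hypothetical two-triangle mesh; the adjacency-matrix form is one common input format for graph neural networks, not necessarily the one used in the paper:

```python
# Turning a triangle mesh into a graph: vertices become nodes, and each
# face contributes its three edges. Toy two-triangle "mesh" for illustration.
import numpy as np

vertices = np.array([[0., 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
faces = np.array([[0, 1, 2], [0, 2, 3]])  # a square split into two triangles

# Undirected edge set: deduplicate by storing each edge as (min, max).
edges = set()
for a, b, c in faces:
    for u, v in ((a, b), (b, c), (c, a)):
        edges.add((min(u, v), max(u, v)))

# Dense adjacency matrix (sparse formats are typical for larger meshes).
n = len(vertices)
adj = np.zeros((n, n), dtype=int)
for u, v in edges:
    adj[u, v] = adj[v, u] = 1

print(sorted(edges))
```

Because the graph is defined purely by connectivity, meshes with different vertex counts map naturally to graphs of different sizes, which is what makes graph-based models attractive when dense vertex-wise correspondence is unavailable.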

    Dense deformation field estimation for atlas registration using the active contour framework

    A key research area in computer vision is image segmentation, which aims at extracting objects of interest from images or video sequences. These objects carry the information relevant to a given application. For example, a video-surveillance application generally requires extracting moving objects (vehicles, persons or animals) from a sequence of images in order to check that their paths conform to the rules set for the observed scene. Image segmentation is not an easy task: in many applications the contours of the objects of interest are difficult to delineate, even manually. The difficulties are often due to low contrast, fuzzy contours or intensities too similar to those of adjacent objects. In some cases the objects to be extracted have no real contours in the image at all; such objects are called virtual objects. Virtual objects appear especially in medical applications, where experts usually estimate their position from the surrounding objects. The segmentation problem can be greatly simplified by information known in advance about the objects to be extracted (prior knowledge). A widely used approach is to extract the needed prior knowledge from a reference image, often called an atlas. The atlas describes the image to be segmented in the way a map describes the components of a geographical area. An atlas can contain three types of information on each object in the image: an estimate of its position, a description of its shape and texture, and the features of its adjacent objects. Atlas-based segmentation is used when the atlas can characterise a whole range of images; it is thus especially suited to medical images, given the consistency between anatomical structures of the same type. There are two types of atlas: the deterministic atlas and the statistical atlas.
A deterministic atlas is an image that has been selected, or computed, to be the most representative of the category of images to be segmented; this image is called the intensity atlas. The contours of the objects of interest (the objects to be extracted in images of the same type) are traced on the intensity atlas manually or with a semi-automatic method, and a label is usually assigned to each object to differentiate them, yielding a labelled version of the atlas called the labelled atlas. A statistical atlas is created from a database of images so as to be the most representative of a certain type of image to be segmented; in this atlas, the positions and features of the objects of interest are derived from statistical measures. In this thesis we focus on the use of deterministic atlases for image segmentation. Segmenting with a deterministic atlas consists in deforming the objects delineated in the atlas so that they better align with the corresponding objects in the image to be segmented. We distinguish two types of approach in the literature. The first reduces the segmentation problem to an image registration problem: a dense deformation field that registers (i.e. puts into point-to-point spatial correspondence) the atlas with the image to be segmented is explicitly computed, and this transformation is then used to project the labels assigned to each atlas structure onto the image to be segmented. The advantage of this approach is that a deformation field computed from the registration of visible contours makes it easy to estimate the position of virtual objects or objects with fuzzy contours. However, the methods currently used for atlas registration are often based on the intensity atlas alone.
That means they do not exploit the object-based information that could be obtained by combining the intensity atlas with its labelled version. In the second approach, the atlas contours selected by the labelled atlas are deformed directly, without an explicit geometric transformation. This approach relies on contour-matching techniques, generally called deformable models. In this thesis we are interested in a particular type of deformable model: active contour segmentation models. The advantage of active contours is that this segmentation technique is designed to exploit the image information directly linked to the object to be delineated; by using object-based information, active contour models are frequently able to extract regions where atlas-based segmentation by registration fails. On the other hand, this local segmentation method is very sensitive to the initial position of the atlas contours: the closer they are to the contours to be detected, the more robust the active-contour-based segmentation will be. Besides, this technique needs prior shape models to be able to estimate the position of virtual objects. The main objective of this thesis is to design an atlas-based segmentation algorithm that combines the advantages of the dense deformation field computed by registration algorithms with local segmentation constraints coming from the active contour framework. This requires a model in which registration and segmentation by active contours are performed jointly. The atlas registration algorithm we propose is based on a formulation that allows any segmentation or contour-regularisation force derived from active contour theory to be integrated into a non-parametric registration process.
Our algorithm led us to introduce the concept of hierarchical atlas registration, whose principle is that the registration of the main image objects guides the registration of dependent objects. This brings the atlas contours progressively closer to their targets and thus limits the risk of getting stuck in a local minimum. Our model is designed to be easily adaptable to various types of segmentation problem. At the end of the thesis we present several applications of atlas registration in medical imaging: the integration of manual constraints into an atlas registration process, the modelling of tumour growth in the atlas, the labelling of the thalamus for a statistical study of neuronal connections, the localisation of the subthalamic nucleus (STN) for deep brain stimulation (DBS), and the compensation of intra-operative brain shift for neuronavigation systems.
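The first approach described above, projecting atlas labels through an explicitly computed dense deformation field, can be sketched with a toy example. The deformation here is a simple known translation rather than one estimated by registration:

```python
# Propagating atlas labels to a target image through a dense deformation
# field. Nearest-neighbour interpolation keeps the labels discrete.
import numpy as np
from scipy import ndimage

# Toy 2D atlas: a labelled square in a 20x20 grid.
atlas_labels = np.zeros((20, 20), dtype=int)
atlas_labels[5:10, 5:10] = 1

# Dense deformation field expressed as a backward mapping: for each target
# pixel, the atlas coordinate it samples. Here, a pure (+3, +2) translation.
yy, xx = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
field_y, field_x = yy - 3.0, xx - 2.0

# Warp the labels onto the target grid.
warped = ndimage.map_coordinates(atlas_labels.astype(float),
                                 [field_y, field_x], order=0, mode="constant")
warped = warped.astype(int)

print(np.argwhere(warped == 1).min(axis=0))  # prints [8 7]
```

In a real pipeline the field would come from a non-parametric registration of the intensity atlas to the target image; the label projection step itself is exactly this sampling operation.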

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for the understanding of brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms are developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not effectively done by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated in a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm is developed: the method first inflates extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment.
Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
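The expectation-maximization classification that the cortical segmentation builds on can be sketched in its plainest form. This is a generic two-class Gaussian mixture on toy intensities, without the explicit partial volume correction the dissertation adds:

```python
# Plain EM for a two-class Gaussian mixture over "voxel intensities".
# Tissue segmentation methods of the kind described above extend this with
# spatial priors and partial-volume handling; this shows only the core loop.
import numpy as np

rng = np.random.default_rng(1)
# Toy intensities: two tissue classes centred at 30 and 70.
x = np.concatenate([rng.normal(30, 5, 500), rng.normal(70, 5, 500)])

mu = np.array([20.0, 80.0])       # deliberately poor initial means
sigma = np.array([10.0, 10.0])
w = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior class responsibilities per voxel.
    pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = w * pdf
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate class parameters from the responsibilities.
    n = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    w = n / len(x)

labels = resp.argmax(axis=1)       # hard segmentation from soft posteriors
print(mu)                          # means converge near 30 and 70
```

The partial volume problem mentioned in the abstract arises because voxels mixing two tissues fall between the class means; a pure mixture model like this one assigns them confidently but wrongly, which is what the explicit correction addresses.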

    Automating the multimodal analysis of musculoskeletal imaging in the presence of hip implants

    In patients treated with hip arthroplasty, the muscular condition and the presence of inflammatory reactions are assessed using magnetic resonance imaging (MRI). As MRI lacks contrast for bony structures, computed tomography (CT) is preferred for the clinical evaluation of bone tissue and for orthopaedic surgical planning. Combining the complementary information of MRI and CT could improve current clinical practice for diagnosis, monitoring and treatment planning. In particular, the different contrast of these modalities could help better quantify the presence of fatty infiltration to characterise muscular condition after hip replacement. In this thesis, I developed automated processing tools for the joint analysis of CT and MR images of patients with hip implants. In order to combine the multimodal information, a novel nonlinear registration algorithm was introduced, which imposes rigidity constraints on bony structures to ensure realistic deformation. I implemented and thoroughly validated a fully automated framework for the multimodal segmentation of healthy and pathological musculoskeletal structures, as well as implants. This framework combines the proposed registration algorithm with tailored image quality enhancement techniques and a multi-atlas-based segmentation approach, providing robustness against the large anatomical variability of the population and the presence of noise and artefacts in the images. The automation of muscle segmentation enabled the derivation of a measure of fatty infiltration, the Intramuscular Fat Fraction, useful to characterise the presence of muscle atrophy. The proposed imaging biomarker was shown to correlate strongly with the atrophy radiological score currently used in clinical practice. Finally, preliminary work on multimodal metal artefact reduction, using an unsupervised deep learning strategy, showed promise for improving the postprocessing of CT and MR images heavily corrupted by metal artefacts.
This work represents a step forward towards the automation of image analysis in hip arthroplasty, supporting and quantitatively informing the decision-making process in patient management.
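The Intramuscular Fat Fraction can be sketched as a fat-over-total signal ratio within a segmented muscle mask. The data below are synthetic and the computation is a plausible simplification of such a biomarker, not the thesis pipeline:

```python
# Fat fraction within a muscle mask, as one would compute from Dixon-type
# fat and water images. All images and the mask here are synthetic.
import numpy as np

rng = np.random.default_rng(2)
shape = (16, 16)
water = rng.uniform(80, 100, shape)    # toy water image
fat = rng.uniform(5, 15, shape)        # toy fat image: ~10% fat on average
muscle_mask = np.zeros(shape, dtype=bool)
muscle_mask[4:12, 4:12] = True         # hypothetical muscle segmentation

fat_fraction = fat / (fat + water)     # voxel-wise fat fraction
imff = float(fat_fraction[muscle_mask].mean())
print(round(imff, 3))
```

Automating the mask (the multi-atlas segmentation step above) is what makes such a per-muscle measure reproducible enough to correlate against radiological atrophy scores.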