11 research outputs found

    Groupwise Non-Rigid Registration with Deep Learning: An Affordable Solution Applied to 2D Cardiac Cine MRI Reconstruction

    Groupwise (GW) image registration is customarily used for subsequent processing in medical imaging. However, it is computationally expensive due to the repeated calculation of transformations and gradients. In this paper, we propose a deep learning (DL) architecture that achieves GW elastic registration of a 2D dynamic sequence on an affordable average GPU. Our solution, referred to as dGW, is a simplified version of the well-known U-net. In our GW solution, the image that the other images are registered to, referred to as the template image, is obtained iteratively together with the registered images. Design and evaluation have been carried out using 2D cine cardiac MR slices from two databases consisting of 89 and 41 subjects, respectively. The first database was used for training and validation with a 66.6–33.3% split. The second was used for validation (50%) and testing (50%). Additional network hyperparameters, essentially those that control the degree of transformation smoothness, are obtained by means of a forward selection procedure. Our results show a 9-fold runtime reduction with respect to an optimization-based implementation; in addition, using the well-known structural similarity (SSIM) index, we have found significant differences between dGW and an alternative DL solution based on VoxelMorph.
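    As a toy illustration of the iterative template idea described in this abstract, groupwise alignment can alternate between registering each sequence to the current template and re-estimating the template as the mean of the registered sequences. The sketch below is a minimal, hypothetical plain-Python version on 1D signals with integer shifts; it is not the dGW implementation, and all names are invented for illustration.

```python
import statistics

def shift(signal, s):
    """Circularly shift a 1D signal by s samples (toy stand-in for warping)."""
    return signal[-s:] + signal[:-s] if s else list(signal)

def best_shift(signal, template, max_shift=3):
    """Integer shift minimising sum-of-squared differences to the template."""
    def ssd(s):
        moved = shift(signal, s)
        return sum((a - b) ** 2 for a, b in zip(moved, template))
    return min(range(-max_shift, max_shift + 1), key=ssd)

def groupwise_register(signals, n_iters=5):
    """Toy groupwise registration: the template is re-estimated as the
    mean of the registered signals at every iteration, so template and
    registered signals are obtained together."""
    template = [statistics.mean(col) for col in zip(*signals)]
    registered = [list(s) for s in signals]
    for _ in range(n_iters):
        registered = [shift(s, best_shift(s, template)) for s in signals]
        template = [statistics.mean(col) for col in zip(*registered)]
    return registered, template
```

In the real setting the shift search is replaced by an elastic transformation predicted by the network, but the alternation between registration and template update is the same idea.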

    DeepReg: a deep learning toolkit for medical image registration

    DeepReg (https://github.com/DeepRegNet/DeepReg) is a community-supported open-source toolkit for research and education in medical image registration using deep learning. Comment: Accepted in The Journal of Open Source Software (JOSS).

    When Deep Learning Meets Data Alignment: A Review on Deep Registration Networks (DRNs)

    Registration is the process of computing the transformation that aligns sets of data. A registration process can commonly be divided into four main steps: target selection, feature extraction, feature matching, and transform computation for the alignment. The accuracy of the result depends on multiple factors, the most significant being the quantity of input data; the presence of noise, outliers and occlusions; the quality of the extracted features; real-time requirements; and the type of transformation, especially transformations defined by many parameters, such as non-rigid deformations. Recent advances in machine learning could be a turning point on these issues, particularly the development of deep learning (DL) techniques, which are helping to improve multiple computer vision problems through an abstract understanding of the input data. In this paper, a review of deep learning-based registration methods is presented. We classify the reviewed papers using a framework derived from the traditional registration pipeline in order to analyse the strengths of the new learning-based proposals. Deep Registration Networks (DRNs) try to solve the alignment task either by replacing part of the traditional pipeline with a network or by solving the registration problem end to end. The main conclusions are: 1) learning-based registration techniques cannot always be clearly mapped onto the traditional pipeline; 2) these approaches allow more complex inputs, such as conceptual models, as well as traditional 3D datasets; 3) in spite of the generality of learning, the current proposals are still ad hoc solutions; and 4) this is a young topic that still requires a large effort to reach general solutions able to cope with the problems that affect traditional approaches. Comment: Submitted to Pattern Recognition.
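    The four-step pipeline named in this abstract can be illustrated on its simplest instance: rigid 2D alignment of point sets whose correspondences are already matched, where the final "transform computation" step has a closed form. This is a generic textbook sketch in plain Python, not an algorithm from the reviewed papers; function names are hypothetical.

```python
import math

def rigid_align(src, dst):
    """Closed-form 2D rigid alignment (rotation + translation) of matched
    point sets: the 'transform computation' step of the pipeline.
    Assumes feature extraction and matching already produced src[i] <-> dst[i]."""
    n = len(src)
    # Centroids of both point sets.
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Cross-covariance terms of the centred points give the optimal angle.
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= csx; sy -= csy; dx -= cdx; dy -= cdy
        a += sx * dx + sy * dy   # cosine component
        b += sx * dy - sy * dx   # sine component
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)

def apply_rigid(theta, t, pts):
    """Apply the recovered rotation and translation to a point list."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + t[0], s * x + c * y + t[1]) for x, y in pts]
```

Non-rigid deformations replace this closed form with a high-dimensional optimisation, which is exactly where the learning-based methods surveyed in the paper come in.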

    Machine learning approaches to model cardiac shape in large-scale imaging studies

    Recent improvements in non-invasive imaging, together with the introduction of fully-automated segmentation algorithms and big data analytics, have paved the way for large-scale population-based imaging studies. These studies promise to increase our understanding of a large number of medical conditions, including cardiovascular diseases. However, analysis of cardiac shape in such studies is often limited to simple morphometric indices, ignoring a large part of the information available in medical images. Discovery of new biomarkers by machine learning has recently gained traction, but often lacks interpretability. The research presented in this thesis aimed at developing novel explainable machine learning and computational methods capable of better summarizing shape variability, in order to better inform association and predictive clinical models in large-scale imaging studies. A powerful and flexible framework to model the relationship between three-dimensional (3D) cardiac atlases, encoding multiple phenotypic traits, and genetic variables is first presented. The proposed approach enables the detection of regional phenotype-genotype associations that would otherwise be neglected by conventional association analysis. Three learning-based systems based on deep generative models are then proposed. In the first model, I propose a classifier of cardiac shapes that exploits task-specific generative shape features and is designed to enable the visualisation, in 3D, of the anatomical effect these features encode, making the classification task transparent. The second approach models a database of anatomical shapes via a hierarchy of conditional latent variables and is capable of detecting, quantifying and visualising onto a template shape the most discriminative anatomical features that characterize distinct clinical conditions. Finally, a preliminary analysis of a deep learning system capable of reconstructing 3D high-resolution cardiac segmentations from a sparse set of 2D view segmentations is reported. This thesis demonstrates that machine learning approaches can facilitate high-throughput analysis of normal and pathological anatomy and of its determinants without losing clinical interpretability.

    A Survey on Deep Learning in Medical Image Registration: New Technologies, Uncertainty, Evaluation Metrics, and Beyond

    Over the past decade, deep learning technologies have greatly advanced the field of medical image registration. The initial developments, such as ResNet-based and U-Net-based networks, laid the groundwork for deep learning-driven image registration. Subsequent progress has been made in various aspects of deep learning-based registration, including similarity measures, deformation regularizations, and uncertainty estimation. These advancements have not only enriched the field of deformable image registration but have also facilitated its application in a wide range of tasks, including atlas construction, multi-atlas segmentation, motion estimation, and 2D-3D registration. In this paper, we present a comprehensive overview of the most recent advancements in deep learning-based image registration. We begin with a concise introduction to the core concepts of deep learning-based image registration. Then, we delve into innovative network architectures, loss functions specific to registration, and methods for estimating registration uncertainty. Additionally, this paper explores appropriate evaluation metrics for assessing the performance of deep learning models in registration tasks. Finally, we highlight the practical applications of these novel techniques in medical imaging and discuss the future prospects of deep learning-based image registration.
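    Among the deformation regularizations this survey covers, the common diffusion-style smoothness penalty can be sketched as a sum of squared finite differences of the displacement field. The snippet below is a minimal plain-Python illustration of that idea, not the loss of any specific method; in practice it would be one term of a registration loss alongside a similarity measure.

```python
def smoothness_penalty(u):
    """Diffusion-style regulariser for a 2D displacement field:
    sum of squared forward differences of each displacement component.
    `u` is a list of rows, each cell an (ux, uy) tuple."""
    penalty = 0.0
    rows, cols = len(u), len(u[0])
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (0, 1)):  # vertical and horizontal neighbours
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    penalty += (u[ni][nj][0] - u[i][j][0]) ** 2
                    penalty += (u[ni][nj][1] - u[i][j][1]) ** 2
    return penalty
```

A constant field costs nothing, while sharp jumps in the field are penalised quadratically, which is what pushes deformable registrations toward spatially smooth warps.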

    3D Morphable Face Models – Past, Present and Future

    In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research and highlighting the broad range of current and future applications.
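    At the core of the linear morphable models surveyed here, a new face shape is the mean shape plus a weighted sum of learned basis deformations. The sketch below illustrates only that linear combination, with hypothetical names and flat coordinate lists standing in for real registered meshes.

```python
def synthesize(mean_shape, basis, coeffs):
    """Linear morphable model: new_shape = mean + sum_i coeffs[i] * basis[i].
    Shapes are flat lists of vertex coordinates (x1, y1, z1, x2, ...)."""
    shape = list(mean_shape)
    for alpha, component in zip(coeffs, basis):
        for k, v in enumerate(component):
            shape[k] += alpha * v
    return shape
```

Fitting a morphable model to an image amounts to searching for the coefficients (plus pose and illumination parameters) whose synthesized shape best explains the observation, which is the image-analysis challenge the survey discusses.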

    Robust Medical Image Registration and Deep Learning-Based Motion Modelling

    This thesis presents new computational tools for quantifying deformations and motion of anatomical structures from medical images, as required by a large variety of clinical applications. Generic deformable registration tools are presented that enable deformation analysis useful for improving diagnosis, prognosis and therapy guidance. These tools were built by combining state-of-the-art medical image analysis methods with cutting-edge machine learning methods. First, we focus on difficult inter-subject registration problems. By learning from given deformation examples, we propose a novel agent-based optimization scheme inspired by deep reinforcement learning, in which a statistical deformation model is explored in a trial-and-error fashion, showing improved registration accuracy. Second, we develop a diffeomorphic deformation model that allows for accurate multiscale registration and deformation analysis by learning a low-dimensional representation of intra-subject deformations. This unsupervised method uses a latent variable model in the form of a conditional variational autoencoder (CVAE) to learn a probabilistic deformation encoding that is useful for the simulation, classification and comparison of deformations. Third, we propose a probabilistic motion model derived from image sequences of moving organs. This generative model embeds motion in a structured latent space, the motion matrix, which enables the consistent tracking of structures and various analysis tasks. For instance, it leads to the simulation and interpolation of realistic motion patterns, allowing for faster data acquisition and data augmentation. Finally, we demonstrate the importance of the developed tools in a clinical application in which the motion model is used for disease prognosis and therapy planning.
    It is shown that the survival risk for heart failure patients can be predicted from the discriminative motion matrix with higher accuracy than with classical image-derived risk factors.
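    Motion interpolation in a structured latent space, as described in this abstract, reduces to interpolating between latent codes and decoding each intermediate code. The toy sketch below shows only the interpolation step, with hypothetical names; the learned decoder that turns codes into deformations is omitted.

```python
def interpolate_latents(z_start, z_end, n_frames):
    """Linearly interpolate between two latent codes. Decoding each
    intermediate code would yield the in-between motion frames."""
    frames = []
    for k in range(n_frames):
        t = k / (n_frames - 1)
        frames.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return frames
```

Because the latent space is low-dimensional and structured, a handful of acquired frames can be densified this way, which is what makes faster data acquisition and data augmentation possible.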

    Towards a framework for multi-class statistical modelling of shape, intensity, and kinematics in medical images

    Statistical modelling has become a ubiquitous tool for analysing morphological variation of bone structures in medical images. For radiological images, the shape, the relative pose between bone structures, and the intensity distribution are key features that are often modelled separately. A wide range of research has reported methods that incorporate these features as priors for machine learning purposes. Statistical shape, appearance (intensity profile in images) and pose models are popular priors for explaining variability across a sample population of rigid structures. However, a principled and robust way to combine shape, pose and intensity features has been elusive for four main reasons: 1) heterogeneity of the data (data with linear and non-linear natural variation across features); 2) sub-optimal representation of three-dimensional Euclidean motion; 3) artificial discretization of the models; and 4) lack of an efficient transfer learning process to project observations into the latent space. This work proposes a novel statistical modelling framework for multiple bone structures. The framework provides a latent space embedding shape, pose and intensity in a continuous domain, allowing for new approaches to skeletal joint analysis from medical images. First, a robust registration method for multi-volumetric shapes is described. Both sampling-based and parametric registration algorithms are proposed, which allow the establishment of dense correspondence across volumetric shapes (such as tetrahedral meshes) while preserving the spatial relationship between them. Next, a framework is presented for developing statistical shape-kinematics models from in-correspondence multi-volumetric shapes that embed the image intensity distribution. The framework incorporates principal geodesic analysis and a non-linear metric for modelling the spatial orientation of the structures. More importantly, because all features lie in a joint statistical space and a continuous domain, the framework permits on-demand marginalisation to a region or feature of interest without training separate models. Thereafter, automated prediction of the structures in images is facilitated by a model-fitting method that leverages the models as priors in a Markov chain Monte Carlo approach. The framework is validated using controlled experimental data, and the results demonstrate superior performance in comparison with state-of-the-art methods. Finally, the application of the framework to analysing computed tomography images is presented. The analyses include estimation of shape, kinematic and intensity profiles of bone structures in the shoulder and hip joints. For both datasets, the framework is demonstrated for segmentation, registration and reconstruction, including the recovery of patient-specific intensity profiles. The presented framework realises a new paradigm in modelling multi-object shape structures, allowing for probabilistic modelling not only of shape, but also of relative pose and intensity, as well as the correlations that exist between them. Future work will aim to optimise the framework for clinical use in medical image analysis.
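    The fitting-by-sampling idea in this abstract, using the statistical models as priors within Markov chain Monte Carlo, can be sketched with a toy Metropolis sampler over a single scalar parameter. This is a generic, hypothetical one-dimensional illustration; the thesis fits full shape, pose and intensity models rather than a lone parameter.

```python
import math
import random

def metropolis_fit(log_posterior, init, n_steps=2000, step=0.5, seed=0):
    """Toy Metropolis sampler: propose a perturbed parameter, score it
    under the (unnormalised) log posterior, and accept or reject."""
    rng = random.Random(seed)
    x, lp = init, log_posterior(init)
    samples = []
    for _ in range(n_steps):
        cand = x + rng.gauss(0.0, step)     # random-walk proposal
        lp_cand = log_posterior(cand)
        # Accept with probability min(1, exp(lp_cand - lp)).
        if math.log(rng.random()) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples
```

In a model-fitting setting, `log_posterior` would combine an image-likelihood term with the statistical model acting as the prior, and the accepted samples would characterise the posterior over plausible anatomies rather than a single best fit.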