362 research outputs found

    The role of the image phase in cardiac strain imaging

    This paper reviews our most recent contributions in the field of cardiac deformation imaging, including a motion estimation framework based on the conservation of the image phase over time and an open pipeline to benchmark algorithms for cardiac strain imaging in 2D and 3D ultrasound. The paper also presents an original evaluation of the proposed motion estimation technique based on the new benchmarking pipeline.

    Cardiac motion assessment from echocardiographic image sequences by means of the structure multivector

    We recently contributed an algorithm for the estimation of cardiac deformation from echocardiographic image sequences based on the monogenic signal. By exploiting the phase information instead of the pixel intensity, the algorithm was robust to the temporal contrast variations normally encountered in cardiac ultrasound. In this paper we propose an improvement of that framework that makes use of an extension of the monogenic signal formalism, called the structure multivector. The structure multivector models the image as the superposition of two perpendicular waves with associated amplitude, phase and orientation. Such a model is well adapted to describing the granular pattern of the characteristic speckle noise. The displacement is computed by solving the optical flow equation jointly for the two image phases. A local affine model accounts for typical cardiac motions such as contraction/expansion and shearing; a coarse-to-fine B-spline scheme allows for a robust and effective computation of the model parameters, and a pyramidal refinement scheme helps deal with large motions. Performance was evaluated on realistic simulated cardiac ultrasound sequences and compared to our previous monogenic-based algorithm and classical speckle tracking, using the endpoint error as the accuracy metric. Relative to these two methods, we achieved error reductions of 13% and 30%, respectively.
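The joint phase-based optical-flow step described above admits a compact least-squares formulation. The sketch below (generic Python/NumPy, not the authors' code; the function name and the use of a flat window are assumptions) fits a local affine displacement model u = a0 + a1*x + a2*y, v = b0 + b1*x + b2*y to the linearized constancy constraint fx*u + fy*v + ft = 0 over a local window:

```python
import numpy as np

def affine_flow(fx, fy, ft, x, y):
    """Least-squares fit of a local affine displacement model to the
    optical-flow constraint fx*u + fy*v + ft = 0.

    fx, fy, ft: spatial/temporal derivatives of the image phase,
                sampled over a local window (1D arrays).
    x, y:       pixel coordinates of the same samples.
    Returns the six affine parameters [a0, a1, a2, b0, b1, b2].
    Hypothetical helper for illustration only.
    """
    # Each row of A multiplies the six affine parameters.
    A = np.column_stack([fx, fx * x, fx * y, fy, fy * x, fy * y])
    params, *_ = np.linalg.lstsq(A, -ft, rcond=None)
    return params
```

With noise-free derivatives of a pure translation, the fit recovers the translation exactly in the constant terms a0 and b0.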

    Multiscale optical flow computation from the monogenic signal

    We have developed an algorithm for the estimation of cardiac motion from medical images. The algorithm exploits monogenic signal theory, recently introduced as an N-dimensional generalization of the analytic signal. The displacement is computed locally by assuming the conservation of the monogenic phase over time. A local affine displacement model replaces the standard translation model to account for more complex motions such as contraction/expansion and shear. A coarse-to-fine B-spline scheme allows a robust and effective computation of the model parameters, and a pyramidal refinement scheme helps handle large motions. Robustness against noise is increased by replacing the standard pointwise computation of the monogenic orientation with a more robust least-squares orientation estimate. This paper reviews the results obtained on simulated cardiac images from different modalities, namely 2D and 3D cardiac ultrasound and tagged magnetic resonance. We also show how the proposed algorithm represents a valuable alternative to state-of-the-art algorithms in the respective fields.
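The monogenic signal underlying this method extends the 1D analytic signal to images via the Riesz transform. A minimal NumPy sketch of computing a monogenic local phase and orientation in the Fourier domain follows; in practice the image is band-pass filtered first, a step omitted here, and this is an illustrative implementation rather than the paper's code:

```python
import numpy as np

def monogenic_phase(image):
    """Compute the monogenic signal of a 2D image via the Riesz
    transform in the Fourier domain; return local phase and orientation.
    Minimal sketch: no band-pass filtering, single scale."""
    rows, cols = image.shape
    u = np.fft.fftfreq(rows).reshape(-1, 1)
    v = np.fft.fftfreq(cols).reshape(1, -1)
    radius = np.sqrt(u**2 + v**2)
    radius[0, 0] = 1.0  # avoid division by zero at the DC bin
    F = np.fft.fft2(image)
    # Riesz transform pair (odd, direction-selective filters)
    r1 = np.real(np.fft.ifft2(F * (-1j * u / radius)))
    r2 = np.real(np.fft.ifft2(F * (-1j * v / radius)))
    even = image                       # even part (ideally band-passed)
    odd = np.sqrt(r1**2 + r2**2)       # magnitude of the odd part
    phase = np.arctan2(odd, even)      # local phase in [0, pi]
    orientation = np.arctan2(r2, r1)   # local orientation
    return phase, orientation
```

The phase is contrast-invariant, which is what makes phase-conservation a more robust constancy assumption than intensity conservation in ultrasound.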

    Optical Flow Estimation in Ultrasound Images Using a Sparse Representation

    This paper introduces a 2D optical flow estimation method for cardiac ultrasound imaging based on a sparse representation. The optical flow problem is regularized using a classical gradient-based smoothness term combined with a sparsity-inducing regularization that uses a learned cardiac flow dictionary. Particular emphasis is placed on the influence of the spatial and sparse regularizations on the optical flow estimation problem. A comparison with state-of-the-art methods using realistic simulations shows the competitiveness of the proposed method for cardiac motion estimation in ultrasound images.
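A sparsity-inducing regularization of this kind typically amounts to sparse coding of motion-field patches in the learned dictionary. The sketch below uses a generic ISTA (iterative shrinkage-thresholding) solver for the l1-regularized coding problem; it illustrates the principle only and is not the paper's solver:

```python
import numpy as np

def sparse_code(patches, D, lam=0.1, n_iter=50):
    """Sparse coding of motion-field patches in a dictionary D
    (columns = atoms) via ISTA:
        minimize  0.5 * ||x - D a||^2 + lam * ||a||_1
    patches: (patch_dim, n_patches) matrix, one patch per column.
    Returns the sparse coefficient matrix A (n_atoms, n_patches)."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the gradient
    A = np.zeros((D.shape[1], patches.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - patches)          # gradient of the data term
        Z = A - grad / L                        # gradient step
        A = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft threshold
    return A
```

With an orthonormal dictionary the solution reduces to soft-thresholding, which makes the shrinkage effect of the l1 penalty easy to check.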

    The Assessment of Left Ventricular Function in MRI Using the Detection of Myocardial Borders and Optical Flow Approaches: A Review

    The evaluation of left ventricular wall motion in Magnetic Resonance Imaging (MRI) clinical practice is based on a visual assessment of cine-MRI sequences. In practice, clinical interpreters (radiologists) perform a global visual evaluation of multiple cine-MRI sequences acquired in the three standard views. In addition, some functional parameters are quantified following a manual or a semi-automatic contouring of the myocardial borders. Although these parameters give information about the functional state of the left ventricle, they cannot provide the location and the extent of wall motion abnormalities, which are associated with many cardiovascular diseases. In recent years, several approaches have been developed to overcome the limitations of the classical evaluation techniques of left ventricular function. The aim of this article is to present an overview of the different methods and to summarize the relevant techniques based on myocardial contour detection and optical flow for regional assessment of left ventricular abnormalities.

    Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images

    Segmentation of the heart structures helps compute the cardiac contractile function quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, representing a reliable diagnostic value. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good cardiac diagnosis indicators. Furthermore, high-quality anatomical models of the heart can be used in the planning and guidance of minimally invasive interventions under image guidance. The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort, and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging. In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. Firstly, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Secondly, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine Magnetic Resonance Imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework with iterative segmentation refinement.
The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation from a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that segmentation-based computation of the myocardial area is significantly better than the area regressed directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets. We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as with other competing segmentation methods.
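The multi-task idea of jointly segmenting the myocardium and regressing its area can be illustrated with a schematic joint loss. This is a plain-NumPy sketch; the weighting `alpha` and the exact loss terms are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

def multitask_loss(seg_pred, seg_true, area_pred, alpha=0.5):
    """Joint loss for a segmentation + area-regression multi-task setup:
    binary cross-entropy on the myocardium mask plus an L2 penalty tying
    the regressed area to the area of the ground-truth mask.

    seg_pred:  predicted per-pixel probabilities in [0, 1].
    seg_true:  binary ground-truth mask.
    area_pred: scalar area predicted by the regression head (pixels).
    """
    eps = 1e-7
    p = np.clip(seg_pred, eps, 1 - eps)  # avoid log(0)
    bce = -np.mean(seg_true * np.log(p) + (1 - seg_true) * np.log(1 - p))
    area_true = seg_true.sum()           # area of the reference mask
    l2 = (area_pred - area_true) ** 2
    return bce + alpha * l2
```

Coupling the two heads through a shared loss is what lets the area task act as a soft constraint on the segmentation, which is the effect the thesis exploits.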

    Cardiac motion estimation in ultrasound images using a sparse representation and dictionary learning

    Cardiovascular diseases have become a major healthcare issue. Improving the diagnosis and analysis of these diseases has thus become a primary concern in cardiology.
The heart is a moving organ that undergoes complex deformations. Therefore, the quantification of cardiac motion from medical images, particularly ultrasound, is a key part of the techniques used for diagnosis in clinical practice. Significant research efforts have thus been directed toward developing new cardiac motion estimation methods, which aim at improving the quality and accuracy of the estimated motions. However, these methods still face many challenges due to the complexity of cardiac motion and the quality of ultrasound images. Recently, learning-based techniques have received growing interest in the field of image processing. More specifically, sparse representations and dictionary learning strategies have shown their efficiency in regularizing different ill-posed inverse problems. This thesis investigates the benefits that such sparsity- and learning-based techniques can bring to cardiac motion estimation. Three main contributions are presented, investigating different aspects and challenges that arise in echocardiography. Firstly, a method for cardiac motion estimation using a sparsity-based regularization is introduced. The motion estimation problem is formulated as an energy minimization, whose data fidelity term is built using the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. Secondly, a fully robust optical flow method is proposed. The aim of this work is to take into account the limitations of ultrasound imaging and the violations of the regularization constraints. In this work, two regularization terms imposing spatial smoothness and sparsity of the motion field in an appropriate cardiac motion dictionary are also exploited.
In order to ensure robustness to outliers, an iteratively re-weighted minimization strategy is proposed using weighting functions based on M-estimators. As a last contribution, we investigate a cardiac motion estimation method using a combination of sparse, spatial and temporal regularizations. The problem is formulated within a general optical flow framework. The proposed temporal regularization enforces smoothness of the motion trajectories between consecutive images. Furthermore, an iterative groupwise motion estimation allows us to incorporate the three regularization terms, while enabling the processing of the image sequence as a whole. Throughout this thesis, the proposed contributions are validated using synthetic and realistic simulated cardiac ultrasound images. These datasets with available ground truth are used to evaluate the accuracy of the proposed approaches and show their competitiveness with state-of-the-art algorithms. In order to demonstrate clinical feasibility, in vivo sequences of healthy and pathological subjects are considered for the first two methods. For the last contribution, i.e., the one exploiting temporal smoothness, a preliminary investigation is conducted using simulated data.
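The iteratively re-weighted strategy can be sketched in its generic form: at each iteration, residuals are converted into weights through an M-estimator influence function, and a weighted least-squares problem is re-solved. The example below uses Huber weights with a MAD scale estimate as a stand-in; the thesis's exact weighting functions and energy terms may differ:

```python
import numpy as np

def irls(A, b, n_iter=20, c=1.345):
    """Iteratively re-weighted least squares with Huber M-estimator
    weights and a robust MAD scale estimate. Generic illustration of
    the re-weighting principle, not the thesis's solver."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS initialization
    for _ in range(n_iter):
        r = b - A @ x
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12  # robust scale (MAD)
        u = np.abs(r) / scale
        # Huber weights: 1 for small residuals, c/u for large ones
        w = np.minimum(1.0, c / np.maximum(u, c))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
    return x
```

Down-weighting large residuals is what makes the estimate insensitive to the gross outliers that violate the brightness/phase constancy and smoothness assumptions.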

    Development, Implementation and Pre-clinical Evaluation of Medical Image Computing Tools in Support of Computer-aided Diagnosis: Respiratory, Orthopedic and Cardiac Applications

    Over the last decade, image processing tools have become crucial components of all clinical and research efforts involving medical imaging and associated applications. The growing volume of imaging data available to radiologists continues to increase their workload, raising the need for efficient identification and visualization of the image data required for clinical assessment. Computer-aided diagnosis (CAD) in medical imaging has evolved in response to the need for techniques that can assist radiologists in increasing throughput while reducing human error and bias, without compromising the outcome of screening, diagnosis or disease assessment. More intelligent, yet simple, consistent and less time-consuming methods will become more widespread, reducing user variability while also revealing information in a clearer, more visual way. Several routine image processing approaches, including localization, segmentation, registration, and fusion, are critical for enabling the development of CAD techniques. However, changes in clinical workflow require significant adjustments and re-training, and, despite the efforts of the academic research community to develop state-of-the-art algorithms and high-performance techniques, their computational footprint often hampers their clinical use. Currently, the main challenge is not a lack of tools and techniques for medical image processing, analysis, and computing, but rather a lack of clinically feasible solutions that leverage the already existing tools and techniques, together with a demonstration of the potential clinical impact of such tools. Recently, more and more effort has been dedicated to devising new algorithms for localization, segmentation or registration, while their intended clinical use and actual utility are dwarfed by scientific, algorithmic and developmental novelty that only results in incremental improvements over existing algorithms.
In this thesis, we propose and demonstrate the implementation and evaluation of several methodological guidelines that support the development of image processing tools (localization, segmentation and registration) and illustrate their use across several medical imaging modalities (X-ray, computed tomography, ultrasound and magnetic resonance imaging) and several clinical applications: (1) lung CT image registration in support of the assessment of pulmonary nodule growth rate and disease progression from thoracic CT images; (2) automated reconstruction of standing X-ray panoramas from multi-sector X-ray images for assessment of long-limb mechanical axis and knee misalignment; and (3) left and right ventricle localization, segmentation, reconstruction, and ejection fraction measurement from cine cardiac MRI or multi-plane trans-esophageal ultrasound images for cardiac function assessment. When devising and evaluating the developed tools, we use clinical patient data to illustrate the inherent clinical challenges associated with highly variable imaging data that need to be addressed before potential pre-clinical validation and implementation. In an effort to provide plausible solutions to the selected applications, the proposed methodological guidelines ensure the development of image processing tools that achieve sufficiently reliable solutions, which not only have the potential to address the clinical needs but are sufficiently streamlined to be translated into eventual clinical tools, provided proper implementation.
G1: Reduce the number of degrees of freedom (DOF) of the designed tool; a plausible example is avoiding inefficient non-rigid image registration methods. This guideline addresses the risk of artificial deformation during registration and aims at reducing complexity and the number of degrees of freedom.
G2: Use shape-based features to represent the image content most efficiently, employing edges instead of, or in addition to, intensities and motion where useful. Edges capture the most useful information in the image and can be used to identify the most important image features. As a result, this guideline ensures more robust performance when key image information is missing.
G3: Implement methods efficiently. This guideline focuses on efficiency in terms of the minimum number of steps required, and on avoiding the recalculation of terms that only need to be computed once in an iterative process. An efficient implementation leads to reduced computational effort and improved performance.
G4: Commence the workflow by establishing an optimized initialization and gradually converge toward the final acceptable result. This guideline aims to ensure reasonable outcomes in consistent ways; it avoids convergence to local minima while gradually ensuring convergence to the global minimum solution.
These guidelines lead to the development of interactive, semi-automated or fully-automated approaches that still enable clinicians to perform final refinements, while reducing overall inter- and intra-observer variability, reducing ambiguity, increasing accuracy and precision, and providing mechanisms that can aid in delivering a more consistent diagnosis in a timely fashion.