20 research outputs found

    3D Matting: A Soft Segmentation Method Applied in Computed Tomography

    Three-dimensional (3D) images, such as CT, MRI, and PET, are common in medical imaging applications and important in clinical diagnosis. Semantic ambiguity is a typical feature of many medical image labels; it can arise from many factors, such as the imaging properties, the pathological anatomy, and the weak representation of binary masks, which poses challenges for accurate 3D segmentation. In 2D medical images, using soft masks generated by image matting instead of binary masks to characterize lesions provides rich semantic information, describes the structural characteristics of lesions more comprehensively, and thus benefits subsequent diagnosis and analysis. In this work, we introduce image matting into 3D scenes to describe lesions in 3D medical images. The study of image matting in 3D modalities is limited, and there is no high-quality annotated dataset for 3D matting, which slows the development of data-driven, deep-learning-based methods. To address this issue, we constructed the first 3D medical matting dataset and verified the validity of the dataset through quality control and downstream experiments in lung nodule classification. We then adapted four selected state-of-the-art 2D image matting algorithms to 3D scenes and further customized the methods for CT images. We also propose the first end-to-end deep 3D matting network and implement a solid 3D medical image matting benchmark, which will be released to encourage further research. Comment: 12 pages, 7 figures
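The soft masks this abstract describes follow the standard matting compositing model, I = alpha * F + (1 - alpha) * B, applied voxel-wise in 3D. A minimal NumPy sketch (the array shapes and intensity values are illustrative, not from the paper's dataset):

```python
import numpy as np

# Standard matting compositing model, applied voxel-wise to a 3D volume:
#   I = alpha * F + (1 - alpha) * B
# where alpha in [0, 1] is the soft (matte) mask, F the foreground
# (lesion) intensities, and B the background intensities.

def composite_3d(alpha, foreground, background):
    """Reconstruct a volume from a soft mask and foreground/background layers."""
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * foreground + (1.0 - alpha) * background

# Toy 4x4x4 volumes; a binary mask is just the special case where
# alpha takes only the values 0 and 1.
alpha = np.zeros((4, 4, 4))
alpha[1:3, 1:3, 1:3] = 0.5          # soft transition region around the lesion
fg = np.full((4, 4, 4), 100.0)      # lesion intensity
bg = np.full((4, 4, 4), 20.0)       # surrounding-tissue intensity

vol = composite_3d(alpha, fg, bg)
print(vol[2, 2, 2])  # 0.5 * 100 + 0.5 * 20 = 60.0
```

A binary mask forces every voxel to be fully lesion or fully background; the soft alpha lets the transition region carry partial membership, which is the extra semantic information the abstract refers to.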

    Use of Multicomponent Non-Rigid Registration to Improve Alignment of Serial Oncological PET/CT Studies

    Non-rigid registration of serial head and neck FDG PET/CT images from a combined scanner can be problematic. Registration techniques typically rely on similarity measures calculated from voxel intensity values; CT-CT registration is superior to PET-PET registration due to the higher quality of anatomical information present in that modality. However, when metal artefacts from dental fillings are present in a pair of CT images, a non-rigid registration will incorrectly attempt to register the two artefacts together, since they are strong features compared with those that represent the actual anatomy. This leads to localised registration errors in the deformation field in the vicinity of the artefacts. Our objective was to develop a registration technique that overcomes these limitations by using combined information from both modalities. To study the effect of artefacts on registration, metal artefacts were simulated, with one CT image rotated by a small angle in the sagittal plane. Image pairs containing these simulated artefacts were then registered to evaluate the resulting errors. To improve the registration in the vicinity of the artefacts, intensity information from the PET images was incorporated using several techniques. A well-established B-spline-based non-rigid registration code was reworked to allow multicomponent registration. A similarity measure with four possible weighted components, corresponding to the ways in which the CT and PET information can be combined to drive the registration of a pair of these dual-valued images, was employed. Several registration methods based on this multicomponent similarity measure were implemented with the goal of effectively registering the images containing the simulated artefacts. A method was also developed to swap in control-point displacements from the PET-derived transformation in the vicinity of the artefact. This method yielded the best result on the simulated images and was further evaluated on images where actual dental artefacts were present.
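The four-component similarity measure described above can be sketched as a weighted sum of within- and cross-modality dissimilarity terms. The sketch below uses sum-of-squared-differences and hypothetical default weights; the thesis's actual metric and weighting scheme may differ:

```python
import numpy as np

def ssd(a, b):
    """Mean sum-of-squared-differences dissimilarity between two images."""
    return float(np.mean((a - b) ** 2))

def multicomponent_similarity(ct_fixed, ct_moving, pet_fixed, pet_moving,
                              weights=(0.5, 0.5, 0.0, 0.0)):
    """Weighted combination of the four ways the CT and PET channels of a
    dual-valued image pair can drive registration: CT-CT, PET-PET,
    CT-PET, and PET-CT. The default weights are illustrative only."""
    w_cc, w_pp, w_cp, w_pc = weights
    return (w_cc * ssd(ct_fixed, ct_moving) +
            w_pp * ssd(pet_fixed, pet_moving) +
            w_cp * ssd(ct_fixed, pet_moving) +
            w_pc * ssd(pet_fixed, ct_moving))

# Toy example: identical CT pair, differing PET pair.
ct = np.zeros((8, 8))
pet_a, pet_b = np.zeros((8, 8)), np.ones((8, 8))
print(multicomponent_similarity(ct, ct, pet_a, pet_b))  # 0.5 * 0 + 0.5 * 1 = 0.5
```

Giving the PET-PET term nonzero weight is what lets functional-image intensities pull the deformation toward the true anatomy in regions where the CT is corrupted by metal artefacts.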

    Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images

    Segmentation of the heart structures helps compute the cardiac contractile function, quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, which represent reliable diagnostic values. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good indicators for cardiac diagnosis. Furthermore, high-quality anatomical models of the heart can be used in the planning and guidance of minimally invasive interventions under image guidance. The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging. In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. First, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local-phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Second, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine magnetic resonance imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework with iterative segmentation refinement.
    The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation on a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network-based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that segmentation-based computation of the myocardial area is significantly better than the area regressed directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV, and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets. We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as with other competing segmentation methods.
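The comparison between segmentation-based area and directly regressed area rests on a simple computation: once a myocardium mask is available, its area is the voxel count scaled by the in-plane pixel spacing. A toy sketch (the mask geometry and spacing are illustrative, not from the datasets used in the work):

```python
import numpy as np

def myocardial_area_from_mask(mask, pixel_spacing_mm=(1.0, 1.0)):
    """Compute the area (mm^2) of a binary myocardium mask on one slice,
    i.e. the 'segmentation-based' area the abstract compares against a
    directly regressed value. pixel_spacing_mm is (row, col) spacing."""
    pixel_area = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return float(np.count_nonzero(mask) * pixel_area)

# Toy 8x8 slice: a ring-like myocardium of 12 pixels (16-pixel square
# minus the 4-pixel blood pool) at 1.25 x 1.25 mm spacing.
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True       # epicardial extent (16 px)
mask[3:5, 3:5] = False      # blood pool carved out (4 px)
area = myocardial_area_from_mask(mask, (1.25, 1.25))
print(area)  # 12 * 1.5625 = 18.75 mm^2
```

Because the area is derived from the mask, any error in it can be traced back to visible segmentation mistakes, which is the interpretability advantage the abstract notes over a directly regressed scalar.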

    Robust density modelling using the Student's t-distribution for human action recognition

    The extraction of human features from videos is often inaccurate and prone to outliers. Such outliers can severely affect density modelling when the Gaussian distribution is used as the model, since it is highly sensitive to outliers. The Gaussian distribution is also often used as the base component of graphical models for recognising human actions in videos (hidden Markov models and others), and the presence of outliers can significantly affect recognition accuracy. In contrast, the Student's t-distribution is more robust to outliers and can be exploited to improve the recognition rate in the presence of abnormal data. In this paper, we present an HMM that uses mixtures of t-distributions as observation probabilities and show through experiments on two well-known datasets (Weizmann, MuHAVi) that it achieves a remarkable improvement in classification accuracy. © 2011 IEEE
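The robustness argument can be illustrated by comparing the log-densities that a Gaussian and a Student's t assign to an outlying feature value: the t's heavier tails penalise the outlier far less, so parameter estimates in a mixture or HMM are less distorted by it. A self-contained sketch (df=3 and the outlier value are illustrative choices, not the paper's settings):

```python
import math

# Log-densities of a Gaussian and a Student's t with matching location
# and scale, implemented from their standard closed forms.

def gauss_logpdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return -0.5 * z * z - math.log(sigma) - 0.5 * math.log(2 * math.pi)

def student_t_logpdf(x, df=3.0, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    c = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
         - 0.5 * math.log(df * math.pi) - math.log(sigma))
    return c - (df + 1) / 2 * math.log1p(z * z / df)

outlier = 8.0  # e.g. a badly tracked body-part feature, 8 sigma out
print(gauss_logpdf(outlier))      # about -32.9: the Gaussian all but rules it out
print(student_t_logpdf(outlier))  # about -7.2: the t-density tolerates it
```

In an HMM with t-mixture observation densities, this bounded penalty keeps a handful of corrupted frames from dominating the likelihood of an otherwise well-matched action sequence.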

    Fast Multi-Organ Methods with Shape Priors for Localization and Segmentation in 3D Medical Imaging

    With the ubiquity of imaging in medical applications (diagnosis, treatment follow-up, surgery planning, etc.), image processing algorithms have become of primary importance. Algorithms help clinicians extract critical information more quickly and more reliably from increasingly large and complex acquisitions. In this context, anatomy localization and segmentation are crucial components of modern clinical workflows. Due to particularly high requirements in terms of robustness, accuracy, and speed, designing such tools remains a challenging task. In this work, we propose a complete pipeline for the segmentation of multiple organs in medical images. The method is generic and can be applied to varying numbers of organs and different imaging modalities. Our approach consists of three components: (i) an automatic localization algorithm, (ii) an automatic segmentation algorithm, and (iii) a framework for interactive corrections. We present these components as a coherent processing chain, although each block could easily be used independently of the others. To fulfil clinical requirements, we focus on robust and efficient solutions. Our anatomy localization method is based on a cascade of Random Regression Forests (Cuingnet et al., 2012). One key originality of our work is the use of shape priors for each organ (via probabilistic atlases). Combined with the evaluation of the trained regression forests, they yield shape-consistent confidence maps for each organ instead of simple bounding boxes. Our segmentation method extends the implicit template deformation framework of Mory et al. (2012) to multiple organs. The proposed formulation builds on the versatility of the original approach and introduces new non-overlapping constraints and contrast-invariant forces. This makes our approach a fully automatic, robust, and efficient method for the coherent segmentation of multiple structures.
    In the case of imperfect segmentation results, it is crucial to enable clinicians to correct them easily. We show that our automatic segmentation framework can be extended with simple user-driven constraints to allow intuitive interactive corrections. We believe that this final component is key to the applicability of our pipeline in actual clinical routine. Each of our algorithmic components has been evaluated on large clinical databases. We illustrate their use on CT, MRI, and US data and present a user study gathering the feedback of medical imaging experts. The results demonstrate the value of our method and its potential for clinical use.
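The shape-consistent confidence maps from the localization step can be sketched as a per-voxel fusion of the regression-forest output with a probabilistic-atlas prior. The multiplicative fusion rule and all names below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def shape_consistent_confidence(forest_prob, atlas_prior):
    """Fuse per-voxel regression-forest organ probabilities with a
    probabilistic-atlas shape prior by multiplying and renormalising.
    Forest responses that are implausible given the organ's shape
    (zero atlas support) are suppressed."""
    fused = forest_prob * atlas_prior
    total = fused.sum()
    return fused / total if total > 0 else fused

# Toy 2x2 map: the forest is fairly confident at (1, 0), but the atlas
# assigns that location zero prior probability for this organ.
forest = np.array([[0.9, 0.8],
                   [0.1, 0.7]])   # raw forest output
atlas = np.array([[1.0, 0.9],
                  [0.0, 0.1]])    # shape prior from the probabilistic atlas
conf = shape_consistent_confidence(forest, atlas)
print(conf[1, 0])  # 0.0: suppressed regardless of the forest's confidence
```

The resulting map localises the organ with spatial detail a bounding box cannot carry, which is the advantage the abstract claims over box-only localization.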