
    Combining Shape and Learning for Medical Image Analysis

    Automatic methods that can make accurate, fast and robust assessments of medical images are in high demand in medical research and clinical care. Excellent automatic algorithms are characterized by speed, allowing for scalability, and an accuracy comparable to that of an expert radiologist. They should produce morphologically and physiologically plausible results while generalizing well to unseen and rare anatomies. Still, there are few, if any, applications where today's automatic methods meet these requirements. The focus of this thesis is two tasks essential for enabling automatic medical image assessment: medical image segmentation and medical image registration. Medical image registration, i.e. aligning two separate medical images, is an important sub-routine in many image analysis tools as well as in image fusion, disease progress tracking and population statistics. Medical image segmentation, i.e. delineating anatomically or physiologically meaningful boundaries, is used for both diagnostic and visualization purposes in a wide range of applications, e.g. computer-aided diagnosis and surgery. The thesis comprises five papers addressing medical image registration and/or segmentation for a diverse set of applications and modalities: pericardium segmentation in cardiac CTA, brain region parcellation in MRI, multi-organ segmentation in CT, heart ventricle segmentation in cardiac ultrasound, and tau PET registration. The five papers propose competitive registration and segmentation methods enabled by machine learning techniques, e.g. random decision forests and convolutional neural networks, as well as by shape modelling, e.g. multi-atlas segmentation and conditional random fields.
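As a rough illustration of one of the learning techniques named above, a random decision forest can classify voxels from simple features. The sketch below uses scikit-learn on synthetic data; the features, parameters and data are illustrative, not those of the thesis:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "image": foreground voxels are brighter than background.
n = 500
labels = rng.integers(0, 2, size=n)             # 0 = background, 1 = organ
intensity = labels * 0.6 + rng.normal(0.2, 0.1, size=n)
coords = rng.uniform(0, 1, size=(n, 3))         # normalized voxel coordinates

# Per-voxel feature vector: intensity plus spatial position.
X = np.column_stack([intensity, coords])

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, labels)

# Predict labels for the same voxels (in practice: an unseen image).
pred = clf.predict(X)
accuracy = (pred == labels).mean()
```

Real pipelines use richer context features (e.g. intensity offsets in a neighbourhood), but the per-voxel classification structure is the same.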

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally been focused on the development of organ- and disease-specific methods. Recently, the interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art on multi-organ analysis and associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organs and multi-anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare. Comment: Paper under review
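Point distribution models, the earliest family of techniques covered by the review, describe a structure as a mean shape plus principal modes of variation learned from corresponding landmark sets. A minimal numpy sketch on synthetic 2D landmarks (the shapes, dimensions and mode count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training set: 20 shapes, each with 8 landmarks in 2D, flattened to vectors.
n_shapes, n_landmarks = 20, 8
angles = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)
base = np.stack([np.cos(angles), np.sin(angles)], axis=1)   # unit circle
shapes = np.array([
    (base * (1 + 0.1 * rng.normal())).ravel()               # random scaling mode
    + 0.02 * rng.normal(size=2 * n_landmarks)               # landmark noise
    for _ in range(n_shapes)
])

# Point distribution model: mean shape + eigenvectors of the covariance.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
cov = centered.T @ centered / (n_shapes - 1)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]               # largest variance first
modes, variances = eigvecs[:, order], eigvals[order]

# A new plausible shape: mean plus 1.5 standard deviations along the first mode.
b = 1.5 * np.sqrt(variances[0])
new_shape = mean_shape + b * modes[:, 0]
```

Constraining the mode coefficients (e.g. to a few standard deviations) is what keeps generated or fitted shapes anatomically plausible.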

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for investigating the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of the most varied kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics enabled the full automation of X-ray imaging experiments and the calibration of the experimental setup parameters during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity and other essential properties. These improvements led to a considerable increase in the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger volumes of data, up to tens of terabytes, which were subsequently processed manually. These technical advances thus paved the way for more efficient high-throughput experiments studying large numbers of samples and producing datasets of higher quality. There is therefore a strong need in the scientific community for an efficient, automated workflow for X-ray data analysis that can cope with such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad-hoc scenarios in medical imaging. They are therefore neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of samples.
    The main contribution of the present work is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow is based on improved methods for data pre-processing, registration, localization and segmentation. Every workflow stage that involves a training phase can be fine-tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fibre structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitting rays, which enables more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of a similar kind. In addition, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in developmental biology and materials science. In particular, the workflow was applied to the analysis of medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head nephrons and heart. Furthermore, the developed 3D orientation analysis method was used in the morphological analysis of polymer scaffold datasets to steer a manufacturing process towards desirable properties.
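The thesis proposes a novel ray-based 3D orientation analysis. As a generic point of comparison only (not the thesis's method), fibre orientation is commonly estimated from the structure tensor; a minimal 2D numpy sketch:

```python
import numpy as np

def dominant_orientation(img):
    """Estimate the dominant structure orientation of a 2D image, in radians
    within [-pi/2, pi/2), from the image-averaged structure tensor."""
    gy, gx = np.gradient(img.astype(float))          # gradients along rows, columns
    jxx = (gx * gx).mean()                           # structure tensor components,
    jyy = (gy * gy).mean()                           # averaged over the image
    jxy = (gx * gy).mean()
    theta_grad = 0.5 * np.arctan2(2 * jxy, jxx - jyy)   # dominant gradient direction
    theta = theta_grad + np.pi / 2                      # structures run perpendicular
    return (theta + np.pi / 2) % np.pi - np.pi / 2      # wrap into [-pi/2, pi/2)

# Horizontal stripes: structures run along the x-axis, so the angle is ~0.
rows = np.arange(64)[:, None]
stripes = np.sin(0.5 * rows) * np.ones((64, 64))
angle = dominant_orientation(stripes)
```

In practice the tensor is averaged locally (per neighbourhood) rather than globally, giving an orientation map instead of a single angle.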

    Automatic Pancreas Segmentation and 3D Reconstruction for Morphological Feature Extraction in Medical Image Analysis

    The development of highly accurate, quantitative automatic medical image segmentation techniques, in comparison to manual techniques, remains a constant challenge for medical image analysis. In particular, segmenting the pancreas from an abdominal scan presents additional difficulties: this organ has very high anatomical variability, and a full inspection is problematic because the pancreas lies behind the stomach. Accurate, automatic pancreas segmentation can therefore yield quantitative morphological measures such as volume and curvature, supporting biomedical research into the severity and progression of conditions such as type 2 diabetes mellitus. Furthermore, it can guide subject stratification after diagnosis or before clinical trials, and help shed additional light on detecting early signs of pancreatic cancer. This PhD thesis delivers a novel approach for automatic, accurate, quantitative pancreas segmentation, mostly but not exclusively in Magnetic Resonance Imaging (MRI), by harnessing the advantages of machine learning and classical image processing in computer vision. The proposed approach is evaluated on two MRI datasets containing 216 and 132 image volumes, achieving mean Dice similarity coefficients (DSC) of 84.1 ± 4.6% and 85.7 ± 2.3% respectively. To demonstrate the universality of the approach, a dataset containing 82 Computed Tomography (CT) image volumes is also evaluated, achieving a mean DSC of 83.1 ± 5.3%. The proposed approach delivers a contribution to computer science (computer vision) in medical image analysis, reporting better quantitative pancreas segmentation results than other state-of-the-art techniques, and also captures detailed pancreas boundaries as verified by two independent experts in radiology and radiography.
    These contributions can support the use of computational methods in biomedical research with a path to clinical translation; for example, pancreas volume provides a prognostic biomarker of the severity of type 2 diabetes mellitus. Furthermore, a generalisation of the proposed segmentation approach successfully extends to other anatomical structures, including the kidneys, liver and iliopsoas muscles, using different MRI sequences. Thus, the proposed approach can be incorporated into a computational tool that supports radiological interpretation of MRI scans obtained using different sequences by providing a “second opinion”, helping reduce possible misdiagnosis and, consequently, providing enhanced guidance towards targeted treatment planning.
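The Dice similarity coefficient (DSC) used as the evaluation metric above measures volume overlap between a predicted and a reference mask; a minimal numpy implementation on toy 3D volumes:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                          # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D volumes: a predicted mask lying fully inside a ground-truth cube.
truth = np.zeros((10, 10, 10), dtype=bool)
truth[2:8, 2:8, 2:8] = True                 # 6*6*6 = 216 voxels
pred = np.zeros_like(truth)
pred[3:8, 2:8, 2:8] = True                  # 5*6*6 = 180 voxels, all inside truth
score = dice(pred, truth)                   # 2*180 / (180+216) = 360/396 ≈ 0.909
```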

    The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients, manually annotated by up to four raters, and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked at the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
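The fusion step described above can be sketched as a plain per-voxel majority vote (without the hierarchical region ordering used in the full BRATS fusion):

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse a list of integer label maps by per-voxel majority vote.
    Ties are resolved in favour of the smallest label."""
    stack = np.stack(segmentations)                     # (n_raters, *volume)
    n_labels = stack.max() + 1
    # Count votes per label at every voxel, then pick the most-voted label.
    votes = np.stack([(stack == l).sum(axis=0) for l in range(n_labels)])
    return votes.argmax(axis=0)

# Three toy 1D "segmentations" of the same scan (0 = background, 1/2 = tumor sub-regions).
a = np.array([0, 1, 1, 2])
b = np.array([0, 1, 2, 2])
c = np.array([1, 1, 2, 0])
fused = majority_vote([a, b, c])            # -> [0, 1, 2, 2]
```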

    Fast Multi-Organ Methods with Shape Priors for Localization and Segmentation in 3D Medical Imaging

    With the ubiquity of imaging in medical applications (diagnosis, treatment follow-up, surgery planning...), image processing algorithms have become of primary importance. Algorithms help clinicians extract critical information more quickly and more reliably from increasingly large and complex acquisitions. In this context, anatomy localization and segmentation is a crucial component in modern clinical workflows. Due to particularly high requirements in terms of robustness, accuracy and speed, designing such tools remains a challenging task. In this work, we propose a complete pipeline for the segmentation of multiple organs in medical images. The method is generic: it can be applied to varying numbers of organs and to different imaging modalities. Our approach consists of three components: (i) an automatic localization algorithm, (ii) an automatic segmentation algorithm, and (iii) a framework for interactive corrections. We present these components as a coherent processing chain, although each block could easily be used independently of the others. To fulfill clinical requirements, we focus on robust and efficient solutions. Our anatomy localization method is based on a cascade of Random Regression Forests (Cuingnet et al., 2012). One key originality of our work is the use of shape priors for each organ (via probabilistic atlases). Combined with the evaluation of the trained regression forests, they result in shape-consistent confidence maps for each organ instead of simple bounding boxes. Our segmentation method extends the implicit template deformation framework of Mory et al. (2012) to multiple organs. The proposed formulation builds on the versatility of the original approach and introduces new non-overlapping constraints and contrast-invariant forces. This makes our approach a fully automatic, robust and efficient method for the coherent segmentation of multiple structures.
    In the case of imperfect segmentation results, it is crucial to enable clinicians to correct them easily. We show that our automatic segmentation framework can be extended with simple user-driven constraints to allow for intuitive interactive corrections. We believe that this final component is key to the applicability of our pipeline in actual clinical routine. Each of our algorithmic components has been evaluated on large clinical databases. We illustrate their use on CT, MRI and US data and present a user study gathering the feedback of medical imaging experts. The results demonstrate the value of our method and its potential for clinical use.
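The localization step above relies on regression forests. A much-simplified sketch with scikit-learn, mapping global image features to an organ centre on synthetic data (the actual method uses a cascade with voxel-wise offset votes; all features and dimensions here are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic training data: global appearance features -> organ centre (x, y, z).
n = 300
features = rng.normal(size=(n, 10))
W = rng.normal(size=(10, 3))
centres = features @ W + 0.05 * rng.normal(size=(n, 3))   # ground truth + noise

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(features, centres)

# Localize the organ in a "new" scan from its feature vector.
new_scan = rng.normal(size=(1, 10))
predicted_centre = forest.predict(new_scan)[0]            # (x, y, z) estimate
```

In the cascaded version, a coarse forest first narrows the search region and finer forests refine the estimate within it.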

    Minimally Interactive Segmentation with Application to Human Placenta in Fetal MR Images

    Placenta segmentation from fetal Magnetic Resonance (MR) images is important for fetal surgical planning. However, accurate segmentation results are difficult to achieve with automatic methods, due to sparse acquisition, inter-slice motion, and the widely varying position and shape of the placenta among pregnant women. Interactive methods have been widely used to obtain more accurate and robust results. A good interactive segmentation method should achieve high accuracy, minimize user interactions with low variability among users, and be computationally fast. Exploiting recent advances in machine learning, I explore a family of new interactive methods for placenta segmentation from fetal MR images. I investigate the combination of user interactions with learning from a single image or from a large set of images. For learning from a single image, I propose novel Online Random Forests to efficiently leverage user interactions for the segmentation of 2D and 3D fetal MR images. I also investigate co-segmentation of multiple volumes of the same patient with 4D Graph Cuts. For learning from a large set of images, I first propose a deep learning-based framework that combines user interactions with Convolutional Neural Networks (CNNs) based on geodesic distance transforms to achieve accurate segmentation and good interactivity. I then propose image-specific fine-tuning to make CNNs adaptive to individual images and able to segment previously unseen objects. Experimental results show that the proposed algorithms outperform traditional interactive segmentation methods in terms of accuracy and interactivity. They may therefore be suitable for segmentation of the placenta in planning systems for fetal and maternal surgery, and for rapid characterization of the placenta from MR images. I also demonstrate that they can be applied to the segmentation of other organs from 2D and 3D images.
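The geodesic distance transforms mentioned above propagate user scribbles through the image while penalizing paths that cross intensity boundaries. A minimal pure-Python sketch on a 2D image, using Dijkstra's algorithm on the 4-connected pixel grid (a generic illustration, not the exact transform used in the framework):

```python
import heapq
import numpy as np

def geodesic_distance(img, seeds, lam=1.0):
    """Geodesic distance from a set of seed pixels on a 2D image.
    Cost between neighbours = spatial step + lam * intensity difference."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = 0.0
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                        # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + 1.0 + lam * abs(float(img[nr, nc]) - float(img[r, c]))
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    heapq.heappush(heap, (nd, nr, nc))
    return dist

# Image with a bright right half; a scribble seed in the dark left half.
img = np.zeros((4, 6))
img[:, 3:] = 10.0
d = geodesic_distance(img, seeds=[(0, 0)])
```

Pixels on the same side of the intensity boundary as the seed stay cheap to reach, while crossing the boundary is expensive; thresholding such distance maps for foreground and background scribbles yields an initial segmentation.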