57 research outputs found

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    Full text link
    The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art on multi-organ analysis and associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organ and multi-anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare. Comment: Paper under review

    A comparative evaluation for liver segmentation from SPIR images and a novel level set method using a signed pressure force function

    Get PDF
    Thesis (Doctoral) -- Izmir Institute of Technology, Electronics and Communication Engineering, Izmir, 2013. Includes bibliographical references (leaves: 118-135). Text in English; abstract in Turkish and English. xv, 145 leaves. Developing a robust method for liver segmentation from magnetic resonance images is a challenging task due to similar intensity values between adjacent organs, the geometrically complex structure of the liver, and the injection of contrast media, which causes all tissues to have different gray-level values. Several artifacts of pulsation and motion, as well as partial volume effects, also increase the difficulty of automatic liver segmentation from magnetic resonance images. In this thesis, we present an overview of liver segmentation methods in magnetic resonance images and show comparative results of seven different liver segmentation approaches chosen from deterministic (K-means based), probabilistic (Gaussian model based), supervised neural network (multilayer perceptron based), and deformable model based (level set) segmentation methods. The results of qualitative and quantitative analysis using sensitivity, specificity, and accuracy metrics show that the multilayer perceptron based approach and a level set based approach which uses a distance regularization term and a signed pressure force function are reasonable methods for liver segmentation from spectral pre-saturation inversion recovery (SPIR) images. However, the multilayer perceptron based segmentation method requires a higher computational cost. The distance regularization term based automatic level set method is very sensitive to the chosen variance of the Gaussian function.
Our proposed level set based method, which uses a novel signed pressure force function that can control the direction and velocity of the evolving active contour, is faster and solves several problems of the other applied methods, such as sensitivity to the initial contour or to the variance parameter of the Gaussian kernel in edge-stopping functions, without using any regularization term.
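The signed-pressure-force idea described above can be sketched as follows. This is a generic SPF formulation in the spirit of region-based SPF models (contour driven toward the boundary between the mean intensities inside and outside the current zero level set), not the thesis's novel function; all names and parameters are illustrative.

```python
import numpy as np

def spf_level_set_step(phi, image, dt=0.5, alpha=1.0):
    """One explicit update of an SPF-driven level set (sketch).

    phi   : level set function (2D array), contour at phi == 0, negative inside
    image : intensity image, same shape as phi
    The SPF term is positive/negative depending on which side of the
    mean-intensity boundary (c1 + c2) / 2 a pixel falls, which steers
    the direction and speed of the evolving contour.
    """
    inside = phi < 0
    outside = ~inside
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[outside].mean() if outside.any() else 0.0
    spf = image - (c1 + c2) / 2.0
    denom = np.abs(spf).max()
    if denom > 0:
        spf = spf / denom  # normalize to [-1, 1]
    # |grad phi| via central differences
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    return phi + dt * alpha * spf * grad_mag
```

In practice the update would be iterated until convergence, with periodic reinitialization or smoothing of `phi` unless a regularization-free scheme (as in the thesis) is used.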

    Fast multi-organ methods with shape priors for localization and segmentation in 3D medical imaging

    Get PDF
    With the ubiquity of imaging in medical applications (diagnostic, treatment follow-up, surgery planning, etc.), image processing algorithms have become of primary importance. Algorithms help clinicians extract critical information more quickly and more reliably from increasingly large and complex acquisitions. In this context, anatomy localization and segmentation is a crucial component in modern clinical workflows. Due to particularly high requirements in terms of robustness, accuracy and speed, designing such tools remains a challenging task. In this work, we propose a complete pipeline for the segmentation of multiple organs in medical images. The method is generic: it can be applied to varying numbers of organs and different imaging modalities. Our approach consists of three components: (i) an automatic localization algorithm, (ii) an automatic segmentation algorithm, (iii) a framework for interactive corrections. We present these components as a coherent processing chain, although each block could easily be used independently of the others. To fulfill clinical requirements, we focus on robust and efficient solutions. Our anatomy localization method is based on a cascade of Random Regression Forests (Cuingnet et al., 2012). One key originality of our work is the use of shape priors for each organ (thanks to probabilistic atlases). Combined with the evaluation of the trained regression forests, they result in shape-consistent confidence maps for each organ instead of simple bounding boxes. Our segmentation method extends the implicit template deformation framework of Mory et al. (2012) to multiple organs. The proposed formulation builds on the versatility of the original approach and introduces new non-overlapping constraints and contrast-invariant forces. This makes our approach a fully automatic, robust and efficient method for the coherent segmentation of multiple structures.
In the case of imperfect segmentation results, it is crucial to enable clinicians to correct them easily. We show that our automatic segmentation framework can be extended with simple user-driven constraints to allow for intuitive interactive corrections. We believe that this final component is key towards the applicability of our pipeline in actual clinical routine. Each of our algorithmic components has been evaluated on large clinical databases. We illustrate their use on CT, MRI and US data and present a user study gathering the feedback of medical imaging experts. The results demonstrate the interest of our method and its potential for clinical use.

    Deep learning for medical image processing

    Get PDF
    Medical image segmentation represents a fundamental aspect of medical image computing. It facilitates measurements of anatomical structures, like organ volume and tissue thickness, critical for many classification algorithms which can be instrumental for clinical diagnosis. Consequently, enhancing the efficiency and accuracy of segmentation algorithms could lead to considerable improvements in patient care and diagnostic precision. In recent years, deep learning has become the state-of-the-art approach in various domains of medical image computing, including medical image segmentation. The key advantages of deep learning methods are their speed and efficiency, which have the potential to transform clinical practice significantly. Traditional algorithms might require hours to perform complex computations, but with deep learning, such computational tasks can be executed much faster, often within seconds. This thesis focuses on two distinct segmentation strategies: voxel-based and surface-based. Voxel-based segmentation assigns a class label to each individual voxel of an image. On the other hand, surface-based segmentation techniques involve reconstructing a 3D surface from the input images, then segmenting that surface into different regions. This thesis presents multiple methods for voxel-based image segmentation. Here, the focus is segmenting brain structures, white matter hyperintensities, and abdominal organs. Our approaches confront challenges such as domain adaptation, learning with limited data, and optimizing network architectures to handle 3D images. Additionally, the thesis discusses ways to handle the failure cases of standard deep learning approaches, such as dealing with rare cases like patients who have undergone organ resection surgery. Finally, the thesis turns its attention to cortical surface reconstruction and parcellation. 
Here, deep learning is used to extract cortical surfaces from MRI scans as triangular meshes and parcellate these surfaces on a vertex level. The challenges posed by this approach include handling irregular and topologically complex structures. This thesis presents novel deep learning strategies for voxel-based and surface-based medical image segmentation. By addressing specific challenges in each approach, it aims to contribute to the ongoing advancement of medical image computing.

    A Systematic Review of Few-Shot Learning in Medical Imaging

    Full text link
    The lack of annotated medical images limits the performance of deep learning models, which usually need large-scale labelled datasets. Few-shot learning techniques can reduce data scarcity issues and enhance medical image analysis, especially with meta-learning. This systematic review gives a comprehensive overview of few-shot learning in medical imaging. We searched the literature systematically and selected 80 relevant articles published from 2018 to 2023. We clustered the articles based on medical outcomes, such as tumour segmentation, disease classification, and image registration; anatomical structure investigated (i.e. heart, lung, etc.); and the meta-learning method used. For each cluster, we examined the papers' distributions and the results provided by the state of the art. In addition, we identified a generic pipeline shared among all the studies. The review shows that few-shot learning can overcome data scarcity in most outcomes and that meta-learning is a popular choice to perform few-shot learning because it can adapt to new tasks with few labelled samples. In addition, following meta-learning, supervised learning and semi-supervised learning stand out as the predominant techniques employed to tackle few-shot learning challenges in medical imaging, and are also among the best performing. Lastly, we observed that the primary application areas predominantly encompass cardiac, pulmonary, and abdominal domains. This systematic review aims to inspire further research to improve medical image analysis and patient care. Comment: 48 pages, 29 figures, 10 tables, submitted to Elsevier on 19 Sep 202
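The meta-learning strategy highlighted in this review can be illustrated with one of its best-known instances, nearest-prototype classification (as in prototypical networks): each class is summarized by the mean of its embedded support examples, and queries are assigned to the closest prototype. This is a generic sketch, not a method from any specific reviewed paper; in a real episode the embedding network would be trained over many tasks, while here the features are taken as given.

```python
import numpy as np

def prototypical_predict(support_feats, support_labels, query_feats):
    """Few-shot classification by nearest class prototype (sketch).

    support_feats  : (n_support, d) embedded support examples
    support_labels : (n_support,) integer class labels
    query_feats    : (n_query, d) embedded queries
    Returns predicted labels for the queries.
    """
    classes = np.unique(support_labels)
    # one prototype per class: mean of its support embeddings
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    # squared Euclidean distance from every query to every prototype
    d2 = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

With only a handful of labelled support images per class (e.g. 1-shot or 5-shot), this rule still yields a usable classifier, which is why prototype-based meta-learning is popular in label-scarce medical settings.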

    Computational methods for the analysis of functional 4D-CT chest images.

    Get PDF
    Medical imaging is an important emerging technology that has been intensively used in the last few decades for disease diagnosis and monitoring as well as for the assessment of treatment effectiveness. Medical images provide a very large amount of valuable information that is too large to be fully exploited by radiologists and physicians. Therefore, the design of computer-aided diagnostic (CAD) systems, which can be used as assistive tools for the medical community, is of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA. In 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules (a manifestation of lung cancer). These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. The treatment of these lung cancer nodules is complex; nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, will have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypotheses that irradiated lung tissues may be affected and suffer a decrease in functionality as a side effect of radiation therapy treatment.
These hypotheses have been validated by demonstrating that automatic segmentation of the lung regions and registration of consecutive respiratory phases yield elasticity, ventilation, and texture features that provide discriminatory descriptors for the early detection of radiation-induced lung injury. The proposed methodologies will lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury comprising three basic components has been developed. These components are lung fields segmentation, lung registration, and feature extraction and tissue classification. This dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today’s clinical applications. Secondly, the methodologies, challenges, and limitations of recent CAD systems for lung cancer detection are covered. This is followed by introducing an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for the potential existence of lung injuries stemming from radiation therapy. After the segmentation of the VOI, a lung registration framework is introduced to perform a crucial step that ensures the co-alignment of the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeats, and differences in scanning parameters, so that the functionality features for the lung fields can be accurately extracted. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the radiation-induced lung injury detection framework is introduced, which combines the previous two medical image processing and analysis steps with the feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using the novel 7th-order Markov Gibbs random field (MGRF) model, which can accurately model the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are calculated from the deformation fields obtained from the 4D-CT lung registration, which map lung voxels between successive CT scans in the respiratory cycle. These functionality features describe the ventilation (air flow rate) of the lung tissues using the Jacobian of the deformation field, and the tissues’ elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage, enabling earlier intervention.
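The Jacobian-based ventilation feature described above can be sketched directly: the determinant of I + grad(u), where u is the displacement field from registration, approximates the local volume change of each voxel between respiratory phases. This is a minimal illustration of the standard construction, not the dissertation's code; the field layout and names are assumptions.

```python
import numpy as np

def jacobian_ventilation(disp):
    """Voxel-wise Jacobian determinant of a 3D displacement field (sketch).

    disp : (3, nz, ny, nx) displacement field u(x) mapping one respiratory
    phase to the next. det(I + grad(u)) > 1 indicates local expansion
    (inhalation), < 1 indicates compression.
    """
    grads = np.empty((3, 3) + disp.shape[1:])
    for i in range(3):
        # du_i / dx_j for j = z, y, x via finite differences
        gz, gy, gx = np.gradient(disp[i])
        grads[i] = np.stack([gz, gy, gx])
    # J = I + grad(u)
    jac = grads.copy()
    for i in range(3):
        jac[i, i] += 1.0
    # move the 3x3 axes last so np.linalg.det vectorizes over voxels
    jac = np.moveaxis(jac, (0, 1), (-2, -1))
    return np.linalg.det(jac)
```

The strain-based elasticity features mentioned in the abstract come from the same gradient tensor grad(u), via its symmetric part, so both feature families share this computation.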

    Cloud-Based Benchmarking of Medical Image Analysis

    Get PDF
    Medical imaging

    An Information Tracking Approach to the Segmentation of Prostates in Ultrasound Imaging

    Get PDF
    Outlining the prostate boundary in ultrasound images is a very useful procedure performed, and subsequently used, by clinicians. The contribution of the resulting segmentation is twofold. First of all, the segmentation of the prostate gland can be used to analyze its size, geometry, and volume. Such analysis is useful because these quantities, used in conjunction with a PSA blood test, can indicate malignancy in the gland itself. The second purpose of accurate segmentation is for treatment planning. In brachytherapy, commonly used to treat localized prostate cancer, the accurate location of the prostate must be found so that the radioactive seeds can be placed precisely in the malignant regions. Unfortunately, the current method of segmentation of ultrasound images is performed manually by expert radiologists. Due to the abundance of ultrasound data, the process of manual segmentation can be extremely time-consuming and inefficient. A much more desirable way to perform the segmentation process is through automatic procedures, which should be able to accurately and efficiently extract the boundary of the prostate gland with minimal user intervention. This is the ultimate goal of the proposed approach. The proposed segmentation algorithm uses a probability distribution tracking framework to accurately and efficiently perform the task at hand. The basis for this methodology is to extract image and shape features from available manually segmented ultrasound images for which the actual prostate region is known. Then, the segmentation algorithm seeks a region in new ultrasound images whose features closely mirror the learned features of known prostate regions. Promising results were achieved using this method in a series of in silico and in vivo experiments.
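The distribution-tracking criterion described above (seek a region whose feature distribution mirrors a learned one) can be sketched with a standard histogram-similarity measure. The Bhattacharyya coefficient shown here is a common choice for comparing distributions, used as an assumption; it is a sketch of the matching criterion only, not the thesis's full tracking optimization, and all names are illustrative.

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Similarity between two normalized histograms: 1 means identical."""
    return float(np.sum(np.sqrt(p * q)))

def region_match_score(image, mask, model_hist, bins=16, vrange=(0.0, 1.0)):
    """Score how closely a candidate region's intensity distribution
    tracks a learned model histogram (sketch).

    image      : 2D ultrasound frame (intensities in vrange)
    mask       : boolean array selecting the candidate region
    model_hist : normalized histogram learned from manual segmentations
    """
    hist, _ = np.histogram(image[mask], bins=bins, range=vrange)
    hist = hist / max(hist.sum(), 1)
    return bhattacharyya_coefficient(hist, model_hist)
```

A segmentation driven by this criterion would deform the candidate region (subject to the learned shape features) so as to maximize the score.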

    Learning strategies for improving neural networks for image segmentation under class imbalance

    Get PDF
    This thesis aims to improve convolutional neural networks (CNNs) for image segmentation under class imbalance, which refers to the problem that arises when the class distributions of the training dataset are unequal. We particularly focus on medical image segmentation because of its imbalanced nature and clinical importance. Based on our observations of model behaviour, we argue that CNNs cannot generalize well on imbalanced segmentation tasks, mainly for two counterintuitive reasons. CNNs are prone to overfit the under-represented foreground classes, as they memorize the regions of interest (ROIs) in the training data because these are so rare. Besides, CNNs can underfit the heterogeneous background classes, as it is difficult to learn from samples with such diverse and complex characteristics. These behaviours of CNNs are not limited to specific loss functions. To address these limitations, firstly, we propose novel asymmetric variants of popular loss functions and regularization techniques, which are explicitly designed to increase the variance of foreground samples to counter overfitting under class imbalance. Secondly, we propose context label learning (CoLab) to tackle background underfitting by automatically decomposing the background class into several subclasses. This is achieved by optimizing an auxiliary task generator to generate context labels such that the main network produces good ROI segmentation performance. We then propose a meta-learning based automatic data augmentation framework which balances foreground and background samples to alleviate class imbalance. Specifically, we learn class-specific training-time data augmentation (TRA) and jointly optimize TRA and test-time data augmentation (TEA), effectively aligning training and test data distributions for better generalization. Finally, we explore how to estimate model performance under domain shifts when training with an imbalanced dataset.
We propose class-specific variants of existing confidence-based model evaluation methods which adapt separate parameters per class, enabling class-wise calibration to reduce model bias towards the minority classes. Open Access
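The "asymmetric variants of popular loss functions" mentioned in this abstract can be illustrated with one well-known example of the idea: the Tversky loss, an asymmetric generalization of the Dice loss that weights false negatives and false positives differently. This is a sketch of the general technique, not the thesis's specific variants; parameter names and the NumPy formulation are assumptions.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-6):
    """Asymmetric Dice-style loss (Tversky) for imbalanced segmentation.

    pred   : predicted foreground probabilities in [0, 1]
    target : binary ground-truth mask, same shape
    alpha weights false negatives, beta weights false positives; with
    alpha > beta, missing rare foreground voxels costs more than
    over-segmenting, which counters the bias towards the background.
    alpha = beta = 0.5 recovers the soft Dice loss.
    """
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    tp = np.sum(pred * target)
    fn = np.sum((1.0 - pred) * target)
    fp = np.sum(pred * (1.0 - target))
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return 1.0 - tversky
```

In a deep learning framework the same expression would be written with differentiable tensor operations and minimized alongside the other training losses.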