
    A review of segmentation and deformable registration methods applied to adaptive cervical cancer radiation therapy treatment planning

    Objective: Manual contouring and registration for radiotherapy treatment planning and online adaptation in cervical cancer radiation therapy, using computed tomography (CT) and magnetic resonance images (MRI), are often necessary. However, manual intervention is time consuming and may suffer from inter- or intra-rater variability. In recent years, a number of computer-guided automatic or semi-automatic segmentation and registration methods have been proposed. Segmentation and registration in CT and MRI for this purpose are challenging tasks due to soft-tissue deformation, inter-patient shape and appearance variation, and anatomical changes over the course of treatment. The objective of this work is to provide a state-of-the-art review of computer-aided methods developed for treatment planning and adaptive re-planning in cervical cancer radiation therapy. Methods: Segmentation and registration methods published with the goal of cervical cancer treatment planning and adaptation were identified from the literature (PubMed and Google Scholar). A comprehensive description of each method is provided. Similarities and differences between the methods are highlighted, and their strengths and weaknesses are discussed. A discussion of the choice of an appropriate method for a given modality is also provided. Results: The reviewed papers report a Dice similarity coefficient of around 0.85, along with a mean absolute surface distance of 2-4 mm for the clinically treated volume, for the transfer of contours from the planning day to the treatment day. Conclusions: Most segmentation and non-rigid registration methods have been designed primarily for adaptive re-planning, i.e. the transfer of contours from the planning day to the treatment day. The use of shape priors significantly improved segmentation and registration accuracy compared to other models.
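    [Editor's note] The Dice similarity coefficient quoted in the results above is a standard overlap measure between a reference contour and a transferred contour. A minimal NumPy sketch of how it is computed (illustrative only, not code from any reviewed method):

    ```python
    import numpy as np

    def dice_coefficient(a, b):
        """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
        a = a.astype(bool)
        b = b.astype(bool)
        intersection = np.logical_and(a, b).sum()
        total = a.sum() + b.sum()
        return 2.0 * intersection / total if total > 0 else 1.0

    # Two overlapping 1-D toy "masks" standing in for propagated contours
    ref = np.array([0, 1, 1, 1, 0, 0])
    test = np.array([0, 1, 1, 0, 0, 0])
    score = dice_coefficient(ref, test)  # 2*2 / (3+2) = 0.8
    ```

    In practice the masks are 3-D label volumes; a score near 0.85, as reported above, indicates substantial but imperfect overlap.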

    Patch-based segmentation with spatial context for medical image analysis

    Accurate segmentations in medical imaging play a crucial role in many applications, from patient diagnosis to population studies. As the amount of data generated from medical images increases, the ability to perform this task without human intervention becomes ever more desirable. One approach, known broadly as atlas-based segmentation, is to propagate labels from images which have already been manually labelled by clinical experts. Methods using this approach have been shown to be effective in many applications, demonstrating great potential for automatic labelling of large datasets. However, these methods usually require the use of image registration and are dependent on its outcome. Any registration errors that occur are also propagated to the segmentation process and are likely to have an adverse effect on segmentation accuracy. Recently, patch-based methods have been shown to allow a relaxation of the required image alignment whilst achieving similar results. In general, these methods label each voxel of a target image by comparing the image patch centred on the voxel with neighbouring patches from an atlas library and assigning the most likely label according to the closest matches. The main contributions of this thesis focus on this approach, providing accurate segmentation results whilst minimising the dependency on registration quality. In particular, this thesis proposes a novel kNN patch-based segmentation framework, which utilises both intensity and spatial information, and explores the use of spatial context in a diverse range of applications. The proposed methods extend the potential for patch-based segmentation to tolerate registration errors by redefining the "locality" for patch selection and comparison, whilst also allowing similar-looking patches from different anatomical structures to be differentiated. The methods are evaluated on a wide variety of image datasets, ranging from the brain to the knees, demonstrating their potential with results which are competitive with state-of-the-art techniques.
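    [Editor's note] The patch-based labelling strategy described above, comparing the patch around a target voxel against an atlas library and voting over the closest matches, can be sketched as a toy kNN classifier (illustrative, not the thesis implementation; its spatial-context extension would additionally append weighted voxel coordinates to each feature vector):

    ```python
    import numpy as np

    def knn_patch_label(target_patch, atlas_patches, atlas_labels, k=3):
        """Label a voxel by majority vote over its k nearest atlas patches
        (L2 distance on flattened intensity patches)."""
        d = np.linalg.norm(atlas_patches - target_patch, axis=1)
        votes = atlas_labels[np.argsort(d)[:k]]
        values, counts = np.unique(votes, return_counts=True)
        return values[np.argmax(counts)]

    # Toy library of flattened intensity patches and their expert labels
    atlas_patches = np.array([[0.0, 0.0, 0.0],
                              [1.0, 1.0, 1.0],
                              [0.9, 1.0, 1.0],
                              [0.0, 0.1, 0.0]])
    atlas_labels = np.array([0, 1, 1, 0])

    # A bright target patch: two of its three nearest neighbours carry label 1
    label = knn_patch_label(np.array([1.0, 0.9, 1.0]), atlas_patches, atlas_labels)
    ```

    Because the vote is taken over nearby patches rather than a single registered atlas voxel, small misalignments shift which patches are compared without necessarily changing the winning label.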

    Landmark Localization, Feature Matching and Biomarker Discovery from Magnetic Resonance Images

    The work presented in this thesis proposes several methods that can be roughly divided into three categories: I) landmark localization in medical images, II) feature matching for image registration, and III) biomarker discovery in neuroimaging. The first part deals with the identification of anatomical landmarks. The motivation stems from the fact that the manual identification and labeling of these landmarks is very time consuming and prone to observer errors, especially when large datasets must be analyzed. In this thesis we present three methods to tackle this challenge: a landmark descriptor based on local self-similarities (SS), a subspace-building framework based on manifold learning, and a sparse-coding landmark descriptor based on a data-specific learned dictionary basis. The second part of this thesis deals with finding matching features between a pair of images. These matches can be used to perform a registration between them. Registration is a powerful tool that allows mapping images into a common space in order to aid in their analysis. Accurate registration can be challenging to achieve using intensity-based registration algorithms. Here, a framework is proposed for learning correspondences in pairs of images by matching SS features, and random sample consensus (RANSAC) is employed as a robust model estimator to learn a deformation model based on the feature matches. Finally, the third part of the thesis deals with biomarker discovery using machine learning. In this section a framework is proposed for feature extraction from learned low-dimensional subspaces that represent inter-subject variability. The manifold subspace is built using data-driven regions of interest (ROI). These regions are learned via sparse regression with stability selection. Also, probabilistic distribution models for different stages in the disease trajectory are estimated for different class populations in the low-dimensional manifold and used to construct a probabilistic scoring function.
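    [Editor's note] RANSAC, used in the second part above as a robust model estimator, repeatedly fits a model to a minimal random sample of matches and keeps the model with the most inliers. A toy sketch for a 2-D translation model (illustrative only; the thesis estimates a richer deformation model from SS feature matches):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def ransac_translation(src, dst, n_iters=100, threshold=0.5):
        """Estimate a 2-D translation from point matches, robust to outliers."""
        best_t, best_inliers = None, -1
        for _ in range(n_iters):
            i = rng.integers(len(src))
            t = dst[i] - src[i]                          # model from a minimal sample
            residuals = np.linalg.norm(dst - (src + t), axis=1)
            inliers = int((residuals < threshold).sum())  # count matches the model explains
            if inliers > best_inliers:
                best_t, best_inliers = t, inliers
        return best_t, best_inliers

    src = rng.uniform(0, 10, (20, 2))
    dst = src + np.array([2.0, -1.0])                # true translation
    dst[:5] += rng.uniform(5, 10, (5, 2))            # corrupt 5 matches (outliers)
    t, n = ransac_translation(src, dst)
    ```

    A least-squares fit over all 20 matches would be pulled off by the 5 corrupted pairs; the consensus step recovers the translation from the 15 clean ones.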

    Aligning 3D Curve with Surface Using Tangent and Normal Vectors for Computer-Assisted Orthopedic Surgery

    Registration, which aligns different views of an organ of interest, is an essential technique and an outstanding problem in medical robotics and image-guided surgery (IGS). This work introduces a novel rigid point set registration (PSR) approach that aims to accurately map the pre-operative space to the intra-operative space, enabling successful image guidance for computer-assisted orthopaedic surgery (CAOS). Normal vectors and tangent vectors are first extracted from the pre-operative and intra-operative point sets (PSs), respectively, and are further utilized to enhance registration accuracy and robustness. The contributions of this article are threefold. First, we propose and formulate a novel distribution that describes the error between a normal vector and the corresponding tangent vector based on the von Mises-Fisher (vMF) distribution. Second, by modelling the anisotropic position localization error with a multivariate Gaussian distribution, we formulate PSR under anisotropic localization error as a maximum likelihood estimation (MLE) problem and solve it under the expectation-maximization (EM) framework. Third, to facilitate the optimization process, the gradients of the objective function with respect to the desired parameters are computed and presented. Extensive experimental results on human femur and pelvis models verify that the proposed approach outperforms state-of-the-art methods and demonstrate its potential clinical value for relevant surgical navigation applications.
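    [Editor's note] The von Mises-Fisher distribution underlying the orientation-error model above has a closed-form density on the unit sphere; directions concentrate around a mean direction mu with concentration kappa. A small sketch of the generic vMF log-density for 3-D unit vectors (illustrative, not the paper's full error formulation):

    ```python
    import numpy as np

    def vmf_logpdf(x, mu, kappa):
        """Log-density of the von Mises-Fisher distribution on the unit sphere S^2.
        x, mu: unit 3-vectors; kappa > 0: concentration parameter."""
        # Normalizing constant for dimension 3: C_3(kappa) = kappa / (4*pi*sinh(kappa))
        log_c = np.log(kappa) - np.log(4.0 * np.pi * np.sinh(kappa))
        return log_c + kappa * float(np.dot(mu, x))

    mu = np.array([0.0, 0.0, 1.0])
    aligned = vmf_logpdf(mu, mu, 5.0)                      # direction matches the mean
    orthogonal = vmf_logpdf(np.array([1.0, 0.0, 0.0]), mu, 5.0)
    ```

    The density is largest when the observed direction agrees with the mean direction, which is what makes it a natural likelihood for the angular error between a normal and its corresponding tangent.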

    Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review

    The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine-learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organ and multi-anatomical structures, from techniques using point distribution models to the most recent deep-learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.

    Deep learning in medical image registration: introduction and survey

    Image registration (IR) is a process that deforms images to align them with respect to a reference space, making it easier for medical practitioners to examine various medical images in a standardized reference frame, such as one with the same rotation and scale. This document introduces image registration using a simple numeric example. It provides a definition of image registration along with a space-oriented symbolic representation. The review covers various aspects of image transformations, including affine, deformable, invertible, and bidirectional transformations, as well as medical image registration algorithms such as VoxelMorph, Demons, SyN, Iterative Closest Point, and SynthMorph. It also explores atlas-based registration and multistage image registration techniques, including coarse-to-fine and pyramid approaches. Furthermore, this survey discusses medical image registration taxonomies, datasets, and evaluation measures such as correlation-based metrics, segmentation-based metrics, processing time, and model size. It also explores applications in image-guided surgery, motion tracking, and tumor diagnosis. Finally, the document addresses future research directions, including the further development of transformers.
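    [Editor's note] As an example of the correlation-based evaluation metrics mentioned above, normalized cross-correlation (NCC) scores how well two images agree up to a linear intensity change. A generic NumPy sketch (not tied to any surveyed method):

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross-correlation of two same-shaped images.
        Returns 1.0 for a perfect match up to linear intensity scaling."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.mean(a * b))

    a = np.array([1.0, 2.0, 3.0, 4.0])
    val = ncc(a, 2 * a + 3)   # same image, rescaled and shifted in intensity
    ```

    This invariance to intensity scaling and offset is why correlation-based metrics are common for registration, where the two images may come from different scanners or sequences.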

    3D Deep Learning on Medical Images: A Review

    The rapid advancements in machine learning and graphics processing technologies, together with the availability of medical imaging data, have led to a rapid increase in the use of deep learning models in the medical domain. This was accelerated by the rapid advancements in convolutional neural network (CNN) based architectures, which were adopted by the medical imaging community to assist clinicians in disease diagnosis. Since the grand success of AlexNet in 2012, CNNs have been increasingly used in medical image analysis to improve the efficiency of human clinicians. In recent years, three-dimensional (3D) CNNs have been employed for the analysis of medical images. In this paper, we trace the history of how the 3D CNN was developed from its machine learning roots, give a brief mathematical description of the 3D CNN, and describe the preprocessing steps required for medical images before feeding them to 3D CNNs. We review the significant research in the field of 3D medical image analysis using 3D CNNs (and their variants) in different medical areas such as classification, segmentation, detection, and localization. We conclude by discussing the challenges associated with the use of 3D CNNs in the medical imaging domain (and the use of deep learning models in general) and possible future trends in the field.
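    [Editor's note] The core operation of the 3D CNNs surveyed above is a three-dimensional convolution of a kernel with an image volume. A naive loop-based sketch of a single valid-mode 3-D convolution (illustrative; real frameworks use heavily optimized kernels, and CNN "convolution" is actually cross-correlation, as here):

    ```python
    import numpy as np

    def conv3d(volume, kernel):
        """Valid-mode 3-D cross-correlation of a volume with a kernel."""
        D, H, W = volume.shape
        d, h, w = kernel.shape
        out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                for k in range(out.shape[2]):
                    # Weighted sum over the d*h*w neighbourhood at (i, j, k)
                    out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
        return out

    vol = np.ones((4, 4, 4))          # toy intensity volume
    kern = np.ones((2, 2, 2))         # 2x2x2 averaging-style kernel
    out = conv3d(vol, kern)           # each output voxel sums 8 ones
    ```

    The 3-D kernel slides along depth as well as height and width, which is what lets these networks exploit the between-slice context that 2-D CNNs discard.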