Image databases in medical applications
The number of medical images acquired in hospitals grows every year. These imaging data contain a wealth of information on the characteristics of anatomical structures and on their variations. This information can be utilized in numerous medical applications. In deformable model-based segmentation and registration methods, the information in image databases can be used to provide a priori knowledge of the shape of the object studied and of the gray-level values in the image, as well as of their variations. In addition, by studying the variations of the object of interest in different populations, the effects of, for example, aging, gender, and disease on anatomical structures can be detected.
In the work described in this thesis, methods that utilize image databases in medical applications were studied. Methods for deformable model-based segmentation and registration were developed and compared. A model selection procedure, mean models, and combinations of classifiers were studied for the construction of a good a priori model. Statistical and probabilistic shape models were generated to constrain the deformations in segmentation and registration so that only shapes typical of the object studied were accepted. In the shape analysis of the striatum, both volume and local shape changes were studied, and the effects of aging and gender, as well as asymmetries, were examined.
The results showed that the segmentation and registration accuracy of deformable model-based methods can be improved by utilizing the information in image databases. The databases used were relatively small; therefore, the statistical and probabilistic methods were not able to model all the population-specific variation. On the other hand, the simpler methods, namely the model selection procedure, mean models, and combination of classifiers, gave good results even with small image databases. The two main applications were the reconstruction of 3-D geometry from incomplete data and the segmentation of heart ventricles and atria from short- and long-axis magnetic resonance images. In both applications, the methods studied provided promising results. The shape analysis of the striatum showed that the volume of the striatum decreases with aging and that the shape of the striatum changes locally. Asymmetries in the shape were also found, but no gender-related local shape differences were detected.
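The statistical shape constraint described above can be sketched in a few lines: build a PCA model from training shapes, then project any candidate shape onto the model subspace and clip each mode coefficient, so that only shapes typical of the training population survive. The data here are synthetic stand-ins, not the thesis's hospital image databases.

```python
import numpy as np

rng = np.random.default_rng(0)
training = rng.normal(size=(30, 40))   # 30 training shapes, 20 2-D landmarks each

mean_shape = training.mean(axis=0)
# PCA of the centred training shapes via SVD
U, s, Vt = np.linalg.svd(training - mean_shape, full_matrices=False)
eigvals = s**2 / (len(training) - 1)   # variance explained by each mode
modes = Vt[:5]                         # retain the 5 largest modes

def constrain(shape, n_sigma=3.0):
    """Project a candidate shape into the model subspace and clip each
    mode coefficient to +/- n_sigma standard deviations."""
    b = modes @ (shape - mean_shape)
    limit = n_sigma * np.sqrt(eigvals[:5])
    return mean_shape + modes.T @ np.clip(b, -limit, limit)

candidate = mean_shape + rng.normal(scale=5.0, size=40)
constrained = constrain(candidate)     # a plausible shape near the model
```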
Atlas-Based Quantification of Cardiac Remodeling Due to Myocardial Infarction
Myocardial infarction leads to changes in the geometry (remodeling) of the left ventricle (LV) of the heart. The degree and type of remodeling provides important diagnostic information for the therapeutic management of ischemic heart disease. In this paper, we present a novel analysis framework for characterizing remodeling after myocardial infarction, using LV shape descriptors derived from atlas-based shape models. Cardiac magnetic resonance images from 300 patients with myocardial infarction and 1991 asymptomatic volunteers were obtained from the Cardiac Atlas Project. Finite element models were customized to the spatio-temporal shape and function of each case using guide-point modeling. Principal component analysis was applied to the shape models to derive modes of shape variation across all cases. A logistic regression analysis was performed to determine the modes of shape variation most associated with myocardial infarction. Goodness of fit results obtained from end-diastolic and end-systolic shapes were compared against the traditional clinical indices of remodeling: end-diastolic volume, end-systolic volume and LV mass. The combination of end-diastolic and end-systolic shape parameter analysis achieved the lowest deviance, Akaike information criterion and Bayesian information criterion, and the highest area under the receiver operating characteristic curve. Therefore, our framework quantitatively characterized remodeling features associated with myocardial infarction better than current measures. These features enable quantification of the amount of remodeling, the progression of disease over time, and the effect of treatments designed to reverse remodeling effects.
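The pipeline above (PCA over shape vectors, then logistic regression on the mode scores) can be illustrated with synthetic data; the shape vectors, class sizes and the separating shift below are invented stand-ins for the Cardiac Atlas Project models:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_controls, d = 60, 120, 50
controls = rng.normal(size=(n_controls, d))   # "asymptomatic" shape vectors
cases = rng.normal(size=(n_cases, d))
cases[:, 0] += 3.0                            # remodeling shifts one shape direction

shapes = np.vstack([controls, cases])
y = np.r_[np.zeros(n_controls), np.ones(n_cases)]

# PCA: modes of shape variation across all cases
mean = shapes.mean(axis=0)
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
scores = (shapes - mean) @ Vt[:5].T           # scores on the first 5 modes

# Logistic regression on the mode scores (plain gradient descent)
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(scores @ w + b)))
    w -= 0.1 * scores.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

accuracy = (((1.0 / (1.0 + np.exp(-(scores @ w + b)))) > 0.5) == y).mean()
```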
Computational Anatomy for Multi-Organ Analysis in Medical Imaging: A Review
The medical image analysis field has traditionally focused on the development of organ- and disease-specific methods. Recently, interest in the development of more comprehensive computational anatomical models has grown, leading to the creation of multi-organ models. Multi-organ approaches, unlike traditional organ-specific strategies, incorporate inter-organ relations into the model, thus leading to a more accurate representation of the complex human anatomy. Inter-organ relations are not only spatial, but also functional and physiological. Over the years, the strategies proposed to efficiently model multi-organ structures have evolved from simple global modeling to more sophisticated approaches such as sequential, hierarchical, or machine learning-based models. In this paper, we present a review of the state of the art in multi-organ analysis and the associated computational anatomy methodology. The manuscript follows a methodology-based classification of the different techniques available for the analysis of multi-organ and multi-anatomical structures, from techniques using point distribution models to the most recent deep learning-based approaches. With more than 300 papers included in this review, we reflect on the trends and challenges of the field of computational anatomy, the particularities of each anatomical region, and the potential of multi-organ analysis to increase the impact of medical imaging applications on the future of healthcare.
Computational Methods for Segmentation of Multi-Modal Multi-Dimensional Cardiac Images
Segmentation of the heart structures helps compute the cardiac contractile function, quantified via the systolic and diastolic volumes, ejection fraction, and myocardial mass, which represents reliable diagnostic value. Similarly, quantification of the myocardial mechanics throughout the cardiac cycle and analysis of the activation patterns in the heart via electrocardiography (ECG) signals serve as good indicators for cardiac diagnosis. Furthermore, high-quality anatomical models of the heart can be used in the planning and guidance of minimally invasive interventions under image guidance.
The most crucial step for the above-mentioned applications is to segment the ventricles and myocardium from the acquired cardiac image data. Although manual delineation of the heart structures is deemed the gold-standard approach, it requires significant time and effort and is highly susceptible to inter- and intra-observer variability. These limitations suggest a need for fast, robust, and accurate semi- or fully-automatic segmentation algorithms. However, the complex motion and anatomy of the heart, indistinct borders due to blood flow, the presence of trabeculations, intensity inhomogeneity, and various other imaging artifacts make the segmentation task challenging.
In this work, we present and evaluate segmentation algorithms for multi-modal, multi-dimensional cardiac image datasets. First, we segment the left ventricle (LV) blood-pool from a tri-plane 2D+time trans-esophageal (TEE) ultrasound acquisition using local phase-based filtering and a graph-cut technique, propagate the segmentation throughout the cardiac cycle using non-rigid registration-based motion extraction, and reconstruct the 3D LV geometry. Second, we segment the LV blood-pool and myocardium from an open-source 4D cardiac cine magnetic resonance imaging (MRI) dataset by incorporating an average-atlas-based shape constraint into the graph-cut framework with iterative segmentation refinement. The developed fast and robust framework is further extended to perform right ventricle (RV) blood-pool segmentation on a different open-source 4D cardiac cine MRI dataset. Next, we employ a convolutional neural network-based multi-task learning framework to simultaneously segment the myocardium and regress its area, and show that segmentation-based computation of the myocardial area is significantly better than the area regressed directly from the network, while also being more interpretable. Finally, we impose a weak shape constraint via a multi-task learning framework in a fully convolutional network and show improved segmentation performance for the LV, RV, and myocardium across healthy and pathological cases, as well as in the challenging apical and basal slices, in two open-source 4D cardiac cine MRI datasets.
We demonstrate the accuracy and robustness of the proposed segmentation methods by comparing the obtained results against the provided gold-standard manual segmentations, as well as against other competing segmentation methods.
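The interpretability claim about the myocardial area has a simple basis: given a segmentation mask, the area is just the foreground pixel count times the pixel size, so the value can always be traced back to the predicted mask. A minimal sketch with illustrative values (not from the studied datasets):

```python
import numpy as np

pixel_spacing = (1.25, 1.25)         # mm per pixel, a typical cine MRI value
mask = np.zeros((8, 8), dtype=bool)  # toy predicted myocardium mask
mask[2:6, 2:6] = True                # 16 foreground pixels

# Area derived from the segmentation, in mm^2
area_mm2 = mask.sum() * pixel_spacing[0] * pixel_spacing[1]
```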
Contour-Driven Atlas-Based Segmentation
We propose new methods for automatic segmentation of images based on an atlas of manually labeled scans and contours in the image. First, we introduce a Bayesian framework for creating initial label maps from manually annotated training images. Within this framework, we model various registration- and patch-based segmentation techniques by changing the deformation field prior. Second, we perform contour-driven regression on the created label maps to refine the segmentation. Image contours and image parcellations give rise to non-stationary kernel functions that model the relationship between image locations. Setting the kernel to the covariance function in a Gaussian process establishes a distribution over label maps supported by image structures. Maximum a posteriori estimation of the distribution over label maps conditioned on the outcome of the atlas-based segmentation yields the refined segmentation. We evaluate the segmentation in two clinical applications: the segmentation of parotid glands in head and neck CT scans and the segmentation of the left atrium in cardiac MR angiography images.
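The Gaussian-process step can be sketched in one dimension: choose a covariance function over image locations, condition on the noisy atlas-based soft labels, and threshold the posterior mean. A plain stationary RBF kernel stands in here for the paper's contour-derived non-stationary kernels, and the "image" is a synthetic 1-D profile:

```python
import numpy as np

coords = np.arange(10, dtype=float)[:, None]    # 1-D image locations

def rbf(a, b, length=2.0):
    # squared-exponential covariance between location sets a and b
    return np.exp(-((a - b.T) ** 2) / (2.0 * length**2))

K = rbf(coords, coords)
noise = 0.1                                     # assumed label-noise variance

# Noisy soft labels from an initial atlas-based segmentation (a step edge)
rng = np.random.default_rng(2)
atlas_soft = (coords[:, 0] > 4.5).astype(float) + rng.normal(scale=0.2, size=10)

# Posterior mean of the GP conditioned on the atlas labels
posterior_mean = K @ np.linalg.solve(K + noise * np.eye(10), atlas_soft)
refined = posterior_mean > 0.5                  # refined hard labels
```

The smoothing induced by the kernel suppresses the label noise while preserving the edge, which is the intuition behind conditioning the label-map distribution on the atlas outcome.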
Patch-based segmentation with spatial context for medical image analysis
Accurate segmentations in medical imaging play a crucial role in many applications, from patient diagnosis to population studies. As the amount of data generated from medical images increases, the ability to perform this task without human intervention becomes ever more desirable. One approach, known broadly as atlas-based segmentation, is to propagate labels from images which have already been manually labelled by clinical experts. Methods using this approach have been shown to be effective in many applications, demonstrating great potential for automatic labelling of large datasets. However, these methods usually require the use of image registration and are dependent on its outcome. Any registration errors that occur are also propagated to the segmentation process and are likely to have an adverse effect on segmentation accuracy. Recently, patch-based methods have been shown to allow a relaxation of the required image alignment whilst achieving similar results. In general, these methods label each voxel of a target image by comparing the image patch centred on the voxel with neighbouring patches from an atlas library and assigning the most likely label according to the closest matches. The main contributions of this thesis focus on this approach, providing accurate segmentation results whilst minimising the dependency on registration quality. In particular, this thesis proposes a novel kNN patch-based segmentation framework, which utilises both intensity and spatial information, and explores the use of spatial context in a diverse range of applications. The proposed methods extend the potential for patch-based segmentation to tolerate registration errors by redefining the "locality" for patch selection and comparison, whilst also allowing similar-looking patches from different anatomical structures to be differentiated. The methods are evaluated on a wide variety of image datasets, ranging from the brain to the knees, demonstrating their potential with results which are competitive with state-of-the-art techniques.
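The patch-based labelling idea described above can be sketched on a 1-D toy signal (real use is on 3-D volumes, and all sizes below are invented): each target position is compared, patch by patch, against atlas patches drawn from a spatial neighbourhood, and the k closest matches vote on the label.

```python
import numpy as np

rng = np.random.default_rng(3)
step = np.r_[np.zeros(20), np.ones(20)]          # two "structures"
atlas = step + rng.normal(scale=0.05, size=40)   # atlas intensities
atlas_labels = step.astype(int)                  # expert labels
target = step + rng.normal(scale=0.05, size=40)  # unseen target image

radius, search, k = 2, 5, 3                      # patch radius, search window, kNN
atlas_p = np.pad(atlas, radius, mode="edge")
target_p = np.pad(target, radius, mode="edge")

pred = np.empty(40, dtype=int)
for i in range(40):
    tpatch = target_p[i: i + 2 * radius + 1]
    # spatial "locality": only atlas patches near position i are candidates
    lo, hi = max(0, i - search), min(39, i + search)
    dists = [np.sum((atlas_p[j: j + 2 * radius + 1] - tpatch) ** 2)
             for j in range(lo, hi + 1)]
    nearest = np.argsort(dists)[:k]              # k closest atlas patches
    pred[i] = int(atlas_labels[lo + nearest].mean() > 0.5)  # majority vote
```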
Automated Morphometric Characterization of the Cerebral Cortex for the Developing and Ageing Brain
Morphometric characterisation of the cerebral cortex can provide information about patterns of brain development and ageing and may be relevant for diagnosis and estimation of the progression of diseases such as Alzheimer's, Huntington's, and schizophrenia. Therefore, understanding and describing the differences between populations in terms of structural volume, shape and thickness is of critical importance. Methodologically, due to data quality, the presence of noise, partial volume (PV) effects, limited resolution and pathological variability, the automated, robust and time-consistent estimation of morphometric features remains an unsolved problem. This thesis focuses on the development of tools for robust cross-sectional and longitudinal morphometric characterisation of the human cerebral cortex. It describes techniques for tissue segmentation, structural and morphometric characterisation, and cross-sectional and longitudinal cortical thickness estimation from serial MRI in both adults and neonates. Two new probabilistic brain tissue segmentation techniques are introduced in order to accurately and robustly segment the brains of elderly and neonatal subjects, even in the presence of marked pathology. Two other algorithms based on the concept of multi-atlas segmentation propagation and fusion are also introduced in order to parcellate the brain into its multiple composing structures with the highest possible segmentation accuracy. Finally, we explore the use of the Khalimsky cubic complex framework for the extraction of topologically correct thickness measurements from probabilistic segmentations without explicit parametrisation of the edge. A longitudinal extension of this method is also proposed. The work presented in this thesis has been extensively validated on elderly and neonatal data from several scanners, sequences and protocols. The proposed algorithms have also been successfully applied to breast and heart MRI, neck and colon CT, and small animal imaging.
All the algorithms presented in this thesis are available as part of the open-source package NiftySeg.
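At its simplest, the multi-atlas propagation-and-fusion concept used by these parcellation algorithms reduces to per-voxel voting over labels propagated from each registered atlas. A toy 4-voxel example (actual pipelines also weight atlases and handle many labels):

```python
import numpy as np

# One propagated binary label map per atlas, flattened to 4 voxels
votes = np.array([
    [0, 1, 1, 0],   # atlas 1
    [0, 1, 0, 0],   # atlas 2
    [1, 1, 1, 0],   # atlas 3
])
fused = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote per voxel
```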
Segmentation of pelvic structures from preoperative images for surgical planning and guidance
Prostate cancer is one of the most frequently diagnosed malignancies globally and the second leading cause of cancer-related mortality in males in the developed world. In recent decades, many techniques have been proposed for prostate cancer diagnosis and treatment. With the development of imaging technologies such as CT and MRI, image-guided procedures have become increasingly important as a means to improve clinical outcomes. Analysis of the preoperative images and construction of 3D models prior to treatment would help doctors to better localize and visualize the structures of interest, plan the procedure, diagnose disease and guide the surgery or therapy. This requires efficient and robust medical image analysis and segmentation technologies to be developed.
The thesis mainly focuses on the development of segmentation techniques in pelvic MRI for image-guided robotic-assisted laparoscopic radical prostatectomy and external-beam radiation therapy. A fully automated multi-atlas framework is proposed for bony pelvis segmentation in MRI, guided by an MRI AE-SDM. With the guidance of the AE-SDM, a multi-atlas segmentation algorithm is used to delineate the bony pelvis in a new MRI scan for which no CT is available. The proposed technique outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. Using the SDM of the pelvis and its segmented surface, an accurate 3D pelvimetry system is designed and implemented to measure a comprehensive set of pelvic geometric parameters for examining the relationship between these parameters and the difficulty of robotic-assisted laparoscopic radical prostatectomy. This system can be used in both manual and automated modes through a user-friendly interface.
A fully automated and robust multi-atlas based segmentation method has also been developed to delineate the prostate in diagnostic MR scans, which show large variation in both the intensity and the shape of the prostate. Two image analysis techniques are proposed: patch-based label fusion with local appearance-specific atlases, and multi-atlas propagation via a manifold graph on a database of both labeled and unlabeled images when only limited labeled atlases are available. The proposed techniques achieve more robust and accurate segmentation results than other multi-atlas based methods.
The seminal vesicles are also a structure of interest for therapy planning, particularly for external-beam radiation therapy. As existing methods fail at the very onerous task of segmenting the seminal vesicles, a multi-atlas learning framework via random decision forests with graph-cut refinement is further proposed to solve this difficult problem. Motivated by the performance of this technique, I further extend the multi-atlas learning to segment the prostate fully automatically using multispectral (T1- and T2-weighted) MR images via hybrid random forest (RF) classifiers and a multi-image graph-cut technique. The proposed method compares favorably to the previously proposed multi-atlas based prostate segmentation.
The work in this thesis covers different techniques for pelvic image segmentation in MRI. These techniques have been continually developed and refined, and their application to different specific problems shows increasingly promising results.
Automated Extraction of Biomarkers for Alzheimer's Disease from Brain Magnetic Resonance Images
In this work, different techniques for the automated extraction of biomarkers for Alzheimer's disease (AD) from brain magnetic resonance imaging (MRI) are proposed. The described work forms part of PredictAD (www.predictad.eu), a joint European research project aiming at the identification of a unified biomarker for AD combining different clinical and imaging measurements. Two different approaches towards the extraction of MRI-based biomarkers are followed in this thesis: (I) the extraction of traditional morphological biomarkers based on neuroanatomical structures, and (II) the extraction of data-driven biomarkers applying machine-learning techniques. A novel method for unified and automated estimation of structural volumes and volume changes is proposed. Furthermore, a new technique is described that allows the low-dimensional representation of a high-dimensional image population for data analysis and visualization. All presented methods are evaluated on images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), which provides a large and diverse clinical database. A rigorous evaluation of the power of all identified biomarkers to discriminate between clinical subject groups is presented. In addition, the agreement of automatically derived volumes with reference labels, as well as the power of the proposed method to measure changes in a subject's atrophy rate, are assessed. The proposed methods compare favorably to state-of-the-art techniques in neuroimaging in terms of accuracy, robustness and run-time.
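An atrophy rate of the kind assessed above is commonly summarised as the slope of a linear fit to serial structure volumes, expressed as a percentage of the baseline volume per year. A minimal sketch with invented numbers (not ADNI data):

```python
import numpy as np

years = np.array([0.0, 0.5, 1.0, 1.5, 2.0])                    # scan times
volumes = np.array([3500.0, 3430.0, 3360.0, 3290.0, 3220.0])   # structure volume, mm^3

# Least-squares linear fit: slope in mm^3 per year, intercept = baseline volume
slope, intercept = np.polyfit(years, volumes, 1)
atrophy_rate = 100.0 * slope / intercept                       # % of baseline per year
```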