Quantification of spinal cord atrophy in magnetic resonance images
Quantifying the volume of the spinal cord is of vital interest for studying and understanding diseases of the central nervous system such as multiple sclerosis (MS). In this thesis, which is motivated by MS research, we propose methods for measuring the spinal cord cross-sectional area and volume in magnetic resonance (MR) images. These measurements are used for determining neural atrophy and for performing both longitudinal and cross-sectional comparisons in clinical trials.
We present three evolutionary steps of our approach: In the first step, we use graph cut–based image segmentation on the intensities of T1-weighted MR images. In the second step, we combine a continuous max flow segmentation algorithm with a cross-sectional similarity prior and Hessian-based structural features, which we apply to T1- and T2-weighted images. The prior leverages the fact that the spinal cord is an elongated structure by constraining its cross-sectional shape to vary only slowly along one image axis. In conjunction with the additional features, the segmentation robustness is thus increased. In the third step, we combine continuous max flow with anisotropic total variation regularization, which enables us to direct the regularization of the cross-sectional shape of the spinal cord more flexibly.
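The anisotropic regularization idea can be illustrated with a minimal sketch: a total-variation penalty whose per-axis weights differ, so that label changes between consecutive slices along the cord axis are penalised more heavily than in-plane changes. The weights and the brute-force energy evaluation below are illustrative assumptions, not the continuous max flow solver used in the thesis.

```python
# Minimal sketch of an anisotropic total-variation penalty on a binary
# 3-D labelling. The axis weights are hypothetical: a larger weight on
# the third (cord) axis penalises cross-sectional change between
# slices more strongly, mimicking the slowly-varying-shape prior.

def anisotropic_tv(volume, weights=(1.0, 1.0, 4.0)):
    """Weighted sum of absolute finite differences along x, y, z."""
    nx, ny, nz = len(volume), len(volume[0]), len(volume[0][0])
    wx, wy, wz = weights
    energy = 0.0
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                v = volume[i][j][k]
                if i + 1 < nx:
                    energy += wx * abs(volume[i + 1][j][k] - v)
                if j + 1 < ny:
                    energy += wy * abs(volume[i][j + 1][k] - v)
                if k + 1 < nz:
                    energy += wz * abs(volume[i][j][k + 1] - v)
    return energy

# A 2x2x2 labelling whose cross-section differs between the two
# z-slices pays the high through-plane weight.
vol = [[[1, 0], [1, 0]],
       [[1, 0], [0, 0]]]
energy = anisotropic_tv(vol)   # → 14.0 (vs 5.0 with uniform weights)
```

In a real solver this term would enter the optimisation; here it only shows how the weighting steers the regularizer.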
We implement the proposed approach as a semi-automatic software toolchain that automatically segments the spinal cord, reconstructs its surface, and acquires the desired measurements. The software employs a user-provided anatomical landmark as well as hints for the location of the spinal cord and its surroundings. It accounts for the bending of the spine, MR-induced image distortions, and noise.
We evaluate the proposed methods in experiments on phantom, healthy subject, and patient data. Our measurement accuracy and precision are on par with the state of the art. At the same time, our measurements on MS patient data are in accordance with the medical literature.
Foetal echocardiographic segmentation
Congenital heart disease affects just under one percent of all live births [1]. Those defects that manifest themselves as changes to the cardiac chamber volumes are the motivation for the research presented in this thesis.
Blood volume measurements in vivo require delineation of the cardiac chambers, and manual tracing of foetal cardiac chambers is very time consuming and operator dependent. This thesis presents a multi-region level set snake deformable model, applied in both 2D and 3D, which can automatically adapt to some extent to ultrasound noise such as attenuation, speckle and partial occlusion artefacts. The algorithm presented is named Mumford Shah Sarti Collision Detection (MSSCD). The level set methods presented in this thesis have an optional shape prior term for constraining the segmentation by a template registered to the image in the presence of shadowing and heavy noise.
When applied to real data in the absence of the template, the MSSCD algorithm is initialised from seed primitives placed at the centre of each cardiac chamber. The voxel statistics inside each chamber are determined before evolution. The MSSCD stops at open boundaries between two chambers as the two approaching level set fronts meet. This is significant when determining volumes for all cardiac compartments, since cardiac indices assume that each chamber is treated in isolation. Comparison of the segmentation results from the implemented snakes, including a previous level set method from the foetal cardiac literature, shows that in both 2D and 3D, on both real and synthetic data, the MSSCD formulation is better suited to these types of data.
All the algorithms tested in this thesis are within 2 mm error of manually traced segmentations of the foetal cardiac datasets. This corresponds to less than 10% of the length of a foetal heart. In addition to comparison with manual tracings, all the amorphous deformable model segmentations in this thesis are validated using a physical phantom. The volume estimation of the phantom by the MSSCD segmentation is within 13% of the physically determined volume.
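The collision-detection behaviour has a simple discrete analogue: fronts grown from per-chamber seeds claim pixels in lock-step and stop where they meet, so neighbouring chambers keep separate labels even across an open boundary. The breadth-first sketch below is an assumed stand-in for illustration, not the MSSCD level-set formulation itself.

```python
# Discrete analogue of the collision-detection idea: two fronts grown
# from per-chamber seeds claim pixels breadth-first; the first front
# to reach a pixel wins, so growth stops where the fronts collide.
from collections import deque

def grow_with_collision(shape, seeds):
    """seeds: {label: (row, col)}. Returns a label map (0 = unclaimed)."""
    rows, cols = shape
    labels = [[0] * cols for _ in range(rows)]
    queue = deque()
    for lab, (r, c) in seeds.items():
        labels[r][c] = lab
        queue.append((r, c, lab))
    while queue:
        r, c, lab = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr][nc] == 0:
                labels[nr][nc] = lab     # first front to arrive wins
                queue.append((nr, nc, lab))
    return labels

# Fronts from both ends of a 1x6 strip meet in the middle.
labels = grow_with_collision((1, 6), {1: (0, 0), 2: (0, 5)})
# → [[1, 1, 1, 2, 2, 2]]
```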
Biological image analysis
In biological research, images are extensively used to monitor growth, dynamics and changes in biological specimens, such as cells or plants. Many of these images are used solely for observation or are manually annotated by an expert. In this dissertation we discuss several methods to automate the annotation and analysis of bio-images. Two large clusters of methods have been investigated and developed. A first set of methods focuses on the automatic delineation of relevant objects in bio-images, such as individual cells in microscopic images. Since these methods should be useful for many different applications, e.g. to detect and delineate different objects (cells, plants, leaves, ...) in different types of images (different types of microscopes, regular colour photographs, ...), the methods should be easy to adjust. Therefore we developed a methodology relying on probability theory, where all required parameters can easily be estimated by a biologist, without requiring any knowledge of the techniques used in the actual software.
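A probability-based labelling of this kind can be sketched as follows: the only inputs are quantities a biologist could estimate directly (typical foreground and background intensities, their spread, and the expected fraction of foreground pixels). All parameter values below are illustrative assumptions, not those of the dissertation.

```python
# Sketch of Bayesian pixel classification with biologist-estimated
# parameters: Gaussian likelihoods for foreground and background
# intensity, combined with a prior foreground fraction via Bayes' rule.
import math

def gaussian(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior_foreground(intensity, fg_mean=200.0, bg_mean=50.0,
                         std=30.0, fg_prior=0.2):
    """P(foreground | intensity); all parameters are illustrative."""
    p_fg = gaussian(intensity, fg_mean, std) * fg_prior
    p_bg = gaussian(intensity, bg_mean, std) * (1 - fg_prior)
    return p_fg / (p_fg + p_bg)

# Threshold the posterior at 0.5 to label three example intensities.
mask = [posterior_foreground(i) > 0.5 for i in (40, 120, 210)]
# → [False, False, True]
```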
A second cluster of investigated techniques focuses on the analysis of shapes. By defining new features that describe shapes, we are able to automatically classify shapes, retrieve similar shapes from a database, and even analyse how an object deforms through time.
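The feature-based retrieval idea reduces each shape to a small feature vector and returns the database entry closest in feature space. The two features below (area and circularity, 4πA/P²) are a hypothetical minimal choice for illustration, not the descriptors developed in the dissertation.

```python
# Toy shape retrieval: describe each shape by (area, circularity) and
# find the nearest database entry by squared Euclidean distance.
import math

def features(area, perimeter):
    circularity = 4 * math.pi * area / perimeter ** 2   # 1 for a disc
    return (area, circularity)

def most_similar(query, database):
    """Return the key in `database` whose features are nearest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda k: dist(database[k], query))

shapes = {
    "disc":   features(314.0, 62.8),   # near-circular: circularity ~ 1
    "sliver": features(40.0, 82.0),    # long and thin: circularity << 1
}
match = most_similar(features(300.0, 64.0), shapes)   # → "disc"
```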
Blood vessel segmentation and shape analysis for quantification of coronary artery stenosis in CT angiography
This thesis presents an automated framework for quantitative vascular shape analysis of the coronary arteries, which constitutes an important and fundamental component of an automated image-based diagnostic system. Firstly, an automated vessel segmentation algorithm is developed to extract the coronary arteries based on the framework of active contours. Both global and local intensity statistics are utilised in the energy functional calculation, which allows for dealing with non-uniform brightness conditions while evolving the contour towards the desired boundaries without being trapped in local minima. To suppress kissing vessel artifacts, a slice-by-slice correction scheme, based on multiple-region competition, is proposed to identify and track the kissing vessels throughout the transaxial images of the CTA data. Based on the resulting segmentation, we then present a dedicated algorithm to estimate the geometric parameters of the extracted arteries, with a focus on vessel bifurcations. In particular, the centreline and associated reference surface of the coronary arteries, in the vicinity of arterial bifurcations, are determined by registering an elliptical cross-sectional tube to the desired constituent branch. The registration problem is solved by a hybrid optimisation method, combining local greedy search and dynamic programming, which ensures the global optimality of the solution and permits the incorporation of any hard constraints posed on the tube model within a natural and direct framework. In contrast with conventional volume-domain methods, this technique works directly on the mesh domain, thus alleviating the need for image upsampling. The performance of the proposed framework, in terms of efficiency and accuracy, is demonstrated on both synthetic and clinical image data.
Experimental results have shown that our techniques are capable of extracting the major branches of the coronary arteries and estimating the related geometric parameters (i.e., the centreline and the reference surface) with a high degree of agreement with those obtained through manual delineation. In particular, all of the major branches of the coronary arteries are successfully detected by the proposed technique, with a voxel-wise error of 0.73 voxels relative to the manually delineated ground truth data. Through the application of the slice-by-slice correction scheme, the false positive metric for those coronary segments affected by kissing vessel artifacts is reduced from 294% to 22.5%. In terms of the capability of the presented framework to define the location of centrelines across vessel bifurcations, the mean square error (MSE) of the resulting centreline, with respect to the ground truth data, is reduced by an average of 62.3% compared with the initial estimate obtained using a topological-thinning-based algorithm.
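The dynamic-programming half of such a hybrid optimisation can be sketched in miniature: choose one candidate parameter (here a radius index) per slice along a branch, paying a data cost per choice plus a smoothness cost between consecutive slices, and recover the globally optimal sequence Viterbi-style. The cost tables and weights below are invented for illustration, not the thesis's energy model.

```python
# Viterbi-style DP over per-slice radius candidates: global optimum of
# data cost + smoothness cost, with backpointers for path recovery.
def fit_radii(data_cost, smooth_weight=1.0):
    """data_cost[t][r]: cost of radius index r at slice t."""
    n_slices, n_radii = len(data_cost), len(data_cost[0])
    best = [list(data_cost[0])]
    back = []
    for t in range(1, n_slices):
        row, ptr = [], []
        for r in range(n_radii):
            costs = [best[t - 1][p] + smooth_weight * abs(r - p)
                     for p in range(n_radii)]
            p_min = min(range(n_radii), key=costs.__getitem__)
            row.append(costs[p_min] + data_cost[t][r])
            ptr.append(p_min)
        best.append(row)
        back.append(ptr)
    r = min(range(n_radii), key=best[-1].__getitem__)
    path = [r]
    for ptr in reversed(back):
        r = ptr[r]
        path.append(r)
    return path[::-1]

# The noisy middle slice would greedily pick radius 3; the smoothness
# term makes the globally optimal path stay at radius 0.
radii = fit_radii([[0, 9, 9, 9], [9, 9, 9, 0], [0, 9, 9, 9]],
                  smooth_weight=3)   # → [0, 0, 0]
```

Hard constraints (e.g. forbidding certain radii) would simply assign those candidates infinite cost, which is what makes DP a natural fit here.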
2D Fast Vessel Visualization Using a Vessel Wall Mask Guiding Fine Vessel Detection
The paper addresses the fine retinal-vessel detection problem faced in diagnostic applications and aims to assist in better recognising fine vessel anomalies in 2D. Our innovation lies in separating the key visual features that vessels exhibit, in order to make eventual retinopathologies easier to detect. This allows focusing on vessel segments that present fine changes detectable at different sampling scales. We advocate that these changes can be addressed as subsequent stages of the same vessel detection procedure. We first carry out an initial estimate of the basic vessel-wall network, define the main wall body, and then try to approach the ridges and branches of the vasculature using fine detection. Fine vessel screening looks for local structural inconsistencies in vessel properties, for noise, or for unexpected intensity variations observed inside previously identified vessel-body areas. The vessels are first modelled sufficiently, but not precisely, by their walls, with a tubular model structure that is the result of an initial segmentation. This provides a chart of likely Vessel Wall Pixels (VWPs), yielding a likelihood vessel map based mainly on gradient-filter intensity and spatial arrangement parameters (e.g., linear consistency). Specific vessel parameters (centerline, width, location, fall-away rate, main orientation) are then computed by convolving the image with a set of pre-tuned spatial filters called Matched Filters (MFs). These are easily computed as Gaussian-like 2D forms using a limited range of sub-optimal parameters adjusted to the dominant vessel characteristics, obtained from Spatial Grey Level Difference statistics, limiting the search to vessel widths of 16, 32, and 64 pixels. Sparse pixels are effectively eliminated by applying a limited-range Hough Transform (HT) or region growing. The major benefits are limiting the range of parameters, reducing the post-convolution search space to only the masked regions (representing almost 2% of the 2D volume), and a good speed-versus-accuracy trade-off. Results show the potential of our approach in terms of detection time, ROC analysis, and accuracy of vessel pixel (VP) detection.
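The matched-filter response can be shown in one dimension: a vessel cross-section looks like a dark Gaussian valley, so correlating an intensity profile with a zero-mean inverted-Gaussian kernel peaks at vessel centres. The kernel width and sigma below are illustrative; the paper tunes a bank of 2D kernels across orientations and the stated width range.

```python
# 1-D sketch of a vessel matched filter: zero-mean inverted Gaussian
# kernel correlated against an intensity profile. Flat regions score
# ~0 because the kernel sums to zero; dark valleys score high.
import math

def matched_kernel(half_width, sigma):
    ks = [-math.exp(-(x * x) / (2 * sigma * sigma))
          for x in range(-half_width, half_width + 1)]
    mean = sum(ks) / len(ks)
    return [k - mean for k in ks]        # subtract mean: zero response on flat background

def response(profile, kernel):
    """Valid-mode correlation of an intensity profile with the kernel."""
    n, m = len(profile), len(kernel)
    return [sum(profile[i + j] * kernel[j] for j in range(m))
            for i in range(n - m + 1)]

kernel = matched_kernel(2, 1.5)
flat = response([100] * 9, kernel)                     # all ~0
valley = response([100, 100, 60, 40, 60, 100, 100], kernel)
# peak response where the kernel is centred on the dark valley
```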
Methodology for extensive evaluation of semiautomatic and interactive segmentation algorithms using simulated interaction models
The performance of semiautomatic and interactive segmentation (SIS) algorithms is usually evaluated by employing a small number of human operators to segment the images. The human operators typically provide the approximate location of objects of interest and their boundaries in an interactive phase, which is followed by an automatic phase where the segmentation is performed under the constraints of the operator-provided guidance. The segmentation results produced from this small set of interactions do not represent the true capability and potential of the algorithm being evaluated. For example, due to inter-operator variability, human operators may make choices that provide either overestimated or underestimated results. Moreover, their choices may not be realistic when compared to how the algorithm is used in the field, since interaction may be influenced by operator fatigue and lapses in judgement. Other drawbacks to using human operators to assess SIS algorithms include human error, the lack of available expert users, and the expense. A methodology for evaluating segmentation performance is proposed here which uses simulated interaction models to programmatically generate large numbers of interactions, ensuring the presence of interactions throughout the object region. These interactions are used to segment the objects of interest, and the resulting segmentations are then analysed using statistical methods. The large number of interactions generated by simulated interaction models captures the variability in the set of user interactions by considering every pixel inside the entire object region as a potential location for an interaction, with equal probability.
Due to the practical limitation imposed by the enormous computation required to process every possible interaction, uniform sampling of interactions at regular intervals is used to generate a subset of all possible interactions that still represents the diverse pattern of the entire set.
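The uniform sampling step amounts to taking candidate seed positions at a regular stride and keeping only those that fall inside the object mask. The mask and stride below are illustrative.

```python
# Sketch of uniform interaction sampling: seed positions on a regular
# grid, filtered by the object mask so only in-object interactions
# remain.
def sample_interactions(mask, stride):
    """mask: 2-D list of 0/1; returns (row, col) seed positions."""
    return [(r, c)
            for r in range(0, len(mask), stride)
            for c in range(0, len(mask[0]), stride)
            if mask[r][c]]

# 4x6 image whose object occupies the right half
mask = [[1 if c >= 3 else 0 for c in range(6)] for r in range(4)]
seeds = sample_interactions(mask, 2)   # → [(0, 4), (2, 4)]
```

A stride of 1 would recover the exhaustive set; larger strides trade coverage for computation.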
Categorization of interactions into different groups, based on the position of the interaction inside the object region and the texture properties of the image region where the interaction is located, provides the opportunity for fine-grained algorithm performance analysis based on these two criteria. The application of statistical hypothesis testing makes the analysis more accurate, scientific and reliable in comparison to conventional evaluation of semiautomatic segmentation algorithms. The proposed methodology has been demonstrated in two case studies through the implementation of seven different algorithms using three different types of interaction modes, making a total of nine segmentation applications, to assess the efficacy of the methodology. Application of this methodology has revealed in-depth, fine details about the performance of the segmentation algorithms which currently existing methods could not achieve due to the absence of a large, unbiased set of interactions. Practical application of the methodology to a number of algorithms and diverse interaction modes has shown its feasibility and generality, allowing it to be established as an appropriate methodology. Development of this methodology into an application for automatic evaluation of the performance of SIS algorithms looks very promising for users of image segmentation.
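The hypothesis-testing step can be sketched with per-interaction quality scores from two algorithms compared by an exact two-sided sign test; this is a simple stand-in chosen for illustration, not necessarily the test used in the methodology, and the scores are invented.

```python
# Exact two-sided sign test on paired per-interaction scores (e.g.
# Dice coefficients from two SIS algorithms over the same simulated
# interactions). Ties are discarded, as is standard for the sign test.
import math

def sign_test(scores_a, scores_b):
    """Returns the exact two-sided p-value for paired scores."""
    diffs = [a - b for a, b in zip(scores_a, scores_b) if a != b]
    n = len(diffs)
    wins = sum(d > 0 for d in diffs)
    tail = sum(math.comb(n, k) for k in range(min(wins, n - wins) + 1))
    return min(1.0, 2 * tail / 2 ** n)

# Illustrative scores: all 8 pairs favour algorithm A.
a = [0.91, 0.88, 0.93, 0.90, 0.92, 0.89, 0.94, 0.90]
b = [0.85, 0.86, 0.88, 0.87, 0.90, 0.84, 0.89, 0.88]
p = sign_test(a, b)   # → 2/256 = 0.0078125, significant at 0.05
```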