
    Diffeomorphic image registration with applications to deformation modelling between multiple data sets

    In recent years, diffeomorphic image registration algorithms have been successfully introduced into the field of medical image analysis. At the same time, the particular usability of these techniques, largely derived from their solid mathematical background, has been only quantitatively explored for a limited range of applications, such as longitudinal studies of treatment quality or disease progression. This thesis considers deformable image registration algorithms, seeking out those that maintain the medical correctness of the estimated dense deformation fields in terms of preserving the topology of the object and its neighbourhood, offer reasonable computational complexity to satisfy the time restrictions imposed by potential applications, and are able to cope with the low-quality data typically encountered in Adaptive Radiotherapy (ART). The research has led to the main emphasis being placed on diffeomorphic image registration to achieve a one-to-one mapping between images. This involves introducing a log-domain parameterisation of the deformation field through its approximation by a stationary velocity field. A quantitative and qualitative examination of existing and newly proposed algorithms for pairwise deformable image registration, presented in this thesis, shows that the log-Euclidean parameterisation can be successfully utilised in biomedical applications. Although algorithms utilising the log-domain parameterisation have a theoretical justification for maintaining diffeomorphism, in general the deformation fields they produce have properties similar to those estimated by classical methods. With this in mind, the best compromise in terms of the quality of the deformation fields has been found for the consistent image registration framework. The experimental results also suggest that image registration with symmetric warping of the input images outperforms the classical approaches, and can easily be introduced into most known algorithms. Furthermore, a log-domain implicit group-wise image registration is proposed. By linking sets of images related to different subjects, the proposed registration approach establishes a common subject space and between-subject correspondences therein. Although correspondences between groups of images can be found by performing classic image registration, the reference image selection (not required in the proposed implementation) may lead to a biased mean image and a common subject space that does not adequately represent the general properties of the data sets. The approaches to diffeomorphic image registration have also been utilised as the principal elements of two applications: estimating the movements of organs in the pelvic area with a dense deformation field prediction system driven by partial information from a specific type of measurement, parameterised using an implicit surface representation; and recognising facial expressions, where stationary velocity fields are used as facial expression descriptors. Both applications have been extensively evaluated on real, representative data sets of three-dimensional volumes and two-dimensional images, and the results indicate the practical usability of the proposed techniques.
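    To make the log-domain parameterisation concrete, the sketch below recovers a displacement field from a stationary velocity field by scaling and squaring, the standard construction behind log-Euclidean methods: since the velocity field is stationary, exp(v) can be computed as (exp(v/2^n))^(2^n), where exp(v/2^n) is approximated by v/2^n for a sufficiently small step. This is a minimal 2-D illustration under stated assumptions, not the thesis's implementation; the function names and the use of scipy.ndimage.map_coordinates for interpolation are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi, psi):
    """Compose two 2-D displacement fields (shape (2, h, w)):
    the composed displacement is psi(x) + phi(x + psi(x))."""
    h, w = phi.shape[1:]
    grid = np.mgrid[0:h, 0:w].astype(float)
    coords = grid + psi                       # sample phi at the warped positions
    warped = np.stack([map_coordinates(phi[c], coords, order=1, mode='nearest')
                       for c in range(2)])
    return warped + psi

def exp_velocity(v, n_steps=6):
    """Exponentiate a stationary velocity field by scaling and squaring:
    exp(v) ~ (exp(v / 2^n))^(2^n), with exp(v / 2^n) ~ v / 2^n."""
    phi = v / (2.0 ** n_steps)                # scaling: small initial displacement
    for _ in range(n_steps):
        phi = compose(phi, phi)               # squaring: phi <- phi o phi
    return phi
```

    Because the result is built by composing maps that are each close to the identity, the recovered deformation stays invertible in practice, which is what gives the log-domain parameterisation its theoretical guarantee of diffeomorphism.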

    Robust Algorithms for Registration of 3D Images of Human Brain

    This thesis is concerned with the process of automatically aligning 3D medical images of the human brain. It concentrates on rigid-body matching of Positron Emission Tomography (PET) and Magnetic Resonance (MR) images within one patient, and on non-linear matching of PET images of different patients. In recent years, mutual information has proved to be an excellent criterion for automatic registration of intra-individual images from different modalities. We propose and evaluate a method that combines a multi-resolution optimization of mutual information with an efficient segmentation of background voxels and a modified principal axes algorithm. We show that an acceleration factor of 6-7 can be achieved without loss of accuracy, and that the method significantly reduces the rate of unsuccessful registrations. Emphasis was also placed on creating an automatic registration system that could be used routinely in a clinical environment. Non-linear registration aims to reduce the inter-individual variability of shape and structure between two brain images by deforming one image so that homologous regions in both images become aligned. It is an important step in many procedures in medical image processing and analysis. We present a novel algorithm for automatic non-linear registration of PET images based on hierarchical volume subdivisions and local affine optimizations. It produces a C2-continuous deformation function and guarantees that the deformation is one-to-one. The performance of the algorithm was evaluated on more than 600 clinical PET images.
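    As a concrete illustration of the registration criterion, the following sketch estimates mutual information from the joint intensity histogram of two aligned images; a rigid-body optimiser would maximise this value over pose parameters at successive resolutions. The histogram-based estimator and the bin count are generic assumptions for illustration, not details taken from the thesis.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two spatially aligned images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

    Mutual information peaks when one image's intensities are most predictable from the other's, which is why it works across modalities such as PET and MR where intensity values are related but not linearly correlated.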

    Lung nodule modeling and detection for computerized image analysis of low dose CT imaging of the chest.

    From a computerized image analysis perspective, early diagnosis of lung cancer involves detection of suspicious nodules and classification into different pathologies. The detection stage involves a detection approach, usually by template matching, and an authentication step to reduce false positives, usually conducted by a classifier of one form or another; statistical, fuzzy-logic, and support vector machine approaches have been tried. The classification stage matches, according to a particular approach, the characteristics (e.g., shape, texture and spatial distribution) of the detected nodules to the common characteristics (again, shape, texture and spatial distribution) of nodules with known pathologies (confirmed by biopsies). This thesis focuses on the first step, i.e., nodule detection. Specifically, the thesis addresses three issues: a) understanding the CT data of typical low dose CT (LDCT) scanning of the chest, and devising an image processing approach to reduce the inherent artifacts in the scans; b) devising an image segmentation approach to isolate the lung tissues from the rest of the chest and thoracic regions in the CT scans; and c) devising a nodule modeling methodology to enhance the detection rate and lend benefits for the ultimate step in computerized image analysis of LDCT of the lungs, namely associating a pathology to the detected nodule. The methodology for reducing the noise artifacts is based on noise analysis and examination of typical LDCT scans that may be gathered in a repetitive fashion, since a reduction in resolution is inevitable to avoid excessive radiation. Two optimal filtering methods are tested on samples of the ELCAP screening data: the Wiener and the Anisotropic Diffusion filters. Preference is given to the Anisotropic Diffusion filter, which can be implemented on 7x7 blocks/windows of the CT data. The methodology for lung segmentation is based on the inherent characteristics of the LDCT scans, which exhibit a distinct bi-modal gray-scale histogram. A linear model is used to describe the histogram (the joint probability density function of the lung and non-lung tissues) by a linear combination of weighted kernels. Gaussian kernels were chosen, and the classic Expectation-Maximization (EM) algorithm was employed to estimate the marginal probability densities of the lung and non-lung tissues and to select an optimal segmentation threshold. The segmentation is further enhanced using standard shape analysis based on mathematical morphology, which improves the continuity of the outer and inner borders of the lung tissues. This approach (a preliminary version of which appeared in [14]) is found to be adequate for lung segmentation compared to more sophisticated approaches developed at the CVIP Lab (e.g., [15][16]) and elsewhere. The methodology developed for nodule modeling is based on understanding the physical characteristics of the nodules in LDCT scans, as identified by human experts. An empirical model is introduced for the probability density of the image intensity (or Hounsfield units) versus the radial distance measured from the centroid (center of mass) of typical nodules. This probability density showed that the nodule spatial support is within a circle/square of size 10 pixels, i.e., limited to 5 mm in length, which is within the range that radiologists specify to be of concern. This probability density is used to fill in the intensity (or Hounsfield units) of parametric nodule models. For these models (e.g., circles or semi-circles), given a certain radius, we calculate the intensity (or Hounsfield units) using an exponential expression for the radial distance, with parameters specified from the histogram of an ensemble of typical nodules. This work is similar in spirit to the earlier work of Farag et al., 2004 and 2005 [18][19], except that the empirical density of the radial distance and the histogram of typical nodules provide a data-driven guide for estimating the intensity (or Hounsfield units) of the nodule models. We examined the sensitivity and specificity of parametric nodules in a template-matching framework for nodule detection. We show that false positives are an inevitable problem with typical machine learning methods of automatic lung nodule detection, which invites further efforts and perhaps fresh thinking into automatic nodule detection. A new approach for nodule modeling is introduced in Chapter 5 of this thesis, which shows great promise for both the detection and the classification of nodules. Using the ELCAP study, we created an ensemble of four types of nodules and generated a nodule model for each type based on optimal data reduction methods. The resulting nodule model, for each type, has led to drastic improvements in the sensitivity and specificity of nodule detection. This approach may be used for classification as well. In conclusion, the methodologies in this thesis are based on understanding LDCT scans and what is to be expected in terms of image quality. Noise reduction and image segmentation are standard. The thesis illustrates that proper nodule models are possible, and that a computerized image analysis approach to detect and classify lung nodules is indeed feasible. Extensions to the results in this thesis are immediate, and the CVIP Lab has devised plans to pursue subsequent steps using clinical data.
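    To illustrate the kind of parametric nodule model described above, the sketch below builds a circular template whose intensity decays exponentially with radial distance from the centroid, restricted to roughly a 10-pixel support. The decay and peak parameters here are illustrative placeholders; the thesis fits such parameters from histograms of real nodule ensembles.

```python
import numpy as np

def circular_nodule_template(radius=5, decay=0.4, peak=1.0):
    """Parametric circular nodule: intensity falls off exponentially with
    radial distance r from the centroid, q(r) = peak * exp(-decay * r),
    limited to a (2*radius+1)-pixel square support. The decay rate is an
    illustrative placeholder, not the histogram-fitted value."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r = np.hypot(x, y)                       # radial distance per pixel
    template = peak * np.exp(-decay * r)
    template[r > radius] = 0.0               # restrict support to the circle
    return template
```

    In a template-matching framework, such a template would be correlated (e.g., by normalized cross-correlation) against the segmented lung field, with local maxima flagged as candidate nodules for the subsequent false-positive reduction step.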

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate the study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate the study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed; this causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the cortex in neonates. The performance of the method is investigated through a detailed landmark study. To facilitate the study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
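    The surface registration step relies on free-form deformations; the sketch below shows the core FFD machinery, a tensor product of cubic B-splines weighting a 4x4 neighbourhood of control-point displacements, reduced to 2-D for brevity. The function names and the lattice-padding convention are assumptions for illustration, not the dissertation's code.

```python
import numpy as np

def bspline_basis(u):
    """The four cubic B-spline basis functions evaluated at u in [0, 1);
    they are non-negative and sum to one."""
    return np.array([(1 - u) ** 3,
                     3 * u ** 3 - 6 * u ** 2 + 4,
                     -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1,
                     u ** 3]) / 6.0

def ffd_displacement(point, control, spacing):
    """FFD displacement at a 2-D point: cubic B-splines weight the 4x4
    neighbourhood of control-point displacements. `control` has shape
    (ny, nx, 2); the lattice is assumed padded so the window stays inside."""
    cell = np.floor(point / spacing).astype(int)   # containing lattice cell
    u = point / spacing - cell                     # local coordinates in [0, 1)
    by, bx = bspline_basis(u[0]), bspline_basis(u[1])
    disp = np.zeros(2)
    for l in range(4):
        for m in range(4):
            disp += by[l] * bx[m] * control[cell[0] + l - 1, cell[1] + m - 1]
    return disp
```

    Because each B-spline has local support, moving one control point only deforms its neighbourhood, which is what lets the registration capture local cortical changes; extending the sketch to 3-D adds a third tensor factor.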

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor imaging systems and applications. With a specific focus on pixel-level image fusion, the processing stage that follows image registration, we developed a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost, based upon spatial mixture analysis. The segment weighted average image fusion combines several low-spatial-resolution data sources from different sensors to create a high-resolution, large fused image. This research includes developing a segment-based step built upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. The implementation of these image fusion algorithms is based on the graphical user interface we developed. Multi-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. By using quantitative estimates such as mutual information, we obtain quantifiable experimental results. We also use the image morphing technique to generate fused image sequences to simulate the results of image fusion. While developing our pixel-level image fusion algorithms, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make such methods hard to deploy in systems and applications that require real-time feedback, high flexibility, and low computational cost.
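    As a minimal example of low-cost pixel-level fusion in the spirit described above, the sketch below weights each co-registered source by its local variance, so that locally sharper sensors dominate the fused result. This generic weighted-average scheme is an assumption for illustration, not the thesis's segment-based algorithm; the window size and the variance-based saliency measure are likewise assumed.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def weighted_average_fusion(images, window=7):
    """Pixel-level fusion of co-registered single-channel images: each
    source is weighted per pixel by its local variance (a simple saliency
    measure), then the weighted sources are averaged."""
    stack = np.stack([img.astype(float) for img in images])
    mean = uniform_filter(stack, size=(1, window, window))
    sq_mean = uniform_filter(stack ** 2, size=(1, window, window))
    weights = sq_mean - mean ** 2 + 1e-8      # local variance per source
    weights /= weights.sum(axis=0)            # normalise weights across sources
    return (weights * stack).sum(axis=0)
```

    Each pixel of the output needs only a handful of arithmetic operations over the source stack, which is the kind of computational budget that keeps pixel-level fusion viable for real-time, resource-constrained applications.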