40,078 research outputs found

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain because the contrast between grey and white matter is reversed, which causes pixels containing tissue mixtures to be incorrectly labelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the neonatal cortex. The performance of the method is investigated through a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm for aligning cortical surfaces is developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
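
    The core segmentation step described above can be pictured as an expectation-maximization fit of a mixture of tissue intensity distributions. The following is a minimal sketch of that idea, assuming a plain two-class Gaussian mixture over voxel intensities; it omits the dissertation's explicit partial volume correction and any spatial prior, and the initialization and convergence threshold are illustrative choices only.

```python
import numpy as np

def em_tissue_segmentation(intensities, n_classes=2, n_iter=50, tol=1e-5):
    """Toy Gaussian-mixture EM over voxel intensities.

    A simplified stand-in for the EM tissue classification step described
    above; it has no spatial prior and no explicit partial volume handling.
    """
    x = intensities.astype(float).ravel()
    # Initialise class means from intensity percentiles, equal weights.
    mu = np.percentile(x, np.linspace(20, 80, n_classes))
    var = np.full(n_classes, x.var() / n_classes + 1e-6)
    w = np.full(n_classes, 1.0 / n_classes)

    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class for every voxel.
        lik = np.stack([
            w[k] * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) / np.sqrt(2 * np.pi * var[k])
            for k in range(n_classes)
        ])
        total = lik.sum(axis=0) + 1e-12
        resp = lik / total

        # M-step: update class weights, means and variances.
        nk = resp.sum(axis=1)
        w = nk / x.size
        mu = (resp @ x) / nk
        var = (resp * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / nk + 1e-6

        ll = np.log(total).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll

    # Hard labels; voxels with nearly equal posteriors are partial-volume candidates.
    return resp.argmax(axis=0).reshape(intensities.shape)
```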

    Image Segmentation, Registration, Compression, and Matching

    A novel computational framework was developed for 2D affine invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking in image sequences. The AIPS is formed by the parameters of an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework become very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge of scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, supporting first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken to achieve automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity/topology components of the generated models. The highly efficient triangular mesh compression compacts the connectivity information at a rate of 1.5-4 bits per vertex (on average for triangle meshes), while reducing the 3D geometry by 40-50 percent. Finally, taking into consideration the characteristics of 3D terrain data, and using the innovative, regularized binary decomposition mesh modeling, a multistage, pattern-driven modeling and compression technique has been developed to provide an effective framework for compressing digital elevation model (DEM) surfaces, high-resolution aerial imagery, and other types of NASA data.
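
    The key property behind the AIPS is that the coefficients of an affine combination of feature points do not change when the whole image plane undergoes an affine transformation. The sketch below demonstrates that invariance for a single point expressed against a basis of three feature points; the basis-of-three construction and the example coordinates are illustrative assumptions, not the reported AIPS metric itself.

```python
import numpy as np

def affine_combination_coeffs(p, basis):
    """Coefficients (a, b, c), with a + b + c = 1, such that p = a*q0 + b*q1 + c*q2."""
    q0, q1, q2 = basis
    # Solve a 2x2 system for b and c; a follows from the affine constraint.
    A = np.column_stack([q1 - q0, q2 - q0])
    b, c = np.linalg.solve(A, p - q0)
    return np.array([1.0 - b - c, b, c])

# A point, three non-collinear feature points, and an arbitrary affine map.
p = np.array([2.0, 3.0])
basis = [np.array([0.0, 0.0]), np.array([4.0, 1.0]), np.array([1.0, 5.0])]
M = np.array([[1.2, -0.3], [0.5, 0.9]])   # linear part of the affine map
t = np.array([10.0, -2.0])                # translation

coeffs_before = affine_combination_coeffs(p, basis)
coeffs_after = affine_combination_coeffs(M @ p + t, [M @ q + t for q in basis])
assert np.allclose(coeffs_before, coeffs_after)   # coefficients are affine invariant
```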

    Development and evaluation of image registration and segmentation algorithms for long wavelength infrared and visible wavelength images

    In this thesis, algorithms for image registration and segmentation are developed to locate and identify DU penetrators and associated metal projectile debris on or near the surface at US DoD firing ranges and proving grounds. The proposed registration algorithm supports fusing the LWIR and visible images. Control points are identified by area-based detection, followed by elimination of outliers. Combined with bilinear interpolation, the centers of gravity of the control points are used to estimate the transformation parameters. A segmentation method based on a statistical detector is developed to improve the fusion result. The power spectral density is used to extract and identify image properties, and the probability of each pixel being classified as target further informs the decision. The final result is consistent with the true scene and carries distinct target information. The combination of registration and segmentation approaches can effectively locate and investigate the target area.
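
    As a rough sketch of the control-point stage described above, the snippet below estimates 2D affine transformation parameters by linear least squares from matched control-point centers of gravity; the affine model and the example coordinates are assumptions made for illustration rather than the thesis's exact estimator.

```python
import numpy as np

def estimate_affine_2d(src, dst):
    """Least-squares 2D affine transform mapping src control points onto dst.

    src, dst: (N, 2) arrays of matched control-point coordinates, e.g. the
    centers of gravity of control regions detected in the two modalities.
    Returns a 2x3 matrix [A | t] such that dst ≈ src @ A.T + t.
    """
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                        # (N, 3) design matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (3, 2) solution
    A = params[:2].T                                  # 2x2 linear part
    t = params[2]                                     # translation
    return np.hstack([A, t[:, None]])

# Hypothetical matched control points from the LWIR and visible images (pixels).
src = np.array([[10.0, 12.0], [40.0, 15.0], [22.0, 48.0], [55.0, 60.0]])
dst = np.array([[12.5, 10.0], [43.0, 14.2], [25.1, 45.9], [58.4, 57.7]])
print(estimate_affine_2d(src, dst))
```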

    CortexMorph: fast cortical thickness estimation via diffeomorphic registration using VoxelMorph

    The thickness of the cortical band is linked to various neurological and psychiatric conditions, and is often estimated through surface-based methods such as FreeSurfer in MRI studies. The DiReCT method, which calculates cortical thickness using a diffeomorphic deformation of the gray-white matter interface towards the pial surface, offers an alternative to surface-based methods. Recent studies using a synthetic cortical thickness phantom have demonstrated that the combination of DiReCT and deep-learning-based segmentation is more sensitive to subvoxel cortical thinning than FreeSurfer. While anatomical segmentation of a T1-weighted image now takes seconds, existing implementations of DiReCT rely on iterative image registration methods which can take up to an hour per volume. On the other hand, learning-based deformable image registration methods like VoxelMorph have been shown to be faster than classical methods while improving registration accuracy. This paper proposes CortexMorph, a new method that employs unsupervised deep learning to directly regress the deformation field needed for DiReCT. By combining CortexMorph with a deep-learning-based segmentation model, it is possible to estimate region-wise thickness in seconds from a T1-weighted image, while maintaining the ability to detect cortical atrophy. We validate this claim on the OASIS-3 dataset and the synthetic cortical thickness phantom of Rusak et al. Comment: Accepted (early acceptance) at MICCAI 2023.
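
    The idea of reading cortical thickness off a registration can be sketched very simply: given a displacement field that carries the gray-white interface toward the pial surface, the displacement magnitude at interface voxels is a thickness proxy. The code below is such a sketch under strong simplifying assumptions (a precomputed dense displacement field in voxel units, isotropic spacing); it is not CortexMorph's actual network or the DiReCT integration scheme.

```python
import numpy as np

def thickness_from_displacement(interface_mask, displacement, voxel_size_mm=1.0):
    """Crude cortical-thickness proxy from a registration displacement field.

    interface_mask: boolean (D, H, W) mask of gray-white interface voxels.
    displacement:   (D, H, W, 3) field, in voxel units, carrying the interface
                    toward the pial surface.
    Returns per-voxel displacement magnitudes in millimetres at the interface;
    their mean over a region gives a region-wise thickness estimate.
    """
    disp_at_interface = displacement[interface_mask]        # (N, 3)
    magnitudes = np.linalg.norm(disp_at_interface, axis=1)  # in voxels
    return magnitudes * voxel_size_mm

# Toy example: a flat interface displaced by 2.5 voxels along the first axis.
mask = np.zeros((8, 8, 8), dtype=bool)
mask[4] = True
field = np.zeros((8, 8, 8, 3))
field[mask] = [2.5, 0.0, 0.0]
print(thickness_from_displacement(mask, field).mean())  # 2.5
```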

    Mesh-to-raster based non-rigid registration of multi-modal images

    Region of interest (ROI) alignment in medical images plays a crucial role in diagnostics, procedure planning, treatment, and follow-up. Frequently, a model is represented as a triangulated mesh while the patient data is provided from CAT scanners as pixel or voxel data. Previously, we presented a 2D method for curve-to-pixel registration. This paper contributes (i) a general mesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a 3D surface-to-voxel application; and (iii) a comprehensive quantitative evaluation in 2D using ground truth provided by the simultaneous truth and performance level estimation (STAPLE) method. The registration is formulated as a minimization problem whose objective consists of a data term, which involves the signed distance function of the ROI from the reference image, and a higher-order elastic regularizer for the deformation. The evaluation is based on quantitative light-induced fluorescence (QLF) and digital photography (DP) of decalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each showing one corresponding tooth in both modalities. The ROI in each image is manually marked by three experts (900 curves in total). In the QLF-DP setting, our approach significantly outperforms the mutual information-based registration algorithm implemented in the Insight Segmentation and Registration Toolkit (ITK) and Elastix.
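
    The objective described above, a signed-distance data term plus an elastic regularizer, can be sketched in a few lines for the 2D curve-to-pixel case. In the toy version below, the signed distance image, the per-point displacement parametrization, the second-difference stand-in for the higher-order elastic term, and the weight alpha are all assumptions for illustration; minimizing this objective over the displacement with a generic optimizer would correspond to the registration step.

```python
import numpy as np
from scipy import ndimage

def signed_distance(roi_mask):
    """Signed distance of a binary ROI mask: negative inside, positive outside."""
    inside = ndimage.distance_transform_edt(roi_mask)
    outside = ndimage.distance_transform_edt(~roi_mask)
    return outside - inside

def m2r_objective(curve_points, displacement, sdf, alpha=0.1):
    """Toy curve-to-raster registration objective.

    curve_points: (N, 2) model curve in (row, col) image coordinates.
    displacement: (N, 2) per-point displacement being optimized.
    sdf:          signed distance image of the ROI in the reference raster.
    alpha:        weight of a second-difference (bending) penalty standing in
                  for the higher-order elastic regularizer.
    """
    moved = curve_points + displacement
    # Data term: the displaced curve should lie on the ROI boundary (sdf == 0).
    sampled = ndimage.map_coordinates(sdf, moved.T, order=1)
    data_term = np.mean(sampled ** 2)
    # Regularizer: penalize curvature of the displacement along the curve.
    bending = np.diff(displacement, n=2, axis=0)
    reg_term = np.mean(bending ** 2) if len(bending) else 0.0
    return data_term + alpha * reg_term
```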

    On the Property Rights System of the State Enterprises in China

    Detailed analysis of spinal deformity is important within orthopaedic healthcare, in particular for the assessment of idiopathic scoliosis. This paper addresses this challenge by proposing an image analysis method capable of providing a full three-dimensional characterization of the spine. The proposed method is based on the registration of a highly detailed spine model to image data from computed tomography. The registration process provides an accurate segmentation of each individual vertebra and the ability to derive various measures describing the spinal deformity. The derived measures are estimated from landmarks attached to the spine model and transferred to the patient data according to the registration result. Evaluation of the method yields an average point-to-surface error of 0.9 ± 0.9 mm (comparing segmentations) and an average target registration error of 2.3 ± 1.7 mm (comparing landmarks). Comparing automatic and manual measurements of axial vertebral rotation gives a mean absolute difference of 2.5° ± 1.8°, which is on a par with other computerized methods for assessing axial vertebral rotation. A significant advantage of our method, compared to other computerized methods for rotational measurements, is that it does not rely on vertebral symmetry for computing the rotational measures. The proposed method is fully automatic and computationally efficient, requiring only three to four minutes to process an entire image volume covering vertebrae L5 to T1. Given the use of landmarks, the method can be readily adapted to estimate other measures describing a spinal deformity by changing the set of employed landmarks. In addition, given the relatively low point-to-surface error, the method has the potential to be used for accurate segmentation of the vertebrae in routine computed tomography examinations.
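
    Because the derived measures come from landmarks transferred by the registration, a quantity such as axial vertebral rotation reduces to an angle computed from landmark coordinates, and the target registration error to a mean landmark distance. The sketch below illustrates both with a hypothetical bilateral landmark pair; it is not the paper's definition of the rotational measure.

```python
import numpy as np

def axial_vertebral_rotation(left_landmark, right_landmark):
    """Rotation of one vertebra about the cranio-caudal axis, in degrees.

    left_landmark, right_landmark: (x, y, z) coordinates of a bilateral
    landmark pair transferred from the spine model to the patient data.
    The angle is measured between their connecting line, projected onto the
    axial (x, y) plane, and the patient's left-right (x) axis.
    """
    d = np.asarray(right_landmark, float) - np.asarray(left_landmark, float)
    return np.degrees(np.arctan2(d[1], d[0]))

def target_registration_error(transferred, reference):
    """Mean Euclidean distance between transferred and reference landmarks (mm)."""
    diff = np.asarray(transferred, float) - np.asarray(reference, float)
    return np.linalg.norm(diff, axis=1).mean()

# Hypothetical bilateral landmarks on one vertebra (mm).
print(axial_vertebral_rotation([-15.0, 2.0, 110.0], [14.0, 4.5, 110.5]))  # ~4.9 degrees
```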

    Physiological basis and image processing in functional magnetic resonance imaging: Neuronal and motor activity in brain

    Functional magnetic resonance imaging (fMRI) has recently developed into an imaging modality used for mapping the hemodynamics of neuronal and motor event-related tissue blood oxygen level dependence (BOLD) in terms of brain activation. Image processing is performed by segmentation and registration methods. Segmentation algorithms provide surface-based analysis of the brain and automated anatomical labeling of cortical fields in magnetic resonance data sets based on oxygen metabolic state. Registration algorithms combine geometric features from two or more imaging modalities to ensure clinically useful neuronal and motor information about brain activation. This review article summarizes the physiological basis of the fMRI signal, its origin, contrast enhancement, physical factors, anatomical labeling by segmentation, and registration approaches, with examples of visual and motor activity in the brain. The latest developments in clinical applications of fMRI are reviewed, along with other neurophysiological and imaging modalities.
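
    To make the activation-mapping idea concrete, the sketch below correlates every voxel's BOLD time series with a block task regressor and thresholds the result into a crude activation map. The block timing, the threshold, and the omission of a hemodynamic response model are simplifying assumptions, and the code does not represent any particular fMRI analysis package.

```python
import numpy as np

def activation_map(bold, task_regressor, threshold=0.5):
    """Correlate every voxel's time series with a task regressor.

    bold:           (X, Y, Z, T) array of BOLD signal values.
    task_regressor: (T,) array, e.g. a 0/1 block design (ideally convolved with
                    a hemodynamic response function, which is omitted here).
    Returns a boolean (X, Y, Z) map of voxels whose correlation with the task
    exceeds the threshold.
    """
    t = bold.shape[-1]
    series = bold.reshape(-1, t)
    series = series - series.mean(axis=1, keepdims=True)
    reg = task_regressor - task_regressor.mean()
    denom = np.linalg.norm(series, axis=1) * np.linalg.norm(reg) + 1e-12
    corr = series @ reg / denom
    return (corr > threshold).reshape(bold.shape[:-1])

# Toy 20-volume run with alternating 5-volume rest/task blocks.
regressor = np.tile(np.r_[np.zeros(5), np.ones(5)], 2)
bold = np.random.default_rng(0).normal(size=(4, 4, 4, 20))
bold[1, 1, 1] += 3 * regressor          # inject a task-locked response in one voxel
print(np.argwhere(activation_map(bold, regressor)))
```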

    CartiMorph: a framework for automated knee articular cartilage morphometrics

    We introduce CartiMorph, a framework for automated knee articular cartilage morphometrics. It takes an image as input and generates quantitative metrics for cartilage subregions, including the percentage of full-thickness cartilage loss (FCL), mean thickness, surface area, and volume. CartiMorph leverages the power of deep learning models for hierarchical image feature representation. Deep learning models were trained and validated for tissue segmentation, template construction, and template-to-image registration. We established methods for surface-normal-based cartilage thickness mapping, FCL estimation, and rule-based cartilage parcellation. Our cartilage thickness map showed less error in thin and peripheral regions. We evaluated the effectiveness of the adopted segmentation model by comparing the quantitative metrics obtained from model segmentation with those from manual segmentation. The root-mean-squared deviation of the FCL measurements was less than 8%, and strong correlations were observed for the mean thickness (Pearson's correlation coefficient ρ ∈ [0.82, 0.97]), surface area (ρ ∈ [0.82, 0.98]), and volume (ρ ∈ [0.89, 0.98]) measurements. We compared our FCL measurements with those from a previous study and found that our measurements deviated less from the ground truths. We observed superior performance of the proposed rule-based cartilage parcellation method compared with the atlas-based approach. CartiMorph has the potential to promote imaging biomarker discovery for knee osteoarthritis. Comment: To be published in Medical Image Analysis.
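
    The agreement figures quoted above, root-mean-squared deviation and Pearson correlation between automatically and manually derived metrics, can be computed from paired measurement vectors as sketched below; the example values are hypothetical and serve only to show how such agreement statistics are obtained.

```python
import numpy as np
from scipy import stats

def agreement(model_values, manual_values):
    """Root-mean-squared deviation and Pearson correlation between paired
    measurements, e.g. per-subregion mean cartilage thickness obtained from
    automated versus manual segmentation."""
    a = np.asarray(model_values, float)
    b = np.asarray(manual_values, float)
    rmsd = np.sqrt(np.mean((a - b) ** 2))
    rho, p_value = stats.pearsonr(a, b)
    return rmsd, rho, p_value

# Hypothetical mean-thickness measurements (mm) for a handful of subregions.
model_based = [2.1, 1.8, 2.6, 3.0, 2.4]
manual = [2.0, 1.9, 2.5, 3.1, 2.3]
print(agreement(model_based, manual))
```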

    Medical image segmentation using object atlas versus object cloud models

    Medical image segmentation is crucial for quantitative organ analysis and surgical planning. Since interactive segmentation is not practical in a production-mode clinical setting, automatic methods based on 3D object appearance models have been proposed. Among them, approaches based on object atlas are the most actively investigated. A key drawback of these approaches is that they require a time-costly image registration process to build and deploy the atlas. Object cloud models (OCM) have been introduced to avoid registration, considerably speeding up the whole process, but they have not been compared to object atlas models (OAM). The present paper fills this gap by presenting a comparative analysis of the two approaches in the task of individually segmenting nine anatomical structures of the human body. Our results indicate that OCM achieve statistically significantly better accuracy for seven of the anatomical structures, in terms of Dice Similarity Coefficient and Average Symmetric Surface Distance.
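
    The two evaluation metrics used above, the Dice Similarity Coefficient and the Average Symmetric Surface Distance, can be computed from binary segmentation masks as sketched below; extracting the surface by binary erosion and assuming isotropic voxel spacing are simplifications.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def average_symmetric_surface_distance(a, b, spacing=1.0):
    """ASSD between the surfaces of two binary masks, assuming isotropic spacing."""
    a, b = a.astype(bool), b.astype(bool)
    # Surface voxels: mask voxels removed by one step of binary erosion.
    surf_a = a & ~ndimage.binary_erosion(a)
    surf_b = b & ~ndimage.binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = ndimage.distance_transform_edt(~surf_b) * spacing
    dist_to_a = ndimage.distance_transform_edt(~surf_a) * spacing
    d_ab = dist_to_b[surf_a]
    d_ba = dist_to_a[surf_b]
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
```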