
    Polarimetric SAR Image Segmentation with B-Splines and a New Statistical Model

    We present an approach to region boundary detection in polarimetric Synthetic Aperture Radar (SAR) images based on B-Spline active contours and a new model for polarimetric SAR data, the GHP distribution. To detect the boundary of a region, initial B-Spline curves are specified, either automatically or manually, and the proposed algorithm refines them with a deformable-contours technique. In doing so, the parameters of the polarimetric GHP model are estimated from the data in order to locate the transition points between the region being segmented and the surrounding area. The algorithm is local, since it operates only on the region to be segmented. Results illustrating its performance are presented.
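
    As a hedged sketch of the two ingredients this abstract describes (not the authors' code), the code below evaluates a closed cubic B-spline contour defined by a set of control points and locates a transition point along a 1-D data profile by maximising a two-segment likelihood split; a Gaussian model stands in for the polarimetric GHP distribution, whose estimators are not reproduced here, and all data are synthetic.

        # Minimal sketch: closed B-spline contour evaluation plus a transition-point
        # search along a 1-D profile. A Gaussian two-segment likelihood stands in
        # for the polarimetric GHP model fit described in the abstract.
        import numpy as np
        from scipy.interpolate import splev

        def closed_bspline(control_points, n_samples=200, degree=3):
            """Evaluate a closed cubic B-spline defined by 2-D control points."""
            pts = np.vstack([control_points, control_points[:degree]])  # wrap around
            n = len(pts)
            knots = np.arange(-degree, n + 1, dtype=float)               # uniform knots
            tck = [knots, [pts[:, 0], pts[:, 1]], degree]
            u = np.linspace(0.0, n - degree, n_samples)
            x, y = splev(u, tck)
            return np.column_stack([x, y])

        def transition_point(profile):
            """Index that best splits a 1-D profile into two homogeneous segments
            (maximum two-segment Gaussian log-likelihood); a stand-in for the GHP fit."""
            n = len(profile)
            best_k, best_ll = None, -np.inf
            for k in range(5, n - 5):                    # keep a few samples per side
                left, right = profile[:k], profile[k:]
                ll = (-k * np.log(left.std() + 1e-9)
                      - (n - k) * np.log(right.std() + 1e-9))
                if ll > best_ll:
                    best_k, best_ll = k, ll
            return best_k

        if __name__ == "__main__":
            square = np.array([[0, 0], [0, 10], [10, 10], [10, 0]], float)
            contour = closed_bspline(square)
            print("contour samples:", contour.shape)
            # synthetic intensity profile with a change of statistics halfway along
            rng = np.random.default_rng(0)
            profile = np.r_[rng.gamma(2.0, 1.0, 60), rng.gamma(8.0, 1.0, 40)]
            print("estimated transition index:", transition_point(profile))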

    Computerized Analysis of Magnetic Resonance Images to Study Cerebral Anatomy in Developing Neonates

    The study of cerebral anatomy in developing neonates is of great importance for understanding brain development during the early period of life. This dissertation therefore focuses on three challenges in the modelling of cerebral anatomy in neonates during brain development. The methods that have been developed all use Magnetic Resonance Images (MRI) as source data. To facilitate study of vascular development in the neonatal period, a set of image analysis algorithms is developed to automatically extract and model cerebral vessel trees. The whole process consists of cerebral vessel tracking from automatically placed seed points, vessel tree generation, and vasculature registration and matching. These algorithms have been tested on clinical Time-of-Flight (TOF) MR angiographic datasets. To facilitate study of the neonatal cortex, a complete cerebral cortex segmentation and reconstruction pipeline has been developed. Segmentation of the neonatal cortex is not handled effectively by existing algorithms designed for the adult brain, because the contrast between grey and white matter is reversed and pixels containing tissue mixtures are therefore mislabelled by conventional methods. The neonatal cortical segmentation method that has been developed is based on a novel expectation-maximization (EM) method with explicit correction for mislabelled partial volume voxels. Based on the resulting cortical segmentation, an implicit surface evolution technique is adopted for the reconstruction of the neonatal cortex. The performance of the method is investigated in a detailed landmark study. To facilitate study of cortical development, a cortical surface registration algorithm is developed. The method first inflates the extracted cortical surfaces and then performs a non-rigid surface registration using free-form deformations (FFDs) to remove residual misalignment. Validation experiments using data labelled by an expert observer demonstrate that the method can capture local changes and follow the growth of specific sulci.
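
    As a hedged illustration of the core EM step behind such tissue classification (not the dissertation's pipeline), the sketch below fits a Gaussian mixture over voxel intensities with one class per tissue type; the explicit correction for mislabelled partial-volume voxels described above is not reproduced, and the intensity values in the toy example are illustrative assumptions.

        # Minimal EM sketch: Gaussian mixture over voxel intensities with classes
        # standing in for CSF, grey matter and white matter. The partial-volume
        # correction from the dissertation is not implemented here.
        import numpy as np

        def em_tissue_segmentation(intensities, n_classes=3, n_iter=50):
            x = np.asarray(intensities, float)
            # crude initialisation from intensity quantiles
            mu = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
            sigma = np.full(n_classes, x.std() / n_classes)
            pi = np.full(n_classes, 1.0 / n_classes)
            for _ in range(n_iter):
                # E-step: posterior probability of each class for every voxel
                dens = (pi / (sigma * np.sqrt(2 * np.pi))
                        * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
                resp = dens / dens.sum(axis=1, keepdims=True)
                # M-step: update mixture weights, means and variances
                nk = resp.sum(axis=0)
                pi = nk / len(x)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
            return resp.argmax(axis=1), mu, sigma

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            voxels = np.r_[rng.normal(40, 5, 1000),    # illustrative CSF intensities
                           rng.normal(90, 8, 1000),    # illustrative grey matter
                           rng.normal(140, 6, 1000)]   # illustrative white matter
            labels, mu, sigma = em_tissue_segmentation(voxels)
            print("estimated class means:", np.round(mu, 1))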

    Bayesian inference for structured additive regression models for large-scale problems with applications to medical imaging

    In applied statistics, regression models with high-dimensional coefficient vectors can arise that cannot be estimated on ordinary computer systems. Among other settings, this applies to the analysis of digital images when spatio-temporal dependencies are taken into account, as is common in bio-medical research. This thesis formulates a procedure that makes it possible to fit regression models with high-dimensional coefficients and non-normal responses on moderate computer hardware. To this end, the limitations of current inference strategies for structured additive regression models are first demonstrated on high-dimensional problems, and possible ways around them are discussed. Based on this, an algorithm is formulated whose strengths and weaknesses are analyzed in simulation studies. The procedure is then applied to three different areas of bio-medical imaging, showing that it is a promising candidate for addressing high-dimensional problems.
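
    As a generic, hedged illustration of the model class discussed above (not the algorithm developed in the thesis), the sketch below fits a structured additive predictor with a non-normal (Poisson) response using penalised iteratively weighted least squares, where a sparse first-difference (random-walk) penalty and a conjugate-gradient solver keep memory requirements moderate; the penalty, link function and solver are illustrative choices.

        # Minimal sketch: smooth effect for a Poisson response fitted with sparse
        # matrices so no dense high-dimensional system is ever formed. This is a
        # generic illustration, not the thesis' inference algorithm.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import cg

        def fit_smooth_poisson(y, lam=10.0, n_iter=30):
            """Penalised IWLS with a random-walk (first-difference) smoothness
            penalty on one coefficient per observation."""
            n = len(y)
            D = sp.diags([np.full(n - 1, -1.0), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
            P = lam * (D.T @ D)                       # sparse penalty matrix
            beta = np.log(np.clip(y, 0.5, None))      # initialise at log counts
            for _ in range(n_iter):
                mu = np.exp(beta)                     # Poisson mean
                w = mu                                # IWLS working weights
                z = beta + (y - mu) / mu              # working response
                A = sp.diags(w) + P                   # sparse penalised normal equations
                beta, _ = cg(A, w * z, x0=beta)       # solve without dense matrices
            return beta

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            truth = 2.0 + np.sin(np.linspace(0, 3 * np.pi, 400))
            counts = rng.poisson(np.exp(truth))
            smooth = fit_smooth_poisson(counts)
            print("max abs error of fitted log-mean:",
                  round(float(np.abs(smooth - truth).max()), 2))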

    Noise-Enhanced and Human Visual System-Driven Image Processing: Algorithms and Performance Limits

    This dissertation investigates image processing based on stochastic resonance (SR) noise and human visual system (HVS) properties. Several novel frameworks and algorithms are developed for object detection in images, image enhancement and image segmentation, together with a method for estimating the performance limit of image segmentation algorithms. Object detection in images is a fundamental problem whose goal is to decide whether an object of interest is present or absent in a given image. We develop a framework and algorithm that enhance the detection performance of suboptimal detectors using SR noise: a suitable dose of noise is added to the original image data to obtain a performance improvement. Micro-calcification detection is used as an illustrative example, and comparative experiments on a large number of images verify the effectiveness of the approach. Image enhancement plays an important role in many vision tasks, and we develop two image enhancement approaches. One is based on SR noise, HVS-driven image quality metrics and constrained multi-objective optimization (MOO), and aims to refine existing suboptimal enhancement methods. The other is based on a selective enhancement framework, under which several enhancement algorithms are developed. Applied to many low-quality images, both approaches outperform numerous existing enhancement algorithms. Image segmentation is critical to image analysis. We present two segmentation algorithms driven by HVS properties, incorporating human visual perception factors into the segmentation procedure and encoding prior expectations about the segmentation results into the objective functions through Markov random fields (MRFs). Experimental results show that the presented algorithms achieve higher segmentation accuracy than many representative segmentation and clustering algorithms in the literature. A performance limit, or bound, is useful for evaluating different segmentation algorithms and for analyzing the segmentability of given image content. We formulate image segmentation as a parameter estimation problem and derive a lower bound on the segmentation error, defined as the mean square error (MSE) of the pixel labels, using a modified Cramér-Rao bound (CRB). The derivation rests on a biased-estimator assumption whose reasonableness is verified in this dissertation. Experimental results demonstrate the validity of the derived bound.
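
    As a hedged illustration of the stochastic-resonance idea described above (not the dissertation's detection framework), the sketch below uses a fixed, suboptimal threshold detector for a weak constant signal: its empirical detection probability rises when a moderate dose of zero-mean noise is added before thresholding, while the false-alarm rate stays low until the added noise becomes too large. The detector, signal level, threshold and noise levels are illustrative assumptions.

        # Minimal SR demonstration: adding zero-mean noise before a fixed threshold
        # detector can raise the detection rate of a weak signal. All parameters
        # are illustrative, not taken from the dissertation.
        import numpy as np

        def detection_rate(signal, extra_noise_std, threshold=1.0,
                           n_trials=20000, rng=None):
            """Probability that at least half the samples of a short observation
            exceed the threshold, after adding zero-mean Gaussian 'SR' noise."""
            rng = rng or np.random.default_rng(0)
            n_samples = 16
            obs = signal + rng.normal(0.0, 0.2, (n_trials, n_samples))   # sensor noise
            obs = obs + rng.normal(0.0, extra_noise_std, (n_trials, n_samples))
            votes = (obs > threshold).mean(axis=1)
            return float((votes >= 0.5).mean())

        if __name__ == "__main__":
            for sigma in [0.0, 0.3, 0.6, 1.0, 2.0]:
                pd = detection_rate(signal=0.8, extra_noise_std=sigma)   # signal present
                pf = detection_rate(signal=0.0, extra_noise_std=sigma)   # signal absent
                print(f"added noise std {sigma:.1f}: "
                      f"P_detect={pd:.3f}  P_false_alarm={pf:.3f}")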

    A Framework for Image Segmentation Using Shape Models and Kernel Space Shape Priors

    ©2008 IEEE. DOI: 10.1109/TPAMI.2007.70774
    Segmentation involves separating an object from the background in a given image. Using image information alone often leads to poor segmentation results due to the presence of noise, clutter or occlusion. The introduction of shape priors into the geometric active contour (GAC) framework has proved to be an effective way to ameliorate some of these problems. In this work, we propose a novel segmentation method that combines image information with prior shape knowledge using level sets. Following the work of Leventon et al., we revisit the use of PCA to introduce prior knowledge about shapes in a more robust manner. We utilize kernel PCA (KPCA) and show that it outperforms linear PCA by allowing only shapes that are sufficiently close to the training data. In our segmentation framework, shape knowledge and image information are encoded into two energy functionals described entirely in terms of shapes. This consistent description allows us to take full advantage of the KPCA methodology and leads to promising segmentation results. In particular, our shape-driven segmentation technique allows the simultaneous encoding of multiple types of shapes and offers a convincing level of robustness with respect to noise, occlusions and smearing.
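
    The shape-prior idea above can be illustrated with a short, hedged sketch (not the authors' implementation): training shapes are embedded with kernel PCA using an RBF kernel, and a candidate shape is scored by its feature-space distance to the subspace spanned by the leading kernel principal components, so only shapes close to the training set receive a low energy. Representing shapes as flattened binary masks, the RBF kernel, the gamma value and the toy data are illustrative assumptions; the coupling of such a shape energy with level-set evolution and image information is omitted here.

        # Minimal KPCA shape-prior sketch: score a candidate shape by its
        # feature-space distance to the subspace of the leading kernel principal
        # components learned from training shapes. Shape representation, kernel
        # and data are illustrative, not the paper's implementation.
        import numpy as np

        def rbf_kernel(A, B, gamma=0.05):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        class KPCAShapePrior:
            def __init__(self, shapes, n_components=3, gamma=0.05):
                self.X = shapes.reshape(len(shapes), -1).astype(float)
                self.gamma = gamma
                n = len(self.X)
                K = rbf_kernel(self.X, self.X, gamma)
                one = np.full((n, n), 1.0 / n)
                Kc = K - one @ K - K @ one + one @ K @ one      # centred Gram matrix
                w, V = np.linalg.eigh(Kc)
                idx = np.argsort(w)[::-1][:n_components]
                self.alphas = V[:, idx] / np.sqrt(w[idx])       # unit-norm components
                self.K = K

            def distance(self, shape):
                """Feature-space squared distance of a shape to the KPCA subspace
                (small for shapes close to the training set)."""
                z = shape.reshape(1, -1).astype(float)
                kz = rbf_kernel(z, self.X, self.gamma).ravel()
                kz_c = kz - kz.mean() - self.K.mean(axis=1) + self.K.mean()
                kzz_c = 1.0 - 2.0 * kz.mean() + self.K.mean()   # k(z,z)=1 for RBF
                betas = kz_c @ self.alphas
                return float(kzz_c - (betas ** 2).sum())

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            # toy training set: noisy filled squares on a 16x16 grid
            base = np.zeros((16, 16))
            base[4:12, 4:12] = 1.0
            train = np.stack([np.clip(base + 0.05 * rng.standard_normal(base.shape), 0, 1)
                              for _ in range(20)])
            prior = KPCAShapePrior(train)
            print("square-like shape:", round(prior.distance(base), 4))
            print("random shape:     ", round(prior.distance(rng.random((16, 16))), 4))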