164 research outputs found

    Bayesian methods for segmentation of objects from multimodal and complex shape densities using statistical shape priors

    In many image segmentation problems involving limited and low-quality data, employing statistical prior information about the shapes of the objects to be segmented can significantly improve the segmentation result. However, defining probability densities in the space of shapes is an open and challenging problem, especially if the object to be segmented comes from a shape density involving multiple modes (classes). In the literature, there are some techniques that exploit nonparametric shape priors to learn multimodal prior densities from a training set. These methods solve the problem of segmenting objects from limited and low-quality data to some extent by performing maximum a posteriori (MAP) estimation. However, these methods assume that the boundaries found by using the observed data can provide at least a good initialization for MAP estimation, so that convergence to a desired mode of the posterior density is achieved. There are two major problems with this assumption that we focus on in this thesis. First, as the data provide less information, these approaches can get stuck at a local optimum which may not be the desired solution. Second, even though a good initialization directs the segmenting curve to a local optimum solution that looks like the desired segmentation, it does not provide a picture of other probable solutions, potentially from different modes of the posterior density, based on the data and the priors. In this thesis, we propose methods for segmentation of objects that come from multimodal posterior densities and suffer from severe noise, occlusion, and missing data. The first framework that we propose represents the segmentation problem in terms of the joint posterior density of shapes and features. We incorporate the learned joint shape and feature prior distribution into a MAP estimation framework for segmentation. In our second proposed framework, we approach the segmentation problem from the approximate Bayesian inference perspective. We propose two different Markov chain Monte Carlo (MCMC) sampling-based image segmentation approaches that generate samples from the posterior density. As a final contribution of this thesis, we propose a new shape model that learns binary shape distributions by exploiting local shape priors and the Boltzmann machine. Although the proposed generative shape model has not been used in the context of object segmentation in this thesis, it has great potential to be used for this purpose. The source code of the methods introduced in this thesis will be available at https://github.com/eerdil
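
    To make the sampling idea concrete, here is a minimal, self-contained sketch (not the thesis code: the 32x32 disk shapes, the PCA shape space, the Parzen kernel bandwidth, and the noise and occlusion levels are all illustrative assumptions) of Metropolis-Hastings sampling from a segmentation posterior that combines a data term with a nonparametric, multimodal shape prior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: shapes are 32x32 binary masks of disks.
H = W = 32
def disk_mask(cx, cy, r):
    yy, xx = np.mgrid[0:H, 0:W]
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2).astype(float)

# Training set drawn from two modes (two shape classes).
train = np.stack([disk_mask(12 + rng.normal(), 16, 6) for _ in range(20)]
                 + [disk_mask(20 + rng.normal(), 16, 9) for _ in range(20)])
X = train.reshape(len(train), -1)
mean = X.mean(0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2
coeff = (X - mean) @ Vt[:k].T              # low-dimensional shape coordinates

def shape_from(alpha):
    return (mean + alpha @ Vt[:k]).reshape(H, W) > 0.5

# Nonparametric (Parzen) shape prior: Gaussian kernels centred at the
# training shapes, so the prior density is multimodal, one mode per class.
h = 2.0
def log_prior(alpha):
    d2 = ((coeff - alpha) ** 2).sum(1)
    return np.log(np.exp(-d2 / (2 * h ** 2)).mean() + 1e-300)

# Noisy observation with missing data on the left of the image.
truth = disk_mask(20, 16, 9)
obs = truth + rng.normal(0, 0.8, truth.shape)
observed = np.ones_like(obs, dtype=bool)
observed[:, :10] = False                   # left columns are unobserved

def log_lik(alpha):
    m = shape_from(alpha).astype(float)
    return -np.sum((obs - m)[observed] ** 2) / (2 * 0.8 ** 2)

# Metropolis-Hastings over the shape coefficients: unlike a single MAP
# ascent, the chain can visit several modes of the posterior.
alpha = coeff[rng.integers(len(coeff))].copy()
lp = log_prior(alpha) + log_lik(alpha)
samples = []
for _ in range(2000):
    prop = alpha + rng.normal(0, 0.5, size=k)
    lp_prop = log_prior(prop) + log_lik(prop)
    if np.log(rng.random()) < lp_prop - lp:    # accept/reject step
        alpha, lp = prop, lp_prop
    samples.append(alpha.copy())
print("posterior mean shape coefficients:", np.mean(samples[500:], axis=0))
```

    Clusters among the retained samples correspond to distinct posterior modes, which is exactly the multi-solution picture that a single MAP estimate cannot provide.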

    Inference via low-dimensional couplings

    We investigate the low-dimensional structure of deterministic transformations between random variables, i.e., transport maps between probability measures. In the context of statistics and machine learning, these transformations can be used to couple a tractable "reference" measure (e.g., a standard Gaussian) with a target measure of interest. Direct simulation from the desired measure can then be achieved by pushing forward reference samples through the map. Yet characterizing such a map (e.g., representing and evaluating it) grows challenging in high dimensions. The central contribution of this paper is to establish a link between the Markov properties of the target measure and the existence of low-dimensional couplings, induced by transport maps that are sparse and/or decomposable. Our analysis not only facilitates the construction of transformations in high-dimensional settings, but also suggests new inference methodologies for continuous non-Gaussian graphical models. For instance, in the context of nonlinear state-space models, we describe new variational algorithms for filtering, smoothing, and sequential parameter inference. These algorithms can be understood as the natural generalization, to the non-Gaussian case, of the square-root Rauch-Tung-Striebel Gaussian smoother. (78 pages, 25 figures)
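
    As a toy illustration of the pushforward idea (a linear-Gaussian stand-in; the paper's maps handle general non-Gaussian targets), the sketch below couples a standard Gaussian reference with a first-order Markov-chain target through a recursively defined triangular map, each component of which touches only the previous state and one fresh reference variable:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 100_000

# Target: a first-order autoregressive (Markov) chain x_i = a*x_{i-1} + noise.
a, sigma = 0.8, 0.5

def push_forward(z):
    """Triangular transport map from N(0, I) to the AR(1) target.

    Because the target is Markov, the i-th component needs only the
    previous state and one fresh reference variable: the map decomposes
    into low-dimensional pieces instead of coupling all d variables.
    """
    x = np.empty_like(z)
    x[:, 0] = z[:, 0]
    for i in range(1, d):
        x[:, i] = a * x[:, i - 1] + sigma * z[:, i]
    return x

z = rng.standard_normal((n, d))   # samples from the reference measure
x = push_forward(z)               # pushed-forward samples from the target

# The empirical covariance reproduces the AR(1) dependence structure.
print(np.round(np.cov(x, rowvar=False), 2))
```

    The inverse map, which pulls the target back to the reference, is banded in the same way here; this kind of sparsity pattern, dictated by the Markov structure of the target, is what the paper characterizes in general.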

    Nonparametric Statistical Inference with an Emphasis on Information-Theoretic Methods

    This book addresses contemporary statistical inference issues that arise when no or minimal assumptions about the nature of the studied phenomenon are imposed. Information-theoretic methods play an important role in such scenarios. The approaches discussed include various high-dimensional regression problems, time series analysis, and dependence analysis
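
    As one concrete example of an information-theoretic tool in this spirit (an illustration, not an excerpt from the book), the sketch below estimates mutual information from a two-dimensional histogram and uses it as a nonparametric measure of dependence; unlike correlation, it detects the nonlinear relationship in the second pair:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # skip empty cells: 0*log(0) = 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
x = rng.standard_normal(20_000)
print(mutual_information(x, rng.standard_normal(20_000)))  # near 0: independent
print(mutual_information(x, x ** 2 + 0.1 * rng.standard_normal(20_000)))  # clearly positive
```

    Such plug-in estimates are biased upward in finite samples, which is exactly the kind of inference issue a minimal-assumption treatment has to confront.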

    Probabilistic Linear Algebra for Stochastic Optimization

    The emergent field of machine learning has by now become the main engine of data-driven discovery. Yet, with ever more data, it is also faced with new computational challenges. To make machines "learn", the desired task is oftentimes phrased as an empirical risk minimization problem that needs to be solved by numerical optimization routines. Optimization in ML deviates from the scope of traditional optimization in two regards. First, ML deals with large datasets that need to be subsampled to reduce the computational burden, thereby introducing noise into the optimization procedure. The second distinction is the sheer size of the parameter space, which severely limits the amount of information that optimization algorithms can store. Both aspects together have made first-order optimization routines a prevalent choice for model training in ML. First-order algorithms use only gradient information to determine a step direction and step length to update the parameters. Inclusion of second-order information about the local curvature has great potential to improve the performance of the optimizer if done efficiently. Probabilistic curvature estimation for use in optimization is a recurring theme of this thesis, and the problem is explored in three different directions that are relevant to ML training. By iteratively adapting the scale of an arbitrary curvature estimate, it is possible to circumvent the tedious work of manually tuning the optimizer’s step length during model training. The general form of the curvature estimate naturally extends its applicability to various popular optimization algorithms. Curvature can also be inferred with matrix-variate distributions from projections of the curvature matrix. Noise can then be captured by a likelihood with non-vanishing width, leading to a novel update strategy that uses the inherent uncertainty to estimate the curvature. Finally, a new form of curvature estimate is derived from gradient observations of a nonparametric model, expanding the family of viable curvature estimates used in optimization. An important outcome of the research is to highlight the benefit of utilizing curvature information in stochastic optimization. By considering multiple ways of efficiently leveraging second-order information, the thesis advances the frontier of stochastic optimization and unlocks new avenues for research on the training of large-scale ML models
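
    A minimal sketch of the step-length idea (an illustration only, not one of the thesis algorithms: the toy regression problem, the batch size, the secant-style curvature estimate, and the decay constant are all assumptions): stochastic gradient descent whose step length is set by an adaptively rescaled curvature estimate rather than a hand-tuned learning rate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy empirical-risk problem: linear least squares with subsampled batches.
n, d = 10_000, 20
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
b = A @ w_true + 0.1 * rng.standard_normal(n)

def batch_grad(w, batch=64):
    idx = rng.integers(0, n, batch)     # subsampling makes the gradient noisy
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / batch

w = np.zeros(d)
w_prev, g_prev = None, None
c = 1.0                                 # running curvature scale
for _ in range(500):
    g = batch_grad(w)
    if g_prev is not None:
        s, y = w - w_prev, g - g_prev
        raw = (s @ y) / (s @ s + 1e-12)  # secant curvature along the last step
        if raw > 0:
            c = max(0.99 * c, raw)       # conservative, slowly decaying scale
    w_prev, g_prev = w.copy(), g
    w = w - g / c                        # step length = inverse curvature scale
print("distance to optimum:", np.linalg.norm(w - w_true))
```

    The point of the heuristic is that no learning rate is tuned by hand: the scale of the (noisy) curvature estimate is adapted on the fly, which is the behaviour the first direction of the thesis develops rigorously.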

    Quantitative measurement of pseudoexfoliation in the anterior segment of the eye performed in visible light

    Introduction: Pseudoexfoliation syndrome (PEX) is a systemic disease involving the accumulation of pathological material deposits in the tissues of the anterior segment of the eye. A problem in modern ophthalmology is the quantitative assessment of the severity of PEX in the diagnosis and evaluation of treatment progress in patients.

    Material and method: For the purposes of this study, 52 images of the anterior segment of the eye with a resolution of M × N = 1280 × 960 pixels were obtained in JPG format using the CSO 450-SL slit lamp. The patients were aged 50-80 and were recruited from Poland. All patients who participated in the study provided written informed consent after an explanation of the nature and possible consequences of the study. The image analysis method proposed by the authors comprises the calculation of the direction field, the construction of a straight line perpendicular to the pupil edge through each of its pixels, the calculation of the intersections of these lines to determine the centre of the pupil, and the detection of the PEX contour and the outer border of the iris using the polar coordinate system. All analyzed parameters were set automatically, with the exception of one parameter chosen manually depending on the slit lamp type.

    Results: A fully automatic measurement of PEX was carried out with the proposed method. The quantitative results enable reproducible tests to be performed independently of the research centre. Owing to the image analysis method proposed by the authors, results can be obtained in no more than 1 second on an Intel Core 2 Quad CPU at 2.50 GHz with a measurement error below 3%. Other known methods of image analysis and processing compared in this paper give results with a greater error (4-35%), which depends on the degree of magnification (×6, ×16, ×20), and are not fully automatic.

    Conclusions: The methods of image analysis and processing enable a quantitative, repeatable and automatic measurement of the severity and progress of PEX syndrome. They support medical diagnosis and automatic archiving of results
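
    A minimal sketch of the pupil-centre step described above (synthetic edge points; on a real image the normals would come from the computed direction field, and the full method additionally traces the PEX contour and iris border in polar coordinates): the centre is recovered as the least-squares intersection of the lines perpendicular to the pupil edge:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic pupil edge: noisy points on a circle of radius 40 at (120, 95).
cx, cy, r = 120.0, 95.0, 40.0
t = rng.uniform(0.0, 2.0 * np.pi, 200)
edge = np.c_[cx + r * np.cos(t), cy + r * np.sin(t)]
edge += rng.normal(0.0, 0.5, edge.shape)

# Direction field: unit normals at each edge point. Here they come from the
# known circle geometry; on a real image they are derived from gradients.
normals = edge - edge.mean(axis=0)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Least-squares intersection of the normal lines: minimise the summed squared
# distance from a point c to each line (p_i, d_i), giving normal equations
#     sum_i (I - d_i d_i^T) c = sum_i (I - d_i d_i^T) p_i.
A = np.zeros((2, 2))
rhs = np.zeros(2)
for p, d in zip(edge, normals):
    P = np.eye(2) - np.outer(d, d)  # projector orthogonal to the line direction
    A += P
    rhs += P @ p
centre = np.linalg.solve(A, rhs)
print("estimated pupil centre:", np.round(centre, 2))  # close to (120, 95)
```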

    Nonparametric Detection of Nonlinearly Mixed Pixels and Endmember Estimation in Hyperspectral Images

    Mixing phenomena in hyperspectral images depend on a variety of factors, such as the resolution of observation devices, the properties of materials, and how these materials interact with incident light in the scene. Different parametric and nonparametric models have been considered to address hyperspectral unmixing problems. The simplest one is the linear mixing model. Nevertheless, it has been recognized that the mixing phenomena can also be nonlinear. The corresponding nonlinear analysis techniques are necessarily more challenging and complex than those employed for linear unmixing. Within this context, it makes sense to detect the nonlinearly mixed pixels in an image prior to its analysis, and then employ the simplest possible unmixing technique to analyze each pixel. In this paper, we propose a technique for detecting nonlinearly mixed pixels. The detection approach is based on the comparison of the reconstruction errors obtained with a Gaussian process regression model and with a linear regression model. The two errors are combined into a detection statistic for which a probability density function can be reasonably approximated. We also propose an iterative endmember extraction algorithm to be employed in combination with the detection algorithm. The proposed detect-then-unmix strategy, which consists of extracting endmembers, detecting nonlinearly mixed pixels, and unmixing, is tested with synthetic and real images
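
    A minimal sketch of the detection statistic (synthetic endmembers; kernel ridge regression stands in for the paper's Gaussian process model, and the fixed threshold tau is an assumption): each pixel's linear-model reconstruction error is compared with a nonparametric one, and pixels where the linear fit is markedly worse are flagged as nonlinearly mixed:

```python
import numpy as np

rng = np.random.default_rng(5)
L, R = 50, 3                              # spectral bands, number of endmembers
M = np.abs(rng.standard_normal((L, R)))   # hypothetical endmember signatures
ell = 1.0                                 # RBF kernel lengthscale (assumption)

def reconstruction_errors(y):
    # Linear mixing model: y ~ M a, fitted by ordinary least squares.
    a, *_ = np.linalg.lstsq(M, y, rcond=None)
    e_lin = np.sum((y - M @ a) ** 2)
    # Nonlinear model: band-wise kernel ridge regression of the pixel value
    # on the endmember reflectances in that band (a GP posterior mean).
    d2 = ((M[:, None, :] - M[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * ell ** 2))
    f = K @ np.linalg.solve(K + 1e-2 * np.eye(L), y)
    e_nl = np.sum((y - f) ** 2)
    return e_lin, e_nl

def is_nonlinear(y, tau=5.0):
    # Detection statistic: flag the pixel when the linear model fits
    # markedly worse than the nonparametric one.
    e_lin, e_nl = reconstruction_errors(y)
    return e_lin / (e_nl + 1e-12) > tau

abund = rng.dirichlet(np.ones(R))
linear_pixel = M @ abund + 0.01 * rng.standard_normal(L)
bilinear_pixel = linear_pixel + 0.3 * M[:, 0] * M[:, 1]  # add a bilinear term
print(is_nonlinear(linear_pixel), is_nonlinear(bilinear_pixel))  # expect: False True
```

    In the paper the decision threshold comes from the approximated density of the statistic rather than a hand-picked constant; the fixed tau here is only a placeholder.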