
    A Convex and Selective Variational Model for Image Segmentation

    Selective image segmentation is the task of extracting one object of interest from an image, based on minimal user input. Recent level-set-based variational models have been shown to be effective and reliable, although they can be sensitive to initialization because the minimization problems are nonconvex. This sometimes means that successful segmentation relies too heavily on user input, or that the solution found is only a local minimizer, i.e. not the correct solution. The same principle applies to variational models that extract all objects in an image (global segmentation); in recent years, however, some of these have been successfully reformulated as convex optimization problems, allowing global minimizers to be found. There are, however, problems associated with extending the convex formulation to current selective models, which motivates the proposal of a new selective model. In this paper we propose a new selective segmentation model, combining ideas from global segmentation, that can be reformulated in a convex way such that a global minimizer can be found independently of initialization. Numerical results demonstrate its reliability, in the sense that the sensitivity to initialization present in previous models is removed, and its robustness to user input.
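
    The central idea of this abstract, reformulating a nonconvex segmentation energy so that a global minimizer can be found by convex optimization followed by thresholding, can be illustrated with a minimal sketch. The snippet below is a generic Chan-Esedoglu-Nikolova-style convex relaxation of a two-phase model, not the paper's selective model; the function name, the known region intensities c1 and c2, and the crude projected subgradient scheme are assumptions made purely for illustration.

```python
import numpy as np

def convex_two_phase_segmentation(f, c1, c2, lam=1.0, tau=0.05, iters=300):
    """Generic convex relaxation of a two-phase segmentation energy (sketch).

    Minimizes TV(u) + lam * <u, (f - c1)^2 - (f - c2)^2> over u in [0, 1] with a
    crude projected subgradient scheme; thresholding the relaxed minimizer at any
    level in (0, 1) yields a binary global solution of the original problem.
    Real implementations would use primal-dual or split-Bregman solvers instead.
    """
    r = (f - c1) ** 2 - (f - c2) ** 2                    # pointwise data term
    u = np.full_like(f, 0.5, dtype=float)
    for _ in range(iters):
        # forward differences (Neumann boundary) and a subgradient of anisotropic TV
        gx = np.diff(u, axis=1, append=u[:, -1:])
        gy = np.diff(u, axis=0, append=u[-1:, :])
        div = (np.sign(gx) - np.roll(np.sign(gx), 1, axis=1)
               + np.sign(gy) - np.roll(np.sign(gy), 1, axis=0))
        u = np.clip(u - tau * (-div + lam * r), 0.0, 1.0)  # project back onto [0, 1]
    return u > 0.5                                       # threshold the relaxed labels
```

    In a selective variant along the lines the abstract describes, user input would enter through an additional term in the pointwise cost r; that term is omitted in this sketch.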

    A Novel Euler's Elastica based Segmentation Approach for Noisy Images via using the Progressive Hedging Algorithm

    Euler's elastica-based unsupervised segmentation models have a strong capability of completing missing boundaries of objects in a clean image, but they do not work well for noisy images. This paper aims to establish a Euler's elastica-based approach that properly deals with random noise to improve segmentation performance for noisy images. We solve the corresponding optimization problem using the progressive hedging algorithm (PHA) with a step length suggested by the alternating direction method of multipliers (ADMM). Technically, all the simplified convex versions of the subproblems derived from the main PHA framework can be obtained using the curvature-weighted approach and the convex relaxation method. An alternating optimization strategy is then applied, accelerated by powerful techniques including the fast Fourier transform (FFT) and generalized soft-threshold formulas. Extensive experiments on both synthetic and real images validate the significant gains of the proposed segmentation models and demonstrate the advantages of the developed algorithm.
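
    The acceleration techniques named in the abstract, FFT-based solves and generalized soft-threshold formulas, are standard closed-form building blocks and can be sketched independently of the elastica model itself. The two helpers below are generic: the linear system being inverted, the periodic boundary assumption, and the parameter names are illustrative choices, not the subproblems actually derived from the PHA framework in the paper.

```python
import numpy as np

def soft_threshold(p, tau):
    """Isotropic soft-thresholding (shrinkage): the proximal map of tau*||.||_2
    applied pointwise, the kind of closed-form update usually meant by a
    'generalized soft threshold formula'. p stacks vector components on axis 0."""
    norm = np.maximum(np.sqrt(np.sum(p ** 2, axis=0)), 1e-12)
    return np.maximum(norm - tau, 0.0) / norm * p

def fft_linear_solve(rhs, rho):
    """Solve (I - rho * Laplacian) u = rhs with periodic boundaries via the 2D
    FFT -- the type of quadratic subproblem that ADMM/PHA-style splittings
    typically reduce to (an assumed, illustrative form of that subproblem)."""
    n1, n2 = rhs.shape
    lap1 = 2.0 * np.cos(2.0 * np.pi * np.arange(n1) / n1) - 2.0   # DFT eigenvalues
    lap2 = 2.0 * np.cos(2.0 * np.pi * np.arange(n2) / n2) - 2.0   # of the 1D Laplacian
    denom = 1.0 - rho * (lap1[:, None] + lap2[None, :])           # eigenvalues of I - rho*Lap
    return np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
```

    In a splitting iteration of this general kind, the FFT solve updates the smooth variable and the shrinkage updates the auxiliary gradient-type variable; how the curvature term is handled in the paper's actual subproblems is not reproduced here.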

    Unsupervised Multi Class Segmentation of 3D Images with Intensity Inhomogeneities

    Intensity inhomogeneities in images constitute a considerable challenge for image segmentation. In this paper we propose a novel biconvex variational model to tackle this task. We combine a total variation approach for multi-class segmentation with a multiplicative model to handle the inhomogeneities. Our method assumes that the image intensity is the product of a smoothly varying part and a component which resembles important image structures such as edges. Therefore, in addition to the total variation of the label assignment matrix, we penalize a quadratic difference term to cope with the smoothly varying factor. A critical point of our biconvex functional is computed by a modified proximal alternating linearized minimization (PALM) method. We show that the assumptions for the convergence of the algorithm are fulfilled by our model. Various numerical examples demonstrate the very good performance of our method. Particular attention is paid to the segmentation of 3D FIB tomographical images, which was indeed the motivation for our work.
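
    Since the optimization engine named in the abstract is PALM, a short generic sketch of that iteration may help; the block structure, the toy coupling term in the usage example, and all names below are assumptions for illustration and do not reproduce the paper's modified PALM or its biconvex functional.

```python
import numpy as np

def palm(x, y, grad_H_x, grad_H_y, prox_f, prox_g, Lx, Ly, iters=100):
    """Generic proximal alternating linearized minimization (PALM) sketch for
    min_{x,y} f(x) + g(y) + H(x, y), with H smooth in each block. Each block
    takes a gradient step on H followed by the prox of its nonsmooth term;
    Lx and Ly are (upper bounds on) the blockwise Lipschitz constants."""
    for _ in range(iters):
        x = prox_f(x - grad_H_x(x, y) / Lx, 1.0 / Lx)
        y = prox_g(y - grad_H_y(x, y) / Ly, 1.0 / Ly)
    return x, y

# Toy usage: H(x, y) = 0.5 * ||x - y||^2 and f, g indicators of the box [0, 1]^n,
# so both proximal maps reduce to a clip onto the box.
if __name__ == "__main__":
    box = lambda z, _step: np.clip(z, 0.0, 1.0)
    x0, y0 = np.random.rand(10), np.random.rand(10)
    x, y = palm(x0, y0,
                grad_H_x=lambda x, y: x - y,
                grad_H_y=lambda x, y: y - x,
                prox_f=box, prox_g=box, Lx=1.0, Ly=1.0)
```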

    A Two-stage Classification Method for High-dimensional Data and Point Clouds

    High-dimensional data classification is a fundamental task in machine learning and imaging science. In this paper, we propose a two-stage multiphase semi-supervised classification method for classifying high-dimensional data and unstructured point clouds. To begin with, a fuzzy classification method such as the standard support vector machine is used to generate a warm initialization. We then apply a two-stage approach named SaT (smoothing and thresholding) to improve the classification. In the first stage, an unconstrained convex variational model is used to purify and smooth the initialization; the second stage projects the smoothed partition obtained in stage one onto a binary partition. These two stages can be repeated, with the latest result as a new initialization, to keep improving the classification quality. We show that the convex model of the smoothing stage has a unique solution and can be solved by a specifically designed primal-dual algorithm whose convergence is guaranteed. We test our method and compare it with state-of-the-art methods on several benchmark data sets. The experimental results clearly demonstrate that our method is superior in both classification accuracy and computation speed for high-dimensional data and point clouds.
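
    The two-stage smoothing-and-thresholding idea can be sketched in a few lines on a weighted graph. The quadratic Laplacian smoother below stands in for the paper's convex variational model, and the dense solve, parameter names, and simple 0.5 threshold are assumptions of this sketch rather than the method actually proposed.

```python
import numpy as np

def sat_classify(u0, W, lam=1.0, rounds=3, thresh=0.5):
    """Minimal smoothing-and-thresholding (SaT) sketch for binary semi-supervised
    classification on a weighted graph (a stand-in, not the paper's model).

    Stage 1 (smoothing): minimize ||u - u0||^2 + lam * u^T L u, a strictly convex
    quadratic whose unique minimizer solves (I + lam * L) u = u0.
    Stage 2 (thresholding): project the smoothed scores onto a binary partition.
    The two stages are repeated, feeding the latest binary result back in as the
    new initialization, as described in the abstract.
    """
    L = np.diag(W.sum(axis=1)) - W                     # unnormalized graph Laplacian
    n = W.shape[0]
    u = u0.astype(float)
    for _ in range(rounds):
        u = np.linalg.solve(np.eye(n) + lam * L, u)    # smoothing stage
        u = (u > thresh).astype(float)                 # thresholding stage
    return u
```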

    Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models

    Segmentation is a fundamental task for extracting semantically meaningful regions from an image. The goal of segmentation algorithms is to accurately assign object labels to each image location. However, image noise, shortcomings of algorithms, and image ambiguities cause uncertainty in label assignment. Estimating the uncertainty in label assignment is important in multiple application domains, such as segmenting tumors from medical images for radiation treatment planning. One way to estimate these uncertainties is through the computation of posteriors of Bayesian models, which is computationally prohibitive for many practical applications. On the other hand, most computationally efficient methods fail to estimate label uncertainty. We therefore propose in this paper the Active Mean Fields (AMF) approach, a technique based on Bayesian modeling that uses a mean-field approximation to efficiently compute a segmentation and its corresponding uncertainty. Based on a variational formulation, the resulting convex model combines any label-likelihood measure with a prior on the length of the segmentation boundary. A specific implementation of that model is the Chan-Vese segmentation model (CV), in which the binary segmentation task is defined by a Gaussian likelihood and a prior regularizing the length of the segmentation boundary. Furthermore, the Euler-Lagrange equations derived from the AMF model are equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image denoising. Solutions to the AMF model can thus be implemented by directly utilizing highly efficient ROF solvers on log-likelihood ratio fields. We qualitatively assess the approach on synthetic data as well as on real natural and medical images. For a quantitative evaluation, we apply our approach to the icgbench dataset.
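
    The reported link to Rudin-Osher-Fatemi suggests a very small illustration: build a log-likelihood ratio field, smooth it with an off-the-shelf ROF/TV denoiser, and threshold. The snippet below assumes scikit-image's denoise_tv_chambolle as the TV solver and an equal-variance Gaussian likelihood with known region means c1 and c2; the threshold level and parameter names are assumptions of this sketch, not the authors' AMF implementation.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle  # off-the-shelf TV/ROF-type denoiser

def amf_style_segmentation(f, c1, c2, sigma=0.1, weight=0.5):
    """Illustrative sketch of the CV/ROF connection: form the Gaussian
    log-likelihood ratio of the two region models, apply TV (ROF-style)
    smoothing to that field, and threshold it to obtain a binary labeling."""
    # log p(f | c1) - log p(f | c2) for equal-variance Gaussian likelihoods
    llr = ((f - c2) ** 2 - (f - c1) ** 2) / (2.0 * sigma ** 2)
    smoothed = denoise_tv_chambolle(llr, weight=weight)   # ROF-style TV smoothing
    return smoothed > 0.0                                  # binary segmentation
```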