Convex Variational Approaches to Image Motion Estimation, Denoising and Segmentation
Energy minimization and variational methods are widely used in image processing and computer vision, where most energy functions and related constraints can be expressed as, or at least relaxed to, a convex formulation. Convexity plays the central role here: it not only provides an elegant analytical tool in mathematics but also facilitates the derivation of fast and tractable numerical solvers. In this thesis, four challenging topics in computer vision and image processing are studied by means of modern convex optimization techniques: non-rigid motion decomposition and estimation, TV-L1 image approximation, image segmentation, and multi-class image partitioning. Some of them, such as non-rigid flow estimation and non-smooth flow decomposition, are originally modelled in a convex formulation and can be solved directly by convex optimization methods. The others are first stated as non-convex models, then studied and solved via convex relaxation, for which their dual models are employed to derive both novel analytical results and fast numerical solvers.
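The convex variational machinery the thesis builds on can be illustrated with the classic ROF (TV-L2) denoising problem, which also appears as a building block in the abstracts below. The sketch is a minimal 1-D version using Chambolle's dual projection; the function names and parameter choices are mine, not the thesis's algorithms.

```python
import numpy as np

def div1d(p):
    """Discrete divergence, the negative adjoint of the forward difference np.diff."""
    d = np.empty(len(p) + 1)
    d[0] = p[0]
    d[1:-1] = np.diff(p)
    d[-1] = -p[-1]
    return d

def rof_denoise_1d(f, lam=0.3, n_iter=300, tau=0.25):
    """Chambolle's dual projection for min_u 0.5*||u - f||^2 + lam * TV(u)."""
    p = np.zeros(len(f) - 1)           # dual variable, constrained to |p_i| <= 1
    for _ in range(n_iter):
        u = f - lam * div1d(p)         # primal estimate from the current dual
        g = -np.diff(u) / lam          # update direction of Chambolle's fixed-point rule
        p = (p + tau * g) / (1.0 + tau * np.abs(g))   # step + reprojection onto |p| <= 1
    return f - lam * div1d(p)

# Noisy step signal: TV regularization removes the noise but keeps the edge.
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.3 * rng.standard_normal(100)
u = rof_denoise_1d(f)
tv = lambda x: np.abs(np.diff(x)).sum()
```

The dual formulation is what makes the solver both simple and provably convergent: the constraint set for `p` is a box, so each iteration is a gradient step plus a cheap reprojection.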
Combining local regularity estimation and total variation optimization for scale-free texture segmentation
Texture segmentation constitutes a standard image processing task, crucial to
many applications. The present contribution focuses on the particular subset of
scale-free textures and its originality resides in the combination of three key
ingredients: first, texture characterization relies on the concept of local
regularity; second, local regularity is estimated from new multiscale
quantities referred to as wavelet leaders; third, segmentation from local
regularity faces a fundamental bias-variance trade-off: by nature, local
regularity estimates show high variability that impairs the detection of
changes, while a posteriori smoothing of these estimates prevents changes
from being located accurately. Instead, the present contribution proposes
several variational problem formulations based on total variation and
proximal resolutions that effectively circumvent this trade-off. Estimation
and segmentation performance for the proposed procedures are quantified and
compared on synthetic as well as real-world textures.
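The bias-variance trade-off is typically circumvented by folding the smoothing into the estimation itself rather than applying it afterwards. As a generic sketch (not the paper's exact functionals, which couple wavelet-leader quantities across scales), a TV-penalized formulation over the local regularity field reads:

```latex
\min_{h \in \mathbb{R}^{N}} \; \frac{1}{2}\sum_{i=1}^{N} \bigl(h_i - \hat h_i\bigr)^2
\;+\; \lambda \sum_{i=1}^{N-1} \bigl|h_{i+1} - h_i\bigr|
```

where \(\hat h_i\) are the pointwise regularity estimates and \(\lambda > 0\) balances fidelity against piecewise constancy. Because the TV term drives the minimizer toward a piecewise-constant \(h\), change locations are read off directly from the jumps instead of being blurred by a posteriori smoothing, and the problem is convex, so proximal splitting algorithms apply.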
Active Mean Fields for Probabilistic Image Segmentation: Connections with Chan-Vese and Rudin-Osher-Fatemi Models
Segmentation is a fundamental task for extracting semantically meaningful
regions from an image. The goal of segmentation algorithms is to accurately
assign object labels to each image location. However, image noise, shortcomings
of algorithms, and image ambiguities cause uncertainty in label assignment.
Estimating the uncertainty in label assignment is important in multiple
application domains, such as segmenting tumors from medical images for
radiation treatment planning. One way to estimate these uncertainties is
through the computation of posteriors of Bayesian models, which is
computationally prohibitive for many practical applications. On the other hand,
most computationally efficient methods fail to estimate label uncertainty. We
therefore propose in this paper the Active Mean Fields (AMF) approach, a
technique based on Bayesian modeling that uses a mean-field approximation to
efficiently compute a segmentation and its corresponding uncertainty. Based on
a variational formulation, the resulting convex model combines any
label-likelihood measure with a prior on the length of the segmentation
boundary. A specific implementation of that model is the Chan-Vese segmentation
model (CV), in which the binary segmentation task is defined by a Gaussian
likelihood and a prior regularizing the length of the segmentation boundary.
Furthermore, the Euler-Lagrange equations derived from the AMF model are
equivalent to those of the popular Rudin-Osher-Fatemi (ROF) model for image
denoising. Solutions to the AMF model can thus be implemented by directly
utilizing highly-efficient ROF solvers on log-likelihood ratio fields. We
qualitatively assess the approach on synthetic data as well as on real natural
and medical images. For a quantitative evaluation, we apply our approach to the
icgbench dataset.
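The computational shortcut this abstract describes, running an ROF solver on a log-likelihood ratio field, can be sketched in 1-D. Everything below is a toy illustration under assumed Gaussian likelihoods with a simple Chambolle-type solver and a logistic map to probabilities; the names and parameter values are mine, not the paper's.

```python
import numpy as np

def div1d(p):
    """Discrete divergence, the negative adjoint of np.diff."""
    d = np.empty(len(p) + 1)
    d[0] = p[0]
    d[1:-1] = np.diff(p)
    d[-1] = -p[-1]
    return d

def rof_denoise_1d(f, lam, n_iter=200, tau=0.25):
    """Chambolle's dual projection for min_u 0.5*||u - f||^2 + lam * TV(u)."""
    p = np.zeros(len(f) - 1)
    for _ in range(n_iter):
        u = f - lam * div1d(p)
        g = -np.diff(u) / lam
        p = (p + tau * g) / (1.0 + tau * np.abs(g))
    return f - lam * div1d(p)

# Two-class Gaussian likelihoods with means c0, c1 and shared sigma.
x = np.concatenate([np.zeros(50), np.ones(50)])   # toy 1-D "image"
c0, c1, sigma = 0.0, 1.0, 0.3
llr = ((x - c0) ** 2 - (x - c1) ** 2) / (2 * sigma ** 2)  # log p1/p0 up to constants

u = rof_denoise_1d(llr, lam=0.5)      # ROF smoothing acts as the boundary-length prior
prob = 1.0 / (1.0 + np.exp(-u))       # soft (probabilistic) label for class 1
label = prob > 0.5                    # hard segmentation
```

On noisy inputs, `prob` values near 0.5 flag the locations where the label assignment is uncertain, which is the quantity the AMF approach is after; the hard segmentation is just a byproduct of thresholding.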
A Novel Euler's Elastica based Segmentation Approach for Noisy Images via using the Progressive Hedging Algorithm
Euler's Elastica based unsupervised segmentation models have a strong
capability of completing missing boundaries of objects in a clean image, but
they do not work well for noisy images. This paper aims to establish an
Euler's Elastica based approach that properly deals with random noise to
improve segmentation performance on noisy images. We solve the corresponding
optimization problem using the progressive hedging algorithm (PHA) with a
step length suggested by the alternating direction method of multipliers
(ADMM). Technically, simplified convex versions of all the subproblems
derived from the main PHA framework can be obtained by using the
curvature-weighted approach and the convex relaxation method. An alternating
optimization strategy is then applied, accelerated by powerful techniques
including the fast Fourier transform (FFT) and generalized soft-threshold
formulas. Extensive experiments have been conducted on both synthetic and
real images, which validate the significant gains of the proposed
segmentation models and demonstrate the advantages of the developed
algorithm.
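Of the acceleration ingredients named above, the soft-threshold formula is the easiest to make concrete: it is the closed-form proximal operator of the l1 norm, which is how TV-type subproblems are solved elementwise after variable splitting. Below is the minimal (non-generalized) version; the paper's generalized formulas are richer.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*|.|: argmin_z 0.5*(z - x)^2 + lam*|z|."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

y = soft_threshold(np.array([-2.0, -0.3, 0.0, 0.7, 3.0]), 1.0)
# -> [-1.0, 0.0, 0.0, 0.0, 2.0]: values inside [-lam, lam] are zeroed,
#    the rest are shrunk toward zero by lam
```

Because the operator acts independently on each entry, it vectorizes trivially, which is what makes splitting schemes such as ADMM-style iterations fast in practice: the coupled part of the problem is handled by the FFT, and the nonsmooth part reduces to this elementwise shrinkage.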