2,395 research outputs found
Multiclass Data Segmentation using Diffuse Interface Methods on Graphs
We present two graph-based algorithms for multiclass segmentation of
high-dimensional data. The algorithms use a diffuse interface model based on
the Ginzburg-Landau functional, related to total variation compressed sensing
and image processing. A multiclass extension is introduced using the Gibbs
simplex, with the functional's double-well potential modified to handle the
multiclass case. The first algorithm minimizes the functional using a convex
splitting numerical scheme. The second algorithm uses a graph adaptation of
the classical Merriman-Bence-Osher (MBO) numerical scheme, which alternates
between diffusion and thresholding. We demonstrate the performance of both
algorithms experimentally on synthetic data, grayscale and color images, and
several benchmark data sets such as MNIST, COIL and WebKB. We also make use of
fast numerical solvers for finding the eigenvectors and eigenvalues of the
graph Laplacian, and take advantage of the sparsity of the matrix. Experiments
indicate that the results are competitive with or better than the current
state-of-the-art multiclass segmentation algorithms.
Comment: 14 pages
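The diffusion-thresholding alternation of the graph MBO scheme described above can be sketched as follows (a toy illustration, not the paper's implementation: a full eigendecomposition stands in for the fast eigensolvers the authors use, and all names are illustrative):

```python
import numpy as np

def graph_mbo(L, u0, dt=0.1, n_iter=10):
    """Sketch of a graph MBO iteration: diffuse with the graph
    Laplacian, then threshold each row to the nearest simplex vertex.
    L: (n, n) graph Laplacian; u0: (n, k) initial class indicators."""
    # Eigendecomposition of the symmetric Laplacian (here: full, for
    # clarity; in practice only a few low-frequency eigenpairs are used).
    vals, vecs = np.linalg.eigh(L)
    u = u0.astype(float)
    for _ in range(n_iter):
        # Diffusion step: u <- exp(-dt * L) u, applied in the eigenbasis.
        coeffs = vecs.T @ u
        u = vecs @ (np.exp(-dt * vals)[:, None] * coeffs)
        # Thresholding step: project each row onto the nearest vertex
        # of the Gibbs simplex (one-hot vector of the row's argmax).
        u = np.eye(u.shape[1])[np.argmax(u, axis=1)]
    return u
```

On a small graph with two connected pairs of nodes and one seeded node per pair, the unseeded nodes snap to their neighbour's class.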
Variational Image Segmentation Model Coupled with Image Restoration Achievements
Image segmentation and image restoration are two important topics in image
processing with great achievements. In this paper, we propose a new multiphase
segmentation model by combining image restoration and image segmentation
models. Utilizing image restoration aspects, the proposed segmentation model
can effectively and robustly handle highly noisy images, blurry images, images
with missing pixels, and vector-valued images. In particular, one of the most
important segmentation models, the piecewise constant Mumford-Shah model, can
be extended easily in this way to segment gray and vector-valued images
corrupted, for example, by noise, blur, or missing pixels, by coupling it with
a new data fidelity term drawn from image restoration. The model can be solved
efficiently using the alternating minimization algorithm, and we prove the
convergence of this algorithm with three variables under mild conditions.
Experiments on many synthetic and real-world images demonstrate that our method
gives better segmentation results in comparison to other state-of-the-art
segmentation models, especially for blurry images and images with missing
pixel values.
Comment: 23 pages
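The alternating minimization underlying the piecewise constant Mumford-Shah model can be illustrated in a stripped-down form (a sketch only: the regularizer and the restoration-based fidelity term are omitted, so this reduces to alternating label and region-mean updates on a clean 1-D signal; all names are illustrative):

```python
import numpy as np

def alternate_pc_ms(f, c, n_iter=10):
    """Minimal sketch of alternating minimization for piecewise-constant
    segmentation: alternate between assigning each pixel to its closest
    constant and updating each constant to the mean of its region.
    f: 1-D array of intensities; c: initial constants, shape (k,)."""
    c = np.asarray(c, float)
    for _ in range(n_iter):
        # Label step: nearest constant per pixel.
        labels = np.argmin((f[:, None] - c[None, :]) ** 2, axis=1)
        # Constant step: region means (keep old value for empty regions).
        for j in range(len(c)):
            if np.any(labels == j):
                c[j] = f[labels == j].mean()
    return labels, c
```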
Colour image segmentation by the vector-valued Allen-Cahn phase-field model: a multigrid solution
We propose a new method for the numerical solution of a PDE-driven model for
colour image segmentation and give numerical examples of the results. The
method combines the vector-valued Allen-Cahn phase field equation with initial
data fitting terms. This method is known to be closely related to the
Mumford-Shah problem and the level set segmentation by Chan and Vese. Our
numerical solution is performed using a multigrid splitting of a finite element
space, thereby producing an efficient and robust method for the segmentation of
large images.
Comment: 17 pages, 9 figures
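As a rough illustration of the phase-field dynamics involved, here is one explicit time step of the scalar Allen-Cahn equation in 1-D with a double-well potential (a toy analogue: the paper treats the vector-valued equation with fitting terms and a multigrid finite element solver; this sketch uses none of that):

```python
import numpy as np

def allen_cahn_step(u, dt=1e-3, eps=0.05):
    """One explicit step of the scalar Allen-Cahn equation
    u_t = eps * u_xx - W'(u) / eps with double-well W(u) = u^2 (1-u)^2,
    on a 1-D periodic grid with spacing h = 1."""
    u_xx = np.roll(u, -1) - 2 * u + np.roll(u, 1)  # discrete Laplacian
    w_prime = 2 * u * (1 - u) * (1 - 2 * u)        # W'(u)
    return u + dt * (eps * u_xx - w_prime / eps)
```

The pure phases u = 0 and u = 1 are equilibria, and intermediate values are driven toward the nearer well.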
Tomography: mathematical aspects and applications
In this article we present a review of the Radon transform and the
instability of the tomographic reconstruction process. We show some new
mathematical results in tomography obtained by a variational formulation of the
reconstruction problem based on the minimization of a Mumford-Shah type
functional. Finally, we exhibit a physical interpretation of this new technique
and discuss some possible generalizations.
Comment: 11 pages, 5 figures
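To fix ideas about the projection data the reconstruction problem starts from, here is a toy discrete Radon transform at just two angles, where line integrals reduce to row and column sums (real tomography samples many angles; the function name is illustrative):

```python
import numpy as np

def radon_two_angles(img):
    """Toy discrete Radon transform at angles 0 and 90 degrees:
    the line integrals become row sums and column sums of the image.
    Recovering img from such sparse, noisy projections is the
    (ill-posed) tomographic reconstruction problem."""
    return img.sum(axis=1), img.sum(axis=0)
```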
A Two-stage Classification Method for High-dimensional Data and Point Clouds
High-dimensional data classification is a fundamental task in machine
learning and imaging science. In this paper, we propose a two-stage multiphase
semi-supervised classification method for classifying high-dimensional data and
unstructured point clouds. To begin with, a fuzzy classification method such as
the standard support vector machine is used to generate a warm initialization.
We then apply a two-stage approach named SaT (smoothing and thresholding) to
improve the classification. In the first stage, an unconstrained convex
variational model is implemented to purify and smooth the initialization,
followed by a second stage that projects the smoothed partition
obtained at stage one to a binary partition. These two stages can be repeated,
with the latest result as a new initialization, to keep improving the
classification quality. We show that the convex model of the smoothing stage
has a unique solution and can be solved by a specifically designed primal-dual
algorithm whose convergence is guaranteed. We test our method and compare it
with the state-of-the-art methods on several benchmark data sets. The
experimental results demonstrate clearly that our method is superior in both
the classification accuracy and computation speed for high-dimensional data and
point clouds.
Comment: 21 pages, 4 figures
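The smoothing-and-thresholding pattern can be sketched in a binary 1-D setting (a sketch under strong simplifications: a damped neighbour-averaging loop stands in for the paper's convex variational model and primal-dual solver; `lam` and `tau` are assumed parameters):

```python
import numpy as np

def sat_binary(u0, lam=1.0, n_smooth=50, tau=0.5):
    """Sketch of the SaT (smoothing and thresholding) idea for the
    binary case: stage one smooths a fuzzy initialization while staying
    close to it, stage two thresholds at tau to a binary partition."""
    u = u0.astype(float).copy()
    for _ in range(n_smooth):
        # Neighbour averaging on a 1-D signal with replicated ends:
        # a stand-in smoothing step, not the paper's model.
        padded = np.pad(u, 1, mode='edge')
        smooth = (padded[:-2] + padded[2:]) / 2
        u = (lam * u0 + smooth) / (lam + 1)  # fidelity keeps u near u0
    return (u > tau).astype(int)
```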
Semantically Guided Depth Upsampling
We present a novel method for accurate and efficient upsampling of sparse
depth data, guided by high-resolution imagery. Our approach goes beyond the use
of intensity cues only and additionally exploits object boundary cues through
structured edge detection and semantic scene labeling for guidance. Both cues
are combined within a geodesic distance measure that allows for
boundary-preserving depth interpolation while utilizing local context. We
model the observed scene structure by locally planar elements and formulate the
upsampling task as a global energy minimization problem. Our method determines
globally consistent solutions and preserves fine details and sharp depth
boundaries. In our experiments on several public datasets at different levels
of application, we demonstrate superior performance of our approach over the
state-of-the-art, even for very sparse measurements.
Comment: German Conference on Pattern Recognition 2016 (Oral)
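A 1-D caricature of a boundary-aware geodesic distance, where each step's cost combines spatial distance with the intensity jump so that distances grow sharply across edges (the actual method works in 2-D with structured edge and semantic cues; `alpha` is an assumed weighting):

```python
import numpy as np

def geodesic_1d(intensity, seed, alpha=1.0):
    """Toy 1-D geodesic distance transform from a seed pixel: the cost
    of moving to a neighbour is 1 (spatial) plus alpha times the
    intensity jump, so pixels across an edge end up far away even when
    spatially close; this is what makes the interpolation
    boundary-preserving."""
    n = len(intensity)
    d = np.full(n, np.inf)
    d[seed] = 0.0
    # Forward and backward sweeps suffice in 1-D.
    for i in range(seed + 1, n):
        d[i] = d[i - 1] + 1 + alpha * abs(intensity[i] - intensity[i - 1])
    for i in range(seed - 1, -1, -1):
        d[i] = d[i + 1] + 1 + alpha * abs(intensity[i] - intensity[i + 1])
    return d
```

With a step edge in the intensity signal, the distance jumps by the edge magnitude exactly where the boundary sits.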