88 research outputs found
Exact algorithms for L1-TV regularization of real-valued or circle-valued signals
We consider L1-TV regularization of univariate signals with values on the
real line or on the unit circle. While the real data space leads to a convex
optimization problem, the problem is non-convex for circle-valued data. In this
paper, we derive exact algorithms for both data spaces. A key ingredient is the
reduction of the infinite search spaces to a finite set of configurations,
which can be scanned by the Viterbi algorithm. To reduce the computational
complexity of the involved tabulations, we extend the technique of distance
transforms to non-uniform grids and to the circular data space. In total, the
proposed algorithms have complexity O(KN), where N is the length
of the signal and K is the number of different values in the data set. In
particular, the complexity is O(N) for quantized data. It is the
first exact algorithm for TV regularization with circle-valued data, and it is
competitive with the state-of-the-art methods for scalar data, assuming that
the latter are quantized.
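The finite-configuration reduction for the real-valued case can be illustrated with a small sketch (our own illustration, not the paper's code): for L1-TV on the real line one may restrict attention to minimizers taking values among the distinct data values, so a naive Viterbi-style dynamic program over these K levels is already exact, at O(N*K^2) cost; the distance-transform technique mentioned in the abstract is what removes the extra factor of K.

```python
import numpy as np

def l1_tv_exact(f, lam):
    """Exact L1-TV for real-valued 1D signals, exploiting that a
    minimizer only takes values among the K distinct data values.
    Naive Viterbi dynamic program, O(N*K^2); the distance-transform
    technique of the paper reduces the K^2 factor to K."""
    f = np.asarray(f, dtype=float)
    levels = np.unique(f)                  # K candidate values
    N, K = len(f), len(levels)
    cost = np.abs(levels - f[0])           # best energy ending in each level
    back = np.zeros((N, K), dtype=int)     # argmin predecessors
    for i in range(1, N):
        # trans[j, k]: energy of ending at level k coming from level j
        trans = cost[:, None] + lam * np.abs(levels[:, None] - levels[None, :])
        back[i] = trans.argmin(axis=0)
        cost = trans.min(axis=0) + np.abs(levels - f[i])
    # backtrack the Viterbi path
    k = int(cost.argmin())
    u = np.empty(N)
    for i in range(N - 1, -1, -1):
        u[i] = levels[k]
        k = back[i, k]
    return u
```

For quantized data the candidate set has fixed size, which is the regime where the scan is cheapest.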
Model-based learning of local image features for unsupervised texture segmentation
Features that capture well the textural patterns of a certain class of images
are crucial for the performance of texture segmentation methods. Manually
selecting features or designing new ones can be a tedious task. Therefore,
it is desirable to automatically adapt the features to a certain image or class
of images. Typically, this requires a large set of training images with similar
textures and ground truth segmentation. In this work, we propose a framework to
learn features for texture segmentation when no such training data is
available. The cost function for our learning process is constructed to match a
commonly used segmentation model, the piecewise constant Mumford-Shah model.
This means that the features are learned such that they provide an
approximately piecewise constant feature image with a small jump set. Based on
this idea, we develop a two-stage algorithm which first learns suitable
convolutional features and then performs a segmentation. We note that the
features can be learned from a small set of images, from a single image, or
even from image patches. The proposed method achieves a competitive rank in the
Prague texture segmentation benchmark, and it is effective for segmenting
histological images.
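The learning cost described above is matched to the piecewise constant Mumford-Shah model; as a point of reference, that segmentation energy can be evaluated as follows (a generic sketch with our own names and conventions, not the authors' learning pipeline):

```python
import numpy as np

def pc_mumford_shah_energy(features, labels, nu):
    """Energy of a labeling under the piecewise-constant Mumford-Shah
    model: within-segment squared deviation of the feature vectors from
    the segment mean, plus nu times the jump-set size (here counted as
    label changes between horizontal/vertical neighbors).
    features: H x W x C array; labels: H x W integer array."""
    energy = 0.0
    for l in np.unique(labels):
        seg = features[labels == l]          # (num pixels, C)
        energy += ((seg - seg.mean(axis=0)) ** 2).sum()
    jumps = (labels[1:, :] != labels[:-1, :]).sum() + \
            (labels[:, 1:] != labels[:, :-1]).sum()
    return energy + nu * jumps
```

In this view, learned features are good when some labeling attains a low value of this energy, i.e. the feature image is approximately piecewise constant with a small jump set.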
Total variation regularization for manifold-valued data
We consider total variation minimization for manifold-valued data. We propose
a cyclic proximal point algorithm and a parallel proximal point algorithm to
minimize TV functionals with Lp-type data terms in the manifold case.
These algorithms are based on iterative geodesic averaging which makes them
easily applicable to a large class of data manifolds. As an application, we
consider denoising images which take their values in a manifold. We apply our
algorithms to diffusion tensor images, interferometric SAR images, and
sphere- and cylinder-valued images. For the class of Cartan-Hadamard manifolds
(which includes the data space in diffusion tensor imaging), we show
convergence of the proposed TV minimizing algorithms to a global minimizer.
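The iterative geodesic averaging can be made concrete for the simplest manifold, the unit circle, with signals stored as angles (a simplified sketch under our own assumptions, e.g. a quadratic data term and the step-size rule lambda_k = 2/k; not the authors' implementation, which covers general manifolds):

```python
import numpy as np

def wrap(a):
    """Wrap angles into [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def geodesic_step(x, y, t):
    """Move angles x toward angles y along the shorter arc by t (capped)."""
    d = wrap(y - x)
    return wrap(x + np.sign(d) * np.minimum(t, np.abs(d)))

def cpp_tv_circle(f, alpha, iters=200):
    """Cyclic proximal point for TV regularization of circle-valued
    signals given as angles: alternate the prox of a quadratic data
    term (a geodesic average with f) with the proxes of the pairwise
    TV terms (shrink neighboring pairs toward each other)."""
    f = np.asarray(f, dtype=float)
    u = f.copy()
    n = len(u)
    for k in range(1, iters + 1):
        lam = 2.0 / k                       # diminishing step sizes
        # data prox: move each u_i a fraction lam/(1+lam) toward f_i
        u = wrap(u + (lam / (1 + lam)) * wrap(f - u))
        for start in (0, 1):                # even pairs, then odd pairs
            i = np.arange(start, n - 1, 2)  # disjoint neighbor pairs
            t = np.minimum(lam * alpha, np.abs(wrap(u[i + 1] - u[i])) / 2)
            a = geodesic_step(u[i], u[i + 1], t)
            b = geodesic_step(u[i + 1], u[i], t)
            u[i], u[i + 1] = a, b
    return u
```

Because every update only moves points along geodesics, the same scheme carries over verbatim to other manifolds once a geodesic-stepping routine is available.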
Jump-sparse and sparse recovery using Potts functionals
We recover jump-sparse and sparse signals from blurred incomplete data
corrupted by (possibly non-Gaussian) noise using inverse Potts energy
functionals. We obtain analytical results (existence of minimizers, complexity)
on inverse Potts functionals and provide relations to sparsity problems. We
then propose a new optimization method for these functionals which is based on
dynamic programming and the alternating direction method of multipliers (ADMM).
A series of experiments shows that the proposed method yields very satisfactory
jump-sparse and sparse reconstructions, respectively. We highlight the
capability of the method by comparing it with classical and recent approaches
such as TV minimization (jump-sparse signals), orthogonal matching pursuit,
iterative hard thresholding, and iteratively reweighted L1 minimization
(sparse signals).
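The dynamic-programming component mentioned above can be sketched for the classical univariate (L2) Potts problem (a textbook-style implementation in our own notation; the paper combines such solvers with ADMM for the inverse problem):

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact minimizer of the Potts functional
    sum_i (u_i - f_i)^2 + gamma * (number of jumps in u)
    via the classical O(n^2) dynamic program; each segment is
    approximated by its mean."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    # prefix sums give O(1) segment approximation errors
    s1 = np.concatenate(([0.0], np.cumsum(f)))
    s2 = np.concatenate(([0.0], np.cumsum(f * f)))
    def err(l, r):                          # inclusive indices l..r
        m = r - l + 1
        return s2[r + 1] - s2[l] - (s1[r + 1] - s1[l]) ** 2 / m
    B = np.empty(n + 1)
    B[0] = -gamma                           # first segment pays no jump
    jump = np.empty(n, dtype=int)
    for r in range(n):
        cands = [B[l] + gamma + err(l, r) for l in range(r + 1)]
        jump[r] = int(np.argmin(cands))
        B[r + 1] = cands[jump[r]]
    # backtrack segment boundaries, fill each segment with its mean
    u, r = np.empty(n), n - 1
    while r >= 0:
        l = jump[r]
        u[l:r + 1] = f[l:r + 1].mean()
        r = l - 1
    return u
```

The jump penalty gamma directly trades data fidelity against the number of segments, which is what produces the jump-sparse reconstructions discussed above.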
A First Derivative Potts Model for Segmentation and Denoising Using ILP
Unsupervised image segmentation and denoising are two fundamental tasks in
image processing. Usually, graph based models such as multicut are used for
segmentation and variational models are employed for denoising. Our approach
addresses both problems at the same time. We propose a novel ILP formulation of
the first derivative Potts model with the L1 data term, where binary
variables are introduced to deal with the L0 norm of the regularization
term. The ILP is then solved by a standard off-the-shelf MIP solver. In
numerical experiments, we compare with the multicut model.
Comment: 6 pages, 2 figures. To appear in the Proceedings of the International
Conference on Operations Research 2017, Berlin.
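A generic big-M integer-linear encoding of such a model for a 1D signal f (our notation, with an assumed bound M on the data range; the paper's exact formulation may differ) is:

```latex
\begin{aligned}
\min_{u,\,e,\,b}\ \ & \sum_{i=1}^{n} e_i \;+\; \gamma \sum_{i=1}^{n-1} b_i \\
\text{s.t.}\ \ & e_i \ge u_i - f_i, \qquad e_i \ge f_i - u_i, \\
& u_{i+1} - u_i \le M\,b_i, \qquad u_i - u_{i+1} \le M\,b_i, \qquad b_i \in \{0,1\}.
\end{aligned}
```

Setting b_i = 0 forces u_{i+1} = u_i, so the binary variables count the jumps (the L0 regularization term), while the auxiliary variables e_i linearize the L1 data term, keeping the whole objective linear.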
Total Generalized Variation for Manifold-valued Data
In this paper we introduce the notion of second-order total generalized
variation (TGV) regularization for manifold-valued data in a discrete setting.
We provide an axiomatic approach to formalize reasonable generalizations of TGV
to the manifold setting and present two possible concrete instances that
fulfill the proposed axioms. We provide well-posedness results and present
algorithms for a numerical realization of these generalizations to the manifold
setup. We also provide experimental results on synthetic and real data that
underpin the proposed generalization numerically and show its potential
for applications with manifold-valued data.
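For orientation, the scalar, univariate discrete second-order TGV that such generalizations take as their starting point reads (standard definition, in our notation):

```latex
\mathrm{TGV}_\alpha^2(u) \;=\; \min_{w}\;
  \alpha_1 \sum_i \bigl| (\nabla u)_i - w_i \bigr|
  \;+\; \alpha_0 \sum_i \bigl| (\nabla w)_i \bigr|
```

The difficulty that the axiomatic approach addresses is that the difference (∇u)_i − w_i has no canonical meaning once u takes values in a manifold, since the quantities live in tangent spaces at different points.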
Joint Image Reconstruction and Segmentation Using the Potts Model
We propose a new algorithmic approach to the non-smooth and non-convex Potts
problem (also called piecewise-constant Mumford-Shah problem) for inverse
imaging problems. We derive a suitable splitting into specific subproblems that
can all be solved efficiently. Our method does not require a priori knowledge
on the gray levels nor on the number of segments of the reconstruction.
Further, it avoids anisotropic artifacts such as geometric staircasing. We
demonstrate the suitability of our method for joint image reconstruction and
segmentation. We focus on Radon data, where we in particular consider
limited-data situations. For instance, our method is able to recover all
segments of the Shepp-Logan phantom from a small number of angular views. We
illustrate the
practical applicability on a real PET dataset. As further applications, we
consider spherical Radon data as well as blurred data.
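A schematic version of such a splitting (our notation; the actual method uses several finite-difference directions to avoid the anisotropic artifacts mentioned above) for min_u (1/2)||Au − f||² + γ||∇u||₀ introduces a copy v of u and alternates:

```latex
\begin{aligned}
u^{k+1} &= \arg\min_u \tfrac12 \|Au - f\|_2^2 + \tfrac{\mu}{2}\|u - v^{k} + w^{k}\|_2^2, \\
v^{k+1} &= \arg\min_v \gamma \|\nabla v\|_0 + \tfrac{\mu}{2}\|v - u^{k+1} - w^{k}\|_2^2, \\
w^{k+1} &= w^{k} + u^{k+1} - v^{k+1}.
\end{aligned}
```

The u-update is a linear least-squares problem, and the v-update decomposes into univariate Potts problems that can be solved exactly by dynamic programming, which is what makes each subproblem efficiently solvable.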
The L1-Potts functional for robust jump-sparse reconstruction
We investigate the non-smooth and non-convex L1-Potts functional in
discrete and continuous time. We show Γ-convergence of discrete
L1-Potts functionals towards their continuous counterpart and obtain a
convergence statement for the corresponding minimizers as the discretization
gets finer. For the discrete L1-Potts problem, we introduce an O(n^2) time
and O(n) space algorithm to compute an exact minimizer. We apply L1-Potts
minimization to the problem of recovering piecewise constant signals from noisy
measurements. It turns out that the L1-Potts functional has a quite
interesting blind deconvolution property: we show that mildly blurred
jump-sparse signals are reconstructed by minimizing the L1-Potts functional.
Furthermore, for strongly blurred signals and known blurring operator, we
derive an iterative reconstruction algorithm.
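A minimal illustration of discrete L1-Potts minimization (our own naive implementation: median-based segment costs recomputed from scratch, hence much slower than the algorithm of the paper):

```python
import numpy as np

def l1_potts_1d(f, gamma):
    """Exact minimizer of the L1-Potts functional
    sum_i |u_i - f_i| + gamma * (number of jumps in u),
    via the same dynamic program as the L2-Potts case but with
    median-based segment costs, computed naively here."""
    f = np.asarray(f, dtype=float)
    n = len(f)
    def err(l, r):                 # best L1 fit = deviation from the median
        seg = f[l:r + 1]
        return np.abs(seg - np.median(seg)).sum()
    B = np.empty(n + 1)
    B[0] = -gamma                  # first segment pays no jump
    jump = np.empty(n, dtype=int)
    for r in range(n):
        cands = [B[l] + gamma + err(l, r) for l in range(r + 1)]
        jump[r] = int(np.argmin(cands))
        B[r + 1] = cands[jump[r]]
    # backtrack segments, fill each with its median
    u, r = np.empty(n), n - 1
    while r >= 0:
        l = jump[r]
        u[l:r + 1] = np.median(f[l:r + 1])
        r = l - 1
    return u
```

Because segment costs are deviations from the median rather than the mean, isolated outliers tend to be absorbed instead of producing spurious segments, which reflects the robustness of the L1 data term.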