Mumford-Shah and Potts Regularization for Manifold-Valued Data with Applications to DTI and Q-Ball Imaging
Mumford-Shah and Potts functionals are powerful variational models for
regularization which are widely used in signal and image processing; typical
applications are edge-preserving denoising and segmentation. Being both
non-smooth and non-convex, they are computationally challenging even for scalar
data. For manifold-valued data, the problem becomes even more involved since
typical features of vector spaces are not available. In this paper, we propose
algorithms for Mumford-Shah and for Potts regularization of manifold-valued
signals and images. For the univariate problems, we derive solvers based on
dynamic programming combined with (convex) optimization techniques for
manifold-valued data. For the class of Cartan-Hadamard manifolds (which
includes the data space in diffusion tensor imaging), we show that our
algorithms compute global minimizers for any starting point. For the
multivariate Mumford-Shah and Potts problems (for image regularization) we
propose a splitting into suitable subproblems which we can solve exactly using
the techniques developed for the corresponding univariate problems. Our method
does not require any a priori restrictions on the edge set and we do not have
to discretize the data space. We apply our method to diffusion tensor imaging
(DTI) as well as Q-ball imaging. Using the DTI model, we obtain a segmentation
of the corpus callosum
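For real-valued signals, the dynamic-programming strategy for the univariate Potts problem can be sketched as follows. This is an illustrative Euclidean special case only (the paper's contribution is the manifold-valued setting, which this sketch does not cover); the function name and the classical O(n^2) dynamic program are assumptions, not the authors' implementation.

```python
import numpy as np

def potts_segmentation(f, gamma):
    """Exact solver for the scalar univariate Potts problem
    min_x  gamma * (#jumps of x) + sum_i (x_i - f_i)^2
    via the classical O(n^2) dynamic program over segment boundaries."""
    n = len(f)
    B = np.empty(n + 1)                 # B[r] = optimal value for prefix f[0:r]
    B[0] = -gamma                       # so the first segment incurs no jump penalty
    jump = np.zeros(n + 1, dtype=int)   # left boundary of the last segment
    for r in range(1, n + 1):
        best = np.inf
        m, s, cnt = 0.0, 0.0, 0         # running mean / sum of sq. deviations
        for l in range(r, 0, -1):       # scan candidate last segments f[l-1:r]
            cnt += 1
            delta = f[l - 1] - m
            m += delta / cnt
            s += delta * (f[l - 1] - m)  # Welford update of the approximation error
            cand = B[l - 1] + gamma + s
            if cand < best:
                best, arg = cand, l
        B[r] = best
        jump[r] = arg
    # backtrack: fill each recovered segment with its mean
    x = np.empty(n)
    r = n
    while r > 0:
        l = jump[r]
        x[l - 1:r] = f[l - 1:r].mean()
        r = l - 1
    return x
```

With a small jump penalty the piecewise-constant structure of the input is preserved; with a large penalty the solution collapses to a single segment (the global mean).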
A Unifying Framework in Vector-valued Reproducing Kernel Hilbert Spaces for Manifold Regularization and Co-Regularized Multi-view Learning
This paper presents a general vector-valued reproducing kernel Hilbert spaces
(RKHS) framework for the problem of learning an unknown functional dependency
between a structured input space and a structured output space. Our formulation
encompasses both Vector-valued Manifold Regularization and Co-regularized
Multi-view Learning, providing in particular a unifying framework linking these
two important learning approaches. In the case of the least squares loss
function, we provide a closed form solution, which is obtained by solving a
system of linear equations. In the case of Support Vector Machine (SVM)
classification, our formulation generalizes in particular both the binary
Laplacian SVM to the multi-class, multi-view settings and the multi-class
Simplex Cone SVM to the semi-supervised, multi-view settings. The solution is
obtained by solving a single quadratic optimization problem, as in standard
SVM, via the Sequential Minimal Optimization (SMO) approach. Empirical results
obtained on the task of object recognition, using several challenging datasets,
demonstrate the competitiveness of our algorithms compared with other
state-of-the-art methods.
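To illustrate the kind of closed-form solution obtained from a linear system, the sketch below shows scalar-output kernel ridge regression, the simplest special case of regularized learning in an RKHS. The Gaussian kernel, function names, and regularization scaling are illustrative assumptions; this is not the paper's vector-valued, multi-view formulation.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian RBF kernel matrix K[i, j] = exp(-||x_i - y_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_krr(X, y, lam=1e-6, sigma=1.0):
    # Representer theorem: f(x) = sum_i a_i k(x, x_i); the coefficients
    # solve the linear system (K + n*lam*I) a = y in closed form.
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict_krr(X_train, a, X_test, sigma=1.0):
    # Evaluate the learned function at new points.
    return rbf_kernel(X_test, X_train, sigma) @ a
```

The multi-view and vector-valued extensions described in the abstract enlarge this system (operator-valued kernels, block structure) but keep the same one-shot linear-algebra character in the least squares case.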
Total Generalized Variation for Manifold-valued Data
In this paper we introduce the notion of second-order total generalized
variation (TGV) regularization for manifold-valued data in a discrete setting.
We provide an axiomatic approach to formalize reasonable generalizations of TGV
to the manifold setting and present two possible concrete instances that
fulfill the proposed axioms. We provide well-posedness results and present
algorithms for a numerical realization of these generalizations to the manifold
setup. Further, we provide experimental results for synthetic and real data that
underpin the proposed generalizations numerically and show their potential
for applications with manifold-valued data.
Total variation regularization for manifold-valued data
We consider total variation (TV) minimization for manifold-valued data. We propose
a cyclic proximal point algorithm and a parallel proximal point algorithm to
minimize TV functionals with ℓ^p-type data terms in the manifold case.
These algorithms are based on iterative geodesic averaging which makes them
easily applicable to a large class of data manifolds. As an application, we
consider denoising images which take their values in a manifold. We apply our
algorithms to diffusion tensor images, interferometric SAR images as well as
sphere and cylinder valued images. For the class of Cartan-Hadamard manifolds
(which includes the data space in diffusion tensor imaging) we show the
convergence of the proposed TV-minimizing algorithms to a global minimizer.
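On the real line the cyclic proximal point idea can be sketched directly: the proximal map of each pairwise difference term moves two neighboring values toward each other, which is the Euclidean analogue of the geodesic averaging mentioned above. The following is a minimal illustrative sketch for the scalar model min_x 0.5*||x - f||^2 + alpha*TV(x); the function name and the square-summable step-size rule are assumptions, not the paper's exact scheme.

```python
import numpy as np

def tv_cpp_denoise(f, alpha=1.0, iters=500):
    """Cyclic proximal point sketch for the scalar TV model
    min_x 0.5*||x - f||^2 + alpha * sum_i |x_{i+1} - x_i|.
    Each coupling term's prox moves the pair toward each other by
    min(step, half the gap) -- geodesic averaging on the real line."""
    x = f.astype(float).copy()
    n = len(x)
    for k in range(iters):
        mu = 1.0 / (k + 1)                  # square-summable step sizes
        # prox of the data term 0.5*||x - f||^2 with parameter mu
        x = (x + mu * f) / (1.0 + mu)
        # prox of the coupling terms: non-overlapping even / odd pair sweeps
        for start in (0, 1):
            i = np.arange(start, n - 1, 2)
            d = x[i + 1] - x[i]
            t = np.minimum(alpha * mu, np.abs(d) / 2.0)
            s = np.sign(d) * t
            x[i] += s
            x[i + 1] -= s
    return x
```

Because every step is a convex combination of existing values, the iterates stay in the convex hull of the data; on a manifold the analogous steps stay on geodesics, which is what makes the scheme applicable to a large class of data manifolds.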
A Second Order Non-Smooth Variational Model for Restoring Manifold-Valued Images
We introduce a new non-smooth variational model for the restoration of
manifold-valued data which includes second order differences in the
regularization term. While such models were successfully applied for
real-valued images, we introduce the second order difference and the
corresponding variational models for manifold data, which up to now only
existed for cyclic data. The approach requires a combination of techniques from
numerical analysis, convex optimization and differential geometry. First, we
establish a suitable definition of absolute second order differences for
signals and images with values in a manifold. Employing this definition, we
introduce a variational denoising model based on first and second order
differences in the manifold setup. In order to minimize the corresponding
functional, we develop an algorithm using an inexact cyclic proximal point
algorithm. We propose an efficient strategy for the computation of the
corresponding proximal mappings in symmetric spaces utilizing the machinery of
Jacobi fields. For the n-sphere and the manifold of symmetric positive definite
matrices, we demonstrate the performance of our algorithm in practice. We prove
the convergence of the proposed exact and inexact variant of the cyclic
proximal point algorithm in Hadamard spaces. These results which are of
interest on its own include, e.g., the manifold of symmetric positive definite
matrices
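In the Euclidean special case, the proximal mapping of an absolute second-order difference beta*|x_{i-1} - 2*x_i + x_{i+1}| has a closed form: it soft-thresholds the inner product with c = (1, -2, 1). The sketch below uses this in a cyclic proximal point sweep for real-valued signals; it is an illustrative analogue under that assumption, not the Jacobi-field machinery the paper develops for general symmetric spaces.

```python
import numpy as np

def prox_second_diff(x, mu, beta):
    """Closed-form prox of beta*|c.x| on R^3 with c = (1, -2, 1):
    y = x - sign(c.x) * min(mu*beta, |c.x| / ||c||^2) * c,
    i.e. soft-thresholding of the second difference (||c||^2 = 6)."""
    c = np.array([1.0, -2.0, 1.0])
    t = c @ x
    s = np.sign(t) * min(mu * beta, abs(t) / 6.0)
    return x - s * c

def denoise_second_order(f, beta=0.5, iters=300):
    """Cyclic proximal point sketch for the scalar model
    min_x 0.5*||x - f||^2 + beta * sum_i |x_{i-1} - 2 x_i + x_{i+1}|."""
    x = f.astype(float).copy()
    n = len(x)
    for k in range(iters):
        mu = 1.0 / (k + 1)
        x = (x + mu * f) / (1.0 + mu)        # prox of the data term
        for start in (0, 1, 2):              # three non-overlapping window sweeps
            for i in range(start, n - 2, 3):
                x[i:i + 3] = prox_second_diff(x[i:i + 3], mu, beta)
    return x
```

A linear signal has vanishing second differences and is left unchanged, which is exactly the behavior that distinguishes second-order regularization from TV: smooth trends survive while isolated spikes are flattened.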