Multiclass Total Variation Clustering
Ideas from the image processing literature have recently motivated a new set
of clustering algorithms that rely on the concept of total variation. While
these algorithms perform well for bi-partitioning tasks, their recursive
extensions yield unimpressive results for multiclass clustering tasks. This
paper presents a general framework for multiclass total variation clustering
that does not rely on recursion. The results greatly outperform previous total
variation algorithms and compare well with state-of-the-art NMF approaches
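The energy these methods work with is the graph analogue of total variation, which for a binary cluster indicator reduces to the weight of the cut between a cluster and its complement. A minimal sketch of that energy (the toy graph, weights, and indicator below are illustrative, not taken from the paper):

```python
import numpy as np

def graph_total_variation(W, u):
    """Total variation of a vertex function u on a graph with symmetric
    weight matrix W: TV(u) = (1/2) * sum_{i,j} W[i, j] * |u[i] - u[j]|.
    For a binary cluster indicator u this equals the cut value between
    the cluster and its complement."""
    W = np.asarray(W, dtype=float)
    u = np.asarray(u, dtype=float)
    return 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))

# Toy graph: two triangles joined by one weak "bridge" edge.
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    W[i, j] = W[j, i] = 1.0
W[2, 3] = W[3, 2] = 0.1

u = np.array([1, 1, 1, 0, 0, 0])  # indicator of the first triangle
print(graph_total_variation(W, u))  # 0.1: only the bridge edge is cut
```

Clustering then amounts to minimising this energy over (relaxed) indicator functions subject to a balance constraint, which is what makes the natural two-cluster split of the toy graph the cheapest one.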
Total variation on a tree
We consider the problem of minimizing the continuous valued total variation
subject to different unary terms on trees and propose fast direct algorithms
based on dynamic programming to solve these problems. We treat both the convex
and the non-convex case and derive worst case complexities that are equal or
better than existing methods. We show applications to total variation based 2D
image processing and computer vision problems based on a Lagrangian
decomposition approach. The resulting algorithms are very efficient, offer a
high degree of parallelism, and have memory requirements only on the order of
the number of image pixels.
Comment: accepted to SIAM Journal on Imaging Sciences (SIIMS).
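The dynamic-programming idea can be sketched on a path, the simplest tree, by discretising the continuous values to a finite grid; the quadratic unary term and the value grid below are illustrative assumptions, not the paper's (direct, continuous-valued) algorithm:

```python
import numpy as np

def tv_dp_chain(y, lam, levels):
    """Minimise sum_i (x_i - y_i)^2 + lam * sum_i |x_{i+1} - x_i| over a
    chain, with each x_i restricted to the finite grid `levels`, by
    dynamic programming with backtracking."""
    y = np.asarray(y, dtype=float)
    levels = np.asarray(levels, dtype=float)
    n, k = len(y), len(levels)
    unary = (levels[None, :] - y[:, None]) ** 2             # data terms
    pair = lam * np.abs(levels[:, None] - levels[None, :])  # TV transition costs
    back = np.zeros((n, k), dtype=int)
    dp = unary[0].copy()
    for i in range(1, n):
        total = dp[:, None] + pair   # total[a, b]: best cost ending in b via a
        back[i] = np.argmin(total, axis=0)
        dp = total[back[i], np.arange(k)] + unary[i]
    # Backtrack the optimal labelling from the best final state.
    x = np.empty(n)
    j = int(np.argmin(dp))
    for i in range(n - 1, -1, -1):
        x[i] = levels[j]
        if i > 0:
            j = back[i, j]
    return x

y = [0.0, 0.1, 0.0, 1.0, 0.9, 1.0]
x = tv_dp_chain(y, lam=0.5, levels=np.linspace(0.0, 1.0, 11))
print(x)  # two flat plateaus separated by a single jump
```

The sweep costs O(n k^2) here; the point of the paper's direct algorithms is to avoid this discretisation entirely and handle the continuous-valued problem on general trees with better worst-case complexity.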
Discrete MDL Predicts in Total Variation
The Minimum Description Length (MDL) principle selects the model that has the
shortest code for data plus model. We show that for a countable class of
models, MDL predictions are close to the true distribution in a strong sense.
The result is completely general. No independence, ergodicity, stationarity,
identifiability, or other assumption on the model class needs to be made. More
formally, we show that for any countable class of models, the distributions
selected by MDL (or MAP) asymptotically predict (merge with) the true measure
in the class in total variation distance. Implications for non-i.i.d. domains
like time-series forecasting, discriminative learning, and reinforcement
learning are discussed.
Comment: 15 LaTeX pages.
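The two ingredients of the statement can be sketched concretely: the total variation distance between discrete distributions, and a two-part MDL selection over a model class (here a finite Bernoulli family standing in for a countable class; the class and the uniform model code lengths are assumptions for illustration, not the paper's setting):

```python
import math

def tv_distance(p, q):
    """Total variation distance between two discrete distributions:
    TV(p, q) = (1/2) * sum_x |p(x) - q(x)|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def mdl_select(data, models, model_code_lengths):
    """Two-part MDL: choose the model minimising L(model) + L(data | model),
    both code lengths in bits; models are Bernoulli parameters here."""
    def total_length(theta, l_model):
        l_data = -sum(math.log2(theta if x else 1.0 - theta) for x in data)
        return l_model + l_data
    return min(zip(models, model_code_lengths),
               key=lambda ml: total_length(*ml))[0]

# Finite stand-in for a countable class: Bernoulli(k/10), k = 1..9,
# each model encoded with log2(9) bits.
data = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0] * 50     # empirical frequency 0.7
models = [k / 10 for k in range(1, 10)]
theta = mdl_select(data, models, [math.log2(9)] * 9)
print(theta)                                    # selects 0.7
print(tv_distance([theta, 1 - theta], [0.7, 0.3]))  # distance to truth (~0)
```

The abstract's claim is the asymptotic version of this picture: as the data grows, the predictive distribution of the MDL-selected model merges with the true measure in exactly this distance, with no i.i.d. or ergodicity assumptions.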
Asymptotic behaviour of total generalised variation
The recently introduced second order total generalised variation functional
TGV^2 has been a successful regulariser for image processing purposes. Its
definition involves two positive parameters α and β whose values determine
the amount and the quality of the regularisation. In this paper we report on
the behaviour of TGV^2 in the cases where the parameters as well as their
ratio β/α become very large or very small. Among others, we prove that for
sufficiently symmetric two dimensional data and large ratio β/α, TGV^2
regularisation coincides with total variation (TV) regularisation
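In one common convention for second order TGV (an assumption here, which may differ from this paper's exact weighting and notation), the functional balances first and second order smoothness through the two parameters:

```latex
% One common convention for second order TGV (assumed, not verified
% against this paper's notation): the minimum runs over vector fields w
% of bounded deformation, and E denotes the symmetrised gradient.
\mathrm{TGV}^{2}_{(\beta,\alpha)}(u)
  = \min_{w}\; \alpha \,\| Du - w \|_{\mathcal{M}}
             + \beta \,\| Ew \|_{\mathcal{M}}
```

Under this weighting, a very large ratio β/α drives the minimising field w toward one with Ew = 0; for sufficiently symmetric data that field is w = 0, and the functional collapses to α‖Du‖ = α TV(u), which is consistent with the coincidence with total variation regularisation stated in the abstract.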