
    Multiclass Total Variation Clustering

    Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield unimpressive results for multiclass clustering tasks. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. The results greatly outperform previous total variation algorithms and compare well with state-of-the-art NMF approaches.
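The key object in these algorithms is the total variation of a (relaxed) cluster indicator on a weighted graph. A minimal sketch of that quantity, with an illustrative dense weight matrix and a hypothetical two-clique example (not from the paper):

```python
import numpy as np

def graph_total_variation(u, W):
    """Anisotropic total variation of a node function u on a weighted graph.

    u : (n,) array of node values (e.g. a relaxed cluster indicator)
    W : (n, n) symmetric non-negative weight matrix (dense form for clarity)
    """
    # |u|_TV = 1/2 * sum_{i,j} w_ij |u_i - u_j|; the 1/2 undoes double counting
    diff = np.abs(u[:, None] - u[None, :])
    return 0.5 * np.sum(W * diff)

# Two 3-node cliques joined by one weak edge: the indicator of one clique
# pays TV only on the cut edge, which is why TV objectives favour sparse cuts.
W = np.zeros((6, 6))
for i in range(3):
    for j in range(3):
        if i != j:
            W[i, j] = W[i + 3, j + 3] = 1.0
W[2, 3] = W[3, 2] = 0.1
u = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
print(graph_total_variation(u, W))  # 0.1: only the cut edge contributes
```

Multiclass methods minimise a balanced sum of such terms, one per class, subject to the indicators forming a partition.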

    Total variation on a tree

    We consider the problem of minimizing the continuous valued total variation subject to different unary terms on trees and propose fast direct algorithms based on dynamic programming to solve these problems. We treat both the convex and the non-convex case and derive worst case complexities that are equal to or better than existing methods. We show applications to total variation based 2D image processing and computer vision problems based on a Lagrangian decomposition approach. The resulting algorithms are very efficient, offer a high degree of parallelism and come along with memory requirements which are only on the order of the number of image pixels. Comment: accepted to SIAM Journal on Imaging Sciences (SIIMS).
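To make the setup concrete: on a chain (the simplest tree), the discrete-label version of this energy can be minimised exactly by Viterbi-style dynamic programming. The sketch below is a naive O(nL²) illustration over L quantised labels, not the faster direct algorithms the paper derives:

```python
import numpy as np

def tv_chain_dp(unary, lam):
    """Exact minimiser of  sum_i unary[i][x_i] + lam * sum_i |x_i - x_{i+1}|
    on a chain, by dynamic programming over L discrete labels.

    unary : (n, L) array, unary[i, k] = cost of assigning label k at node i
    lam   : weight of the pairwise total-variation term
    """
    n, L = unary.shape
    labels = np.arange(L)
    cost = unary[0].astype(float)          # best cost of a prefix ending at each label
    back = np.zeros((n, L), dtype=int)     # argmin backpointers
    for i in range(1, n):
        # trans[k_prev, k] = cost up to node i-1 at k_prev + lam * |k_prev - k|
        trans = cost[:, None] + lam * np.abs(labels[:, None] - labels[None, :])
        back[i] = np.argmin(trans, axis=0)
        cost = trans[back[i], labels] + unary[i]
    # backtrack the optimal labelling
    x = np.empty(n, dtype=int)
    x[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        x[i - 1] = back[i, x[i]]
    return x, float(cost.min())

# Noisy step signal with quadratic unaries: the TV term flattens it to a step.
signal = np.array([0, 1, 0, 4, 5, 4])
unary = (signal[:, None] - np.arange(6)[None, :]) ** 2
x, energy = tv_chain_dp(unary, lam=2.0)
print(x, energy)  # [1 1 1 4 4 4] 9.0
```

On a general tree the same recursion runs leaf-to-root; the paper's contribution is handling continuous-valued labels directly, without this quantisation.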

    Discrete MDL Predicts in Total Variation

    The Minimum Description Length (MDL) principle selects the model that has the shortest code for data plus model. We show that for a countable class of models, MDL predictions are close to the true distribution in a strong sense. The result is completely general: no independence, ergodicity, stationarity, identifiability, or other assumption on the model class needs to be made. More formally, we show that for any countable class of models, the distributions selected by MDL (or MAP) asymptotically predict (merge with) the true measure in the class in total variation distance. Implications for non-i.i.d. domains like time-series forecasting, discriminative learning, and reinforcement learning are discussed. Comment: 15 LaTeX pages.
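The convergence notion here is total variation distance, which for discrete distributions is half the L1 distance between their probability vectors. A minimal sketch with a hypothetical coin example:

```python
from fractions import Fraction

def total_variation_distance(p, q):
    """Total variation distance between two discrete distributions:
    TV(P, Q) = (1/2) * sum_x |P(x) - Q(x)|,
    equivalently the largest possible gap |P(A) - Q(A)| over events A.
    """
    assert len(p) == len(q)
    return sum(abs(pi - qi) for pi, qi in zip(p, q)) / 2

# A fair coin vs. a 3/4-biased coin differ by 1/4 in total variation.
p = [Fraction(1, 2), Fraction(1, 2)]
q = [Fraction(3, 4), Fraction(1, 4)]
print(total_variation_distance(p, q))  # 1/4
```

Merging in total variation is strong precisely because it bounds the prediction error uniformly over all events, not just pointwise.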

    Asymptotic behaviour of total generalised variation

    The recently introduced second order total generalised variation functional $\mathrm{TGV}_{\beta,\alpha}^{2}$ has been a successful regulariser for image processing purposes. Its definition involves two positive parameters $\alpha$ and $\beta$ whose values determine the amount and the quality of the regularisation. In this paper we report on the behaviour of $\mathrm{TGV}_{\beta,\alpha}^{2}$ in the cases where the parameters $\alpha, \beta$ as well as their ratio $\beta/\alpha$ become very large or very small. Among other results, we prove that for sufficiently symmetric two dimensional data and large ratio $\beta/\alpha$, $\mathrm{TGV}_{\beta,\alpha}^{2}$ regularisation coincides with total variation ($\mathrm{TV}$) regularisation.
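For orientation, the second order TGV functional is commonly written as a minimisation over an auxiliary vector field (a sketch of the standard smooth-case form; the precise definition is measure-theoretic, on BV/BD spaces):

```latex
\mathrm{TGV}_{\beta,\alpha}^{2}(u)
  \;=\; \min_{w}\; \alpha \,\| \nabla u - w \|_{1}
                 \;+\; \beta \,\| \mathcal{E} w \|_{1},
```

where $\mathcal{E}w$ denotes the symmetrised derivative of $w$. Heuristically, a large ratio $\beta/\alpha$ heavily penalises $\mathcal{E}w$, pushing $w$ towards its kernel, which is consistent with the TV-like behaviour the abstract describes.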