
    An Adaptive Total Variation Algorithm for Computing the Balanced Cut of a Graph

    We propose an adaptive version of the total variation algorithm proposed in [3] for computing the balanced cut of a graph. The algorithm from [3] used a sequence of inner total variation minimizations to guarantee descent of the balanced cut energy as well as convergence of the algorithm. In practice the total variation minimization step is never solved exactly. Instead, an accuracy parameter is specified and the total variation minimization terminates once this level of accuracy is reached. The choice of this parameter can vastly impact both the computational time of the overall algorithm and the accuracy of the result. Moreover, since the total variation minimization step is not solved exactly, the algorithm is not guaranteed to be monotonic. In the present work we introduce a new adaptive stopping condition for the total variation minimization that guarantees monotonicity. This results in an algorithm that is monotonic in practice and is also significantly faster than previous, non-adaptive algorithms.
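    A minimal sketch of the adaptive stopping idea (not the exact algorithm of [3]): the inner total variation descent, here a plain subgradient step on a generic weighted graph, terminates as soon as the candidate strictly lowers the balanced-cut energy, so every accepted outer iterate decreases the energy. The function names, the L1 balance term, and the step sizes are illustrative assumptions.

        import numpy as np

        def graph_tv(W, u):
            # Graph total variation: 0.5 * sum_ij W_ij * |u_i - u_j|
            return 0.5 * np.sum(W * np.abs(u[:, None] - u[None, :]))

        def balance(u):
            # One common balance term: L1 deviation from the median
            return np.sum(np.abs(u - np.median(u)))

        def balanced_cut_energy(W, u):
            return graph_tv(W, u) / max(balance(u), 1e-12)

        def adaptive_balanced_cut(W, u0, outer_iters=20, inner_max=500, step=1e-2):
            # Outer loop: accept an inner iterate only if it strictly lowers the
            # balanced-cut energy; the inner loop stops as soon as that happens
            # (the adaptive stopping condition that enforces monotonicity).
            u = u0.astype(float)
            for _ in range(outer_iters):
                e_old = balanced_cut_energy(W, u)
                v = u.copy()
                accepted = False
                for _ in range(inner_max):
                    g = np.sum(W * np.sign(v[:, None] - v[None, :]), axis=1)  # TV subgradient
                    v = v - step * g
                    v = (v - v.mean()) / (np.linalg.norm(v) + 1e-12)          # recentre and rescale
                    if balanced_cut_energy(W, v) < e_old:
                        accepted = True
                        break
                if not accepted:
                    break
                u = v
            return u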

    Multiclass Total Variation Clustering

    Ideas from the image processing literature have recently motivated a new set of clustering algorithms that rely on the concept of total variation. While these algorithms perform well for bi-partitioning tasks, their recursive extensions yield unimpressive results for multiclass clustering tasks. This paper presents a general framework for multiclass total variation clustering that does not rely on recursion. The results greatly outperform previous total variation algorithms and compare well with state-of-the-art NMF approaches.
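    As a hedged illustration of a non-recursive multiclass formulation, the sketch below evaluates a sum of per-class graph total variation terms over a soft assignment matrix whose rows live on the probability simplex, together with a standard row-wise simplex projection; the particular balance term and the projection routine are my own choices, not necessarily the paper's exact construction.

        import numpy as np

        def graph_tv(W, f):
            # Graph TV of one class indicator: 0.5 * sum_ij W_ij * |f_i - f_j|
            return 0.5 * np.sum(W * np.abs(f[:, None] - f[None, :]))

        def multiclass_tv_energy(W, F):
            # Sum of per-class TV terms, each scaled by a simple size balance
            return sum(graph_tv(W, F[:, k]) / max(F[:, k].sum(), 1e-12)
                       for k in range(F.shape[1]))

        def project_rows_to_simplex(F):
            # Euclidean projection of each row onto the probability simplex,
            # keeping F a valid soft class assignment
            n, K = F.shape
            U = -np.sort(-F, axis=1)                     # rows sorted descending
            css = np.cumsum(U, axis=1) - 1.0
            ind = np.arange(1, K + 1)
            cond = U - css / ind > 0
            rho = K - np.argmax(cond[:, ::-1], axis=1)   # last index where cond holds
            theta = css[np.arange(n), rho - 1] / rho
            return np.maximum(F - theta[:, None], 0.0)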

    Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau Functional Minimization

    We present a graph-based variational algorithm for classification of high-dimensional data, generalizing the binary diffuse interface model to the case of multiple classes. Motivated by total variation techniques, the method involves minimizing an energy functional made up of three terms. The first two terms promote a stepwise continuous classification function with sharp transitions between classes, while preserving symmetry among the class labels. The third term is a data fidelity term, allowing us to incorporate prior information into the model in a semi-supervised framework. The performance of the algorithm on synthetic data, as well as on the COIL and MNIST benchmark datasets, is competitive with state-of-the-art graph-based multiclass segmentation methods. Comment: 16 pages, to appear in Springer's Lecture Notes in Computer Science volume "Pattern Recognition Applications and Methods 2013", part of the series Advances in Intelligent and Soft Computing.
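    The three-term energy structure described above can be sketched directly on a graph: a Laplacian smoothing term, a multi-well potential that drives each node's assignment toward a simplex corner, and a fidelity term active only on the labelled nodes. The particular well and the parameter names below are illustrative choices, not necessarily the paper's exact functional.

        import numpy as np

        def graph_gl_energy(L, U, labels, mask, eps=1.0, mu=10.0):
            # L      : graph Laplacian (n x n)
            # U      : soft class assignments (n x K)
            # labels : one-hot targets (n x K), used only where mask is True
            # mask   : boolean array marking the semi-supervised (labelled) nodes
            K = U.shape[1]
            # 1) interface term: penalises disagreement across graph edges
            dirichlet = 0.5 * eps * np.trace(U.T @ L @ U)
            # 2) multi-well potential: product of squared distances to the simplex corners
            corners = np.eye(K)
            dists = np.array([[np.sum((u - e) ** 2) for e in corners] for u in U])
            well = np.sum(np.prod(dists, axis=1)) / (2.0 * eps)
            # 3) data fidelity on labelled nodes only
            fidelity = 0.5 * mu * np.sum(mask[:, None] * (U - labels) ** 2)
            return dirichlet + well + fidelity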

    Diameters, distortion and eigenvalues

    We study the relation between the diameter, the first positive eigenvalue of the discrete p-Laplacian and the ℓ_p-distortion of a finite graph. We prove an inequality relating these three quantities and apply it to families of Cayley and Schreier graphs. We also show that the ℓ_p-distortion of Pascal graphs, approximating the Sierpinski gasket, is bounded, which allows us to obtain estimates for the convergence to zero of the spectral gap as an application of the main result. Comment: Final version, to appear in the European Journal of Combinatorics.
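    Two of the three quantities are easy to compute exactly for a small graph: the diameter by breadth-first search and, in the p = 2 case of the discrete p-Laplacian, the first positive eigenvalue (the spectral gap) from the combinatorial Laplacian. The ℓ_p-distortion is omitted since it requires optimising over embeddings. The sketch below is a generic illustration, not the paper's Cayley/Schreier or Pascal-graph construction.

        import numpy as np
        from collections import deque

        def diameter(adj):
            # Diameter via BFS from every vertex; adj maps a vertex to its neighbours
            best = 0
            for s in adj:
                dist = {s: 0}
                q = deque([s])
                while q:
                    v = q.popleft()
                    for w in adj[v]:
                        if w not in dist:
                            dist[w] = dist[v] + 1
                            q.append(w)
                best = max(best, max(dist.values()))
            return best

        def spectral_gap(adj):
            # First positive eigenvalue of the combinatorial Laplacian (p = 2 case)
            nodes = sorted(adj)
            idx = {v: i for i, v in enumerate(nodes)}
            L = np.zeros((len(nodes), len(nodes)))
            for v in adj:
                for w in adj[v]:
                    L[idx[v], idx[w]] -= 1.0
                    L[idx[v], idx[v]] += 1.0
            return np.sort(np.linalg.eigvalsh(L))[1]

        # Example: the cycle on 10 vertices (diameter 5, gap 2 - 2*cos(2*pi/10))
        cycle = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
        print(diameter(cycle), spectral_gap(cycle))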

    Hardy-Muckenhoupt Bounds for Laplacian Eigenvalues

    We present two graph quantities, Psi(G,S) and Psi_2(G), which give constant-factor estimates of the Dirichlet and Neumann eigenvalues, lambda(G,S) and lambda_2(G), respectively. Our techniques make use of a discrete Hardy-type inequality due to Muckenhoupt.
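    For concreteness, the two eigenvalues being estimated can be computed directly for a small weighted graph: lambda_2(G) as the second-smallest eigenvalue of the Laplacian, and lambda(G,S) as the smallest eigenvalue of the Laplacian restricted to the vertices outside S (one standard Dirichlet convention; the paper's exact normalisation may differ, and the use of the combinatorial rather than the normalised Laplacian here is an assumption). The quantities Psi(G,S) and Psi_2(G) themselves are defined in the paper and are not reproduced.

        import numpy as np

        def laplacian(W):
            # Combinatorial Laplacian L = D - W from a symmetric weight matrix
            return np.diag(W.sum(axis=1)) - W

        def neumann_eigenvalue(W):
            # lambda_2(G): second-smallest Laplacian eigenvalue
            return np.sort(np.linalg.eigvalsh(laplacian(W)))[1]

        def dirichlet_eigenvalue(W, S):
            # lambda(G, S): smallest eigenvalue of the Laplacian with the
            # rows and columns of the boundary set S removed
            keep = [i for i in range(W.shape[0]) if i not in set(S)]
            L = laplacian(W)[np.ix_(keep, keep)]
            return np.sort(np.linalg.eigvalsh(L))[0]

        # Example: path on 5 vertices with Dirichlet boundary at vertex 0
        W = np.zeros((5, 5))
        for i in range(4):
            W[i, i + 1] = W[i + 1, i] = 1.0
        print(neumann_eigenvalue(W), dirichlet_eigenvalue(W, S=[0]))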