
    On The Isoperimetric Spectrum of Graphs and Its Approximations

    In this paper we consider higher isoperimetric numbers of a (finite directed) graph. In this regard we focus on the n-th mean isoperimetric constant of a directed graph, defined as the minimum of the mean outgoing normalized flows from a given set of n disjoint subsets of the vertex set of the graph. We show that the second mean isoperimetric constant in this general setting coincides with (the mean version of) the classical Cheeger constant of the graph, while for the rest of the spectrum there is a fundamental difference between the n-th isoperimetric constant and the number obtained by taking the minimum over all n-partitions. In this direction, we show that our definition is the correct one in the sense that it satisfies a Federer-Fleming-type theorem, and we define, and give examples of, supergeometric graphs: graphs whose mean isoperimetric constants are attained on partitions at all levels. Moreover, in view of the NP-completeness of the isoperimetric problem on graphs, we address the approximation problem, proving general spectral inequalities that give rise to a general Cheeger-type inequality as well. On the other hand, we also consider algorithmic aspects of the problem, where we show connections to orthogonal representations of graphs and, following J. Malik and J. Shi (2000), study the close relationship to the well-known k-means algorithm and the normalized-cuts method.
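    To make the definition concrete, below is a minimal brute-force sketch of the n-th mean isoperimetric constant on a toy directed graph. All names are illustrative; in particular, normalizing the outgoing flow of a subset Q by its cardinality |Q| (rather than, say, its volume) is an assumption made for this sketch, and the exhaustive search is only feasible for very small graphs.

    ```python
    from itertools import chain, combinations

    def nonempty_subsets(verts):
        """All nonempty subsets of the vertex set, as frozensets."""
        return [frozenset(c) for r in range(1, len(verts) + 1)
                for c in combinations(verts, r)]

    def outflow_ratio(edges, Q):
        """Outgoing boundary arcs of Q divided by |Q|.  Normalizing by
        cardinality rather than volume is an assumption of this sketch."""
        out = sum(1 for (u, v) in edges if u in Q and v not in Q)
        return out / len(Q)

    def mean_isoperimetric(verts, edges, n):
        """Brute-force n-th mean isoperimetric constant: the minimum, over
        all families of n pairwise-disjoint nonempty subsets, of the mean
        outgoing normalized flow.  Exponential time -- toy graphs only."""
        best = float("inf")
        for family in combinations(nonempty_subsets(verts), n):
            if len(frozenset(chain.from_iterable(family))) != sum(map(len, family)):
                continue  # the subsets overlap, so the family is inadmissible
            best = min(best, sum(outflow_ratio(edges, Q) for Q in family) / n)
        return best

    # Two directed triangles joined by a single arc.
    verts = list(range(6))
    edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
    print(mean_isoperimetric(verts, edges, 2))  # 1/6, attained by the two triangles
    ```

    On this example the search returns 1/6, attained by the pair {0, 1, 2}, {3, 4, 5}: the second mean isoperimetric constant measures the cheapest two-way separation, consistent with its coincidence with the (mean) Cheeger constant.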

    Extracting features from eigenfunctions: higher Cheeger constants and sparse eigenbasis approximation

    This thesis investigates links between the eigenvalues and eigenfunctions of the Laplace-Beltrami operator and the higher Cheeger constants of smooth Riemannian manifolds, possibly with boundary. The higher Cheeger constants give a loose description of the major geometric features of a manifold. We obtain a new lower bound on the negative Laplace-Beltrami eigenvalues in terms of the corresponding higher Cheeger constant. The level sets of Laplace-Beltrami eigenfunctions sometimes reveal sets with small Cheeger ratio, representing well-separated features of the manifold. Some manifolds have their major features entwined across several eigenfunctions, and no single eigenfunction contains all the major features. In this case, there may exist carefully chosen linear combinations of the eigenfunctions, each with large values on a single feature and small values elsewhere. We can then apply a soft-thresholding operator to these linear combinations to obtain new functions, each supported on a single feature. We show that the Cheeger ratios of the level sets of these functions also give an upper bound on the Laplace-Beltrami eigenvalues. We extend these level-set results to nonautonomous dynamical systems, and show that the dynamic Laplacian eigenfunctions reveal sets with small dynamic Cheeger ratios. In a later chapter, we propose a numerical method for identifying features represented in eigenvectors arising from spectral clustering methods when those features are not cleanly represented in a single eigenvector. This method provides explicit candidates for the soft-thresholded linear combinations of eigenfunctions mentioned above. Many data clustering techniques produce collections of orthogonal vectors (e.g. eigenvectors) that contain connectivity information about the dataset; this connectivity information must be disentangled by some secondary procedure. We propose a method for finding an approximate sparse basis for the space spanned by the leading eigenvectors, by applying thresholding to linear combinations of eigenvectors. Our procedure is natural, robust, and efficient, and it provides soft-thresholded linear combinations of the input eigenvectors. We develop a new Weyl-inspired eigengap heuristic, along with heuristics based on the sparse basis vectors, to suggest how many eigenvectors to pass to our method.
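    As a toy illustration of the soft-thresholding idea (not the thesis's full procedure, which selects the combinations automatically), the sketch below builds a graph of two cliques joined by one edge, so that the two "features" are entangled across the leading Laplacian eigenvectors, then forms a hand-picked linear combination and soft-thresholds it so the result is supported on a single clique. The combination coefficients and the threshold value are hypothetical choices for this example.

    ```python
    import numpy as np

    def soft_threshold(x, tau):
        """Soft-thresholding operator S_tau(x) = sign(x) * max(|x| - tau, 0)."""
        return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

    # Toy graph: two 4-cliques joined by one edge, so the two features
    # (cliques) are entangled across the leading Laplacian eigenvectors.
    n = 8
    A = np.zeros((n, n))
    A[:4, :4] = 1.0
    A[4:, 4:] = 1.0
    np.fill_diagonal(A, 0.0)
    A[3, 4] = A[4, 3] = 1.0                    # coupling edge
    L = np.diag(A.sum(axis=1)) - A             # combinatorial Laplacian

    evals, evecs = np.linalg.eigh(L)
    V = evecs[:, :2]                           # constant vector + Fiedler vector

    # Hand-picked linear combination (hypothetical coefficients): large on
    # one clique, near zero on the other; soft-thresholding then yields a
    # vector supported on a single feature.
    combo = (V[:, 0] + V[:, 1]) / np.sqrt(2)
    feature = soft_threshold(combo, 0.25)
    print(np.round(combo, 3))
    print(np.round(feature, 3))                # nonzero on only one clique
    ```

    Neither of the two leading eigenvectors alone is supported on a single clique, but the thresholded combination is, which is the disentangling effect the sparse-basis procedure automates.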

    A flexible framework for solving constrained ratio problems in machine learning

    The (constrained) optimization of a ratio of non-negative set functions is a problem that appears frequently in machine learning. As these problems are typically NP-hard, the usual approach is to approximate them through convex or spectral relaxations. While these relaxations can be solved to global optimality, they are often too loose and thus produce suboptimal results. In this thesis we present a flexible framework for solving such constrained fractional set programs (CFSP). The main idea is to transform the combinatorial problem into an equivalent unconstrained continuous problem. We show that such a tight relaxation exists for every CFSP. It turns out that the tight relaxations can be related to a certain type of nonlinear eigenproblem. We present a method to solve nonlinear eigenproblems and thus to optimize the corresponding ratios of (in general non-differentiable) differences of convex functions. While global optimality cannot be guaranteed, we can prove convergence to a solution of the associated nonlinear eigenproblem. Moreover, in practice our method outperforms the loose spectral relaxations by a large margin. Moving to constrained fractional set programs and the corresponding nonlinear eigenproblems leads to greater modelling flexibility, as we demonstrate for several applications in data analysis, namely the optimization of balanced graph cuts, constrained local clustering, community detection via densest subgraphs, and sparse principal component analysis.
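    To make the class of objectives concrete, the sketch below states one classical fractional set program, a Cheeger-type ratio cut, and compares the exact combinatorial optimum (by exhaustive search, so tiny graphs only) with the standard loose spectral relaxation obtained by thresholding the Fiedler vector. This illustrates the kind of ratio being optimized, not the thesis's tight-relaxation algorithm; on this easy example the two answers coincide, whereas the thesis targets harder instances and constraints where the spectral relaxation falls short.

    ```python
    import numpy as np
    from itertools import combinations

    def ratio_cut(A, S, verts):
        """cut(S, S^c) / min(|S|, |S^c|): a classical ratio of set functions
        of the kind a constrained fractional set program optimizes."""
        Sc = verts - S
        cut = sum(A[i, j] for i in S for j in Sc)
        return cut / min(len(S), len(Sc))

    # Toy weighted graph: two dense 4-blocks joined by one weak edge.
    n = 8
    A = np.zeros((n, n))
    A[:4, :4] = 1.0
    A[4:, 4:] = 1.0
    np.fill_diagonal(A, 0.0)
    A[3, 4] = A[4, 3] = 0.5  # the weak bridge

    verts = frozenset(range(n))
    subsets = [frozenset(c) for r in range(1, n) for c in combinations(range(n), r)]

    # Exact combinatorial optimum: exponential time, illustration only.
    best = min(subsets, key=lambda S: ratio_cut(A, S, verts))

    # Loose spectral relaxation: threshold the Fiedler vector at zero.
    L = np.diag(A.sum(axis=1)) - A
    fiedler = np.linalg.eigh(L)[1][:, 1]
    spectral = frozenset(np.flatnonzero(fiedler > 0).tolist())

    print(sorted(best), ratio_cut(A, best, verts))          # exact optimum
    print(sorted(spectral), ratio_cut(A, spectral, verts))  # relaxed answer
    ```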