
    Steklov Spectral Geometry for Extrinsic Shape Analysis

    We propose using the Dirichlet-to-Neumann operator as an extrinsic alternative to the Laplacian for spectral geometry processing and shape analysis. Intrinsic approaches, usually based on the Laplace-Beltrami operator, cannot capture the spatial embedding of a shape up to rigid motion, and many previous extrinsic methods lack theoretical justification. Instead, we consider the Steklov eigenvalue problem, computing the spectrum of the Dirichlet-to-Neumann operator of a surface bounding a volume. A remarkable property of this operator is that it completely encodes volumetric geometry. We use the boundary element method (BEM) to discretize the operator, accelerated by hierarchical numerical schemes and preconditioning; this pipeline allows us to solve eigenvalue and linear problems on large-scale meshes despite the density of the Dirichlet-to-Neumann discretization. We further demonstrate that our operators naturally fit into existing frameworks for geometry processing, making a shift from intrinsic to extrinsic geometry as simple as substituting the Laplace-Beltrami operator with the Dirichlet-to-Neumann operator.
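    As a toy illustration of the Steklov eigenvalue problem (not the paper's BEM pipeline), the sketch below uses the one domain where the Dirichlet-to-Neumann operator is known in closed form: on the unit disk, the harmonic extension of e^{ik\theta} is r^{|k|}e^{ik\theta}, so the Steklov spectrum is exactly {|k|}. Building the dense DtN matrix in a point basis mimics, at toy scale, the dense discretization the paper handles with BEM plus hierarchical acceleration; the boundary resolution n is an illustrative choice.

```python
import numpy as np

# Dirichlet-to-Neumann operator of the unit disk, assembled in a point
# basis on the boundary circle via its known Fourier diagonalization:
# DtN = F^{-1} diag(|k|) F, so the Steklov eigenvalues are exactly |k|.
n = 64
k = np.fft.fftfreq(n, d=1.0 / n)          # integer Fourier mode numbers
W = np.fft.fft(np.eye(n))                 # DFT matrix: W @ x == np.fft.fft(x)
dtn = (W.conj().T @ np.diag(np.abs(k)) @ W).real / n

# The discrete DtN matrix is dense and symmetric, like a BEM discretization;
# its eigenvalues recover the analytic Steklov spectrum 0, 1, 1, 2, 2, ...
steklov = np.linalg.eigvalsh(dtn)
print(steklov[:7])                        # approx [0, 1, 1, 2, 2, 3, 3]
```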

    A flexible framework for solving constrained ratio problems in machine learning

    The (constrained) optimization of a ratio of non-negative set functions is a problem appearing frequently in machine learning. As these problems are typically NP-hard, the usual approach is to approximate them through convex or spectral relaxations. While these relaxations can be solved to global optimality, they are often too loose and thus produce suboptimal results. In this thesis we present a flexible framework for solving such constrained fractional set programs (CFSPs). The main idea is to transform the combinatorial problem into an equivalent unconstrained continuous problem. We show that such a tight relaxation exists for every CFSP. It turns out that the tight relaxations can be related to a certain type of nonlinear eigenproblem. We present a method to solve nonlinear eigenproblems and thus optimize the corresponding ratios of, in general, non-differentiable differences of convex functions. While global optimality cannot be guaranteed, we prove convergence to a solution of the associated nonlinear eigenproblem. Moreover, in practice our method outperforms the loose spectral relaxations by a large margin. Moving to constrained fractional set programs and the corresponding nonlinear eigenproblems leads to greater modelling flexibility, as we demonstrate for several applications in data analysis, namely the optimization of balanced graph cuts, constrained local clustering, community detection via densest subgraphs, and sparse principal component analysis.
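    A minimal sketch of the kind of nonlinear eigenproblem iteration described here, in the spirit of inverse-power-type methods for ratios of one-homogeneous functions. The path graph, the Cheeger-like ratio TV(f)/||f - median(f)||_1, and the crude projected-subgradient inner solver are all illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
B = np.zeros((n - 1, n))                  # incidence matrix of a path graph
for e in range(n - 1):
    B[e, e], B[e, e + 1] = 1.0, -1.0

def ratio(f):
    """Cheeger-like ratio TV(f) / ||f - median(f)||_1 (both one-homogeneous)."""
    g = f - np.median(f)
    return np.abs(B @ g).sum() / np.abs(g).sum()

f = rng.standard_normal(n)
f -= np.median(f)
f /= np.linalg.norm(f)
for outer in range(20):
    lam = ratio(f)                        # current nonlinear eigenvalue estimate
    s = np.sign(f)                        # subgradient of the denominator
    g = f.copy()
    # Inner convex problem: min ||Bg||_1 - lam * <s, g> over the unit ball,
    # solved here by a few hundred projected subgradient steps.
    for it in range(500):
        g -= 0.05 / np.sqrt(1 + it) * (B.T @ np.sign(B @ g) - lam * s)
        g /= max(1.0, np.linalg.norm(g))  # project onto the unit ball
    g -= np.median(g)
    if np.abs(g).sum() > 0 and ratio(g) < lam:   # monotone safeguard
        f = g / np.linalg.norm(g)
print("approximate nonlinear eigenvalue (ratio):", ratio(f))
```

    For the path graph the minimizer is a near-two-level vector cutting the path in the middle, illustrating how the continuous eigenvector encodes a balanced cut.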

    NFFT meets Krylov methods: Fast matrix-vector products for the graph Laplacian of fully connected networks

    The graph Laplacian is a standard tool in data science, machine learning, and image processing. The corresponding matrix inherits the complex structure of the underlying network and in certain applications is densely populated. This makes computations with the graph Laplacian, in particular matrix-vector products, a hard task. A typical application is the computation of a number of its eigenvalues and eigenvectors. Standard methods become infeasible when the number of nodes in the graph is too large. We propose the use of the fast summation based on the nonequispaced fast Fourier transform (NFFT) to perform fast dense matrix-vector products with the graph Laplacian without ever forming the whole matrix. The enormous flexibility of the NFFT algorithm allows us to embed the accelerated multiplication into Lanczos-based eigenvalue routines or iterative linear system solvers and even to consider kernels other than the standard Gaussian. We illustrate the feasibility of our approach on a number of test problems ranging from image segmentation to semi-supervised learning based on graph-based PDEs. In particular, we compare our approach with the Nyström method. Moreover, we present and test an enhanced, hybrid version of the Nyström method, which internally uses the NFFT.
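    Structurally, Lanczos-type routines only touch the operator through matrix-vector products, so a fast summation scheme can be swapped in transparently. The sketch below wires a matrix-free graph Laplacian into SciPy's ARPACK interface; the dense Gaussian-kernel product inside `matvec` is a stand-in to keep the example self-contained, and it is exactly the step the paper replaces with the NFFT-based fast summation. The data, bandwidth, and solver settings are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))         # toy data points
sigma = 1.0
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)                         # node degrees

def matvec(v):
    # L v = D v - W v; the paper computes W v via NFFT fast summation
    # instead of this dense product, never forming W explicitly.
    return d * v - W @ v

L = LinearOperator((500, 500), matvec=matvec, dtype=float)
# Lanczos (ARPACK) sees only matvec; the paper's preconditioning would
# accelerate convergence to the smallest eigenvalues used in clustering.
vals = eigsh(L, k=6, which="SM", return_eigenvectors=False,
             maxiter=50000, tol=1e-8)
print(np.sort(vals))
```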

    Minimizing Quotient Regularization Model

    Quotient regularization models (QRMs) are a class of powerful regularization techniques that have gained considerable attention in recent years due to their ability to handle complex and highly nonlinear data sets. However, the nonconvex nature of QRMs poses a significant challenge in finding their optimal solutions. We are interested in scenarios where both the numerator and the denominator of the QRM are absolutely one-homogeneous functions, a setting widely applicable in signal and image processing. In this paper, we utilize a gradient flow to minimize such a QRM in combination with a quadratic data fidelity term. Our scheme involves solving a convex problem iteratively. The convergence analysis is conducted on a modified scheme in a continuous formulation, showing convergence to a stationary point. Numerical experiments demonstrate the effectiveness of the proposed algorithm in terms of accuracy, outperforming state-of-the-art QRM solvers.
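    A hedged sketch of one plausible instance of such a scheme: each outer step freezes the current quotient value and a subgradient of the denominator, leaving a convex subproblem coupled to the quadratic fidelity. The specific quotient ||Du||_1 / ||u||_2, the 1D signal, and the plain subgradient inner solver are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
# Noisy piecewise-constant signal as a toy fidelity target.
f = np.sign(np.sin(np.linspace(0, 4 * np.pi, n))) + 0.2 * rng.standard_normal(n)
D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]   # forward differences
alpha = 0.5

u = f.copy()
for outer in range(10):
    q = np.abs(D @ u).sum() / np.linalg.norm(u)     # frozen quotient value
    p = u / np.linalg.norm(u)                        # subgradient of ||.||_2
    # Convex subproblem: 0.5||v - f||^2 + alpha*(||Dv||_1 - q*<p, v>),
    # solved here by plain subgradient descent for simplicity.
    v = u.copy()
    for it in range(2000):
        grad = (v - f) + alpha * (D.T @ np.sign(D @ v) - q * p)
        v -= 1e-2 * grad
    u = v
print("final quotient:", np.abs(D @ u).sum() / np.linalg.norm(u))
```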

    Mini-Workshop: Deep Learning and Inverse Problems

    Machine learning, and in particular deep learning, offers several data-driven methods to amend the typical shortcomings of purely analytical approaches. Mathematical research on these combined models is presently exploding on the experimental side but still lagging on the theoretical side. This workshop addresses the challenge of developing a solid mathematical theory for analyzing deep neural networks for inverse problems.