
    Monotone discretizations of levelset convex geometric PDEs

    We introduce a novel algorithm that converges to level-set convex viscosity solutions of high-dimensional Hamilton-Jacobi equations. The algorithm is applicable to a broad class of curvature motion PDEs, as well as a recently developed Hamilton-Jacobi equation for the Tukey depth, which is a statistical depth measure of data points. A main contribution of our work is a new monotone scheme for approximating the direction of the gradient, which allows for monotone discretizations of pure partial derivatives in the direction of, and orthogonal to, the gradient. We provide a convergence analysis of the algorithm on both regular Cartesian grids and unstructured point clouds in any dimension, and present numerical experiments that demonstrate the effectiveness of the algorithm in approximating solutions of the affine flow in two dimensions and the Tukey depth measure of high-dimensional datasets such as MNIST and FashionMNIST.
    Comment: 42 pages including references
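    As background, the Tukey (halfspace) depth of a point x with respect to a dataset is the smallest fraction of data points contained in any closed halfspace whose boundary passes through x. This is not the PDE-based method of the paper; the following is only a minimal Monte Carlo sketch that approximates the depth by minimizing over randomly sampled directions (the function name and parameters are illustrative):

    ```python
    import numpy as np

    def tukey_depth(x, data, n_dirs=1000, seed=0):
        """Monte Carlo approximation of the Tukey (halfspace) depth:
        depth(x) ~ min over sampled unit directions u of the fraction
        of data points y satisfying <u, y - x> >= 0."""
        rng = np.random.default_rng(seed)
        d = data.shape[1]
        # sample random unit directions
        u = rng.normal(size=(n_dirs, d))
        u /= np.linalg.norm(u, axis=1, keepdims=True)
        # project centred data onto each direction: shape (n_points, n_dirs)
        proj = (data - x) @ u.T
        # mass of the closed halfspace for each direction; take the minimum
        frac = (proj >= 0).mean(axis=0)
        return frac.min()
    ```

    For a symmetric point cloud the depth of the center is close to 1/2, while points far outside the cloud have depth close to 0; the approximation only improves as more directions are sampled.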

    Deep JKO: time-implicit particle methods for general nonlinear gradient flows

    We develop novel neural network-based implicit particle methods to compute high-dimensional Wasserstein-type gradient flows with linear and nonlinear mobility functions. The main idea is to use the Lagrangian formulation in the Jordan--Kinderlehrer--Otto (JKO) framework, where the velocity field is approximated using a neural network. We leverage the formulations from the neural ordinary differential equation (neural ODE) in the context of continuous normalizing flow for efficient density computation. Additionally, we make use of an explicit recurrence relation for computing derivatives, which greatly streamlines the backpropagation process. Our methodology demonstrates versatility in handling a wide range of gradient flows, accommodating various potential functions and nonlinear mobility scenarios. Extensive experiments demonstrate the efficacy of our approach, including an illustrative example from Bayesian inverse problems. This underscores that our scheme provides a viable alternative solver for the Kalman-Wasserstein gradient flow.
    Comment: 23 pages
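    As a rough illustration of the Lagrangian viewpoint (not the paper's implicit, neural-network-based solver; the velocity field here is a fixed toy function), one explicit Euler step moves particles along a velocity field v and updates the per-particle log-densities using the continuity-equation identity d/dt log rho(x(t)) = -div v(x(t)), with the divergence estimated by central finite differences:

    ```python
    import numpy as np

    def divergence(v, x, eps=1e-4):
        """Finite-difference divergence of v at each row of x (shape (n, d))."""
        n, d = x.shape
        div = np.zeros(n)
        for k in range(d):
            e = np.zeros(d)
            e[k] = eps
            div += (v(x + e)[:, k] - v(x - e)[:, k]) / (2 * eps)
        return div

    def euler_step(x, logp, v, dt):
        """One explicit Euler step: update log-densities along trajectories,
        then move the particles along the velocity field."""
        logp = logp - dt * divergence(v, x)
        x = x + dt * v(x)
        return x, logp
    ```

    For the linear field v(x) = -x (the gradient flow of |x|^2/2), the divergence is exactly -d, so each step contracts the particles toward the origin while the log-densities increase accordingly.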

    The back-and-forth method for Wasserstein gradient flows

    We present a method to efficiently compute Wasserstein gradient flows. Our approach is based on a generalization of the back-and-forth method (BFM) introduced by Jacobs and Léger to solve optimal transport problems. We evolve the gradient flow by solving the dual problem to the JKO scheme. In general, the dual problem is much better behaved than the primal problem. This allows us to efficiently run large-scale simulations for a large class of internal energies, including singular and non-convex energies.
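    For reference, the (primal) JKO time discretization whose dual is solved here reads, with time step $\tau > 0$, internal energy $E$, and current iterate $\rho^k$:

    ```latex
    \rho^{k+1} \in \operatorname*{argmin}_{\rho} \; \frac{1}{2\tau}\, W_2^2\!\left(\rho, \rho^k\right) + E(\rho),
    ```

    where $W_2$ denotes the 2-Wasserstein distance. Each step is a regularized minimization of the energy, and iterating it produces a time-implicit approximation of the gradient flow.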

    Many-Body Quadrupolar Sum Rule for Higher-Order Topological Insulator

    The modern theory of polarization establishes the bulk-boundary correspondence for the bulk polarization. In this paper, we attempt to extend it to a sum rule for the bulk quadrupole moment by employing a many-body operator introduced in [Phys. Rev. B 100, 245134 (2019)] and [Phys. Rev. B 100, 245135 (2019)]. The sum rule that we propose consists of the alternating sum of four observables, which are the phase factors of the many-body operator under different boundary conditions. We demonstrate its validity through extensive numerical computations for various non-interacting tight-binding models. We also observe that, for some models, individual terms in the sum rule correspond to the bulk quadrupole moment, the edge-localized polarizations, and the corner charge in the thermodynamic limit.
    Comment: 13 pages (3 figures)

    Monotone Generative Modeling via a Gromov-Monge Embedding

    Generative Adversarial Networks (GANs) are powerful tools for creating new content, but they face challenges such as sensitivity to starting conditions and mode collapse. To address these issues, we propose a deep generative model that utilizes the Gromov-Monge embedding (GME). It helps identify the low-dimensional structure of the underlying measure of the data and then maps it, while preserving its geometry, into a measure in a low-dimensional latent space, which is then optimally transported to the reference measure. We guarantee the preservation of the underlying geometry by the GME and c-cyclical monotonicity of the generative map, where c is an intrinsic embedding cost employed by the GME. The latter property is a first step in guaranteeing better robustness to initialization of parameters and mode collapse. Numerical experiments demonstrate the effectiveness of our approach in generating high-quality images, avoiding mode collapse, and exhibiting robustness to different starting conditions.
    Comment: 29 pages including main text and appendix
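    For context, a set $\Gamma$ (such as the support of a transport plan) is $c$-cyclically monotone if no finite rearrangement of its pairs lowers the total cost:

    ```latex
    \sum_{i=1}^{n} c(x_i, y_i) \;\le\; \sum_{i=1}^{n} c\!\left(x_i, y_{\sigma(i)}\right)
    \quad \text{for all finite } \{(x_i, y_i)\}_{i=1}^{n} \subset \Gamma
    \text{ and permutations } \sigma,
    ```

    which is the standard optimality condition from optimal transport theory that the generative map is required to satisfy here with the intrinsic embedding cost $c$.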