
    Preconditioned low-rank Riemannian optimization for linear systems with tensor product structure

    The numerical solution of partial differential equations on high-dimensional domains gives rise to computationally challenging linear systems. When using standard discretization techniques, the size of the linear system grows exponentially with the number of dimensions, making the use of classic iterative solvers infeasible. During the last few years, low-rank tensor approaches have been developed that mitigate this curse of dimensionality by exploiting the underlying structure of the linear operator. In this work, we focus on tensors represented in the Tucker and tensor train formats. We propose two preconditioned gradient methods on the corresponding low-rank tensor manifolds: a Riemannian version of the preconditioned Richardson method as well as an approximate Newton scheme based on the Riemannian Hessian. For the latter, considerable attention is given to the efficient solution of the resulting Newton equation. In numerical experiments, we compare the efficiency of our Riemannian algorithms with other established tensor-based approaches such as a truncated preconditioned Richardson method and the alternating linear scheme. The results show that our approximate Riemannian Newton scheme is significantly faster in cases where the application of the linear operator is expensive. (Comment: 24 pages, 8 figures)
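
    To make the iteration concrete, here is a minimal sketch (not the paper's implementation) of a truncated preconditioned Richardson iteration for the two-dimensional model problem $(A \otimes I + I \otimes A)\,\mathrm{vec}(X) = \mathrm{vec}(B)$, in which the iterate is retruncated to a fixed rank by an SVD after every step. The scalar step size stands in for a genuine preconditioner, and all names and parameters are illustrative.

        # Truncated Richardson iteration with low-rank retruncation (illustrative sketch).
        import numpy as np

        def truncated_richardson(A, B, rank, omega, n_iter=200):
            X = np.zeros_like(B)
            for _ in range(n_iter):
                R = B - (A @ X + X @ A.T)   # residual of the Sylvester form of the system
                X = X + omega * R           # Richardson step with a scalar "preconditioner"
                U, s, Vt = np.linalg.svd(X, full_matrices=False)
                X = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # retruncate to fixed rank
            return X

        n = 64
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)      # 1D Laplacian stencil
        B = np.outer(np.ones(n), np.ones(n))                      # rank-1 right-hand side
        X = truncated_richardson(A, B, rank=10, omega=1.0 / 8.0)  # spectrum lies below 8
        print(np.linalg.norm(A @ X + X @ A.T - B) / np.linalg.norm(B))

    The slow convergence of such a scalar-damped iteration is precisely what motivates the structured preconditioners and the Riemannian variants studied in the paper, where the correction would additionally be projected onto the tangent space of the fixed-rank manifold.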

    Hydrodynamics of Suspensions of Passive and Active Rigid Particles: A Rigid Multiblob Approach

    We develop a rigid multiblob method for numerically solving the mobility problem for suspensions of passive and active rigid particles of complex shape in Stokes flow in unconfined, partially confined, and fully confined geometries. As in a number of existing methods, we discretize rigid bodies using a collection of minimally-resolved spherical blobs constrained to move as a rigid body, to arrive at a potentially large linear system of equations for the unknown Lagrange multipliers and rigid-body motions. Here we develop a block-diagonal preconditioner for this linear system and show that a standard Krylov solver converges in a modest number of iterations that is essentially independent of the number of particles. For unbounded suspensions and suspensions sedimented against a single no-slip boundary, we rely on existing analytical expressions for the Rotne-Prager tensor combined with a fast multipole method or a direct summation on a Graphics Processing Unit to obtain a simple yet efficient and scalable implementation. For fully confined domains, such as periodic suspensions or suspensions confined in slit and square channels, we extend a recently-developed rigid-body immersed boundary method to suspensions of freely-moving passive or active rigid particles at zero Reynolds number. We demonstrate that the iterative solver for the coupled fluid and rigid body equations converges in a bounded number of iterations regardless of the system size. We optimize a number of parameters in the iterative solvers and apply our method to a variety of benchmark problems to carefully assess the accuracy of the rigid multiblob approach as a function of the resolution. We also model the dynamics of colloidal particles studied in recent experiments, such as passive boomerangs in a slit channel, as well as a pair of non-Brownian active nanorods sedimented against a wall. (Comment: Under revision in CAMCOS, Nov 2016)
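
    For reference, below is a minimal sketch of the Rotne-Prager(-Yamakawa) pair mobility that couples two spherical blobs in unbounded Stokes flow, including the standard regularization for overlapping blobs; the function name and parameter values are illustrative and not taken from the authors' code.

        # Rotne-Prager-Yamakawa mobility block for a pair of blobs (illustrative sketch).
        import numpy as np

        def rpy_mobility(r_vec, a, eta):
            """3x3 mobility coupling two blobs of radius a separated by r_vec."""
            r = np.linalg.norm(r_vec)
            I = np.eye(3)
            if r == 0.0:                        # self-mobility of a single blob
                return I / (6 * np.pi * eta * a)
            rr = np.outer(r_vec, r_vec) / r**2  # projector along the separation
            if r >= 2 * a:                      # non-overlapping blobs
                c1 = 1 + 2 * a**2 / (3 * r**2)
                c2 = 1 - 2 * a**2 / r**2
                return (c1 * I + c2 * rr) / (8 * np.pi * eta * r)
            c1 = 1 - 9 * r / (32 * a)           # overlapping blobs: RPY regularization
            c2 = 3 * r / (32 * a)
            return (c1 * I + c2 * rr) / (6 * np.pi * eta * a)

        M = rpy_mobility(np.array([3.0, 0.0, 0.0]), a=1.0, eta=1.0)
        print(M @ np.array([1.0, 0.0, 0.0]))    # velocity response to a unit force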

    Distributed PCP Theorems for Hardness of Approximation in P

    We present a new distributed model of probabilistically checkable proofs (PCP). A satisfying assignment $x \in \{0,1\}^n$ to a CNF formula $\varphi$ is shared between two parties, where Alice knows $x_1, \dots, x_{n/2}$, Bob knows $x_{n/2+1}, \dots, x_n$, and both parties know $\varphi$. The goal is to have Alice and Bob jointly write a PCP that $x$ satisfies $\varphi$, while exchanging little or no information. Unfortunately, this model as-is does not allow for nontrivial query complexity. Instead, we focus on a non-deterministic variant, where the players are helped by Merlin, a third party who knows all of $x$. Using our framework, we obtain, for the first time, PCP-like reductions from the Strong Exponential Time Hypothesis (SETH) to approximation problems in P. In particular, under SETH we show that there are no truly-subquadratic approximation algorithms for Bichromatic Maximum Inner Product over $\{0,1\}$-vectors, Bichromatic LCS Closest Pair over permutations, Approximate Regular Expression Matching, and Diameter in Product Metric. All our inapproximability factors are nearly-tight. In particular, for the first two problems we obtain nearly-polynomial factors of $2^{(\log n)^{1-o(1)}}$; only $(1+o(1))$-factor lower bounds (under SETH) were known before.
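
    For orientation, the exact quadratic-time baseline that these lower bounds target is the exhaustive search sketched below; under SETH, no truly-subquadratic algorithm can beat it for Bichromatic Maximum Inner Product even within a $2^{(\log n)^{1-o(1)}}$ approximation factor. Sizes and names here are illustrative.

        # Exact bichromatic Max-IP by exhaustive search: O(|A||B|d) time (illustrative).
        import numpy as np

        def max_inner_product(A, B):
            best, best_pair = -1, None
            for i, a in enumerate(A):
                for j, b in enumerate(B):
                    ip = int(a @ b)
                    if ip > best:
                        best, best_pair = ip, (i, j)
            return best, best_pair

        rng = np.random.default_rng(0)
        A = rng.integers(0, 2, size=(100, 32))   # Alice's {0,1}-vectors
        B = rng.integers(0, 2, size=(100, 32))   # Bob's {0,1}-vectors
        print(max_inner_product(A, B))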

    New numerical approaches for modeling thermochemical convection in a compositionally stratified fluid

    Seismic imaging of the mantle has revealed large- and small-scale heterogeneities in the lower mantle; specifically, structures known as large low shear velocity provinces (LLSVPs) below Africa and the South Pacific. Most interpretations propose that the heterogeneities are compositional in nature, differing in composition from the overlying mantle, an interpretation that would be consistent with chemical geodynamic models. Numerical modeling of persistent compositional interfaces presents challenges, even to state-of-the-art numerical methodology. For example, some numerical algorithms for advecting the compositional interface cannot maintain a sharp compositional boundary as the fluid migrates and distorts with time-dependent fingering, because of the numerical diffusion that has been added in order to maintain the upper and lower bounds on the composition variable and the stability of the advection method. In this work we present two new algorithms for maintaining a sharper computational boundary than the advection methods currently openly available to the computational mantle convection community: a Discontinuous Galerkin method with a bound-preserving limiter and a Volume-of-Fluid interface tracking algorithm. We compare these two new methods with two approaches commonly used for modeling the advection of two distinct, thermally driven, compositional fields in mantle convection problems: a high-order accurate finite element advection algorithm that employs an artificial viscosity technique to maintain both the bounds on the composition variable and the stability of the scheme, and the advection of particles that carry a scalar quantity representing the location of each compositional field. All four of these algorithms are implemented in the open-source FEM code ASPECT.
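
    The numerical-diffusion problem described above is easy to reproduce in one dimension: a monotone, bound-preserving scheme such as first-order upwind keeps the composition within [0, 1] but smears an initially sharp interface over many cells. A minimal illustrative sketch, unrelated to the ASPECT implementation:

        # First-order upwind advection of a sharp compositional interface (illustrative).
        import numpy as np

        n, cfl, steps = 200, 0.5, 100
        C = np.zeros(n)
        C[: n // 2] = 1.0                         # sharp interface between two compositions
        for _ in range(steps):                    # advect rightward; C[0] acts as inflow
            C[1:] -= cfl * (C[1:] - C[:-1])       # upwind update
        assert 0.0 <= C.min() and C.max() <= 1.0  # upwinding preserves the bounds...
        width = np.sum((C > 0.01) & (C < 0.99))   # ...but diffuses the interface
        print(f"interface smeared over {width} cells (initially 0)")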

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (Comment: 232 pages)
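
    As a small concrete example of the tensor train format that runs through the monograph, the sketch below implements the standard TT-SVD factorization of a dense tensor into TT cores via sequential reshapes and truncated SVDs; the tolerance and the test tensor are illustrative.

        # TT-SVD: factor a dense tensor into tensor-train cores (illustrative sketch).
        import numpy as np

        def tt_svd(T, eps=1e-10):
            """Return TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
            shape, d = T.shape, T.ndim
            cores, r = [], 1
            M = T.reshape(shape[0], -1)
            for k in range(d - 1):
                U, s, Vt = np.linalg.svd(M, full_matrices=False)
                rk = max(1, int(np.sum(s > eps * s[0])))         # truncated TT rank
                cores.append(U[:, :rk].reshape(r, shape[k], rk))
                M = (s[:rk, None] * Vt[:rk]).reshape(rk * shape[k + 1], -1)
                r = rk
            cores.append(M.reshape(r, shape[-1], 1))
            return cores

        def tt_to_full(cores):
            """Recontract the cores into the full tensor (for testing)."""
            full = cores[0]
            for G in cores[1:]:
                full = np.tensordot(full, G, axes=([-1], [0]))
            return full[0, ..., 0]

        v = np.linspace(0.0, 1.0, 8)
        T = np.einsum('i,j,k->ijk', v, v, v) + 1.0   # a tensor with TT ranks at most 2
        cores = tt_svd(T)
        print([G.shape for G in cores], np.linalg.norm(tt_to_full(cores) - T))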


    A Parallel Tensor Network Contraction Algorithm and Its Applications in Quantum Computation

    Tensors are a natural generalization of matrices, and tensor networks are a natural generalization of matrix products. Despite the simple definition of tensor networks, they are versatile enough to represent many different kinds of "products" that arise in various theoretical and practical problems. In particular, the powerful computational model of quantum computation can be defined almost entirely in terms of matrix products and tensor products, both of which are special cases of tensor networks. As such, (classical) algorithms for evaluating tensor networks have profound importance in the study of quantum computation. In this thesis, we design and implement a parallel algorithm for tensor network contraction. In addition to finding efficient contraction orders for a tensor network, we also dynamically slice it into multiple sub-tasks with lower space and time costs, in order to evaluate the tensor network in parallel. We refer to such an evaluation strategy as a contraction scheme for the tensor network. In addition, we introduce a local optimization procedure that improves the efficiency of the contraction schemes we find. We also investigate the applications of our parallel tensor network contraction algorithm in quantum computation. The most immediate application is the simulation of random quantum supremacy circuits, where we benchmark our algorithm to demonstrate its advantage over other similar tensor-network-based simulators. Other applications include evaluating the energy function of the Quantum Approximate Optimization Algorithm (QAOA) and simulating surface codes under a realistic error model with crosstalk. (PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/163098/1/fangzh_1.pd)
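
    To illustrate the two basic ingredients of such a contraction scheme, the sketch below contracts a tiny tensor network pairwise under an explicit contraction order, then "slices" a bond index so that each slice can be contracted independently (e.g., in parallel) and the partial results summed. The network and the chosen order are illustrative, not taken from the thesis.

        # Pairwise tensor network contraction and index slicing (illustrative sketch).
        import numpy as np

        rng = np.random.default_rng(1)
        A = rng.standard_normal((2, 8))   # indices (a, b)
        B = rng.standard_normal((8, 8))   # indices (b, c)
        C = rng.standard_normal((8, 2))   # indices (c, d)

        # Contraction order matters for cost: contracting (A, B) first keeps the
        # intermediate at size 2x8 rather than the 8x8 produced by (B, C) first.
        AB = np.einsum('ab,bc->ac', A, B)      # contract the shared index b
        ABC = np.einsum('ac,cd->ad', AB, C)    # contract the shared index c
        assert np.allclose(ABC, A @ B @ C)

        # Slicing index c: fix c to each value, contract the smaller sliced
        # networks independently (one parallel sub-task each), then sum.
        sliced = sum(np.einsum('ab,bc,cd->ad', A, B[:, [k]], C[[k], :])
                     for k in range(8))
        assert np.allclose(sliced, ABC)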