
    Tensor rank reduction via coordinate flows

    Recently, there has been growing interest in efficient numerical algorithms based on tensor networks and low-rank techniques to approximate high-dimensional functions and solutions to high-dimensional PDEs. In this paper, we propose a new tensor rank reduction method based on coordinate transformations that can greatly increase the efficiency of high-dimensional tensor approximation algorithms. The idea is simple: given a multivariate function, determine a coordinate transformation so that the function in the new coordinate system has smaller tensor rank. We restrict our analysis to linear coordinate transformations, which gives rise to a new class of functions that we refer to as tensor ridge functions. Leveraging Riemannian gradient descent on matrix manifolds, we develop an algorithm that determines a quasi-optimal linear coordinate transformation for tensor rank reduction. The results we present for rank reduction via linear coordinate transformations open the possibility of generalizations to larger classes of nonlinear transformations. Numerical applications are presented and discussed for linear and nonlinear PDEs. Comment: 41 pages, 19 figures
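
    A minimal numerical illustration of the core idea, not the paper's Riemannian algorithm (the test function, grid size, rotation angle and rank tolerance below are assumptions chosen for the toy example): sin(x + y) has tensor rank 2 in Cartesian coordinates, since sin(x + y) = sin(x)cos(y) + cos(x)sin(y), but rank 1 after a 45-degree linear change of coordinates.

        import numpy as np

        n = 64
        t = np.linspace(-1.0, 1.0, n)

        def numerical_rank(M, tol=1e-10):
            # Numerical rank from the singular values of the sampled function.
            s = np.linalg.svd(M, compute_uv=False)
            return int(np.sum(s > tol * s[0]))

        # f(x, y) = sin(x + y) sampled on a tensor-product grid in (x, y).
        X, Y = np.meshgrid(t, t, indexing="ij")
        print(numerical_rank(np.sin(X + Y)))          # 2

        # Same function sampled on a tensor-product grid in the rotated
        # coordinates u = (x + y)/sqrt(2), v = (y - x)/sqrt(2), i.e.
        # x = (u - v)/sqrt(2), y = (u + v)/sqrt(2); it becomes sin(sqrt(2)*u).
        U, V = np.meshgrid(t, t, indexing="ij")
        Xr, Yr = (U - V) / np.sqrt(2.0), (U + V) / np.sqrt(2.0)
        print(numerical_rank(np.sin(Xr + Yr)))        # 1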

    Statistical Communication Theory

    Contains reports on six research projects. National Institutes of Health (Grant MH-04737-02)

    A Stochastic Conjugate Gradient Method for Approximation of Functions

    A stochastic conjugate gradient method for the approximation of a function is proposed. The proposed method avoids computing and storing the covariance matrix in the normal equations for the least squares solution. In addition, the method performs the conjugate gradient steps using an inner product that is based on stochastic sampling. Theoretical analysis shows that the method is convergent in probability. The method has applications in fields such as predistortion for the linearization of power amplifiers. Comment: 21 pages, 5 figures
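
    A hedged sketch of this flavour of method (the target function, Legendre basis, sampling distribution, sample sizes and iteration count are assumptions; the paper's exact algorithm and convergence analysis differ): conjugate gradient is run on the least-squares normal equations matrix-free, with the normal-equations operator and right-hand side estimated from fresh random samples, so the covariance matrix is never computed or stored.

        import numpy as np

        rng = np.random.default_rng(0)
        m = 8                                                 # number of basis functions
        f = lambda x: np.exp(np.sin(3.0 * x))                 # target function (example)
        basis = lambda x: np.polynomial.legendre.legvander(x, m - 1)

        def sampled_matvec(c, n_samples=2000):
            # Estimate A @ c, with A = E[phi(x) phi(x)^T], by Monte Carlo sampling.
            x = rng.uniform(-1.0, 1.0, n_samples)
            P = basis(x)
            return P.T @ (P @ c) / n_samples

        def sampled_rhs(n_samples=2000):
            # Estimate b = E[phi(x) f(x)] by Monte Carlo sampling.
            x = rng.uniform(-1.0, 1.0, n_samples)
            return basis(x).T @ f(x) / n_samples

        # Conjugate gradient using only the sampled operator.
        c = np.zeros(m)
        r = sampled_rhs() - sampled_matvec(c)
        p = r.copy()
        for _ in range(20):
            Ap = sampled_matvec(p)
            alpha = (r @ r) / (p @ Ap)
            c = c + alpha * p
            r_new = r - alpha * Ap
            p = r_new + (r_new @ r_new) / (r @ r) * p
            r = r_new

        print("fitted Legendre coefficients:", np.round(c, 3))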

    Design of generalized fractional order gradient descent method

    This paper focuses on the convergence problem of the emerging fractional order gradient descent method and proposes three solutions to overcome it. In fact, the general fractional gradient method cannot converge to the true extreme point of the target function, which critically hampers the application of this method. Because of the long-memory characteristic of the fractional derivative, the fixed memory principle is a natural first choice. Apart from the truncation of the memory length, two new methods are developed to reach convergence: one is the truncation of the infinite series, and the other is the modification of the constant fractional order. Finally, six illustrative examples are presented to demonstrate the effectiveness and practicability of the proposed methods. Comment: 8 pages, 16 figures
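
    A hedged one-dimensional sketch of the convergence issue and of one convergent variant (the quadratic objective, fractional order, step size, initialization and iteration counts are assumptions, and the updates below are simplified stand-ins rather than the paper's exact schemes): with a fixed lower terminal, the Caputo fractional gradient of f(x) = (x - 3)^2 vanishes at x = 3(2 - alpha) instead of at the true minimizer x = 3; truncating the series to its first term and moving the terminal to the previous iterate restores convergence to x = 3.

        from math import gamma

        alpha, mu = 0.5, 0.1
        df = lambda x: 2.0 * (x - 3.0)           # ordinary gradient of f(x) = (x - 3)^2

        # (a) Fixed terminal c = 0: two-term Caputo series of the fractional gradient.
        def frac_grad_fixed(x, c=0.0):
            return (df(c) / gamma(2 - alpha) * (x - c) ** (1 - alpha)
                    + 2.0 / gamma(3 - alpha) * (x - c) ** (2 - alpha))

        x = 1.0
        for _ in range(5000):
            x -= mu * frac_grad_fixed(x)
        print("fixed terminal:", round(x, 3))     # about 3*(2 - alpha) = 4.5, not 3

        # (b) First term of the series, terminal moved to the previous iterate.
        x_prev, x = 1.0, 1.1
        for _ in range(5000):
            step = abs(x - x_prev) ** (1 - alpha) / gamma(2 - alpha)
            x_prev, x = x, x - mu * df(x) * step
        print("truncated, moving terminal:", round(x, 3))   # close to the minimizer 3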

    Nonadiabatic quantum transition-state theory in the golden-rule limit. I. Theory and application to model systems

    We propose a new quantum transition-state theory for calculating Fermi's golden-rule rates in complex multidimensional systems. This method is able to account for the nuclear quantum effects of delocalization, zero-point energy and tunnelling in an electron-transfer reaction. It is related to instanton theory but can be computed by path-integral sampling and is thus applicable to molecular reactions in solution. A constraint functional based on energy conservation is introduced, which ensures that the dominant paths contributing to the reaction rate are sampled. We prove that the theory gives exact results for a system of crossed linear potentials and also the correct classical limit for any system. In numerical tests, the new method is also seen to be accurate for anharmonic systems, and even gives good predictions for rates in the Marcus inverted regime. Comment: 18 pages and 6 figures
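
    For context, the quantity such golden-rule methods target is the thermal Fermi golden-rule rate; in the standard notation assumed here (Delta is the diabatic coupling, |i> and |f> are nuclear eigenstates on the reactant and product surfaces with energies E_i and E_f, Z_r is the reactant partition function, and beta = 1/k_B T):

        k = \frac{2\pi}{\hbar} \, \frac{1}{Z_r} \sum_{i,f} e^{-\beta E_i}
            \left| \langle f | \Delta | i \rangle \right|^{2} \, \delta(E_i - E_f)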

    Riemannian Optimization for Solving High-Dimensional Problems with Low-Rank Tensor Structure

    In this thesis, we present a Riemannian framework for the solution of high-dimensional optimization problems with an underlying low-rank tensor structure. Here, high-dimensionality refers to the size of the search space, while the cost function is scalar-valued. Such problems arise, for example, in the reconstruction of high-dimensional data sets and in the solution of parameter-dependent partial differential equations. As the degrees of freedom grow exponentially with the number of dimensions, the so-called curse of dimensionality, directly solving the optimization problem is computationally infeasible even for moderately high-dimensional problems. We constrain the optimization problem by assuming a low-rank tensor structure of the solution, drastically reducing the degrees of freedom. We reformulate this constrained optimization as an optimization problem on a manifold, using the smooth embedded Riemannian manifold structure of the low-rank representations in the Tucker and tensor train formats. Exploiting this smooth structure, we derive efficient gradient-based optimization algorithms. In particular, we propose Riemannian conjugate gradient schemes for the solution of the tensor completion problem, where we aim to reconstruct a high-dimensional data set for which the vast majority of entries are unknown. For the solution of linear systems, we show how to precondition the Riemannian gradient and leverage second-order information in an approximate Newton scheme. Finally, we describe a preconditioned alternating optimization scheme with subspace correction for the solution of high-dimensional symmetric eigenvalue problems.
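
    A hedged sketch of the matrix case of this approach (the thesis treats Tucker and tensor-train manifolds with Riemannian conjugate gradient and preconditioning; below is plain Riemannian gradient descent on the manifold of rank-r matrices applied to a small completion problem, and the sizes, rank, sampling fraction, step size and iteration count are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        m, n, r = 60, 50, 3

        # Ground-truth rank-r matrix and a random mask of observed entries (about 50%).
        A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
        mask = rng.random((m, n)) < 0.5

        def truncated_svd(M, r):
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U[:, :r], s[:r], Vt[:r, :]

        # Initial guess: rank-r truncation of the observed data.
        U, s, Vt = truncated_svd(np.where(mask, A, 0.0), r)
        X = U * s @ Vt
        step = 1.0                                # conservative fixed step size

        for _ in range(300):
            G = np.where(mask, X - A, 0.0)        # Euclidean gradient of the completion objective
            # Riemannian gradient: project G onto the tangent space at X = U diag(s) Vt.
            GV, UtG = G @ Vt.T, U.T @ G
            RG = U @ UtG + GV @ Vt - U @ (U.T @ GV) @ Vt
            # Retraction: rank-r truncated SVD of the updated point.
            U, s, Vt = truncated_svd(X - step * RG, r)
            X = U * s @ Vt

        print("relative error:", np.linalg.norm(X - A) / np.linalg.norm(A))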