
    Time integration of tree tensor networks

    Dynamical low-rank approximation by tree tensor networks is studied for the data-sparse approximation of large time-dependent data tensors and of unknown solutions of tensor differential equations. A time integration method for tree tensor networks of prescribed tree rank is presented and analyzed. It extends the known projector-splitting integrators for dynamical low-rank approximation by matrices and Tucker tensors and is shown to inherit their favorable properties. The integrator is based on recursively applying the Tucker tensor integrator. In every time step, the integrator climbs up and down the tree: it uses a recursion that passes from the root to the leaves of the tree to construct initial value problems on subtree tensor networks using appropriate restrictions and prolongations, and another recursion that passes from the leaves to the root to update the factors in the tree tensor network. The integrator reproduces given time-dependent tree tensor networks of the specified tree rank exactly and is robust to the typical presence of small singular values in matricizations of the connection tensors, in contrast to standard integrators applied to the differential equations for the factors in the dynamical low-rank approximation by tree tensor networks.
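    The projector-splitting idea that this integrator extends is easiest to see in the simplest (matrix) case. The sketch below is a minimal first-order K-S-L splitting step for dynamical low-rank approximation of dY/dt = F(Y), using explicit Euler substeps for brevity; it is not the tree tensor network integrator itself, and the function name and substep discretization are my own simplifications. Note the exactness property from the abstract: for F = 0 the step reproduces the low-rank input exactly.

    ```python
    import numpy as np

    def projector_splitting_step(U, S, V, F, h):
        """One first-order projector-splitting (K-S-L) step for dY/dt = F(Y),
        with Y = U @ S @ V.T of rank r (U, V have orthonormal columns).
        Explicit Euler is used in each substep for simplicity."""
        # K-step: evolve K = U S with V frozen, then re-orthonormalize.
        K = U @ S
        K = K + h * F(K @ V.T) @ V
        U1, S_hat = np.linalg.qr(K)
        # S-step: evolve the core *backward* (the minus sign is the
        # characteristic feature of the splitting).
        S_tilde = S_hat - h * U1.T @ F(U1 @ S_hat @ V.T) @ V
        # L-step: evolve L = V S^T with the new U frozen.
        L = V @ S_tilde.T
        L = L + h * F(U1 @ L.T).T @ U1
        V1, S1T = np.linalg.qr(L)
        return U1, S1T.T, V1  # new factors: Y1 = U1 @ S1 @ V1.T
    ```

    For F = 0 the three substeps cancel exactly, so the factorization is reproduced up to floating-point error, mirroring the exact-reproduction property stated in the abstract.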

    Local exchange-correlation vector potential with memory in Time-Dependent Density Functional Theory: the generalized hydrodynamics approach

    Using Landau Fermi liquid theory, we derive a nonlinear non-adiabatic approximation for the exchange-correlation (xc) vector potential defined by the xc stress tensor. The stress tensor is a local nonlinear functional of two basic variables: the displacement vector and the second-rank tensor which describes the evolution of momentum in a local frame moving with Eulerian velocity. For irrotational motion and an equilibrium initial state, the dependence on the tensor variable reduces to that on a metric generated by a dynamical deformation of the system. Comment: RevTeX, 5 pages, no figures. Final version published in PR

    On the validity of the adiabatic approximation in compact binary inspirals

    Using a semi-analytical approach recently developed to model the tidal deformations of neutron stars in inspiralling compact binaries, we study the dynamical evolution of the tidal tensor, which we explicitly derive at second post-Newtonian order, and of the quadrupole tensor. Since we do not assume a priori that the quadrupole tensor is proportional to the tidal tensor, i.e. the so-called "adiabatic approximation", our approach enables us to establish to what extent such an approximation is reliable. We find that the ratio between the quadrupole and tidal tensors (i.e., the Love number) increases as the inspiral progresses, but this phenomenon only marginally affects the emitted gravitational waveform. We estimate the frequency range in which the tidal component of the gravitational signal is well described using the stationary phase approximation at next-to-leading post-Newtonian order, comparing different contributions to the tidal phase. We also derive a semi-analytical expression for the Love number, which reproduces within a few percent the results obtained so far by numerical integrations of the relativistic equations of stellar perturbations. Comment: 13 pages, 1 table, 2 figures. Minor changes to match the version appearing on Phys. Rev.
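    For context, the adiabatic approximation tested in this abstract is the standard textbook relation (not taken from the paper itself) in which the traceless quadrupole tensor responds instantaneously to the tidal field:

    ```latex
    \[
    Q_{ij} = -\lambda\,\mathcal{E}_{ij},
    \qquad
    \lambda = \frac{2}{3}\,k_2\,R^5 \quad (G = c = 1),
    \]
    ```

    where \(\mathcal{E}_{ij}\) is the tidal tensor, \(R\) the stellar radius, and \(k_2\) the dimensionless quadrupolar Love number. The paper's point is precisely that the ratio \(Q_{ij}/\mathcal{E}_{ij}\) is not constant during the inspiral, though the effect on the waveform is marginal.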

    Tensor-Based Algorithms for Image Classification

    Interest in machine learning with tensor networks has been growing rapidly in recent years. We show that tensor-based methods developed for learning the governing equations of dynamical systems from data can, in the same way, be used for supervised learning problems, and we propose two novel approaches for image classification. One is a kernel-based reformulation of the previously introduced multidimensional approximation of nonlinear dynamics (MANDy); the other is an alternating ridge regression in the tensor train format. We apply both methods to the MNIST and Fashion-MNIST data sets and show that the approaches are competitive with state-of-the-art neural network-based classifiers.
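    The alternating ridge regression mentioned above repeatedly solves an ordinary ridge problem for one tensor-train core at a time. The sketch below shows only that inner closed-form ridge solve on an explicit feature matrix, not the paper's TT-formatted algorithm; the function name, regularization value, and toy usage are illustrative assumptions.

    ```python
    import numpy as np

    def ridge_solve(Theta, Y, lam=1e-8):
        """Closed-form ridge regression:
        argmin_W ||Theta @ W - Y||^2 + lam * ||W||^2,
        solved via the regularized normal equations."""
        d = Theta.shape[1]
        return np.linalg.solve(Theta.T @ Theta + lam * np.eye(d), Theta.T @ Y)
    ```

    In the TT setting, `Theta` would be the effective feature matrix obtained by contracting all cores except the one being updated, and the sweep alternates this solve over the cores.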

    An adaptive dynamical low-rank tensor approximation scheme for fast circuit simulation

    Tensors, as higher-order generalizations of matrices, have received growing attention due to their suitability for representing the multidimensional data intrinsic to numerous engineering problems. This paper develops an efficient and accurate dynamical update algorithm for the low-rank mode factors. By means of tangent space projection onto the low-rank tensor manifold, the repeated computation of a full Tucker decomposition is replaced with a much simpler solution of nonlinear differential equations governing the tensor mode factors. A worked-out numerical example demonstrates the excellent efficiency and scalability of the proposed dynamical approximation scheme.
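    The tangent space projection mentioned above has a well-known closed form in the simplest (matrix, rank-r) case; the sketch below implements that projector as a stand-in for the Tucker-manifold version used in the paper, with an illustrative function name.

    ```python
    import numpy as np

    def tangent_project(U, V, Z):
        """Orthogonal projection of Z onto the tangent space of the rank-r
        matrix manifold at Y = U @ S @ V.T (U, V with orthonormal columns):
            P(Z) = Z V V^T + U U^T Z - U U^T Z V V^T."""
        ZV = Z @ V          # Z projected onto the column span of V
        UtZ = U.T @ Z       # Z projected onto the column span of U
        return ZV @ V.T + U @ UtZ - U @ (UtZ @ V) @ V.T
    ```

    Replacing the full decomposition with the projected dynamics means each step only integrates within this tangent space, which is what makes the repeated full Tucker recomputation unnecessary.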