
    Multidimensional approximation of nonlinear dynamical systems

    A key task in modeling and analyzing nonlinear dynamical systems is the recovery of unknown governing equations from measurement data alone. This important instance of system identification has a wide range of applications, from industrial engineering and acoustic signal processing to stock market models. Various data-driven methods have been proposed by different communities to find appropriate representations of the underlying dynamical systems. However, if the given data sets are high-dimensional, these methods typically suffer from the curse of dimensionality. To significantly reduce the computational costs and storage consumption, we propose the method multidimensional approximation of nonlinear dynamical systems (MANDy), which combines data-driven methods with tensor network decompositions. The efficiency of the introduced approach is illustrated with the aid of several high-dimensional nonlinear dynamical systems.
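    The data-driven recovery described above can be sketched in its dense form: regress the observed time derivatives onto a dictionary of candidate basis functions and read off the sparse coefficients. This is a minimal, numpy-only illustration of the general idea (the dense counterpart of what MANDy accelerates with tensor networks); the toy system and all names are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D system with known governing equation: dx/dt = -2*x + 0.5*x**3
x = rng.uniform(-1.0, 1.0, size=200)
dx = -2.0 * x + 0.5 * x**3

# Dictionary of candidate basis functions Theta(x) = [1, x, x^2, x^3]
Theta = np.vander(x, N=4, increasing=True)

# Solve Theta @ xi ~ dx in the least-squares sense
xi, *_ = np.linalg.lstsq(Theta, dx, rcond=None)

print(np.round(xi, 6))  # recovered coefficients for [1, x, x^2, x^3]
```

    With noise-free data the least-squares fit recovers the coefficients of the governing equation essentially exactly; the curse of dimensionality appears because the dictionary grows combinatorially with the state dimension, which is exactly what the tensor-network representation compresses.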

    Tensor network states and algorithms in the presence of a global SU(2) symmetry

    The benefits of exploiting the presence of symmetries in tensor network algorithms have been extensively demonstrated in the context of matrix product states (MPSs). These include the ability to select a specific symmetry sector (e.g. with a given particle number or spin), to ensure the exact preservation of total charge, and to significantly reduce computational costs. Compared to the case of a generic tensor network, the practical implementation of symmetries in the MPS is simplified by the fact that tensors only have three indices (they are trivalent, just as the Clebsch-Gordan coefficients of the symmetry group) and are organized as a one-dimensional array of tensors, without closed loops. Instead, a more complex tensor network, one where tensors have a larger number of indices and/or a more elaborate network structure, requires a more general treatment. In two recent papers, namely (i) [Phys. Rev. A 82, 050301 (2010)] and (ii) [Phys. Rev. B 83, 115125 (2011)], we described how to incorporate a global internal symmetry into a generic tensor network algorithm based on decomposing and manipulating tensors that are invariant under the symmetry. In (i) we considered a generic symmetry group G that is compact, completely reducible and multiplicity free, acting as a global internal symmetry. Then in (ii) we described the practical implementation of Abelian group symmetries. In this paper we describe the implementation of non-Abelian group symmetries in great detail and, for concreteness, consider an SU(2) symmetry. Our formalism can be readily extended to more exotic symmetries associated with conservation of total fermionic or anyonic charge. As a practical demonstration, we describe the SU(2)-invariant version of the multi-scale entanglement renormalization ansatz and apply it to study the low energy spectrum of a quantum spin chain with a global SU(2) symmetry.
    Comment: 32 pages, 37 figures
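    The comparison to Clebsch-Gordan coefficients above can be made concrete: a trivalent SU(2)-invariant tensor can carry nonzero blocks only on spin triples that satisfy the SU(2) fusion (selection) rules. This small sketch is illustrative only, not the paper's algorithm; the spin cutoff and function names are assumptions.

```python
# Selection rules for coupling SU(2) spins, as obeyed by Clebsch-Gordan
# coefficients: triangle inequality plus integer total spin.
def fuses(j1, j2, j3):
    return abs(j1 - j2) <= j3 <= j1 + j2 and (j1 + j2 + j3) % 1 == 0

# Enumerate the allowed blocks of a trivalent tensor with spins up to 3/2
spins = [0.0, 0.5, 1.0, 1.5]
allowed = [(a, b, c) for a in spins for b in spins for c in spins
           if fuses(a, b, c)]

# Only these blocks need to be stored and manipulated, which is where the
# computational savings of symmetric tensor networks come from.
print(len(allowed), "allowed blocks out of", len(spins) ** 3)
```

    Enforcing the fusion rules at every tensor is what guarantees exact charge conservation and restricts storage to the symmetric blocks.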

    Tensor network states and algorithms in the presence of a global U(1) symmetry

    Tensor network decompositions offer an efficient description of certain many-body states of a lattice system and are the basis of a wealth of numerical simulation algorithms. In a recent paper [arXiv:0907.2994v1] we discussed how to incorporate a global internal symmetry, given by a compact, completely reducible group G, into tensor network decompositions and algorithms. Here we specialize to the case of Abelian groups and, for concreteness, to a U(1) symmetry, often associated with particle number conservation. We consider tensor networks made of tensors that are invariant (or covariant) under the symmetry, and explain how to decompose and manipulate such tensors in order to exploit their symmetry. In numerical calculations, the use of U(1) symmetric tensors allows selection of a specific number of particles, ensures the exact preservation of particle number, and significantly reduces computational costs. We illustrate all these points in the context of the multi-scale entanglement renormalization ansatz.
    Comment: 22 pages, 25 figures, RevTeX
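    The Abelian case admits a particularly simple picture: a U(1)-invariant matrix is block-diagonal in the charge basis, with an entry allowed only when the charges of its row and column indices agree. The following numpy sketch is illustrative (the charge assignments are made up, not taken from the paper).

```python
import numpy as np

# Particle number (U(1) charge) carried by each row/column basis state
row_charge = np.array([0, 0, 1, 1, 2])
col_charge = np.array([0, 1, 1, 2, 2])

rng = np.random.default_rng(1)
T = rng.normal(size=(5, 5))

# A U(1)-invariant tensor vanishes wherever the charges disagree
mask = row_charge[:, None] == col_charge[None, :]
T_sym = np.where(mask, T, 0.0)

# Only the allowed blocks need to be stored; the rest are identically zero
print("fraction of allowed entries:", mask.mean())
```

    Because `T_sym` never mixes charge sectors, applying it preserves particle number exactly, and storage and contraction costs scale with the allowed blocks rather than the full tensor.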

    Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis

    The widespread use of multi-sensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
    Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
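    Of the models mentioned above, the Tucker family has a particularly compact numpy realization via the higher-order SVD (HOSVD): take the left singular vectors of each mode unfolding as factor matrices and contract them with the tensor to obtain the core. This is a minimal illustrative sketch, not code from the survey; the helper name `unfold` is an assumption.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 5, 6))

# Factor matrices: left singular vectors of each mode unfolding
U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(3)]

# Core tensor: contract T with the transposed factors along each mode
G = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# At full multilinear rank the reconstruction is exact
T_hat = np.einsum('abc,ia,jb,kc->ijk', G, U[0], U[1], U[2])
print(np.allclose(T, T_hat))
```

    Truncating the columns of each factor matrix gives the usual low-multilinear-rank Tucker approximation, trading exactness for the storage savings the survey emphasizes.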