
    Parameter reduction in nonlinear state-space identification of hysteresis

    Hysteresis is a highly nonlinear phenomenon that shows up in a wide variety of science and engineering problems. The identification of hysteretic systems from input-output data is a challenging task. Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. The work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials together with their degree, as well as the connections with neural network modeling. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model by up to about 50%, while maintaining a comparable output error level. (Comment: 24 pages, 8 figures)
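
    A minimal back-of-the-envelope sketch of the kind of parameter reduction described above, assuming a generic decoupled form f(x) ~ W g(V^T x) with r univariate polynomial branches; the sizes m, n, d, r below are illustrative placeholders, not values taken from the paper, and the count covers only the polynomial coefficients, not the linear state-space part.

    from math import comb

    # Hypothetical sizes: m inputs to the nonlinearity, n outputs,
    # polynomial degree d, r decoupled (univariate) branches.
    m, n, d, r = 10, 7, 3, 4   # illustrative values only

    # Full (coupled) model: one coefficient per multivariate monomial of
    # total degree <= d, for each of the n output components.
    n_full = n * comb(m + d, d)

    # Decoupled model f(x) ~ W g(V^T x): mixing matrices V (m x r), W (n x r)
    # plus r univariate polynomials with d + 1 coefficients each.
    n_dec = m * r + n * r + r * (d + 1)

    print(f"coupled multivariate model : {n_full} parameters")
    print(f"decoupled univariate model : {n_dec} parameters")
    print(f"reduction                  : {100 * (1 - n_dec / n_full):.1f}%")

    With these placeholder sizes the decoupled form needs far fewer coefficients than the coupled multivariate polynomial; the roughly 50% figure quoted in the abstract refers to the complete nonlinear state-space model on the Bouc-Wen benchmark.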

    Implementation of rigorous renormalization group method for ground space and low-energy states of local Hamiltonians

    The practical success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad, Landau, Vazirani, and Vidick. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce an efficient implementation of the theoretical RRG procedure which finds MPS ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in situations of practical interest. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a tree-like manner. We evaluate the algorithm numerically, finding similar performance to DMRG in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations involving criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases. (Comment: 13 pages, 10 figures)
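
    A loose toy illustration of the tree-like construction mentioned above, assuming a small transverse-field Ising chain and brute-force diagonalization of contiguous segments: keep a few low-energy states per block, merge blocks pairwise, project the merged-segment Hamiltonian into the tensored kept bases, and truncate again. This is only a sketch of the general idea, not the RRG implementation of the paper.

    import numpy as np
    from functools import reduce

    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I2 = np.eye(2)

    def kron_all(ops):
        """Kronecker product of a list of single-site operators."""
        return reduce(np.kron, ops)

    def ising_chain(n, g=1.0):
        """Open transverse-field Ising chain, H = -sum Z_i Z_{i+1} - g sum X_i (toy model)."""
        H = np.zeros((2 ** n, 2 ** n))
        for i in range(n - 1):
            H -= kron_all([I2] * i + [sz, sz] + [I2] * (n - i - 2))
        for i in range(n):
            H -= g * kron_all([I2] * i + [sx] + [I2] * (n - i - 1))
        return H

    def lowest_isometry(H, k):
        """Columns are the k lowest-energy eigenvectors of H."""
        _, vecs = np.linalg.eigh(H)
        return vecs[:, :k]

    n_sites, g, k = 8, 1.0, 4

    # Leaves of the tree: 2-site blocks, each with a kept low-energy subspace.
    blocks = [(2, lowest_isometry(ising_chain(2, g), k)) for _ in range(n_sites // 2)]

    # Merge blocks pairwise: tensor the kept bases, project the Hamiltonian of the
    # merged segment (uniform couplings, open boundaries), and keep the k lowest states.
    while len(blocks) > 1:
        merged = []
        for (nL, VL), (nR, VR) in zip(blocks[0::2], blocks[1::2]):
            n_tot = nL + nR
            V = np.kron(VL, VR)                       # candidate basis for the merged segment
            H_eff = V.T @ ising_chain(n_tot, g) @ V   # projected segment Hamiltonian
            _, vecs = np.linalg.eigh(H_eff)
            merged.append((n_tot, V @ vecs[:, :k]))   # refined isometry into the segment space
        blocks = merged

    H_full = ising_chain(n_sites, g)
    _, V_top = blocks[0]
    print("tree estimate :", np.min(np.linalg.eigvalsh(V_top.T @ H_full @ V_top)))
    print("exact ground  :", np.min(np.linalg.eigvalsh(H_full)))

    On a chain this small the tree estimate tracks the exact ground energy closely for moderate k; the point is only to show the merging of kept subspaces up a tree, not the performance of RRG itself.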

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher-order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone separate texts, or indeed as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions. (Comment: 232 pages)
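
    To make the tensor train (TT) format mentioned above concrete, here is a minimal TT-SVD-style sketch in NumPy: sweep over the modes, reshape, truncate each SVD, and collect the cores. The example tensor, ranks, and tolerance are illustrative choices, not anything specified in the monograph.

    import numpy as np

    def tt_svd(tensor, max_rank=8, tol=1e-10):
        """Decompose a dense tensor into tensor-train (TT/MPS) cores by sequential truncated SVDs."""
        dims = tensor.shape
        cores, r_prev = [], 1
        mat = tensor.reshape(dims[0], -1)
        for k in range(len(dims) - 1):
            U, S, Vt = np.linalg.svd(mat, full_matrices=False)
            r = max(1, min(max_rank, int(np.sum(S > tol))))
            cores.append(U[:, :r].reshape(r_prev, dims[k], r))
            mat = (S[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)  # carry the remainder forward
            r_prev = r
        cores.append(mat.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        """Contract the TT cores back into a dense tensor (to check the approximation)."""
        out = cores[0]
        for core in cores[1:]:
            out = np.tensordot(out, core, axes=([-1], [0]))
        return out.squeeze(axis=(0, -1))

    # Example: a nearly rank-1 4-way tensor compresses far below its dense size.
    rng = np.random.default_rng(0)
    a, b, c, d = (rng.standard_normal(6) for _ in range(4))
    T = np.einsum('i,j,k,l->ijkl', a, b, c, d) + 1e-3 * rng.standard_normal((6, 6, 6, 6))
    cores = tt_svd(T, max_rank=4)
    err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
    print([core.shape for core in cores], f"relative error {err:.2e}")

    Each core has shape (r_{k-1}, d_k, r_k), so for bounded TT rank the storage grows linearly with the number of modes instead of exponentially, which is the compression effect the monograph exploits.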

    Connecting lattice and relativistic models via conformal field theory

    We consider the quantum group invariant XXZ model. In the infrared limit it describes a Conformal Field Theory with a modified energy-momentum tensor. The correlation functions are related to solutions of the level -4 qKZ equations. We describe these solutions by relating them to level 0 solutions. We further consider general matrix elements (form factors) containing local operators and asymptotic states. We explain that the formulae for solutions of the qKZ equations suggest a decomposition of these matrix elements with respect to states of the corresponding Conformal Field Theory. (Comment: 22 pages, 1 figure)

    Lattice Gauge Tensor Networks

    We present a unified framework to describe lattice gauge theories by means of tensor networks: this framework is efficient as it exploits the large amount of local symmetry native to these systems, describing only the gauge-invariant subspace. Compared to a standard tensor network description, the gauge-invariant one allows one to speed up real- and imaginary-time evolution by a factor that is up to the square of the dimension of the link variable. The gauge-invariant tensor network description is based on the quantum link formulation, a compact and intuitive formulation for gauge theories on the lattice; it is an alternative to, and can be combined with, the global symmetric tensor network description. We present some paradigmatic examples that show how this architecture might be used to describe the physics of condensed matter and high-energy physics systems. Finally, we present a cellular automata analysis which estimates the gauge-invariant Hilbert space dimension as a function of the number of lattice sites and might guide the search for effective simplified models of complex theories. (Comment: 28 pages, 9 figures)
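
    As a toy analogue of the dimension count mentioned at the end of the abstract, the brute-force sketch below enumerates the gauge-invariant subspace of an assumed 1d Z2 gauge theory with matter occupations n_i in {0,1} on L sites, electric link variables E in {+1,-1} on the L+1 links of an open chain, and a Gauss law of the form E_left(i) * E_right(i) = (-1)^{n_i}. The model and constraint are illustrative assumptions of this sketch, not the quantum link models treated in the paper.

    import itertools

    def gauge_invariant_dimension(L):
        """Count basis states of the toy Z2 chain that satisfy the assumed Gauss law at every site."""
        count = 0
        for links in itertools.product([+1, -1], repeat=L + 1):
            for matter in itertools.product([0, 1], repeat=L):
                if all(links[i] * links[i + 1] == (-1) ** matter[i] for i in range(L)):
                    count += 1
        return count

    for L in range(1, 7):
        full = 2 ** (L + 1) * 2 ** L   # unconstrained link x matter Hilbert space
        inv = gauge_invariant_dimension(L)
        print(f"L={L}: full dim {full:5d}, gauge-invariant dim {inv:4d}")

    In this toy the constrained dimension grows like 2^(L+1) versus 2^(2L+1) for the unconstrained space, which is the kind of reduction a gauge-invariant tensor network description is built to exploit.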