Identifying and modelling delay feedback systems
Systems with delayed feedback can possess chaotic attractors with extremely
high dimension, even if only a few physical degrees of freedom are involved. We
propose a state space reconstruction from time series data of a scalar
observable, together with a novel method to identify and model such
systems when a single variable is fed back. Making use of special properties of
the feedback structure, we can understand the structure of the system by
constructing equivalent equations of motion in spaces with dimensions which can
be much smaller than the dimension of the chaotic attractor. We verify our
method using both numerical and experimental data.
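The reconstruction described above starts from delay coordinates of the scalar observable. A minimal sketch of such a delay-coordinate embedding is given below; the function name, the lag, and the embedding dimension are illustrative choices, not values from the paper:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar series x into `dim`-dimensional delay vectors
    with lag `tau` samples: v_t = (x_t, x_{t-tau}, ..., x_{t-(dim-1)*tau})."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau          # number of complete delay vectors
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    # column k holds the series shifted back by k*tau samples
    return np.column_stack(
        [x[(dim - 1 - k) * tau : (dim - 1 - k) * tau + n] for k in range(dim)]
    )

# Example: embed a ramp signal; row 0 is (x_10, x_5, x_0) for dim=3, tau=5.
emb = delay_embed(np.arange(100.0), dim=3, tau=5)
```

For a system with delayed feedback, the key point of the paper is that equivalent equations of motion can be found in such low-dimensional spaces even when the chaotic attractor itself is high-dimensional.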
Generalized Teacher Forcing for Learning Chaotic Dynamics
Chaotic dynamical systems (DS) are ubiquitous in nature and society. Often we
are interested in reconstructing such systems from observed time series for
prediction or mechanistic insight, where by reconstruction we mean learning
geometrical and invariant temporal properties of the system in question (like
attractors). However, training reconstruction algorithms like recurrent neural
networks (RNNs) on such systems by gradient-descent based techniques faces
severe challenges. This is mainly due to exploding gradients caused by the
exponential divergence of trajectories in chaotic systems. Moreover, for
(scientific) interpretability we wish to have as low dimensional
reconstructions as possible, preferably in a model which is mathematically
tractable. Here we report that a surprisingly simple modification of teacher
forcing leads to provably strictly all-time bounded gradients in training on
chaotic systems, and, when paired with a simple architectural rearrangement of
a tractable RNN design, piecewise-linear RNNs (PLRNNs), allows for faithful
reconstruction in spaces of at most the dimensionality of the observed system.
We show on several DS that with these amendments we can reconstruct DS better
than current SOTA algorithms, in much lower dimensions. Performance differences
were particularly compelling on real world data with which most other methods
severely struggled. This work thus led to a simple yet powerful DS
reconstruction algorithm which is highly interpretable at the same time.
Comment: Published in the Proceedings of the 40th International Conference on
Machine Learning (ICML 2023)
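The core idea of the modified teacher forcing can be illustrated with a toy sketch: at each step, the freely evolved latent state is interpolated toward a data-inferred state by a fixed factor, which keeps trajectories (and hence gradients) from diverging. The piecewise-linear update, the parameter values, and the interpolation factor below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy piecewise-linear RNN (PLRNN-style) latent update:
#   z_{t+1} = A z_t + W relu(z_t) + h   (A diagonal; all values illustrative)
D = 3
A = np.diag(rng.uniform(0.5, 0.9, D))
W = 0.1 * rng.standard_normal((D, D))
h = 0.1 * rng.standard_normal(D)

def step(z):
    return A @ z + W @ np.maximum(z, 0.0) + h

def gtf_rollout(z0, data_states, alpha):
    """Generalized-teacher-forcing-style rollout: after each free step,
    interpolate the latent state toward the data-inferred state d_t.
    alpha=0 -> fully autonomous rollout; alpha=1 -> classical teacher forcing."""
    z, traj = np.asarray(z0, dtype=float), []
    for d in data_states:
        z = step(z)
        z = alpha * d + (1.0 - alpha) * z   # interpolation toward the data
        traj.append(z)
    return np.array(traj)
```

With alpha = 1 the rollout reproduces the data states exactly (classical teacher forcing); intermediate alpha trades off between stabilizing training and letting the model evolve freely, which is the regime the paper exploits to bound gradients on chaotic systems.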