Deep Hamiltonian networks based on symplectic integrators
Hamiltonian neural networks (HNets) are a class of neural networks that exploit a
physical prior for learning Hamiltonian systems. This paper explains, through
error analysis, how different integrators, treated as hyper-parameters,
influence HNets. If we
define the network target as the map with zero empirical loss on arbitrary
training data, then the non-symplectic integrators cannot guarantee the
existence of the network targets of HNets. We introduce the inverse modified
equations for HNets and prove that the HNets based on symplectic integrators
possess network targets and the differences between the network targets and the
original Hamiltonians depend on the accuracy orders of the integrators. Our
numerical experiments show that the phase flows of the Hamiltonian systems
obtained by symplectic HNets do not exactly preserve the original Hamiltonians,
but do preserve the computed network targets; the loss of the network target on
both the training data and the test data is much smaller than the loss of the
original Hamiltonian; and the symplectic HNets achieve stronger generalization
and higher accuracy than the non-symplectic HNets on prediction tasks. Thus,
symplectic integrators are of critical importance for HNets.
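The contrast the abstract draws between symplectic and non-symplectic integrators can be illustrated outside the network setting. Below is a minimal sketch (not the paper's code) comparing explicit Euler (non-symplectic) with the Störmer–Verlet leapfrog scheme (symplectic, order 2) on the pendulum Hamiltonian H(q, p) = p²/2 − cos(q); the leapfrog energy error stays bounded while Euler's drifts.

```python
import math

def hamiltonian(q, p):
    # Pendulum Hamiltonian H(q, p) = p^2/2 - cos(q) (separable).
    return 0.5 * p * p - math.cos(q)

def euler_step(q, p, h):
    # Explicit Euler: non-symplectic, so the energy drifts over time.
    return q + h * p, p - h * math.sin(q)

def leapfrog_step(q, p, h):
    # Stormer-Verlet (leapfrog): symplectic, second-order accurate.
    p_half = p - 0.5 * h * math.sin(q)
    q_new = q + h * p_half
    p_new = p_half - 0.5 * h * math.sin(q_new)
    return q_new, p_new

def max_energy_drift(step, n=10000, h=0.1):
    # Largest deviation |H(q_k, p_k) - H(q_0, p_0)| along the trajectory.
    q, p = 1.0, 0.0
    e0 = hamiltonian(q, p)
    worst = 0.0
    for _ in range(n):
        q, p = step(q, p, h)
        worst = max(worst, abs(hamiltonian(q, p) - e0))
    return worst
```

Running `max_energy_drift` for both schemes shows the symplectic integrator keeping the energy error small and bounded over long horizons, the property the paper's analysis transfers to the learned network targets.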
Nonseparable Symplectic Neural Networks
Predicting the behaviors of Hamiltonian systems has been drawing increasing
attention in scientific machine learning. However, the vast majority of the
literature has focused on predicting separable Hamiltonian systems, whose
kinetic and potential energy terms are explicitly decoupled, while data-driven
paradigms for predicting nonseparable Hamiltonian systems, which are
ubiquitous in fluid dynamics and quantum mechanics, have rarely been explored.
The main computational challenge lies in effectively embedding symplectic
priors to describe the inherently coupled evolution of position and momentum,
which typically exhibits intricate dynamics. To solve this problem, we propose a
novel neural network architecture, Nonseparable Symplectic Neural Networks
(NSSNNs), to uncover and embed the symplectic structure of a nonseparable
Hamiltonian system from limited observation data. The enabling mechanism of our
approach is an augmented symplectic time integrator that decouples the position
and momentum energy terms and facilitates their evolution. We demonstrated the
efficacy and versatility of our method by predicting a wide range of
Hamiltonian systems, both separable and nonseparable, including chaotic
vortical flows. We showed the unique computational merits of our approach to
yield long-term, accurate, and robust predictions for large-scale Hamiltonian
systems by rigorously enforcing symplectomorphism.
Comment: ICLR202
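The "augmented symplectic time integrator" described above resembles the extended phase-space construction of Tao (2016), in which the state (q, p) is duplicated into (q, p, x, y) so that each sub-flow of the splitting becomes explicitly integrable even when H(q, p) is nonseparable. The sketch below is my own plain-Python illustration of that construction (not the NSSNN code), using the hypothetical nonseparable test Hamiltonian H(q, p) = ½(q² + 1)(p² + 1) and a coupling strength `omega` chosen arbitrarily.

```python
import math

def H(q, p):
    # Example nonseparable Hamiltonian: q and p are coupled.
    return 0.5 * (q * q + 1.0) * (p * p + 1.0)

def dHdq(q, p):
    return q * (p * p + 1.0)

def dHdp(q, p):
    return p * (q * q + 1.0)

def tao_step(q, p, x, y, h, omega=20.0):
    # One second-order Strang-split step in the extended phase space
    # (q, p, x, y), where (x, y) is a bound copy of (q, p).
    def phi_A(q, p, x, y, d):   # exact flow of H(q, y): q, y frozen
        return q, p - d * dHdq(q, y), x + d * dHdp(q, y), y
    def phi_B(q, p, x, y, d):   # exact flow of H(x, p): x, p frozen
        return q + d * dHdp(x, p), p, x, y - d * dHdq(x, p)
    def phi_C(q, p, x, y, d):   # exact flow of the omega-binding term:
        # rotates the differences (q - x, p - y) with frequency 2*omega.
        c, s = math.cos(2 * omega * d), math.sin(2 * omega * d)
        u, v = q - x, p - y
        u2, v2 = c * u + s * v, -s * u + c * v
        return (((q + x) + u2) / 2, ((p + y) + v2) / 2,
                ((q + x) - u2) / 2, ((p + y) - v2) / 2)
    q, p, x, y = phi_A(q, p, x, y, h / 2)
    q, p, x, y = phi_B(q, p, x, y, h / 2)
    q, p, x, y = phi_C(q, p, x, y, h)
    q, p, x, y = phi_B(q, p, x, y, h / 2)
    q, p, x, y = phi_A(q, p, x, y, h / 2)
    return q, p, x, y

def integrate(q0, p0, steps=1000, h=0.01):
    q, p, x, y = q0, p0, q0, p0   # duplicate the initial state
    for _ in range(steps):
        q, p, x, y = tao_step(q, p, x, y, h)
    return q, p, x, y
```

Because every sub-flow is explicit and symplectic, the composition conserves the extended energy to second order, and the binding term keeps the copy (x, y) close to (q, p); an NSSNN-style model would replace the analytic H with a learned network while keeping this integrator structure.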