We explore training deep neural network models in conjunction with physical
simulations via partial differential equations (PDEs), using the simulated
degrees of freedom as a latent space for the neural network. In contrast to
previous work, we do not impose constraints on the simulated space, but rather
treat its degrees of freedom purely as tools to be used by the neural network.
We demonstrate this concept for learning reduced representations. With
traditional reduced representations, it is typically very challenging for
conventional simulations to faithfully preserve the correct solutions over
long time spans. This problem is particularly pronounced for solutions with
many small-scale features. Here, data-driven methods can learn to
restore the details as required for accurate solutions of the underlying PDE
problem. We explore the use of physical, reduced latent space within this
context, and train models such that they can modify the content of physical
states as much as needed to best satisfy the learning objective. Surprisingly,
this autonomy allows the neural network to discover alternate dynamics that
enable a significantly improved performance in the given tasks. We demonstrate
this concept for a range of challenging test cases, including Navier-Stokes
based turbulence simulations.

Comment: 25 pages, 29 figures
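The core idea of using simulated degrees of freedom as a latent space can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: a 1D diffusion PDE stepped on a coarse grid stands in for the physical latent dynamics, and random linear maps stand in for the trained encoder and decoder networks; all names, sizes, and the choice of PDE are assumptions for illustration only.

```python
import numpy as np

FINE, COARSE = 64, 16  # fine reference resolution, reduced latent resolution
rng = np.random.default_rng(0)

# Hypothetical linear encoder/decoder; in the hybrid setup these would be
# trained neural networks free to modify the latent physical state.
enc = rng.normal(scale=1.0 / FINE, size=(COARSE, FINE))
dec = rng.normal(scale=1.0 / COARSE, size=(FINE, COARSE))

def pde_step(u, nu=0.1):
    """One explicit diffusion step on the coarse (periodic) latent state."""
    return u + nu * (np.roll(u, 1) - 2.0 * u + np.roll(u, -1))

def rollout(u_fine, steps):
    """Encode to the reduced latent space, evolve it with the PDE, decode."""
    z = enc @ u_fine          # reduced "physical" latent state
    for _ in range(steps):
        z = pde_step(z)       # the solver acts as the latent dynamics
    return dec @ z            # a trained decoder would restore fine detail

u0 = np.sin(2.0 * np.pi * np.arange(FINE) / FINE)
u_T = rollout(u0, steps=100)
print(u_T.shape)  # (64,)
```

In the trained version, gradients would flow through the differentiable solver steps so that the encoder can place whatever content into the latent state best serves the learning objective, rather than an a-priori constrained reduced solution.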