We introduce Branched Latent Neural Maps (BLNMs) to learn finite-dimensional
input-output maps encoding complex physical processes. A BLNM is defined by a
simple and compact feedforward partially-connected neural network that
structurally disentangles inputs with different intrinsic roles, such as the
time variable from model parameters of a differential equation, while
mapping them into a generic field of interest. BLNMs leverage latent
outputs to enhance the learned dynamics and break the curse of dimensionality,
exhibiting excellent generalization properties with small training datasets and
short training times on a single processor. Indeed, their generalization error
remains comparable regardless of the discretization adopted during the testing
phase. Moreover, the partial connections significantly reduce the number of
tunable parameters.
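As a rough illustration (not the authors' reference implementation), the
following PyTorch sketch builds a branched, partially-connected map from
(time, parameters) to physical plus latent outputs; the branch split, depths,
activation, and number of latent outputs are illustrative assumptions rather
than the tuned configuration reported below.

import torch
import torch.nn as nn


class BLNM(nn.Module):
    """Sketch of a Branched Latent Neural Map (illustrative sizes only).

    Time and model parameters are first processed by disjoint branches
    (the partial connections), then merged by fully-connected layers.
    The output stacks the physical field (e.g. 12 ECG leads) with a few
    latent outputs that receive no supervision during training.
    """

    def __init__(self, n_params=7, n_out=12, n_latent=3,
                 width=19, branch_depth=2, trunk_depth=5):
        super().__init__()
        half = width // 2
        # Disjoint branches: time and parameters do not mix here.
        self.time_branch = self._mlp(1, half, branch_depth)
        self.param_branch = self._mlp(n_params, width - half, branch_depth)
        # Fully-connected trunk applied after concatenating the branches.
        self.trunk = self._mlp(width, width, trunk_depth)
        self.head = nn.Linear(width, n_out + n_latent)
        self.n_out = n_out

    @staticmethod
    def _mlp(n_in, n_hidden, depth):
        layers, d = [], n_in
        for _ in range(depth):
            layers += [nn.Linear(d, n_hidden), nn.Tanh()]
            d = n_hidden
        return nn.Sequential(*layers)

    def forward(self, t, theta):
        # t: (batch, 1), theta: (batch, n_params).
        h = torch.cat([self.time_branch(t), self.param_branch(theta)], dim=-1)
        z = self.head(self.trunk(h))
        # Only the first n_out components are compared against data;
        # the remaining latent outputs enrich the learned dynamics.
        return z[..., :self.n_out], z[..., self.n_out:]

The disjoint branches are what keep the parameter count low relative to a
fully-connected network of the same width and depth.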
We show the capabilities of BLNMs in a challenging test case involving
electrophysiology simulations in a biventricular cardiac model
of a pediatric patient with hypoplastic left heart syndrome. The model includes
a 1D Purkinje network for fast conduction and a 3D heart-torso geometry.
Specifically, we train BLNMs on 150 in silico-generated 12-lead
electrocardiograms (ECGs) spanning 7 model parameters that cover both
cell-scale and organ-level properties. Although the 12-lead ECGs exhibit very
fast dynamics with sharp gradients, after automatic hyperparameter tuning the
optimal BLNM, trained in less than 3 hours on a single CPU, requires only 7
hidden layers and 19 neurons per layer. The resulting mean squared error is on
the order of 10^-4 on a test dataset composed of 50 electrophysiology
simulations. In the online phase, the BLNM enables real-time simulations of
cardiac electrophysiology that are 5000x faster than the corresponding
physics-based simulations, on a single core of a standard computer, and can be
used to solve inverse problems via global optimization within a few seconds of
computational time.
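Because each surrogate evaluation costs only a forward pass, such inverse
problems can be tackled with an off-the-shelf global optimizer. The sketch
below illustrates this with SciPy's differential evolution; blnm_ecg, the time
grid, and the normalized parameter bounds are hypothetical placeholders
standing in for the trained network and the actual parameter ranges.

import numpy as np
from scipy.optimize import differential_evolution

t_grid = np.linspace(0.0, 0.6, 300)  # assumed time grid over one heartbeat (s)

def blnm_ecg(theta, t):
    # Hypothetical stand-in for the trained BLNM surrogate; a real call
    # would evaluate the network and return the 12 ECG leads on `t`.
    mix = np.linspace(1.0, 2.0, 12)[:, None] * (theta @ np.arange(1.0, 8.0))
    return np.sin(2.0 * np.pi * t[None, :] * mix)

theta_true = np.full(7, 0.5)
ecg_observed = blnm_ecg(theta_true, t_grid)  # synthetic target for this demo

def misfit(theta):
    # Mean squared discrepancy between surrogate prediction and target ECG.
    return np.mean((blnm_ecg(theta, t_grid) - ecg_observed) ** 2)

bounds = [(0.0, 1.0)] * 7  # normalized ranges of the 7 model parameters
result = differential_evolution(misfit, bounds, seed=0, tol=1e-8)
theta_hat = result.x  # estimated parameters explaining the observed ECG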