Learning Mesh Motion Techniques with Application to Fluid-Structure Interaction
Mesh degeneration is a bottleneck for fluid-structure interaction (FSI)
simulations and for shape optimization via the method of mappings. In both
cases, an appropriate mesh motion technique is required. The choice is
typically based on heuristics, e.g., the solution operators of partial
differential equations (PDEs), such as the Laplace or biharmonic equation.
The latter in particular, although it performs well numerically for large
displacements, is expensive to solve. Moreover, from a continuous perspective, choosing
the mesh motion technique is to a certain extent arbitrary and has no influence
on the physically relevant quantities. Therefore, we consider approaches
inspired by machine learning. We present a hybrid PDE-NN approach, where the
neural network (NN) serves as parameterization of a coefficient in a second
order nonlinear PDE. We ensure existence of solutions for the nonlinear PDE by
the choice of the neural network architecture. Moreover, we present an approach
where a neural network corrects the harmonic extension such that the boundary
displacement is not changed. In order to avoid technical difficulties in
coupling finite element and machine learning software, we work with a splitting
of the monolithic FSI system into three smaller subsystems. This allows us to
solve the mesh motion equation in a separate step. We assess the quality of the
learned mesh motion technique by applying it to an FSI benchmark problem.
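The paper's second approach, a neural network correcting the harmonic extension without altering the boundary displacement, can be illustrated with a minimal 1D sketch. All names here are illustrative, and the fixed random weights stand in for a trained network; the key point is that multiplying the correction by a factor vanishing on the boundary preserves the prescribed boundary displacement.

```python
import numpy as np

def harmonic_extension(n, d_left, d_right):
    """1D analogue of the Laplace mesh-motion extension: solve u'' = 0
    with prescribed boundary displacements, which in 1D is just linear
    interpolation of the boundary values."""
    x = np.linspace(0.0, 1.0, n)
    return x, d_left + (d_right - d_left) * x

def nn_correction(x, params):
    """Tiny fixed-weight MLP standing in for the learned corrector.
    The factor x*(1-x) forces the correction to vanish at x=0 and x=1,
    so the boundary displacement is unchanged."""
    W1, b1, W2 = params
    h = np.tanh(np.outer(x, W1) + b1)   # hidden layer
    return (h @ W2) * x * (1.0 - x)     # zero on the boundary

# Illustrative random "trained" parameters.
rng = np.random.default_rng(0)
params = (rng.normal(size=8), rng.normal(size=8), rng.normal(size=8))

x, u_harmonic = harmonic_extension(101, 0.0, 0.3)
u = u_harmonic + nn_correction(x, params)   # corrected mesh displacement
```

In higher dimensions the same structure applies: the correction field is multiplied by a cutoff that vanishes on the moving boundary, so only the interior mesh quality is affected.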
The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms prevalently rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weight-less particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
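The weight-less idea can be sketched in a few lines: every particle follows the prior dynamics plus a feedback term driven by its own prediction error, and the posterior is read out from the unweighted samples. The setup below (a 1D Ornstein-Uhlenbeck state with a noisy linear observation, a fixed feedback gain) is an assumed toy example, not the paper's exact equations; in the NPF the gain is learned online by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: hidden state  dx = -x dt + dW,  observation  dy = x dt + sigma dV.
dt, n_steps, n_particles = 0.01, 2000, 200
sigma_obs = 0.5
gain = 1.0                                  # fixed illustrative feedback gain
x_true = 0.0
particles = rng.normal(size=n_particles)    # unweighted posterior samples

for _ in range(n_steps):
    x_true += -x_true * dt + np.sqrt(dt) * rng.normal()
    dy = x_true * dt + sigma_obs * np.sqrt(dt) * rng.normal()
    # Weight-less update: prior dynamics + diffusion + observation feedback.
    # No importance weights, no resampling.
    particles += (-particles * dt
                  + np.sqrt(dt) * rng.normal(size=n_particles)
                  + gain * (dy - particles * dt))

estimate = particles.mean()   # posterior mean from the unweighted ensemble
```

Because every particle carries equal weight, the ensemble never suffers the weight degeneracy that forces conventional particle filters to resample, which is the mechanism behind the claimed advantage in higher dimensions.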
Reduced Order Modeling for Nonlinear PDE-constrained Optimization using Neural Networks
Nonlinear model predictive control (NMPC) often requires real-time solution
to optimization problems. However, in cases where the mathematical model is of
high dimension in the solution space, e.g. for solution of partial differential
equations (PDEs), black-box optimizers are rarely sufficient to get the
required online computational speed. In such cases one must resort to
customized solvers. This paper presents a new solver for nonlinear
time-dependent PDE-constrained optimization problems. It is composed of a
sequential quadratic programming (SQP) scheme to solve the PDE-constrained
problem in an offline phase, a proper orthogonal decomposition (POD) approach
to identify a lower dimensional solution space, and a neural network (NN) for
fast online evaluations. The proposed method is showcased on a regularized
least-square optimal control problem for the viscous Burgers' equation. It is
concluded that significant online speed-up is achieved, compared to
conventional methods using SQP and finite elements, at a cost of a prolonged
offline phase and reduced accuracy.
Comment: Accepted for publication at the 58th IEEE Conference on Decision and
Control, Nice, France, 11-13 December, https://cdc2019.ieeecss.org
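The offline/online split can be sketched with POD alone. Below, an exactly low-rank parameterized family stands in for the snapshots that the paper obtains from SQP solves of the Burgers' control problem (an assumption made so the truncation is easy to verify); the reduced basis comes from the singular value decomposition, and a new solution is represented by a handful of coefficients. In the paper, a neural network predicts such reduced coefficients directly, avoiding any online PDE solve.

```python
import numpy as np

# Offline phase: assemble a snapshot matrix, one solution per parameter value.
x = np.linspace(0.0, 1.0, 200)
modes = np.stack([np.sin(k * np.pi * x) for k in (1, 2, 3)], axis=1)  # (200, 3)
mus = np.linspace(0.5, 2.0, 30)
snapshots = modes @ np.stack([np.ones_like(mus), mus, mus**2])        # (200, 30)

# POD: the leading left singular vectors form the reduced basis.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3                        # reduced dimension, read off from the decay of s
basis = U[:, :r]             # (200, r)

# Online phase (sketch): a new solution is r coefficients instead of 200 values.
u_new = modes @ np.array([1.0, 1.3, 1.3**2])   # solution at parameter mu = 1.3
coeffs = basis.T @ u_new
u_rec = basis @ coeffs
err = np.linalg.norm(u_new - u_rec) / np.linalg.norm(u_new)
```

Real PDE snapshots are not exactly low-rank, so `r` is chosen by truncating where the singular values have decayed sufficiently, which is the source of the "reduced accuracy" traded for online speed.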
SuperSpike: Supervised learning in multi-layer spiking neural networks
A vast majority of computation in the brain is performed by spiking neural
networks. Despite the ubiquity of such spiking, we currently lack an
understanding of how biological spiking neural circuits learn and compute
in vivo, as well as how we can instantiate such capabilities in artificial
spiking circuits in silico. Here we revisit the problem of supervised learning
in temporally coding multi-layer spiking neural networks. First, by using a
surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based
three factor learning rule capable of training multi-layer networks of
deterministic integrate-and-fire neurons to perform nonlinear computations on
spatiotemporal spike patterns. Second, inspired by recent results on feedback
alignment, we compare the performance of our learning rule under different
credit assignment strategies for propagating output errors to hidden units.
Specifically, we test uniform, symmetric and random feedback, finding that
simpler tasks can be solved with any type of feedback, while more complex tasks
require symmetric feedback. In summary, our results open the door to obtaining
a better scientific understanding of learning and computation in spiking neural
networks by advancing our ability to train them to solve nonlinear problems
involving transformations between different spatiotemporal spike-time patterns.
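The key ingredient, replacing the ill-defined derivative of the spike nonlinearity with a smooth surrogate, can be sketched as follows. The single-neuron training loop is an assumed, simplified illustration of a three-factor-style update (error times surrogate derivative times an eligibility trace), not the paper's exact multi-layer rule; the fast-sigmoid surrogate itself matches the form used by SuperSpike.

```python
import numpy as np

def surrogate_grad(u, beta=10.0):
    """Fast-sigmoid surrogate for the derivative of the spike
    nonlinearity, evaluated at membrane potential u relative to the
    firing threshold: peaks at the threshold, decays smoothly away."""
    return 1.0 / (beta * np.abs(u) + 1.0) ** 2

# Toy demo (assumed setup): a single LIF neuron learns input weights so
# its output spikes match a target spike train.
rng = np.random.default_rng(0)
n_steps, n_in = 100, 20
inputs = (rng.random((n_steps, n_in)) < 0.1).astype(float)  # random input spikes
w = rng.normal(scale=0.1, size=n_in)
target = np.zeros(n_steps)
target[60] = 1.0                     # desired output: one spike at bin 60
eta, tau, theta = 0.05, 0.9, 1.0     # learning rate, decay, threshold

for _ in range(200):
    v, grad = 0.0, np.zeros(n_in)
    trace = np.zeros(n_in)           # filtered presynaptic activity (eligibility)
    for t in range(n_steps):
        trace = tau * trace + inputs[t]
        v = tau * v + inputs[t] @ w
        spike = float(v > theta)
        err = target[t] - spike
        # Three-factor-style update: error x surrogate derivative x trace.
        grad += err * surrogate_grad(v - theta) * trace
        if spike:
            v = 0.0                  # reset after firing
    w += eta * grad
```

Because the surrogate is nonzero even when the neuron does not fire, silent neurons still receive gradient signal, which is what makes training through spiking nonlinearities possible at all.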