Manifold interpolation and model reduction
One approach to parametric and adaptive model reduction is via the
interpolation of orthogonal bases, subspaces or positive definite system
matrices. In all these cases, the sampled inputs stem from matrix sets that
feature a geometric structure and thus form so-called matrix manifolds. This
work will be featured as a chapter in the upcoming Handbook on Model Order
Reduction (P. Benner, S. Grivet-Talocia, A. Quarteroni, G. Rozza, W.H.A.
Schilders, L.M. Silveira, eds., to appear with De Gruyter) and reviews the
numerical treatment of the most important matrix manifolds that arise in the
context of model reduction. Moreover, the principal approaches to data
interpolation and Taylor-like extrapolation on matrix manifolds are outlined
and complemented by algorithms in pseudo-code.
Comment: 37 pages, 4 figures, featured chapter of upcoming "Handbook on Model Order Reduction"
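A standard construction in this setting is interpolation of sampled system matrices along geodesics of the matrix manifold rather than entrywise, which keeps the interpolant on the manifold. As a minimal sketch (function names are ours, not from the chapter), here is affine-invariant geodesic interpolation between two symmetric positive definite matrices:

```python
import numpy as np

def spd_power(A, t):
    """Fractional power A**t of a symmetric positive definite matrix,
    computed via its eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * w**t) @ V.T

def spd_geodesic(P0, P1, t):
    """Point at parameter t in [0, 1] on the affine-invariant geodesic
    from P0 to P1; the result is again symmetric positive definite,
    unlike the linear blend (1 - t) * P0 + t * P1 in general settings
    with constraints beyond positive definiteness."""
    P0_half = spd_power(P0, 0.5)
    P0_inv_half = spd_power(P0, -0.5)
    M = P0_inv_half @ P1 @ P0_inv_half  # SPD "direction" matrix
    return P0_half @ spd_power(M, t) @ P0_half
```

The endpoints are reproduced exactly (t = 0 gives P0, t = 1 gives P1), and every intermediate point has strictly positive eigenvalues.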
Generating 3D faces using Convolutional Mesh Autoencoders
Learned 3D representations of human faces are useful for computer vision
problems such as 3D face tracking and reconstruction from images, as well as
graphics applications such as character generation and animation. Traditional
models learn a latent representation of a face using linear subspaces or
higher-order tensor generalizations. Due to this linearity, they cannot
capture extreme deformations and non-linear expressions. To address this, we
introduce a versatile model that learns a non-linear representation of a face
using spectral convolutions on a mesh surface. We introduce mesh sampling
operations that enable a hierarchical mesh representation that captures
non-linear variations in shape and expression at multiple scales within the
model. In a variational setting, our model samples diverse realistic 3D faces
from a multivariate Gaussian distribution. Our training data consists of 20,466
meshes of extreme expressions captured over 12 different subjects. Despite
limited training data, our trained model outperforms state-of-the-art face
models with 50% lower reconstruction error, while using 75% fewer parameters.
We also show that replacing the expression space of an existing
state-of-the-art face model with our autoencoder achieves a lower
reconstruction error. Our data, model and code are available at
http://github.com/anuragranj/com
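The spectral convolutions mentioned above are commonly realized with Chebyshev polynomial filters of the mesh graph Laplacian, which avoids an explicit eigendecomposition. A minimal numpy sketch of one such filtering step (a generic Chebyshev graph convolution, not the authors' exact layer):

```python
import numpy as np

def chebyshev_conv(L_scaled, X, W):
    """Spectral graph convolution with Chebyshev polynomial filters.

    L_scaled : (n, n) rescaled graph Laplacian with eigenvalues in [-1, 1]
    X        : (n, f_in) features, one row per mesh vertex
    W        : (K, f_in, f_out) filter weights, one matrix per polynomial order
    """
    K = W.shape[0]
    Tk_prev, Tk = X, L_scaled @ X          # T_0(L)X = X, T_1(L)X = LX
    out = Tk_prev @ W[0]
    if K > 1:
        out += Tk @ W[1]
    for k in range(2, K):
        # Chebyshev recurrence: T_k = 2 L T_{k-1} - T_{k-2}
        Tk_prev, Tk = Tk, 2.0 * (L_scaled @ Tk) - Tk_prev
        out += Tk @ W[k]
    return out
```

A K-th order filter aggregates information from each vertex's K-hop neighborhood, which is what lets stacked layers capture shape variation at multiple scales.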
Study of Phase Reconstruction Techniques applied to Smith-Purcell Radiation Measurements
Measurements of coherent radiation at accelerators typically give the
absolute value of the beam profile Fourier transform but not its phase. Phase
reconstruction techniques such as the Hilbert transform or Kramers-Kronig
reconstruction are used to recover this phase. We report a study of the
performance of these methods and of how to optimize the reconstructed profiles.
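Of the two techniques named above, the Hilbert-transform variant estimates a minimal phase directly from the log of the measured magnitude spectrum. A minimal sketch assuming a uniformly sampled magnitude (the FFT-based discrete Hilbert transform below is the standard analytic-signal construction, not code from the paper):

```python
import numpy as np

def hilbert_transform(x):
    """Discrete Hilbert transform of a real sequence via the analytic
    signal: zero out negative frequencies, double positive ones."""
    N = len(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    analytic = np.fft.ifft(np.fft.fft(x) * h)
    return analytic.imag

def minimal_phase(magnitude):
    """Minimal-phase estimate from a measured magnitude spectrum:
    phi(w) = -H[ln |F(w)|], clipped away from zero for the log."""
    return -hilbert_transform(np.log(np.maximum(magnitude, 1e-12)))
```

The reconstructed spectrum `magnitude * np.exp(1j * minimal_phase(magnitude))` can then be inverse-transformed to obtain a beam profile estimate.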
Learning Equations for Extrapolation and Control
We present an approach to identifying concise equations from data using a
shallow neural network. In contrast to ordinary black-box regression,
this approach allows understanding functional relations and generalizing them
from observed data to unseen parts of the parameter space. We show how to
extend the class of learnable equations for a recently proposed equation
learning network to include divisions, and we improve the learning and model
selection strategy to be useful for challenging real-world data. For systems
governed by analytical expressions, our method can in many cases identify the
true underlying equation and extrapolate to unseen domains. We demonstrate its
effectiveness by experiments on a cart-pendulum system, where only 2 random
rollouts are required to learn the forward dynamics and successfully achieve
the swing-up task.
Comment: 9 pages, 9 figures, ICML 201
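A schematic numpy forward pass in the spirit of the extended equation learner described above: each hidden layer is a linear map followed by a bank of interpretable base functions, and division is handled by a thresholded unit so the network stays well-defined near zero denominators. Layer layout and threshold details here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def regularized_division(a, b, theta=1e-3):
    """Division unit that outputs a / b only where the denominator
    exceeds theta, and 0 elsewhere, keeping values and gradients bounded."""
    safe = a / np.maximum(b, theta)
    return np.where(b > theta, safe, 0.0)

def eql_layer(x, W, b):
    """One hidden layer: linear pre-activations split into five equal
    groups feeding identity, sine, cosine, and pairwise product units."""
    z = x @ W + b                              # (n, 5u) pre-activations
    u = z.shape[1] // 5
    ident = z[:, :u]
    sines = np.sin(z[:, u:2 * u])
    coses = np.cos(z[:, 2 * u:3 * u])
    prods = z[:, 3 * u:4 * u] * z[:, 4 * u:5 * u]  # multiplication units
    return np.concatenate([ident, sines, coses, prods], axis=1)
```

Because every unit is an elementary function, a sparsely regularized network of such layers can be read off directly as a closed-form equation, which is what enables extrapolation beyond the training domain.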