A New Perspective and Extension of the Gaussian Filter
The Gaussian Filter (GF) is one of the most widely used filtering algorithms;
instances are the Extended Kalman Filter, the Unscented Kalman Filter and the
Divided Difference Filter. GFs represent the belief of the current state by a
Gaussian with the mean being an affine function of the measurement. We show
that this representation can be too restrictive to accurately capture the
dependences in systems with nonlinear observation models, and we investigate
how the GF can be generalized to alleviate this problem. To this end, we view
the GF from a variational-inference perspective. We analyse how restrictions on
the form of the belief can be relaxed while maintaining simplicity and
efficiency. This analysis provides a basis for generalizations of the GF. We
propose one such generalization which coincides with a GF using a virtual
measurement, obtained by applying a nonlinear function to the actual
measurement. Numerical experiments show that the proposed Feature Gaussian
Filter (FGF) can have a substantial performance advantage over the standard GF
for systems with nonlinear observation models.
Comment: Will appear in Robotics: Science and Systems (R:SS) 201
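The Gaussian-filter update described in this abstract can be illustrated with a small sketch. This is not the paper's implementation: the scalar system, the Monte Carlo moment matching (real GFs typically use EKF/UKF-style approximations), and the feature function phi are all illustrative assumptions. With phi equal to the identity, the update below is a plain GF; a nonlinear phi plays the role of the "virtual measurement" of the Feature Gaussian Filter idea.

```python
import numpy as np

def gf_update(mean, var, h, obs_noise_std, z, phi=lambda y: y,
              n=100_000, seed=0):
    """Gaussian-filter measurement update via Monte Carlo moment matching.

    The posterior mean is an affine function of phi(z); with phi = identity
    this is the standard GF, and a nonlinear phi gives a 'virtual
    measurement' in the spirit of the Feature Gaussian Filter.
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(mean, np.sqrt(var), n)             # samples from the prior
    y = phi(h(x) + rng.normal(0.0, obs_noise_std, n))  # simulated (feature) measurements
    my, vy = y.mean(), y.var()
    cxy = np.mean((x - x.mean()) * (y - my))          # state/measurement cross-covariance
    k = cxy / vy                                      # affine (Kalman-like) gain
    post_mean = mean + k * (phi(z) - my)
    post_var = var - k * cxy
    return post_mean, post_var
```

For a linear observation model h(x) = x with unit noise, the update reproduces the usual Kalman result: a N(0, 1) prior and measurement z = 2 give a posterior close to N(1, 0.5).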
Geometric combinatorics and computational molecular biology: branching polytopes for RNA sequences
Questions in computational molecular biology generate various discrete
optimization problems, such as DNA sequence alignment and RNA secondary
structure prediction. However, the optimal solutions are fundamentally
dependent on the parameters used in the objective functions. The goal of a
parametric analysis is to elucidate such dependencies, especially as they
pertain to the accuracy and robustness of the optimal solutions. Techniques
from geometric combinatorics, including polytopes and their normal fans, have
been used previously to give parametric analyses of simple models for DNA
sequence alignment and RNA branching configurations. Here, we present a new
computational framework, and proof-of-principle results, which give the first
complete parametric analysis of the branching portion of the nearest neighbor
thermodynamic model for secondary structure prediction for real RNA sequences.
Comment: 17 pages, 8 figures
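The core idea of the parametric analysis above can be shown with a toy example (the signatures and the two-parameter linear objective are invented for illustration, not taken from the paper). Each candidate structure is summarized by an integer signature, its score is linear in the parameters, and sweeping the parameters reveals which candidates are ever optimal; those candidates lie on the boundary of the convex hull of the signatures, which is exactly the polytope object used in geometric parametric analysis.

```python
# Each candidate "structure" is summarized by a signature (a, b) -- e.g.
# counts of two kinds of features -- and scored by the linear parametric
# objective a*p + b*q for parameters (p, q).
signatures = [(0, 5), (2, 3), (4, 2), (6, 0), (3, 3)]

def optimal_signature(p, q):
    """Return the signature maximizing the parametric score a*p + b*q."""
    return max(signatures, key=lambda s: s[0] * p + s[1] * q)

# Brute-force parameter sweep: signatures that are optimal for SOME (p, q)
# lie on the convex hull of the signature set.
ever_optimal = {optimal_signature(p, q)
                for p in [i / 10 for i in range(-20, 21)]
                for q in [i / 10 for i in range(-20, 21)]}
```

A genuinely interior signature never appears in `ever_optimal`, no matter how finely the parameters are sampled; the hull (polytope) and its normal fan describe the optimality regions exactly, which is what the full parametric analysis computes without sampling.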
An overview of the proper generalized decomposition with applications in computational rheology
We review the foundations and applications of the proper generalized decomposition (PGD), a powerful model reduction technique that computes a priori, by means of successive enrichment, a separated representation of the unknown field. The computational complexity of the PGD scales linearly with the dimension of the space wherein the model is defined, which is in marked contrast with the exponential scaling of standard grid-based methods. First introduced in the context of computational rheology by Ammar et al. [3] and [4], the PGD has since been further developed and applied in a variety of applications ranging from the solution of the Schrödinger equation of quantum mechanics to the analysis of laminate composites. In this paper, we illustrate the use of the PGD in four problem categories related to computational rheology: (i) the direct solution of the Fokker-Planck equation for complex fluids in configuration spaces of high dimension, (ii) the development of very efficient non-incremental algorithms for transient problems, (iii) the fully three-dimensional solution of problems defined in degenerate plate or shell-like domains often encountered in polymer processing or composites manufacturing, and finally (iv) the solution of multidimensional parametric models obtained by introducing various sources of problem variability as additional coordinates.
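The "separated representation by successive enrichment" at the heart of the PGD can be sketched in a few lines. This is a deliberately stripped-down illustration: it builds the separated sum mode by mode with alternating fixed-point iterations on a plain discrete 2D field, omitting the weak-form PDE machinery of a real PGD solver.

```python
import numpy as np

def pgd_enrich(F, n_modes=5, n_alt=10):
    """Greedy separated approximation F(x, y) ~ sum_i X_i(x) * Y_i(y).

    Each enrichment mode is found by alternating x- and y-direction
    updates on the current residual, then deflated from it -- the PGD
    idea reduced to its algebraic core.
    """
    R = F.copy()
    X, Y = [], []
    for _ in range(n_modes):
        y = np.ones(F.shape[1])
        for _ in range(n_alt):            # alternating-directions fixed point
            x = R @ y / (y @ y)
            y = R.T @ x / (x @ x)
        X.append(x)
        Y.append(y)
        R = R - np.outer(x, y)            # deflate: enrich the separated sum
    return X, Y

# Check the separated representation on a smooth 2D field of low separation rank.
xs = np.linspace(0.0, 1.0, 40)
ys = np.linspace(0.0, 1.0, 50)
F = np.exp(-np.add.outer(xs, ys)) + np.sin(np.pi * np.add.outer(xs, -ys))
X, Y = pgd_enrich(F)
approx = sum(np.outer(x, y) for x, y in zip(X, Y))
err = np.linalg.norm(F - approx) / np.linalg.norm(F)
```

The test field has separation rank 3 (exp(-(x+y)) is rank 1 and sin(pi(x-y)) expands into two products), so a handful of modes already reproduces it accurately; this low-rank structure is what makes the PGD's linear scaling in dimension possible.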
Computation of Electromagnetic Fields Scattered From Objects With Uncertain Shapes Using Multilevel Monte Carlo Method
Computational tools for characterizing electromagnetic scattering from
objects with uncertain shapes are needed in various applications ranging from
remote sensing at microwave frequencies to Raman spectroscopy at optical
frequencies. Often, such computational tools use the Monte Carlo (MC) method to
sample a parametric space describing geometric uncertainties. For each sample,
which corresponds to a realization of the geometry, a deterministic
electromagnetic solver computes the scattered fields. However, for an accurate
statistical characterization the number of MC samples has to be large. In this
work, to address this challenge, the continuation multilevel Monte Carlo
(CMLMC) method is used together with a surface integral equation solver. The
CMLMC method optimally balances statistical errors due to sampling of the
parametric space, and numerical errors due to the discretization of the
geometry using a hierarchy of discretizations, from coarse to fine. The number
of realizations of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational cost.
Consequently, the total execution time is significantly reduced, in comparison
to the standard MC scheme.
Comment: 25 pages, 10 Figures
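The multilevel Monte Carlo principle used above can be sketched on a standard toy problem (geometric Brownian motion with coupled Euler paths; the electromagnetic solver and geometry hierarchy of the paper are replaced here by this illustrative SDE setting). The telescoping sum puts most samples on the cheap coarse level and only a few on the expensive fine levels.

```python
import numpy as np

def mlmc_level(l, n_samples, rng, mu=0.05, sigma=0.2, T=1.0):
    """One MLMC level for E[S(T)] under dS = mu*S*dt + sigma*S*dW.

    Returns the mean of P_0 at level 0, and the mean of the coupled
    correction P_l - P_{l-1} at finer levels; the coarse path reuses
    the fine path's Brownian increments so the correction has small
    variance.
    """
    nf = 2 ** l                               # fine-grid time steps
    dt = T / nf
    dW = rng.normal(0.0, np.sqrt(dt), (n_samples, nf))
    Sf = np.ones(n_samples)
    for i in range(nf):                       # fine Euler path
        Sf += mu * Sf * dt + sigma * Sf * dW[:, i]
    if l == 0:
        return Sf.mean()
    Sc = np.ones(n_samples)
    dWc = dW[:, 0::2] + dW[:, 1::2]           # coarse increments from the same noise
    for i in range(nf // 2):                  # coupled coarse Euler path
        Sc += mu * Sc * (2 * dt) + sigma * Sc * dWc[:, i]
    return (Sf - Sc).mean()

rng = np.random.default_rng(1)
# Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]:
# sample counts shrink as the levels get finer (and more expensive).
samples = [200_000, 50_000, 10_000, 2_000]
estimate = sum(mlmc_level(l, n, rng) for l, n in enumerate(samples))
```

The exact value is E[S(1)] = exp(mu) ≈ 1.0513, and the estimator reaches it with only 2,000 samples on the finest grid; a plain MC estimator of the same accuracy would need all ~260,000 samples on that grid. The CMLMC method of the paper additionally chooses the level count and sample allocation adaptively to balance statistical and discretization errors.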
Optimizing the geometrical accuracy of curvilinear meshes
This paper presents a method to generate valid high order meshes with
optimized geometrical accuracy. The high order meshing procedure starts with a
linear mesh that is subsequently curved without regard for the validity of
the high order elements. An optimization procedure is then used to both
untangle invalid elements and optimize the geometrical accuracy of the mesh.
Standard measures of the distance between curves are considered to evaluate the
geometrical accuracy in planar two-dimensional meshes, but they prove
computationally too costly for optimization purposes. A fast estimate of the
geometrical accuracy, based on Taylor expansions of the curves, is introduced.
An unconstrained optimization procedure based on this estimate is shown to
yield significant improvements in the geometrical accuracy of high order
meshes, as measured by the standard Hausdorff distance between the geometrical
model and the mesh. Several examples illustrate the beneficial impact of this
method on CFD solutions, with a particular role of the enhanced mesh boundary
smoothness.
Comment: Submitted to JC
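The Hausdorff distance used above as the accuracy measure can be sketched in its discrete, brute-force form (this O(n*m) version is exactly the kind of costly evaluation the paper replaces, for optimization purposes, with a fast Taylor-expansion estimate; the circle/polygon example is illustrative).

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two sampled planar curves."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def directed(P, Q):
        # Largest distance from a point of P to its nearest point of Q.
        return max(min(dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

# Geometric model: a densely sampled unit circle.  "Meshes": polygonal
# approximations of it -- refining the polygon drives the distance down,
# i.e. improves the geometrical accuracy.
circle = [(math.cos(2 * math.pi * i / 400), math.sin(2 * math.pi * i / 400))
          for i in range(400)]
coarse = [(math.cos(2 * math.pi * i / 8), math.sin(2 * math.pi * i / 8))
          for i in range(8)]
fine = [(math.cos(2 * math.pi * i / 32), math.sin(2 * math.pi * i / 32))
        for i in range(32)]
h_coarse = hausdorff(circle, coarse)
h_fine = hausdorff(circle, fine)
```

Every evaluation of this distance scans all point pairs, which is why an optimization loop that calls it repeatedly becomes prohibitively expensive and a cheap local estimate is needed instead.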
Pricing options and computing implied volatilities using neural networks
This paper proposes a data-driven approach, by means of an Artificial Neural
Network (ANN), to value financial options and to calculate implied volatilities
with the aim of accelerating the corresponding numerical methods. With ANNs
being universal function approximators, this method trains an optimized ANN on
a data set generated by a sophisticated financial model, and runs the trained
ANN as an agent of the original solver in a fast and efficient way. We test
this approach on three different types of solvers, including the analytic
solution for the Black-Scholes equation, the COS method for the Heston
stochastic volatility model and Brent's iterative root-finding method for the
calculation of implied volatilities. The numerical results show that the ANN
solver can reduce the computing time significantly.
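Two of the three reference solvers named above are easy to sketch: the Black-Scholes closed form that generates training prices, and a root finder that inverts it for implied volatility. The sketch below uses plain bisection as a simple stand-in for Brent's method (the ANN itself, which learns to approximate these maps, is omitted; the specific strike/maturity numbers are illustrative).

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (closed-form solution)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Recover sigma from a quoted price by bisection -- a simple stand-in
    for the Brent root-finding step that the paper accelerates with an ANN."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid          # call price increases monotonically in volatility
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Pricing a call at sigma = 0.25 and inverting the price recovers the input volatility, which is how labeled (price, volatility) pairs for ANN training can be generated; the trained network then replaces the iterative inversion with a single forward pass.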