Pricing options and computing implied volatilities using neural networks
This paper proposes a data-driven approach, by means of an Artificial Neural
Network (ANN), to value financial options and to calculate implied volatilities
with the aim of accelerating the corresponding numerical methods. Since ANNs
are universal function approximators, the method trains an optimized ANN on
a data set generated by a sophisticated financial model and then runs the
trained ANN as a fast and efficient surrogate for the original solver. We test
this approach on three different types of solvers, including the analytic
solution for the Black-Scholes equation, the COS method for the Heston
stochastic volatility model and Brent's iterative root-finding method for the
calculation of implied volatilities. The numerical results show that the ANN
solver can reduce the computing time significantly.
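To make the pipeline concrete, the sketch below shows the two classical solvers the ANN is trained to replace for the Black-Scholes case: the analytic call-price formula, and a root-finder that inverts it to recover implied volatility. It is a minimal stdlib-only sketch; the paper uses Brent's method for the inversion, while plain bisection is used here for brevity, and all parameter values are illustrative.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    # Analytic Black-Scholes price of a European call option.
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    # Find sigma such that the Black-Scholes price matches the observed
    # price. (The paper uses Brent's method; bisection is a simpler
    # stand-in, valid because the call price is increasing in sigma.)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if black_scholes_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check: recover the volatility used to generate a price.
p = black_scholes_call(S=100, K=110, T=1.0, r=0.02, sigma=0.25)
iv = implied_vol(p, S=100, K=110, T=1.0, r=0.02)
```

An ANN surrogate would be trained on many (parameters, price) pairs produced by such a solver, then evaluated in its place at inference time.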
Construction of Bayesian Deformable Models via Stochastic Approximation Algorithm: A Convergence Study
The problem of the definition and the estimation of generative models based
on deformable templates from raw data is of particular importance for modelling
non-aligned data affected by various types of geometrical variability. This is
especially true in shape modelling in the computer vision community or in
probabilistic atlas building for Computational Anatomy (CA). A first coherent
statistical framework modelling the geometrical variability as hidden variables
has been given by Allassonnière, Amit and Trouvé (JRSS 2006). Setting the
problem in a Bayesian context, they proved the consistency of the MAP estimator
and provided a simple iterative deterministic algorithm with an EM flavour
leading to some reasonable approximations of the MAP estimator under low noise
conditions. In this paper we present a stochastic algorithm for approximating
the MAP estimator in the spirit of the SAEM algorithm. We prove its convergence
to a critical point of the observed likelihood, with an illustration on images
of handwritten digits.
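The stochastic approximation step at the heart of an SAEM-type algorithm can be written generically as follows (this is the standard SAEM recursion; the paper's specific sufficient statistics and simulation scheme differ):

```latex
s_{k+1} = s_k + \gamma_k \bigl( S(y, z_{k+1}) - s_k \bigr),
\qquad
\theta_{k+1} = \hat{\theta}(s_{k+1}),
```

where $z_{k+1}$ is simulated from (an approximation of) the posterior of the hidden deformation variables given the data $y$ and the current parameters $\theta_k$, $S$ denotes the complete-data sufficient statistics, and the step sizes satisfy $\sum_k \gamma_k = \infty$ and $\sum_k \gamma_k^2 < \infty$, the usual conditions for convergence to a critical point of the observed likelihood.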
Variable selection for BART: An application to gene regulation
We consider the task of discovering gene regulatory networks, which are
defined as sets of genes and the corresponding transcription factors which
regulate their expression levels. This can be viewed as a variable selection
problem, potentially with high dimensionality. Variable selection is especially
challenging in high-dimensional settings, where it is difficult to detect
subtle individual effects and interactions between predictors. Bayesian
Additive Regression Trees [BART, Ann. Appl. Stat. 4 (2010) 266-298] provides a
novel nonparametric alternative to parametric regression approaches, such as
the lasso or stepwise regression, especially when the number of relevant
predictors is sparse relative to the total number of available predictors and
the fundamental relationships are nonlinear. We develop a principled
permutation-based inferential approach for determining when the effect of a
selected predictor is likely to be real. Going further, we adapt the BART
procedure to incorporate informed prior information about variable importance.
We present simulations demonstrating that our method compares favorably to
existing parametric and nonparametric procedures in a variety of data settings.
To demonstrate the potential of our approach in a biological context, we apply
it to the task of inferring the gene regulatory network in yeast (Saccharomyces
cerevisiae). We find that our BART-based procedure is best able to recover the
subset of covariates with the largest signal compared to other variable
selection methods. The methods developed in this work are readily available in
the R package bartMachine.

Comment: Published at http://dx.doi.org/10.1214/14-AOAS755 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org/)
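The permutation-based selection logic can be sketched independently of BART: permute the response to destroy any real predictor-response relationship, recompute each predictor's importance under the permutations to form a null distribution, and keep predictors whose observed importance exceeds a null quantile. The sketch below is stdlib-only and substitutes absolute Pearson correlation for BART's variable inclusion proportions, so it is a toy proxy for the method, not the method itself; all data and thresholds are illustrative.

```python
import random

def importance(x_cols, y):
    # Stand-in importance: absolute Pearson correlation per predictor.
    # (BART uses variable inclusion proportions; this is a toy proxy.)
    n = len(y)
    my = sum(y) / n
    scores = []
    for col in x_cols:
        mx = sum(col) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        scores.append(abs(cov / (vx * vy)) if vx * vy > 0 else 0.0)
    return scores

def permutation_select(x_cols, y, n_perm=200, alpha=0.05, seed=0):
    # Build a per-predictor null distribution by permuting the response,
    # then select predictors whose observed importance exceeds the
    # (1 - alpha) quantile of their null draws.
    rng = random.Random(seed)
    observed = importance(x_cols, y)
    null = [[] for _ in x_cols]
    y_perm = list(y)
    for _ in range(n_perm):
        rng.shuffle(y_perm)
        for j, s in enumerate(importance(x_cols, y_perm)):
            null[j].append(s)
    selected = []
    for j, obs in enumerate(observed):
        cutoff = sorted(null[j])[int((1 - alpha) * n_perm) - 1]
        if obs > cutoff:
            selected.append(j)
    return selected

# Toy data: y depends only on the first of three predictors.
rng = random.Random(1)
x = [[rng.gauss(0, 1) for _ in range(100)] for _ in range(3)]
y = [2.0 * a + rng.gauss(0, 0.1) for a in x[0]]
```

The same skeleton applies with any importance measure plugged in; the paper's contribution is making this inference principled for BART's inclusion proportions.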
Deciding the dimension of effective dimension reduction space for functional and high-dimensional data
In this paper, we consider regression models with a Hilbert-space-valued
predictor and a scalar response, where the response depends on the predictor
only through a finite number of projections. The linear subspace spanned by
these projections is called the effective dimension reduction (EDR) space. To
determine the dimensionality of the EDR space, we focus on the leading
principal component scores of the predictor, and propose two sequential
testing procedures under the assumption that the predictor has an
elliptically contoured distribution. We further extend these procedures and
introduce a test that simultaneously takes into account a large number of
principal component scores. The proposed procedures are supported by theory,
validated by simulation studies, and illustrated by a real-data example. Our
methods and theory are applicable to functional data and high-dimensional
multivariate data.

Comment: Published at http://dx.doi.org/10.1214/10-AOS816 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org/)
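The structural assumption behind the EDR space can be written generically as follows (the notation is a standard form of this model, chosen here for illustration):

```latex
Y = g\bigl( \langle \beta_1, X \rangle, \ldots, \langle \beta_K, X \rangle, \varepsilon \bigr),
```

where $X$ is the Hilbert-space-valued predictor, $\langle \cdot, \cdot \rangle$ is the inner product, $g$ is an unknown link function, $\varepsilon$ is noise, and the EDR space is $\mathrm{span}\{\beta_1, \ldots, \beta_K\}$. Deciding the dimension then amounts to sequentially testing $H_0\colon K = k$ against $H_1\colon K > k$ for $k = 0, 1, 2, \ldots$, stopping at the first non-rejection.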
Gaussian Process Model Predictive Control of An Unmanned Quadrotor
The Model Predictive Control (MPC) trajectory tracking problem of an unmanned
quadrotor with input and output constraints is addressed. In this article, the
dynamic models of the quadrotor are obtained purely from operational data in
the form of probabilistic Gaussian Process (GP) models. This is different from
conventional models obtained through Newtonian analysis. A hierarchical control
scheme is used to handle the trajectory tracking problem with the translational
subsystem in the outer loop and the rotational subsystem in the inner loop.
Constrained GP-based MPC problems are formulated separately for both subsystems.
The resulting MPC problems are typically nonlinear and non-convex. We derive a
GP-based local dynamical model that allows these optimization problems to be
relaxed to convex ones which can be efficiently solved with a simple active-set
algorithm. The performance of the proposed approach is compared with an
existing unconstrained Nonlinear Model Predictive Control (NMPC). Simulation
results show that the two approaches exhibit similar trajectory tracking
performance. However, our approach has the advantage of incorporating
constraints on the control inputs. In addition, our approach only requires 20%
of the computational time for NMPC.

Comment: arXiv admin note: text overlap with arXiv:1612.0121