Convergence of multi-dimensional quantized SDE's
We quantize a multidimensional SDE (in the Stratonovich sense) by solving
the related system of ODE's in which the d-dimensional Brownian motion has
been replaced by the components of functional stationary quantizers. We make a
connection with rough path theory to show that the solutions of the quantized
ODE's converge toward the solution of the SDE. On our way to
this result we provide convergence rates of optimal quantizations toward the
Brownian motion for the 1/q-Hölder distance, q > 2, in L^p. Comment: 43 pages
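The scheme described above can be illustrated with a minimal sketch (not the authors' code): the Brownian path in a Stratonovich SDE dX = b(X) dt + σ(X) ∘ dW is replaced by a substitute path w(t), and the resulting ODE x'(t) = b(x) + σ(x) w'(t) is solved by an explicit Euler scheme. Here a piecewise-linear interpolation of a coarse Brownian skeleton stands in for a true functional quantizer; the function names and the 1-d geometric dynamics are our own illustrative choices.

```python
import numpy as np

def solve_quantized_ode(b, sigma, w, x0, T, n):
    """Euler scheme for x'(t) = b(x) + sigma(x) * w'(t) on [0, T],
    where w is the (smooth) substitute for the Brownian path."""
    dt = T / n
    t = np.linspace(0.0, T, n + 1)
    wt = w(t)                      # substitute path sampled on the grid
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dw = wt[k + 1] - wt[k]     # increment of the substitute path
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dw
    return t, x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, coarse = 1.0, 16
    # coarse Brownian skeleton, then piecewise-linear interpolation
    # (a stand-in for a functional stationary quantizer)
    tc = np.linspace(0.0, T, coarse + 1)
    wc = np.concatenate([[0.0],
                         np.cumsum(rng.normal(0.0, np.sqrt(T / coarse), coarse))])
    w = lambda t: np.interp(t, tc, wc)
    # toy 1-d dynamics: b(x) = 0.1 x, sigma(x) = 0.2 x
    t, x = solve_quantized_ode(lambda x: 0.1 * x, lambda x: 0.2 * x,
                               w, 1.0, T, 1000)
    print(round(x[-1], 4))
```

Refining the skeleton (more quantizer components, finer grid) tightens the pathwise approximation; the abstract's rough-path argument is what justifies convergence in the multidimensional case.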
Functional co-monotony of processes with applications to peacocks and barrier options
We show that several general classes of stochastic processes satisfy a
functional co-monotony principle, including processes with independent
increments, Brownian diffusions, and Liouville processes. As a first application,
we recover some recent results about peacock processes obtained by Hirsch et
al. which were themselves motivated by a former work of Carr et al. about the
sensitivity of Asian Call options with respect to their volatility and residual
maturity (seniority). We also derive semi-universal bounds for various barrier
options. Comment: 27 pages
Quadratic optimal functional quantization of stochastic processes and numerical applications
In this paper, we present an overview of the recent developments of
functional quantization of stochastic processes, with an emphasis on the
quadratic case. Functional quantization is a way to approximate a process,
viewed as a Hilbert-valued random variable, using a nearest neighbour
projection on a finite codebook. A special emphasis is made on the
computational aspects and the numerical applications, in particular the pricing
of some path-dependent European options. Comment: 41 pages
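The nearest-neighbour projection on a finite codebook can be sketched as follows (our own toy code, under the discretized L^2([0,1]) norm): a sampled path is mapped to the closest path in a small codebook. The codebook here is built from a single Karhunen-Loève-style mode at a few levels, purely for illustration; it is not an optimal quantizer.

```python
import numpy as np

def nearest_codeword(path, codebook, dt):
    """Index of the codebook path closest to `path` in discrete L^2,
    i.e. argmin over codewords c of dt * sum_i (c_i - path_i)^2."""
    d2 = ((codebook - path) ** 2).sum(axis=1) * dt
    return int(np.argmin(d2))

if __name__ == "__main__":
    n_grid = 100
    dt = 1.0 / n_grid
    rng = np.random.default_rng(1)
    t = np.linspace(dt, 1.0, n_grid)
    # toy codebook: one Karhunen-Loeve-style mode at four levels
    mode = np.sqrt(2.0) * np.sin(0.5 * np.pi * t)
    codebook = np.array([c * mode for c in (-1.5, -0.5, 0.5, 1.5)])
    # quantize a simulated Brownian path
    bm = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_grid))
    print(nearest_codeword(bm, codebook, dt))
```

In the quadratic setting of the paper, the codebook would be chosen to minimize the expected squared L^2 distance to the process, and path-dependent payoffs would be evaluated on the codewords weighted by their cell probabilities.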
Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs
Many real-world data are sampled functions. As shown by Functional Data
Analysis (FDA) methods, spectra, time series, images, gesture recognition data,
etc. can be processed more efficiently if their functional nature is taken into
account during the data analysis process. This is done by extending standard
data analysis methods so that they can apply to functional inputs. A general
way to achieve this goal is to compute projections of the functional data onto
a finite dimensional sub-space of the functional space. The coordinates of the
data on a basis of this sub-space provide standard vector representations of
the functions. The obtained vectors can be processed by any standard method. In
our previous work, this general approach was used to define projection-based
Multilayer Perceptrons (MLPs) with functional inputs. In this paper we study
important theoretical properties of the proposed model. We show in
particular that MLPs with functional inputs are universal approximators: they
can approximate to arbitrary accuracy any continuous mapping from a compact
sub-space of a functional space to R. Moreover, we provide a consistency result:
any mapping from a functional space to R can be learned from examples by a
projection-based MLP, in the sense that the generalization mean square error of
the MLP decreases to the smallest possible mean square error on the data as
the number of examples goes to infinity.
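The projection-then-MLP pipeline can be sketched in a few lines (a minimal illustration, not the authors' implementation; the Fourier-type basis, the least-squares projection, and the one-hidden-layer architecture are our own assumptions). Each sampled function is projected onto the span of the first p basis functions, and the coordinate vector is fed to an ordinary MLP.

```python
import numpy as np

def fourier_design(t, p):
    """Design matrix of the first p basis functions evaluated at grid t:
    constant, then alternating sin(k*pi*t) / cos(k*pi*t)."""
    cols = [np.ones_like(t)]
    for k in range(1, p):
        cols.append(np.sin(np.pi * k * t) if k % 2 else np.cos(np.pi * k * t))
    return np.column_stack(cols)

def project(samples, t, p):
    """Least-squares coordinates of each sampled function on the basis.
    samples: array of shape (n_functions, len(t))."""
    B = fourier_design(t, p)
    coef, *_ = np.linalg.lstsq(B, samples.T, rcond=None)
    return coef.T                      # shape (n_functions, p)

def mlp_forward(coords, W1, b1, W2, b2):
    """Standard one-hidden-layer MLP applied to the projection coordinates."""
    h = np.tanh(coords @ W1 + b1)
    return h @ W2 + b2                 # real-valued output

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 50)
    # toy functional inputs: sine waves with random frequencies
    funcs = np.sin(2.0 * np.pi * np.outer(rng.uniform(0.5, 2.0, 8), t))
    x = project(funcs, t, p=5)         # finite-dimensional representation
    W1, b1 = rng.normal(size=(5, 10)), np.zeros(10)
    W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)
    print(mlp_forward(x, W1, b1, W2, b2).shape)
```

The weights here are random placeholders: training the MLP on the coordinate vectors proceeds exactly as for any vector-input network, which is what makes the universal-approximation and consistency results of the paper transfer to the functional setting.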