Embedded Markov chain approximations in Skorokhod topologies
In order to approximate a continuous-time stochastic process by discrete-time
Markov chains, one has several options to embed the Markov chains into
continuous-time processes. On the one hand there is the Markov embedding, which
uses exponential waiting times. On the other hand, each Skorokhod topology
naturally suggests a certain embedding: the step function embedding for J1, the
linear interpolation embedding for M1, the multistep embedding for J2 and a
more general embedding for M2. We show that the convergence of the step
function embedding in J1 implies the convergence of the other embeddings in
the corresponding topologies, respectively. For the converse statement a
suitable tightness condition for embedded Markov chains is given.
The result relies on various representations of the Skorokhod topologies.
Additionally it is shown that J1 convergence is equivalent to the joint
convergence in M1 and J2.
Comment: To appear in Probability and Mathematical Statistics
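The two simplest of these embeddings can be sketched in a few lines. The function names and the random-walk example below are illustrative choices, not from the paper: a chain value observed at times k·dt is turned either into a right-continuous step function or into a piecewise linear path.

```python
import numpy as np

def step_embedding(chain, dt):
    """Right-continuous step function path: t -> chain[floor(t / dt)]."""
    def path(t):
        return chain[min(int(t // dt), len(chain) - 1)]
    return path

def linear_embedding(chain, dt):
    """Piecewise linear interpolation between successive chain states."""
    def path(t):
        i = min(int(t // dt), len(chain) - 2)
        frac = min(t / dt - i, 1.0)
        return chain[i] + frac * (chain[i + 1] - chain[i])
    return path

# Illustrative chain: a scaled symmetric random walk observed every dt units
rng = np.random.default_rng(0)
dt = 0.1
chain = np.cumsum(rng.choice([-1.0, 1.0], size=50)) * np.sqrt(dt)
step = step_embedding(chain, dt)
lin = linear_embedding(chain, dt)
```

The step path is constant on each interval [k·dt, (k+1)·dt), while the linear path connects consecutive chain states; both pass through the chain values at the grid points.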
The Euler scheme for Feller processes
We consider the Euler scheme for stochastic differential equations with
jumps, whose jump intensity may be infinite and whose jump structure may
depend on the position. This general type of SDE is given explicitly for
Feller processes, and a general convergence condition is presented.
In particular, the characteristic functions of the increments of the Euler
scheme are calculated in closed form in terms of the symbol of the Feller
process. These increments are increments of Lévy processes, and thus the
Euler scheme can be used for simulation by applying standard techniques from
Lévy processes.
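A minimal sketch of such an Euler scheme, restricted for simplicity to a finite jump intensity (a compound Poisson jump part whose jump sizes may depend on the current position); the function name, signature and example coefficients below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def euler_levy_sde(x0, b, sigma, jump_rate, jump_dist, T, n, rng):
    """Euler scheme for dX_t = b(X)dt + sigma(X)dB_t + dJ_t on [0, T],
    where J is a compound Poisson process with intensity jump_rate and
    state-dependent jump sizes drawn by jump_dist(x, rng)."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        # number of jumps in this time step (finite-intensity case)
        nj = rng.poisson(jump_rate * dt)
        dJ = sum(jump_dist(x[k], rng) for _ in range(nj))
        x[k + 1] = x[k] + b(x[k]) * dt + sigma(x[k]) * dB + dJ
    return x

# Sanity check: with no noise and no jumps, dX = -X dt decays like exp(-t)
rng = np.random.default_rng(0)
path = euler_levy_sde(1.0, lambda v: -v, lambda v: 0.0, 0.0,
                      lambda v, r: 0.0, 1.0, 1000, rng)
```

An infinite-intensity jump part would additionally require truncating the small jumps (or simulating the exact Lévy increments via the symbol, as the abstract indicates), which this sketch does not attempt.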
Constructions of Coupling Processes for L\'evy Processes
We construct optimal Markov couplings of Lévy processes whose Lévy (jump)
measure has an absolutely continuous component. The construction is based on
properties of subordinate Brownian motions and the coupling of Brownian
motions by reflection.
Comment: 16 pages
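The Brownian ingredient, coupling by reflection, can be illustrated with a simple discretized simulation (this only sketches the classical reflection coupling of two one-dimensional Brownian motions, not the paper's construction for Lévy processes; names and the meeting-detection rule are my choices):

```python
import numpy as np

def reflection_coupling(x0, y0, T, n, rng):
    """Two 1-d Brownian motions started at x0 and y0: Y receives the
    negated increment of X until the paths first meet, after which they
    move together. Before meeting, the midpoint (X+Y)/2 stays constant."""
    dt = T / n
    x, y = np.empty(n + 1), np.empty(n + 1)
    x[0], y[0] = x0, y0
    coupled = (x0 == y0)
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))
        x[k + 1] = x[k] + dB
        if coupled:
            y[k + 1] = x[k + 1]
        else:
            y[k + 1] = y[k] - dB  # reflected increment
            # discretization choice: glue the paths at the first crossing
            if (x[k] - y[k]) * (x[k + 1] - y[k + 1]) <= 0.0:
                coupled = True
                y[k + 1] = x[k + 1]
    return x, y
```

After the (almost surely finite) meeting time the two paths coincide, which is what makes reflection coupling useful for bounding convergence to equilibrium.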
Detecting independence of random vectors: generalized distance covariance and Gaussian covariance
Distance covariance is a quantity to measure the dependence of two random
vectors. We show that the original concept introduced and developed by
Székely, Rizzo and Bakirov can be embedded into a more general framework
based on symmetric Lévy measures and the corresponding real-valued
continuous negative definite functions. The Lévy measures replace the
weight functions used in the original definition of distance covariance. All
essential properties of distance covariance are preserved in this new
framework. From a practical point of view this allows less restrictive moment
conditions on the underlying random variables, and one can use distance
functions other than the Euclidean distance, e.g. the Minkowski distance. Most
importantly, it serves as the basic building block for distance multivariance,
a quantity to measure and estimate the dependence of multiple random vectors,
which is introduced in a follow-up paper [Distance Multivariance: New
dependence measures for random vectors (submitted). Revised version of arXiv:
1711.07775v1] to the present article.
Comment: Published at https://doi.org/10.15559/18-VMSTA116 in Modern
Stochastics: Theory and Applications (https://www.i-journals.org/vtxpp/VMSTA)
by VTeX (http://www.vtex.lt/)
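In the original Euclidean setting of Székely, Rizzo and Bakirov, the sample distance covariance can be computed from doubly centered pairwise distance matrices; a minimal numpy sketch (function names are mine; the generalized framework of the paper would replace the Euclidean distance by another distance induced by a continuous negative definite function, e.g. a Minkowski distance):

```python
import numpy as np

def _doubly_centered(z):
    """Pairwise Euclidean distance matrix of the sample z with shape (n, d),
    double-centered: subtract row and column means, add back the grand mean."""
    d = np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1))
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def distance_covariance(x, y):
    """Sample distance covariance of paired samples x (n, p) and y (n, q);
    it is zero (in the population limit) iff x and y are independent."""
    a, b = _doubly_centered(x), _doubly_centered(y)
    # clip tiny negative values caused by floating-point rounding
    return np.sqrt(max(np.mean(a * b), 0.0))
```

For two paired observations x = y = (0, 1) in one dimension, the value works out to 0.5 by direct calculation, which makes a convenient hand-checkable test case.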