Continuous time volatility modelling: COGARCH versus Ornstein-Uhlenbeck models
We compare the probabilistic properties of the non-Gaussian Ornstein-Uhlenbeck based stochastic volatility model of Barndorff-Nielsen and Shephard (2001) with those of the COGARCH process. The latter is a continuous time GARCH process introduced by the authors (2004). Many features are shown to be shared by both processes, but differences are pointed out as well. Furthermore, it is shown that the COGARCH process has Pareto-like tails under weak regularity conditions.
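To fix ideas, here is a minimal simulation sketch of the Ornstein-Uhlenbeck side of the comparison, a Lévy-driven OU variance process with a compound-Poisson subordinator driving the jumps; all parameter values are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler scheme for the Barndorff-Nielsen--Shephard model:
#   d(sigma^2)(t) = -lam * sigma^2(t) dt + dz(lam * t),
#   dX(t) = sigma(t) dW(t),
# with z a compound-Poisson subordinator with exponential jump sizes.
T, n = 10.0, 10_000
dt = T / n
lam, jump_rate, jump_mean = 0.5, 1.0, 0.2   # assumed parameters

sigma2 = np.empty(n + 1)
X = np.zeros(n + 1)
sigma2[0] = 0.2
for i in range(n):
    # jumps of the time-changed subordinator z(lam * t) over [t, t+dt]
    n_jumps = rng.poisson(jump_rate * lam * dt)
    dz = rng.exponential(jump_mean, n_jumps).sum()
    sigma2[i + 1] = max(sigma2[i] - lam * sigma2[i] * dt + dz, 0.0)
    X[i + 1] = X[i] + np.sqrt(sigma2[i] * dt) * rng.standard_normal()

print("mean spot variance:", sigma2.mean())  # stationary mean ~ jump_rate*jump_mean
```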
An example of non-attainability of expected quantum information
Braunstein and Caves [1] have clarified the relation between classical expected information i(θ), in the sense of Fisher, and the analogous concept of expected quantum information I(θ), by showing that I(θ) is an upper bound of i(θ; M) with respect to all (dominated) generalized measurements M of the state ρ = ρ(θ), where θ is an unknown parameter and i(θ; M) is the Fisher expected information for θ in the distribution of the outcome of the measurement M. They indicate moreover that a measurement exists achieving the bound. In the present paper we show by an example, for an elementary spin-1/2 situation, that in general there does not exist a single measurement attaining the bound for all values of the parameter.
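The gap can be made concrete numerically (this is my own illustration, not the paper's example): for the mixed spin-1/2 family ρ(θ) = (I + r(θ)·σ)/2 with Bloch vector r(θ) = s(sin θ, 0, cos θ), the quantum Fisher information is s², while a fixed σ_z measurement attains it only at θ = π/2. The closed-form qubit QFI formula used below is standard; the value of s is an assumption.

```python
import numpy as np

def classical_fisher_z(theta, s):
    """Fisher information of a fixed sigma_z measurement on
    rho(theta) = (I + s*(sin t, 0, cos t).sigma) / 2."""
    p_plus = (1 + s * np.cos(theta)) / 2
    p_minus = 1 - p_plus
    dp = -s * np.sin(theta) / 2          # d p_plus / d theta
    return dp**2 / p_plus + dp**2 / p_minus

def quantum_fisher(theta, s):
    """Qubit QFI = |r'|^2 + (r.r')^2 / (1 - |r|^2); here r.r' = 0."""
    return s**2

s = 0.8
for theta in (0.3, np.pi / 2, 2.5):
    print(f"theta={theta:.2f}  i(theta;M_z)={classical_fisher_z(theta, s):.3f}"
          f"  I(theta)={quantum_fisher(theta, s):.3f}")
# The fixed measurement matches the quantum bound only at theta = pi/2,
# illustrating why no single measurement can be optimal for every theta.
```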
Bridge homogeneous volatility estimators
We present a theory of bridge homogeneous volatility estimators for log-price stochastic processes. Starting with the standard definition of a Brownian bridge as the conditional Wiener process with two endpoints fixed, we introduce the concept of an incomplete bridge by breaking the symmetry between the two endpoints. For any given time interval, this allows us to encode the information contained in the open, high, low and close prices into an incomplete bridge. The efficiency of the newly proposed estimators compares favourably with that of the classical Garman–Klass and Parkinson estimators.
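For reference, the two classical range-based benchmarks mentioned above have simple closed forms. A minimal sketch of per-interval variance estimates from open/high/low/close prices (the OHLC bars below are assumed, illustrative data):

```python
import numpy as np

def parkinson_var(high, low):
    """Parkinson (1980) range estimator of per-interval variance."""
    return np.log(high / low) ** 2 / (4 * np.log(2))

def garman_klass_var(open_, high, low, close):
    """Garman-Klass (1980) OHLC estimator of per-interval variance."""
    hl = np.log(high / low)
    co = np.log(close / open_)
    return 0.5 * hl**2 - (2 * np.log(2) - 1) * co**2

# Illustrative OHLC bars (assumed data, not from the paper)
o = np.array([100.0, 101.2, 100.8])
h = np.array([101.5, 102.0, 101.9])
l = np.array([99.4, 100.1, 100.2])
c = np.array([101.2, 100.8, 101.5])
print("Parkinson:   ", parkinson_var(h, l).mean())
print("Garman-Klass:", garman_klass_var(o, h, l, c).mean())
```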
Importance Sampling for multi-constraints rare event probability
Improving Importance Sampling estimators for rare event probabilities requires sharp approximations of the optimal density leading to a nearly zero-variance estimator. This paper presents a new way to handle the estimation of the probability of a rare event defined as a finite intersection of subsets. We provide a sharp approximation of the density of long runs of a random walk conditioned on multiple constraints, each of them defined by an average of a function of its summands, as their number tends to infinity.
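As a toy illustration of the general idea (not the paper's multi-constraint scheme), importance sampling with an exponentially tilted proposal already shows the variance reduction for a single rare-event constraint on a random-walk mean; all parameters below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 100, 0.8            # estimate P(S_n / n >= a) for standard normal steps
theta = a                  # optimal tilt solves psi'(theta) = a; psi(t) = t^2/2
n_sim = 100_000

# Crude Monte Carlo: the event is essentially never hit
S = rng.standard_normal((n_sim, n)).sum(axis=1)
p_mc = np.mean(S / n >= a)

# Importance sampling: steps drawn from N(theta, 1); the path likelihood
# ratio is exp(-theta * S + n * theta^2 / 2) for Gaussian steps.
X = rng.standard_normal((n_sim, n)) + theta
S_t = X.sum(axis=1)
w = np.exp(-theta * S_t + n * theta**2 / 2)
p_is = np.mean(w * (S_t / n >= a))

print(f"crude MC: {p_mc:.2e}   tilted IS: {p_is:.2e}")
# Exact value: P(N(0,1) >= a*sqrt(n)) is about 6e-16 here; crude MC
# returns 0 while the tilted estimator recovers the right magnitude.
```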
Meixner class of non-commutative generalized stochastic processes with freely independent values I. A characterization
Let $T$ be an underlying space with a non-atomic measure $\sigma$ on it (e.g. $T=\mathbb{R}^d$ and $\sigma$ is the Lebesgue measure). We introduce and study a class of non-commutative generalized stochastic processes, indexed by points of $T$, with freely independent values. Such a process (field), $\omega=\omega(t)$, $t\in T$, is given a rigorous meaning through smearing out with test functions on $T$, with $\langle\omega,\varphi\rangle$ being a (bounded) linear operator in a full Fock space. We define a set $\mathbf{CP}$ of all continuous polynomials of $\omega$, and then define a non-commutative $L^2$-space $L^2(\tau)$ by taking the closure of $\mathbf{CP}$ in the norm $\|P\|_{L^2(\tau)}=\|P\Omega\|$, where $\Omega$ is the vacuum in the Fock space. Through a procedure of orthogonalization of polynomials, we construct a unitary isomorphism between $L^2(\tau)$ and a (Fock-space-type) Hilbert space $\bigoplus_{n=0}^\infty L^2(T^n,\gamma_n)$, with explicitly given measures $\gamma_n$. We identify the Meixner class as those processes for which the procedure of orthogonalization leaves the set $\mathbf{CP}$ invariant. (Note that, in the general case, the projection of a continuous monomial of order $n$ onto the $n$-th chaos need not remain a continuous polynomial.) Each element of the Meixner class is characterized by two continuous functions $\lambda$ and $\eta$ on $T$, such that, in the $L^2(\tau)$ space, $\omega(t)$ has the representation
$$\omega(t)=\partial_t^\dagger+\lambda(t)\,\partial_t^\dagger\partial_t+\partial_t+\eta(t)\,\partial_t^\dagger\partial_t^2,$$
where $\partial_t^\dagger$ and $\partial_t$ are the usual creation and annihilation operators at point $t$.
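A toy numerical sketch of the operators involved may help (my own illustration; the paper works with operator-valued distributions over $T$, not a single mode): on the one-mode full Fock space, truncated at finite depth, free creation and annihilation are plain shift matrices, with no bosonic $\sqrt{n}$ weights, and the free relations become matrix identities.

```python
import numpy as np

N = 6  # truncation depth of the one-mode full Fock space l^2({0,...,N})

# Free creation: e_n -> e_{n+1} (no sqrt(n) weights, unlike the boson case)
create = np.diag(np.ones(N), -1)
annihilate = create.T                  # free annihilation: e_n -> e_{n-1}

# Free relations: annihilate @ create = identity (exact away from the
# truncation edge), create @ annihilate = identity - |vacuum><vacuum|.
print(np.allclose((annihilate @ create)[:N, :N], np.eye(N)))
proj_vac = np.zeros((N + 1, N + 1)); proj_vac[0, 0] = 1.0
print(np.allclose(create @ annihilate, np.eye(N + 1) - proj_vac))

# Formal point-wise expression from the representation above, with
# constants lam, eta standing in for lambda(t), eta(t) (assumed values)
lam, eta = 0.5, 0.3
omega = (create + lam * create @ annihilate + annihilate
         + eta * create @ annihilate @ annihilate)
```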
Plausibility functions and exact frequentist inference
In the frequentist program, inferential methods with exact control on error
rates are a primary focus. The standard approach, however, is to rely on
asymptotic approximations, which may not be suitable. This paper presents a
general framework for the construction of exact frequentist procedures based on
plausibility functions. It is shown that the plausibility function-based tests
and confidence regions have the desired frequentist properties in finite
samples, with no large-sample justification needed. An extension of the proposed
method is also given for problems involving nuisance parameters. Examples
demonstrate that the plausibility function-based method is both exact and
efficient in a wide variety of problems.
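A minimal sketch of the general recipe for a normal mean (the construction follows the plausibility-function idea; the choice of relative likelihood, grid, and Monte Carlo details are my own assumptions): the plausibility of θ is the probability, under P_θ, of a relative likelihood no larger than the observed one, and thresholding it gives a region with exact finite-sample coverage.

```python
import numpy as np

rng = np.random.default_rng(2)

def plausibility(x, theta, n_mc=5_000):
    """Monte Carlo plausibility of theta for an i.i.d. N(theta, 1) sample x,
    using the relative likelihood T_x(theta) = exp(-n * (xbar - theta)^2 / 2)."""
    n, xbar = len(x), np.mean(x)
    t_obs = np.exp(-n * (xbar - theta) ** 2 / 2)
    # Distribution of T under P_theta: xbar* ~ N(theta, 1/n)
    xbar_sim = theta + rng.standard_normal(n_mc) / np.sqrt(n)
    t_sim = np.exp(-n * (xbar_sim - theta) ** 2 / 2)
    return np.mean(t_sim <= t_obs)

x = rng.standard_normal(20) + 1.3           # data with true theta = 1.3
grid = np.linspace(0.5, 2.1, 161)
pl = np.array([plausibility(x, th) for th in grid])
region = grid[pl > 0.05]                    # exact 95% plausibility region
print(region.min(), region.max())
```

Exactness comes from the fact that the plausibility of the true θ is (super)uniform under P_θ, so no asymptotic approximation enters.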
Nonparametric Information Geometry
The differential-geometric structure of the set of positive densities on a
given measure space has raised the interest of many mathematicians after the
discovery by C.R. Rao of the geometric meaning of the Fisher information. Most
of the research is focused on parametric statistical models. In a series of
papers by the author and coworkers, a particular version of the nonparametric
case has been discussed. It consists of a minimalistic structure modeled
according to the theory of exponential families: given a reference density,
other densities are represented by the centered log-likelihood, which is an
element of an Orlicz space. These mappings give a system of charts of a Banach
manifold. It has been observed that, while the construction is natural, its
practical applicability is limited by the technical difficulty of dealing with
such a class of Banach spaces. It has recently been suggested to replace the
exponential function with other functions of similar behavior but polynomial
growth at infinity, in order to obtain more tractable Banach spaces, e.g.
Hilbert spaces. We first give a review of our theory, with special emphasis on
the specific issues of the infinite-dimensional setting. In a second part we
discuss two specific topics: differential equations and the metric connection.
The position of this line of research with respect to other approaches is
briefly discussed.
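A finite sample-space caricature of the exponential chart may help fix ideas (my own illustrative sketch, not code from the paper): a density q near the reference p is represented by its centered log-likelihood u, and recovered via the exponential map.

```python
import numpy as np

# Finite sample space of k points; densities are positive probability vectors.
rng = np.random.default_rng(3)
k = 5
p = rng.dirichlet(np.ones(k))          # reference density
q = rng.dirichlet(np.ones(k))          # another density in the model

def chart(q, p):
    """e-chart at p: centered log-likelihood u = log(q/p) - E_p[log(q/p)]."""
    v = np.log(q / p)
    return v - np.dot(p, v)

def inverse_chart(u, p):
    """Exponential map: q = p * exp(u - K_p(u)), K_p the cumulant functional."""
    w = p * np.exp(u)
    return w / w.sum()                 # dividing by exp(K_p(u)) normalizes

u = chart(q, p)
print(np.allclose(np.dot(p, u), 0))         # u is centered under p
print(np.allclose(inverse_chart(u, p), q))  # round trip recovers q
```

In the genuinely nonparametric case the delicate point is exactly which space u lives in (an Orlicz space for the exponential model), which is what motivates the more tractable polynomial-growth variants discussed in the paper.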
Stochastic particle packing with specified granulometry and porosity
This work presents a technique for particle size generation and placement in
arbitrary closed domains. Its main application is the simulation of granular
media described by disks. Particle size generation is based on the statistical
analysis of granulometric curves which are used as empirical cumulative
distribution functions to sample from mixtures of uniform distributions. The
desired porosity is attained by selecting a certain number of particles, and
their placement is performed by a stochastic point process. We present an
application analyzing different types of sand and clay, where we model the
grain size with the gamma, lognormal, Weibull and hyperbolic distributions. The
parameters from the resulting best fit are used to generate samples from the
theoretical distribution, which are used for filling a finite-size area with
non-overlapping disks deployed by a Simple Sequential Inhibition stochastic
point process. Such filled areas are relevant as plausible inputs for assessing
the Discrete Element Method and similar techniques.
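A minimal sketch of the placement step, Simple Sequential Inhibition with lognormally distributed radii (the distribution choice and all parameter values are illustrative assumptions, not the paper's fitted models):

```python
import numpy as np

rng = np.random.default_rng(4)

def ssi_pack(width, height, n_disks, max_tries=20_000):
    """Simple Sequential Inhibition: propose uniform centers, keep a disk
    only if it does not overlap any disk already placed."""
    radii = rng.lognormal(mean=-3.5, sigma=0.4, size=n_disks)  # assumed grain law
    placed = []                                                # (x, y, r) triples
    for r in np.sort(radii)[::-1]:       # place large grains first; packs better
        for _ in range(max_tries):
            x = rng.uniform(r, width - r)
            y = rng.uniform(r, height - r)
            if all((x - px)**2 + (y - py)**2 >= (r + pr)**2
                   for px, py, pr in placed):
                placed.append((x, y, r))
                break
    return placed

disks = ssi_pack(1.0, 1.0, 150)
area = sum(np.pi * r * r for _, _, r in disks)
print(f"placed {len(disks)} disks, porosity = {1 - area:.3f}")
```

The target porosity is controlled by how many sampled grains are admitted before placement stops, as described above.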
Optimal estimation of qubit states with continuous time measurements
We propose an adaptive, two-step strategy for the estimation of mixed qubit
states. We show that the strategy is optimal in a local minimax sense for the
trace norm distance as well as other locally quadratic figures of merit. Local
minimax optimality means that given $n$ identical qubits, there exists no
estimator which can perform better than the proposed estimator on a
neighborhood of size $n^{-1/2}$ of an arbitrary state. In particular, it is
asymptotically Bayesian optimal for a large class of prior distributions.
We present a physical implementation of the optimal estimation strategy based
on continuous time measurements in a field that couples with the qubits.
The crucial ingredient of the result is the concept of local asymptotic
normality (or LAN) for qubits. This means that, for large $n$, the statistical
model described by $n$ identically prepared qubits is locally equivalent to a
model with only a classical Gaussian distribution and a Gaussian state of a
quantum harmonic oscillator.
The term `local' refers to a shrinking neighborhood around a fixed state
$\rho_0$. An essential result is that the neighborhood radius can be chosen
arbitrarily close to $n^{-1/4}$. This allows us to use a two-step procedure by
which we first localize the state within a smaller neighborhood of radius
$n^{-1/2+\epsilon}$, and then use LAN to perform optimal estimation.
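The first (localization) step can be caricatured by plain Bloch-vector tomography on a small fraction of the qubits; a minimal sketch, with the sample sizes and the true state being illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def localize(r_true, n1):
    """Rough first-step estimate of the Bloch vector: measure n1/3 qubits
    in each of the sigma_x, sigma_y, sigma_z bases and average the +/-1
    outcomes, whose means are the Bloch components (Born rule)."""
    r_hat = np.empty(3)
    m = n1 // 3
    for k in range(3):
        p_plus = (1 + r_true[k]) / 2
        outcomes = 2 * rng.binomial(1, p_plus, m) - 1
        r_hat[k] = outcomes.mean()
    return r_hat

r_true = np.array([0.3, -0.2, 0.5])      # assumed true mixed state, |r| < 1
for n1 in (300, 30_000):
    err = np.linalg.norm(localize(r_true, n1) - r_true)
    print(f"n1={n1:>6}: |r_hat - r| = {err:.3f}")   # error shrinks like n1^(-1/2)
```

In the paper's strategy this crude estimate only needs to pin the state down to a neighborhood of radius $n^{-1/2+\epsilon}$, after which the LAN correspondence with a Gaussian model takes over.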