Split Sampling: Expectations, Normalisation and Rare Events
In this paper we develop a methodology that we call split sampling methods to
estimate high dimensional expectations and rare event probabilities. Split
sampling uses an auxiliary variable MCMC simulation and expresses the
expectation of interest as an integrated set of rare event probabilities. We
derive our estimator from a Rao-Blackwellised estimate of a marginal auxiliary
variable distribution. We illustrate our method with two applications. First,
we compute a shortest network path rare event probability and compare our
method to a cross-entropy approach. Then, we compute a
normalisation constant of a high dimensional mixture of Gaussians and compare
our estimate to one based on nested sampling. We discuss the relationship
between our method and other alternatives such as the product of conditional
probability estimator and importance sampling. The methods developed here are
available in the R package SplitSampling.
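The identity behind expressing an expectation as an integrated set of rare event probabilities can be sketched as follows. This is a hedged toy illustration using plain Monte Carlo on a grid of levels, not the SplitSampling package's auxiliary-variable MCMC; the statistic g and all numerical choices are assumptions of the sketch:

```python
import numpy as np

# For a non-negative statistic g(X), E[g(X)] equals the integral over
# t >= 0 of the tail probability P(g(X) > t), so an expectation can be
# recovered by integrating a family of (possibly rare) tail events.

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
g = x**2                                  # example statistic; E[g] = 1

levels = np.linspace(0.0, 20.0, 2001)     # grid of levels t
tails = np.array([(g > t).mean() for t in levels])

# Trapezoidal integration of the empirical tail probabilities.
estimate = np.sum((tails[1:] + tails[:-1]) / 2 * np.diff(levels))

print(estimate, g.mean())                 # both close to E[g] = 1
```

For deep tails the plain frequency estimates above break down, which is where the rare-event machinery of the paper takes over.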
Rare-event Simulation and Efficient Discretization for the Supremum of Gaussian Random Fields
In this paper, we consider a classic problem concerning the high excursion
probabilities of a Gaussian random field $f$ living on a compact set $T$. We
develop efficient computational methods for the tail probabilities
$P(\sup_{t \in T} f(t) > b)$ and the associated conditional expectations as
$b \to \infty$. For each positive $\varepsilon$, we present Monte Carlo
algorithms that run in \emph{constant} time and compute the quantities of
interest with relative error $\varepsilon$ for arbitrarily large $b$. The
efficiency results are applicable to a large class of H\"older continuous
Gaussian random fields. Beyond computation, the proposed change of measure
and its analysis techniques have several theoretical and practical implications
for the asymptotic analysis of extremes of Gaussian random fields.
Fast performance estimation of block codes
Importance sampling is used in this paper to address the classical yet important problem of performance estimation of block codes. Simulation distributions that comprise discrete- and continuous-mixture probability densities are motivated and used for this application. These mixtures are employed in concert with the so-called g-method, a conditional importance sampling technique that more effectively exploits knowledge of the underlying input distributions. For performance estimation, the emphasis is on bit-by-bit maximum a posteriori probability decoding, but message passing algorithms for certain codes have also been investigated. Considered here are single parity check codes, multidimensional product codes, and, briefly, low-density parity-check codes. Several error rate results are presented for these various codes, together with the performance of the simulation techniques.
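The core importance-sampling idea can be sketched for a single bit; this is a hedged illustration using plain mean-translation sampling, not the paper's g-method or its mixture densities, and the noise level is an assumption of the sketch:

```python
import numpy as np
from math import erf

# A BPSK symbol +1 over AWGN is decoded in error when the noise drives
# the received value below 0.  At high SNR this is a rare event, so we
# sample the noise from a density shifted onto the error region and
# reweight each sample by the likelihood ratio p/q.

rng = np.random.default_rng(2)
sigma = 0.25                    # noise std; error prob = Phi(-1/sigma)
n = 50_000

shift = -1.0                    # centre q = N(shift, sigma^2) on the threshold
noise = rng.normal(loc=shift, scale=sigma, size=n)
log_w = (-(noise**2) + (noise - shift) ** 2) / (2 * sigma**2)
w = np.exp(log_w)               # likelihood ratio N(0,s^2)/N(shift,s^2)
errors = (1.0 + noise) < 0.0    # received sample crosses the decision threshold
p_is = np.mean(w * errors)

p_exact = 0.5 * (1 - erf((1 / sigma) / np.sqrt(2)))   # Phi(-1/sigma)
print(p_is, p_exact)
```

With these settings the true error probability is about 3e-5; crude Monte Carlo with the same 50,000 samples would typically see only a handful of errors, while the shifted sampler resolves it to within a few percent.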
Control Variates for Reversible MCMC Samplers
A general methodology is introduced for the construction and effective
application of control variates to estimation problems involving data from
reversible MCMC samplers. We propose the use of a specific class of functions
as control variates, and we introduce a new, consistent estimator for the
values of the coefficients of the optimal linear combination of these
functions. The form and proposed construction of the control variates is
derived from our solution of the Poisson equation associated with a specific
MCMC scenario. The new estimator, which can be applied to the same MCMC sample,
is derived from a novel, finite-dimensional, explicit representation for the
optimal coefficients. The resulting variance-reduction methodology is primarily
applicable when the simulated data are generated by a conjugate random-scan
Gibbs sampler. MCMC examples of Bayesian inference problems demonstrate that
the corresponding reduction in the estimation variance is significant, and that
in some cases it can be quite dramatic. Extensions of this methodology in
several directions are given, including certain families of Metropolis-Hastings
samplers and hybrid Metropolis-within-Gibbs algorithms. Corresponding
simulation examples are presented illustrating the utility of the proposed
methods. All methodological and asymptotic arguments are rigorously justified
under easily verifiable and essentially minimal conditions.
Comment: 44 pages; 6 figures; 5 tables
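A minimal sketch of Poisson-equation control variates for a toy reversible chain is given below. For this AR(1) chain and quadratic target, the optimal coefficient happens to be available in closed form (an illustrative assumption; the paper's contribution is a consistent estimator of the coefficients from the same MCMC sample, which is not reproduced here):

```python
import numpy as np

# Chain: x_{t+1} = rho*x_t + s*eps_t with s^2 = 1 - rho^2, stationary
# N(0,1); target mu = E[x^2] = 1.  With the basis F(x) = x^2 one has
# (PF)(x) = rho^2 x^2 + s^2, the terms U_t = F(x_{t+1}) - (PF)(x_t)
# have zero stationary mean, and theta = 1/(1 - rho^2) solves the
# Poisson equation for this toy chain.

rng = np.random.default_rng(4)
rho = 0.9
s = np.sqrt(1 - rho**2)
n_steps, n_chains = 1000, 100
theta = 1.0 / (1 - rho**2)

plain, cv = [], []
for _ in range(n_chains):
    x = rng.normal()                      # start in stationarity
    xs = np.empty(n_steps + 1)
    xs[0] = x
    for t in range(n_steps):
        x = rho * x + s * rng.normal()
        xs[t + 1] = x
    f = xs[:-1] ** 2                      # f(x_t)
    u = xs[1:] ** 2 - (rho**2 * xs[:-1] ** 2 + s**2)  # F(x_{t+1}) - (PF)(x_t)
    plain.append(f.mean())
    cv.append(f.mean() - theta * u.mean())

print(np.std(plain), np.std(cv))  # spread across chains shrinks dramatically
```

The control-variate estimator's chain-to-chain standard deviation is an order of magnitude below the plain ergodic average's, illustrating the "quite dramatic" reduction the abstract refers to.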
The adaptive nature of liquidity taking in limit order books
In financial markets, the order flow, defined as the process assuming value
one for buy market orders and minus one for sell market orders, displays a very
slowly decaying autocorrelation function. Since orders impact prices,
reconciling the persistence of the order flow with market efficiency is a
subtle issue. A possible solution is provided by asymmetric liquidity, which
states that the impact of a buy or sell order is inversely related to the
probability of its occurrence. We empirically find that when the order flow
predictability increases in one direction, the liquidity in the opposite side
decreases, but the probability that a trade moves the price decreases
significantly. While the last mechanism is able to counterbalance the
persistence of the order flow and restore efficiency and diffusivity, the
first acts in the opposite direction. We introduce a statistical order book model where
the persistence of the order flow is mitigated by adjusting the market order
volume to the predictability of the order flow. The model reproduces the
diffusive behaviour of prices at all time scales without fine-tuning the values
of parameters, as well as the behaviour of most order book quantities as a
function of the local predictability of order flow.
Comment: 40 pages, 14 figures, and 2 tables; old figure 12 removed. Accepted for publication in JSTA
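A slowly decaying sign autocorrelation of the kind described above can be illustrated with a toy generator: same-sign runs with heavy-tailed lengths, mimicking order splitting. The run-length distribution and all parameters are assumptions of this sketch, not the paper's order book model:

```python
import numpy as np

# Order flow: +1 for a buy market order, -1 for a sell.  Meta-orders
# executed as runs of same-sign child orders with heavy-tailed lengths
# produce a long-memory sign series.

rng = np.random.default_rng(5)

signs = []
while len(signs) < 200_000:
    run = int(rng.pareto(1.5)) + 1        # heavy-tailed run length
    signs.extend([rng.choice([-1, 1])] * run)
eps = np.array(signs[:200_000], dtype=float)

def acf(x, lag):
    """Sample autocorrelation of x at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 10, 100):
    print(lag, acf(eps, lag))             # positive, decaying slowly with lag
```

The autocorrelation stays clearly positive out to lag 100, in contrast to the fast exponential decay of a short-memory sign process; reconciling such persistence with price diffusivity is the puzzle the paper addresses.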
Parameter estimation for stochastic hybrid model applied to urban traffic flow estimation
This study proposes a novel data-based approach for estimating the parameters of a stochastic hybrid model describing the traffic flow in an urban traffic network with signalized intersections. The model represents the evolution of the traffic flow rate, measured as the number of vehicles passing a given location per time unit. This traffic flow rate is described using a mode-dependent first-order autoregressive (AR) stochastic process. The parameters of the AR process take different values depending on the mode of traffic operation (free flowing, congested or faulty), making this a hybrid stochastic process. Mode switching occurs according to a first-order Markov chain. This study proposes an expectation-maximization (EM) technique for estimating the transition matrix of this Markovian mode process and the parameters of the AR models for each mode. The technique is applied to actual traffic flow data from the city of Jakarta, Indonesia. The model thus obtained is validated using smoothed inference algorithms and an online particle filter. The authors also develop an EM parameter estimation scheme that, in combination with a time-window shift technique, can be useful and practical for periodically updating the parameters of the hybrid model, leading to an adaptive traffic flow state estimator.
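The model class can be sketched as a simulation with made-up parameter values (the transition matrix and AR coefficients below are assumptions of this sketch; the paper's EM estimation step, which would fit them to data, is not reproduced):

```python
import numpy as np

# A first-order Markov chain switches among three traffic modes, and
# within each mode the flow rate follows a mode-dependent AR(1):
#     flow_t = a*flow_{t-1} + b + sigma*eps_t.

rng = np.random.default_rng(6)

modes = ["free", "congested", "faulty"]
P = np.array([[0.95, 0.04, 0.01],         # mode transition matrix (assumed)
              [0.05, 0.94, 0.01],
              [0.10, 0.10, 0.80]])
# Per-mode AR(1) parameters (a, b, sigma), all assumed for illustration.
ar = {0: (0.7, 15.0, 2.0), 1: (0.9, 2.0, 1.0), 2: (0.5, 1.0, 4.0)}

T = 500
m = 0
flow = np.empty(T)
flow[0] = 50.0                            # vehicles per time unit
path = [m]
for t in range(1, T):
    m = rng.choice(3, p=P[m])             # Markov mode switch
    a, b, sigma = ar[m]
    flow[t] = a * flow[t - 1] + b + sigma * rng.normal()
    path.append(m)

print(flow[:5], [modes[i] for i in path[:5]])
```

Given such a flow series with the mode path hidden, the paper's EM technique would alternate between smoothing the mode probabilities and re-estimating P and the per-mode AR parameters.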