Some integral inequalities on time scales
In this paper, some new integral inequalities on time scales are presented by
using elementary analytic methods in the calculus of time scales. Comment: 8 pages
Stochastic Volatility Filtering with Intractable Likelihoods
This paper is concerned with particle filtering for α-stable
stochastic volatility models. The α-stable distribution provides a
flexible framework for modeling asymmetry and heavy tails, which is useful when
modeling financial returns. An issue with this distributional assumption is the
lack of a closed form for the probability density function. To estimate the
volatility of financial returns in this setting, we develop a novel auxiliary
particle filter. The algorithm we develop can be easily applied to any hidden
Markov model for which the likelihood function is intractable or
computationally expensive. The approximate target distribution of our auxiliary
filter is based on the idea of approximate Bayesian computation (ABC). ABC
methods allow for inference on posterior quantities in situations when the
likelihood of the underlying model is not available in closed form, but
simulating samples from it is possible. The ABC auxiliary particle filter
(ABC-APF) that we propose not only provides a good alternative for state
estimation in stochastic volatility models but also improves on the existing
ABC literature: it allows more flexibility in state estimation while improving
accuracy through better proposal distributions in cases where the optimal
importance density of the filter is unavailable in closed form. We assess the
performance of the ABC-APF on a dataset simulated from the α-stable
stochastic volatility model and compare it to other existing ABC filters.
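The core mechanic can be sketched in a few lines: replace the intractable observation density with a comparison between simulated pseudo-observations and the actual observation. The sketch below uses a hypothetical toy SV model with made-up parameter values (phi, sigma, beta) and the plain 0-1 ABC kernel with the transition as proposal; it is an illustration of the general ABC filtering idea, not the authors' ABC-APF with its improved proposals.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_particle_filter(y, n_particles=500, eps=0.5,
                        phi=0.95, sigma=0.2, beta=1.0):
    """ABC particle filter for a toy SV model:
    x_t = phi * x_{t-1} + sigma * v_t,  y_t = beta * exp(x_t / 2) * e_t.
    The observation density is never evaluated; weights are the ABC
    indicator that a simulated pseudo-observation lands within eps of y_t."""
    T = len(y)
    # initialise particles from the stationary distribution of the AR(1) state
    x = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_particles)
    est = np.empty(T)
    for t in range(T):
        # propagate states through the transition (bootstrap-style proposal)
        x = phi * x + sigma * rng.standard_normal(n_particles)
        # simulate pseudo-observations instead of evaluating the likelihood
        y_sim = beta * np.exp(x / 2) * rng.standard_normal(n_particles)
        w = (np.abs(y_sim - y[t]) < eps).astype(float)
        if w.sum() == 0:          # degenerate step: fall back to uniform weights
            w[:] = 1.0
        w /= w.sum()
        est[t] = np.dot(w, x)     # filtered mean of the log-volatility
        # multinomial resampling
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return est
```

Shrinking eps tightens the approximation to the true filter at the cost of more weight degeneracy, which is exactly the trade-off better proposal distributions are meant to ease.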
Approximate Bayesian computation (ABC) gives exact results under the assumption of model error
Approximate Bayesian computation (ABC) or likelihood-free inference
algorithms are used to find approximations to posterior distributions without
making explicit use of the likelihood function, depending instead on simulation
of sample data sets from the model. In this paper we show that under the
assumption of the existence of a uniform additive model error term, ABC
algorithms give exact results when sufficient summaries are used. This
interpretation allows the approximation made in many previous application
papers to be understood, and should guide the choice of metric and tolerance in
future work. ABC algorithms can be generalized by replacing the 0-1 cut-off
with an acceptance probability that varies with the distance of the simulated
data from the observed data. The acceptance density gives the distribution of
the error term, enabling the uniform error usually used to be replaced by a
general distribution. This generalization can also be applied to approximate
Markov chain Monte Carlo algorithms. In light of this work, ABC algorithms can
be seen as calibration techniques for implicit stochastic models, inferring
parameter values in light of the computer model, data, prior beliefs about the
parameter values, and any measurement or model errors. Comment: 33 pages, 1 figure, to appear in Statistical Applications in Genetics
and Molecular Biology 201
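The generalization described above, replacing the 0-1 cut-off with an acceptance probability that decays with the distance between simulated and observed summaries, can be sketched in a rejection-ABC setting. The normal toy model, N(0, 10) prior, and tolerance below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_sample(y_obs, n_accept=200, eps=0.5, kernel="uniform"):
    """Rejection ABC for the mean of a N(theta, 1) model with a wide
    N(0, 10) prior, using the sample mean as a sufficient summary.
    kernel="uniform" is the usual 0-1 cut-off; kernel="gaussian" accepts
    with probability exp(-d^2 / (2 eps^2)), i.e. the acceptance density
    plays the role of a general (here Gaussian) model-error distribution."""
    s_obs = y_obs.mean()
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, 10.0)                  # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=len(y_obs))
        d = abs(y_sim.mean() - s_obs)                  # distance between summaries
        if kernel == "uniform":
            ok = d < eps                               # 0-1 cut-off
        else:
            # smooth acceptance probability in place of the 0-1 cut-off
            ok = rng.random() < np.exp(-d**2 / (2 * eps**2))
        if ok:
            accepted.append(theta)
    return np.array(accepted)
```

In the paper's reading, the uniform variant samples exactly from the posterior of a model with an additive uniform error on the summaries, and the Gaussian variant from one with Gaussian error; the choice of kernel is a modeling statement, not just a tuning knob.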
Bayesian Symbol Detection in Wireless Relay Networks via Likelihood-Free Inference
This paper presents a general stochastic model developed for a class of
cooperative wireless relay networks, in which imperfect knowledge of the
channel state information at the destination node is assumed. The framework
incorporates multiple relay nodes operating under general known non-linear
processing functions. When a non-linear relay function is considered, the
likelihood function is generally intractable, resulting in maximum likelihood
and maximum a posteriori detectors that do not admit closed-form solutions.
We illustrate our methodology for overcoming this intractability with a
popular choice of optimal non-linear relay function, and
demonstrate how our algorithms are capable of solving the previously
intractable detection problem. Overcoming this intractability involves
development of specialised Bayesian models. We develop three novel algorithms
to perform detection under this Bayesian model: a Markov chain Monte Carlo
Approximate Bayesian Computation (MCMC-ABC) approach, an Auxiliary Variable
MCMC (MCMC-AV) approach, and a Suboptimal Exhaustive Search Zero Forcing
(SES-ZF) approach. Finally, simulated numerical examples compare the symbol
error rate (SER) performance versus signal-to-noise ratio (SNR) of the three
detection algorithms.
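The MCMC-ABC idea applied to discrete symbol detection can be sketched as follows: with a uniform prior over symbols and a symmetric proposal, the Metropolis acceptance ratio reduces to the indicator that a simulated received signal lands within a tolerance of the observation. The tanh "relay" and noise level below are hypothetical stand-ins, not the relay function or channel model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcmc_abc_detect(y_obs, symbols, simulate, n_iter=2000, eps=0.3):
    """MCMC-ABC sketch for detecting a transmitted symbol when the
    receive likelihood is intractable. `simulate(s)` draws a synthetic
    received signal for symbol s; with a uniform prior and a symmetric
    proposal, a move is accepted exactly when the synthetic signal lands
    within eps of the observation (the ABC surrogate for the likelihood
    ratio). Returns how often the chain sat at each symbol."""
    s = symbols[0]
    counts = {sym: 0 for sym in symbols}
    for _ in range(n_iter):
        s_prop = symbols[rng.integers(len(symbols))]   # uniform symbol proposal
        if np.abs(simulate(s_prop) - y_obs) < eps:     # ABC accept/reject
            s = s_prop
        counts[s] += 1
    return counts
```

The visit counts approximate the posterior symbol probabilities, so the MAP decision is simply the most-visited symbol.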
A Model-Based Bayesian Estimation of the Rate of Evolution of VNTR Loci in Mycobacterium tuberculosis
Variable numbers of tandem repeats (VNTR) typing is widely used for studying the bacterial cause of tuberculosis. Knowledge of the rate of mutation of VNTR loci facilitates the study of the evolution and epidemiology of Mycobacterium tuberculosis. Previous studies have applied population genetic models to estimate the mutation rate, leading to estimates varying widely from around to per locus per year. Resolving this issue using more detailed models and statistical methods would lead to improved inference in the molecular epidemiology of tuberculosis. Here, we use a model-based approach that incorporates two alternative forms of a stepwise mutation process for VNTR evolution within an epidemiological model of disease transmission. Using this model in a Bayesian framework, we estimate the mutation rate of VNTR in M. tuberculosis from four published data sets of VNTR profiles from Albania, Iran, Morocco and Venezuela. In the first variant, the mutation rate increases linearly with repeat number (linear model); in the second, the mutation rate is constant across repeat numbers (constant model). We find that under the constant model the mean mutation rate per locus is (95% CI: , ), and under the linear model the mean mutation rate per locus per repeat unit is (95% CI: , ). These new estimates represent a high rate of mutation at VNTR loci compared to previous estimates. To compare the two models we use posterior predictive checks to ascertain which of the two is better able to reproduce the observed data. From this procedure we find that the linear model performs better than the constant model. The general framework we use allows the possibility of extending the analysis to more complex models in the future.
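The two mutation-rate variants can be illustrated with a forward simulation of a single locus. This is a deliberately simplified symmetric stepwise process with hypothetical rate values; the paper's epidemiological transmission model and Bayesian estimation are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_vntr(repeats, years, mu, model="constant"):
    """Forward-simulate one VNTR locus under a stepwise mutation process.
    Under the "constant" model the per-year mutation rate is mu for every
    repeat number; under the "linear" model it is mu * repeats, growing
    with the current repeat count. Each mutation adds or removes one
    repeat with equal probability, with a floor of one repeat."""
    r = repeats
    for _ in range(years):
        rate = mu if model == "constant" else mu * r
        if rng.random() < rate:            # at most one event per year (rate << 1)
            r = max(1, r + rng.choice([-1, 1]))
    return r
```

Embedding a simulator like this inside an ABC or other likelihood-free scheme is what lets the mutation rate be estimated when the likelihood of the full transmission model is unavailable.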
Sequential Monte Carlo with Highly Informative Observations
We propose sequential Monte Carlo (SMC) methods for sampling the posterior
distribution of state-space models under highly informative observation
regimes, a situation in which standard SMC methods can perform poorly. A
special case is simulating bridges between given initial and final values. The
basic idea is to introduce a schedule of intermediate weighting and resampling
times between observation times, which guide particles towards the final state.
This can always be done for continuous-time models, and may be done for
discrete-time models under sparse observation regimes; our main focus is on
continuous-time diffusion processes. The methods are broadly applicable in that
they support multivariate models with partial observation, do not require
simulation of the backward transition (which is often unavailable), and, where
possible, avoid pointwise evaluation of the forward transition. When simulating
bridges, the last cannot be avoided entirely without concessions, and we
suggest an epsilon-ball approach (reminiscent of Approximate Bayesian
Computation) as a workaround. Compared to the bootstrap particle filter, the
new methods deliver substantially reduced mean squared error in normalising
constant estimates, even after accounting for execution time. The methods are
demonstrated for state estimation with two toy examples, and for parameter
estimation (within a particle marginal Metropolis-Hastings sampler) with three
applied examples in econometrics, epidemiology and marine biogeochemistry. Comment: 25 pages, 11 figures
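The idea of intermediate weighting and resampling steps that guide particles toward a fixed endpoint can be sketched for the simplest case, a Brownian bridge. Here the guiding weights use the Gaussian hitting density, which is known in closed form for Brownian motion, and the final step uses the epsilon-ball acceptance mentioned above; the step count, particle number and tolerance are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_bridge(x0, xT, T=1.0, n_steps=20, n_particles=1000, eps=0.05):
    """SMC sketch for simulating a Brownian bridge from x0 to xT.
    A schedule of intermediate weighting/resampling times is inserted
    between the two endpoints: each intermediate weight is the density
    of reaching xT from the current particle in the time remaining,
    which steers particles toward the final state. The last step uses
    an epsilon-ball acceptance (ABC-style) instead of a pointwise
    evaluation of the transition density."""
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for k in range(1, n_steps + 1):
        x = x + np.sqrt(dt) * rng.standard_normal(n_particles)
        t_left = T - k * dt
        if t_left > 0:
            # intermediate weights: Gaussian density of hitting xT from x
            w = np.exp(-(xT - x) ** 2 / (2 * t_left))
        else:
            # final step: epsilon-ball around the target endpoint
            w = (np.abs(x - xT) < eps).astype(float)
        if w.sum() == 0:          # degenerate step: fall back to uniform weights
            w[:] = 1.0
        w /= w.sum()
        x = x[rng.choice(n_particles, n_particles, p=w)]
    return x
```

Without the intermediate steps this collapses to a bootstrap filter that wastes almost all particles on paths that miss the endpoint, which is the failure mode under highly informative observations that motivates the schedule.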