Computationally highly efficient mixture of adaptive filters
We introduce a new combination approach for mixtures of adaptive filters based on the set-membership filtering (SMF) framework. We perform SMF to combine the outputs of several adaptive algorithms running in parallel, and propose unconstrained, affinely constrained, and convexly constrained combination weight configurations. The resulting combinations achieve a better trade-off between transient and steady-state convergence performance while providing a significant reduction in computation. Hence, the introduced approaches greatly enhance the convergence performance of the constituent filters with only a slight increase in computational load, making them suitable for big data applications where data must be processed in streams by highly efficient algorithms. In numerical examples, we demonstrate the superior performance of the proposed approaches over the state of the art on well-known datasets from the machine learning literature. © 2016, Springer-Verlag London
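As a rough illustration of the idea (not the paper's algorithm), the sketch below convexly combines a fast and a slow NLMS filter, adapting the mixing weight only when the combined error exceeds a set-membership-style bound; the step sizes and the bound gamma are arbitrary choices for the example.

```python
import numpy as np

def nlms_step(w, x, d, mu):
    """One normalized-LMS update of weight vector w on sample (x, d)."""
    e = d - w @ x
    return w + mu * e * x / (x @ x + 1e-12)

def convex_mixture(X, d, mu_fast=0.5, mu_slow=0.01, gamma=0.1):
    """Convex combination y = lam*y_fast + (1-lam)*y_slow of two NLMS
    filters; the mixing parameter lam = sigmoid(a) is adapted only when
    the combined error exceeds gamma (a set-membership-style gate)."""
    n, m = X.shape
    w_fast, w_slow = np.zeros(m), np.zeros(m)
    a, errs = 0.0, []
    for t in range(n):
        x = X[t]
        y_fast, y_slow = w_fast @ x, w_slow @ x
        lam = 1.0 / (1.0 + np.exp(-a))
        e = d[t] - (lam * y_fast + (1 - lam) * y_slow)
        errs.append(e)
        if abs(e) > gamma:  # otherwise skip the combination update entirely
            a += 0.5 * e * (y_fast - y_slow) * lam * (1 - lam)
        w_fast = nlms_step(w_fast, x, d[t], mu_fast)
        w_slow = nlms_step(w_slow, x, d[t], mu_slow)
    return np.array(errs)

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 4))
w_true = np.array([1.0, -0.5, 0.25, 2.0])
d = X @ w_true + 0.01 * rng.standard_normal(2000)
mse_tail = float(np.mean(convex_mixture(X, d)[-200:] ** 2))
```

The gate is what buys the computational saving: on samples where the combined error is already inside the bound, the combination weight is left untouched.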
Particle Efficient Importance Sampling
The efficient importance sampling (EIS) method is a general principle for the
numerical evaluation of high-dimensional integrals that uses the sequential
structure of target integrands to build variance minimising importance
samplers. Despite a number of successful applications in high dimensions, it is
well known that importance sampling strategies are subject to exponential growth in variance as the dimension of the integration problem increases. We solve this
problem by recognising that the EIS framework has an offline sequential Monte
Carlo interpretation. The particle EIS method is based on non-standard
resampling weights that take into account the look-ahead construction of the
importance sampler. We apply the method to a range of univariate and bivariate
stochastic volatility specifications. We also develop a new application of the
EIS approach to state space models with Student's t state innovations. Our
results show that the particle EIS method strongly outperforms both the
standard EIS method and particle filters for likelihood evaluation in high
dimensions. Moreover, the ratio between the variances of the particle EIS and
particle filter methods remains stable as the time series dimension increases.
We illustrate the efficiency of the method for Bayesian inference using the
particle marginal Metropolis-Hastings and importance sampling squared
algorithms.
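As background for the variance issue discussed above, here is a minimal self-normalized importance sampling example (not the EIS or particle EIS method itself): estimating E[x²] under a standard normal target using an overdispersed Gaussian proposal, with the effective sample size as a diagnostic of weight degeneracy.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(x):            # standard normal, up to a constant
    return -0.5 * x ** 2

def log_proposal(x, scale):   # N(0, scale^2), up to the same constant
    return -0.5 * (x / scale) ** 2 - np.log(scale)

scale = 2.0
xs = rng.normal(0.0, scale, size=100_000)
logw = log_target(xs) - log_proposal(xs, scale)
w = np.exp(logw - logw.max())   # stabilize before exponentiating
w /= w.sum()
est = float(np.sum(w * xs ** 2))   # self-normalized estimate of E[x^2] = 1
ess = float(1.0 / np.sum(w ** 2))  # effective sample size diagnostic
```

In one dimension the weights stay well behaved; the exponential variance growth mentioned in the abstract appears when such importance weights are multiplied over many dimensions or time steps.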
Ensemble Transport Adaptive Importance Sampling
Markov chain Monte Carlo methods are a powerful and commonly used family of
numerical methods for sampling from complex probability distributions. As
applications of these methods increase in size and complexity, the need for
efficient methods increases. In this paper, we present a particle ensemble
algorithm. At each iteration, an importance sampling proposal distribution is
formed using an ensemble of particles. A stratified sample is taken from this
distribution and weighted under the posterior; a state-of-the-art ensemble transport resampling method is then used to create an evenly weighted sample
ready for the next iteration. We demonstrate that this ensemble transport
adaptive importance sampling (ETAIS) method outperforms MCMC methods with
equivalent proposal distributions for low dimensional problems, and in fact
shows better than linear improvements in convergence rates with respect to the
number of ensemble members. We also introduce a new resampling strategy,
multinomial transformation (MT), which while not as accurate as the ensemble
transport resampler, is substantially less costly for large ensemble sizes, and
can then be used in conjunction with ETAIS for complex problems. We also focus
on how algorithmic parameters regarding the mixture proposal can be quickly
tuned to optimise performance. In particular, we demonstrate this methodology's
superior sampling for multimodal problems, such as those arising from inference
for mixture models, and for problems with expensive likelihoods requiring the
solution of a differential equation, for which speed-ups of orders of magnitude
are demonstrated. Likelihood evaluations of the ensemble could be computed in a
distributed manner, suggesting that this methodology is a good candidate for
parallel Bayesian computations.
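A toy sketch of the iteration described above, with plain multinomial resampling in place of the ensemble transport resampler, and a Gaussian kernel mixture proposal built on the current ensemble; the bimodal target, kernel width h, and ensemble size are illustrative choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    """Bimodal target: equal mixture of N(-3, 1) and N(3, 1);
    normalizing constants omitted (they cancel in the weights)."""
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

def ais_step(ensemble, h=1.0):
    """One adaptive importance sampling iteration: propose from a Gaussian
    kernel mixture centered on the current ensemble, weight under the
    target, then multinomial-resample to an evenly weighted ensemble."""
    M = len(ensemble)
    centers = ensemble[rng.integers(0, M, size=M)]
    props = centers + h * rng.standard_normal(M)
    # exact log density of the mixture proposal (constants again omitted)
    diffs = props[:, None] - ensemble[None, :]
    logq = np.logaddexp.reduce(-0.5 * (diffs / h) ** 2, axis=1) - np.log(M)
    logw = log_target(props) - logq
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return props[rng.choice(M, size=M, p=w)]  # multinomial resampling

ens = rng.standard_normal(2000)   # initial ensemble between the modes
for _ in range(20):
    ens = ais_step(ens)
frac_right = float(np.mean(ens > 0))    # mass captured by the right mode
mean_abs = float(np.mean(np.abs(ens)))  # should settle near |±3|
```

Because the proposal is a mixture over the whole ensemble, particles near either mode keep both modes covered, which is the property that makes the approach attractive for multimodal targets.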
Solving Inverse Problems with Piecewise Linear Estimators: From Gaussian Mixture Models to Structured Sparsity
A general framework for solving image inverse problems is introduced in this
paper. The approach is based on Gaussian mixture models, estimated via a
computationally efficient MAP-EM algorithm. A dual mathematical interpretation
of the proposed framework with structured sparse estimation is described, which
shows that the resulting piecewise linear estimate stabilizes the estimation
when compared to traditional sparse inverse problem techniques. This
interpretation also suggests an effective dictionary motivated initialization
for the MAP-EM algorithm. We demonstrate that in a number of image inverse
problems, including inpainting, zooming, and deblurring, the same algorithm produces results that equal, often significantly exceed, or fall only marginally short of the best published ones, at a lower computational cost.
Comment: 30 pages
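The piecewise linear character of such estimators can be illustrated on a scalar toy problem (a hand-picked two-component GMM prior, not the paper's learned model): each Gaussian component induces a linear Wiener estimate, and the component with the highest marginal likelihood of the observation is selected.

```python
import numpy as np

# Hand-picked two-component scalar GMM prior and Gaussian noise model.
mus = np.array([-2.0, 2.0])
vars_ = np.array([0.5, 0.5])
pis = np.array([0.5, 0.5])
sigma2 = 0.25                      # observation noise variance, y = x + n

def gmm_map_estimate(y):
    """Piecewise linear estimate: choose the component with the highest
    marginal likelihood of y, then apply that component's linear
    (Wiener) estimator -- one linear map per region of y."""
    s = vars_ + sigma2             # marginal variance of y per component
    loglik = np.log(pis) - 0.5 * np.log(s) - 0.5 * (y - mus) ** 2 / s
    k = int(np.argmax(loglik))
    gain = vars_[k] / (vars_[k] + sigma2)
    return mus[k] + gain * (y - mus[k])

x = 2.0 + np.sqrt(0.5) * 0.3       # a signal value near the right mode
y = x + np.sqrt(sigma2) * 0.1      # a mildly noisy observation of it
xhat = gmm_map_estimate(y)         # shrinks y towards the selected mean
```

The estimate is linear in y within each selection region, which is the stabilizing structure the abstract contrasts with fully nonlinear sparse estimators.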
Quality Adaptive Least Squares Trained Filters for Video Compression Artifacts Removal Using a No-reference Block Visibility Metric
Compression artifacts removal is a challenging problem because videos can be compressed at different qualities. In this paper, a least squares approach that is self-adaptive to the visual quality of the input sequence is proposed. For compression artifacts, the visual quality of an image is measured by a no-reference block visibility metric. According to the blockiness visibility of an input image, an appropriate set of filter coefficients, trained beforehand, is selected for optimally removing coding artifacts and reconstructing object details. The performance of the proposed algorithm is evaluated on a variety of sequences compressed at different qualities, in comparison with several other deblocking techniques. The proposed method significantly outperforms the others, both objectively and subjectively.
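A schematic of the selection step, with a crude stand-in blockiness score and hand-made smoothing kernels in place of the paper's trained filters and no-reference metric:

```python
import numpy as np

def blockiness(img, B=8):
    """Crude no-reference blockiness score: mean absolute jump across
    vertical B-pixel block boundaries, relative to all horizontal jumps."""
    cols = np.arange(B, img.shape[1], B)
    boundary = np.abs(img[:, cols] - img[:, cols - 1]).mean()
    interior = np.abs(np.diff(img, axis=1)).mean() + 1e-12
    return boundary / interior

# Hypothetical pre-trained 1-D kernels, one per quality bin: strong
# smoothing for heavy blockiness, mild smoothing for light blockiness.
KERNELS = {
    "mild":   np.array([0.05, 0.9, 0.05]),
    "strong": np.array([0.25, 0.5, 0.25]),
}

def adaptive_deblock(img):
    """Pick the coefficient set matching the measured quality, then filter."""
    k = KERNELS["strong"] if blockiness(img) > 1.5 else KERNELS["mild"]
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

smooth = np.tile(np.linspace(0, 1, 64), (16, 1))  # artifact-free gradient
blocky = smooth.copy()
blocky[:, ::8] += 0.3                             # artificial boundary jumps
```

In the actual method the kernels are least-squares trained per quality level rather than hand-made, but the metric-then-lookup control flow is the same.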
Single camera pose estimation using Bayesian filtering and Kinect motion priors
Traditional approaches to upper body pose estimation using monocular vision
rely on complex body models and a large variety of geometric constraints. We
argue that this is not ideal and somewhat inelegant as it results in large
processing burdens, and instead attempt to incorporate these constraints
through priors obtained directly from training data. A prior distribution
covering the probability of a human pose occurring is used to incorporate
likely human poses. This distribution is obtained offline, by fitting a
Gaussian mixture model to a large dataset of recorded human body poses, tracked
using a Kinect sensor. We combine this prior information with a random walk
transition model to obtain an upper body model, suitable for use within a
recursive Bayesian filtering framework. Our model can be viewed as a mixture of
discrete Ornstein-Uhlenbeck processes, in that states behave as random walks,
but drift towards a set of typically observed poses. This model is combined
with measurements of the human head and hand positions, using recursive
Bayesian estimation to incorporate temporal information. Measurements are
obtained using face detection and a simple skin colour hand detector, trained
using the detected face. The suggested model is designed with analytical
tractability in mind and we show that the pose tracking can be
Rao-Blackwellised using the mixture Kalman filter, allowing for computational
efficiency while still incorporating bio-mechanical properties of the upper
body. In addition, the use of the proposed upper body model allows reliable
three-dimensional pose estimates to be obtained indirectly for a number of
joints that are often difficult to detect using traditional object recognition
strategies. Comparisons with Kinect sensor results and the state of the art in
2D pose estimation highlight the efficacy of the proposed approach.
Comment: 25 pages, Technical report, related to Burke and Lasenby, AMDO 2014
conference paper. Code sample: https://github.com/mgb45/SignerBodyPose Video:
https://www.youtube.com/watch?v=dJMTSo7-uF
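The "random walk drifting towards typically observed poses" idea can be sketched as follows, with hypothetical pose means and hand-picked drift and noise parameters; the paper uses a full Gaussian mixture prior within a mixture Kalman filter, whereas this sketch drifts towards the nearest mean only.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical "typical pose" means (stand-ins for the learned GMM).
means = np.array([[0.0, 0.0], [5.0, 5.0]])
alpha, noise = 0.1, 0.05      # drift rate and random-walk noise scale

def transition(x):
    """Discrete OU-style transition: a random walk that drifts towards
    the nearest typical pose (the full model mixes over all components)."""
    k = int(np.argmin(np.sum((means - x) ** 2, axis=1)))
    return x + alpha * (means[k] - x) + noise * rng.standard_normal(2)

x = np.array([4.0, 4.5])      # start near the second typical pose
for _ in range(200):
    x = transition(x)
dist = float(np.linalg.norm(x - means[1]))  # settles close to that pose
```

The linear-Gaussian form of each component's transition is what keeps the model analytically tractable and Rao-Blackwellisable.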
Robust Distributed Fusion with Labeled Random Finite Sets
This paper considers the problem of the distributed fusion of multi-object
posteriors in the labeled random finite set filtering framework, using the Generalized Covariance Intersection (GCI) method. Our analysis shows that GCI
fusion with labeled multi-object densities strongly relies on label
consistencies between local multi-object posteriors at different sensor nodes,
and hence suffers severe performance degradation when perfect label consistency is violated. Moreover, we mathematically analyze this phenomenon from the perspective of the Principle of Minimum Discrimination Information and the so-called yes-object probability. Inspired by this analysis, we propose a novel
and general solution for the distributed fusion with labeled multi-object
densities that is robust to label inconsistencies between sensors.
Specifically, the labeled multi-object posteriors are first marginalized to their unlabeled posteriors, which are then fused using the GCI method. We also
introduce a principled method to construct the labeled fused density and
produce tracks formally. Based on the developed theoretical framework, we
present tractable algorithms for the family of generalized labeled
multi-Bernoulli (GLMB) filters, including δ-GLMB, marginalized δ-GLMB, and labeled multi-Bernoulli filters. The robustness and
efficiency of the proposed distributed fusion algorithm are demonstrated in
challenging tracking scenarios via numerical experiments.
Comment: 17 pages, 23 figures
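For Gaussian densities, GCI fusion reduces to covariance intersection: the weighted geometric mean of two Gaussians is again Gaussian, with the information (inverse covariance) matrices combined linearly. A minimal sketch of this special case:

```python
import numpy as np

def gci_fuse_gaussian(m1, P1, m2, P2, w=0.5):
    """Generalized Covariance Intersection of two Gaussian densities:
    the normalized product p1^w * p2^(1-w) is Gaussian, with information
    matrices and information vectors combined linearly."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    I_fused = w * I1 + (1 - w) * I2
    P = np.linalg.inv(I_fused)
    m = P @ (w * I1 @ m1 + (1 - w) * I2 @ m2)
    return m, P

m1, P1 = np.array([1.0, 0.0]), np.eye(2)
m2, P2 = np.array([0.0, 1.0]), 2 * np.eye(2)
m, P = gci_fuse_gaussian(m1, P1, m2, P2, w=0.5)
```

The paper's contribution operates on multi-object densities rather than single Gaussians, but the same geometric-mean fusion rule is the common ingredient.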