Video foreground detection based on symmetric alpha-stable mixture models.
Background subtraction (BS) is an efficient technique for detecting moving objects in video sequences. A simple BS process involves building a model of the background and extracting regions of the foreground (moving objects) under the assumptions that the camera remains stationary and that there is no movement in the background. These assumptions restrict the applicability of BS methods to real-time object detection in video. In this paper, we propose an extended cluster BS technique with a mixture of symmetric alpha-stable (SαS) distributions. An online self-adaptive mechanism is presented that allows automated estimation of the model parameters using the log-moment method. Results over real video sequences from indoor and outdoor environments, with data from static and moving video cameras, are presented. The SαS mixture model is shown to improve detection performance compared with a cluster BS method using a Gaussian mixture model and the method of Li et al. [11].
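The cluster BS idea above can be sketched as a per-pixel online mixture update. The sketch below uses Gaussian components (Stauffer-Grimson style) because SαS densities have no closed form; the function name, learning rate, and thresholds are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def update_pixel_model(x, mu, var, w, lr=0.05, match_thresh=2.5):
    """One online update of a per-pixel mixture background model.

    Gaussian components stand in for the paper's symmetric alpha-stable
    ones (which lack a closed-form density); all thresholds here are
    illustrative. Returns (is_foreground, mu, var, w)."""
    d = np.abs(x - mu) / np.sqrt(var)      # normalised distance to each mode
    k = np.argmin(d)
    if d[k] < match_thresh:                # pixel matches an existing mode
        w = (1 - lr) * w
        w[k] += lr                         # reinforce the matched mode
        mu[k] += lr * (x - mu[k])
        var[k] += lr * ((x - mu[k]) ** 2 - var[k])
        is_fg = bool(w[k] < 0.3)           # weak modes count as foreground
    else:                                  # no match: a new object appeared
        j = np.argmin(w)                   # recycle the weakest mode
        mu[j], var[j], w[j] = x, 100.0, 0.05
        is_fg = True
    return is_fg, mu, var, w / w.sum()
```

Feeding a stable pixel value repeatedly drives one mode's weight up, so that pixel is classified as background, while an outlying value is flagged as foreground.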
Incrementally Learned Mixture Models for GNSS Localization
GNSS localization is an important part of today's autonomous systems,
although it suffers from non-Gaussian errors caused by non-line-of-sight
effects. Recent methods are able to mitigate these effects by including the
corresponding distributions in the sensor fusion algorithm. However, these
approaches require prior knowledge about the sensor's distribution, which is
often not available. We introduce a novel sensor fusion algorithm based on
variational Bayesian inference that is able to approximate the true
distribution with a Gaussian mixture model and to learn its parametrization
online. The proposed Incremental Variational Mixture algorithm automatically
adapts the number of mixture components to the complexity of the measurement's
error distribution. We compare the proposed algorithm against current
state-of-the-art approaches using a collection of open access real world
datasets and demonstrate its superior localization accuracy.
Comment: 8 pages, 5 figures, published in proceedings of IEEE Intelligent Vehicles Symposium (IV) 201
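The component-adaptation idea can be illustrated with scikit-learn's batch variational Gaussian mixture, whose Dirichlet-process prior switches off surplus components. This is only a stand-in for the paper's incremental algorithm, and the simulated pseudorange-error data are an assumption.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Simulated range errors: a dominant Gaussian noise mode plus a smaller,
# NLOS-like positively biased mode (illustrative data, not a GNSS dataset).
errors = np.concatenate([rng.normal(0.0, 1.0, 800),
                         rng.normal(8.0, 2.0, 200)]).reshape(-1, 1)

# A Dirichlet-process weight prior drives unneeded components to near-zero
# weight, approximating the adaptive model complexity described above.
bgm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=0.01,
    max_iter=500, random_state=0).fit(errors)
active = int((bgm.weights_ > 0.05).sum())  # effectively used components
```

With two well-separated error modes in the data, the fit typically retains about two of the ten available components.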
Adaptive Importance Sampling in General Mixture Classes
In this paper, we propose an adaptive algorithm that iteratively updates both
the weights and component parameters of a mixture importance sampling density
so as to optimise the importance sampling performances, as measured by an
entropy criterion. The method is shown to be applicable to a wide class of
importance sampling densities, which includes in particular mixtures of
multivariate Student t distributions. The performances of the proposed scheme
are studied on both artificial and real examples, highlighting in particular
the benefit of a novel Rao-Blackwellisation device which can be easily
incorporated in the updating scheme.
Comment: Removed misleading comment in Section
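A minimal sketch of such a scheme, assuming a toy bimodal target and Gaussian (rather than Student t) components: importance weights under the current mixture proposal drive a weighted, Rao-Blackwellised EM-style update of the mixture weights and component parameters.

```python
import numpy as np

def target_logpdf(x):
    """Toy bimodal target: equal mixture of N(-3, 1) and N(3, 1)."""
    return (np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)
            - 0.5 * np.log(2 * np.pi) - np.log(2))

def adapt_mixture_proposal(n=4000, iters=10, seed=1):
    """Adaptive importance sampling with a 2-component Gaussian mixture
    proposal (a simplified sketch, not the paper's exact update)."""
    rng = np.random.default_rng(seed)
    w = np.array([0.5, 0.5])
    mu = np.array([-1.0, 1.0])
    var = np.array([5.0, 5.0])
    for _ in range(iters):
        comp = rng.choice(2, size=n, p=w)
        x = rng.normal(mu[comp], np.sqrt(var[comp]))
        # density of each sample under every mixture component
        q = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
             / np.sqrt(2 * np.pi * var))
        iw = np.exp(target_logpdf(x)) / q.sum(axis=1)   # importance weights
        iw /= iw.sum()
        # Rao-Blackwellised responsibilities: every component contributes,
        # not just the one each sample happened to be drawn from
        r = q / q.sum(axis=1, keepdims=True)
        a = iw[:, None] * r
        w = a.sum(axis=0)
        mu = (a * x[:, None]).sum(axis=0) / w
        var = (a * (x[:, None] - mu) ** 2).sum(axis=0) / w
        w /= w.sum()
    return w, mu, var
```

After a few iterations the component means migrate to the two target modes at ±3, which is the sense in which the proposal "optimises" itself against the target.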
A self-organising mixture network for density modelling
A completely unsupervised mixture distribution network, namely the self-organising mixture network, is proposed for learning arbitrary density functions. The algorithm minimises the Kullback-Leibler information by means of stochastic approximation methods. The density functions are modelled as mixtures of parametric distributions such as Gaussian and Cauchy. The first layer of the network is similar to Kohonen's self-organising map (SOM), but with the parameters of the class-conditional densities as the learning weights. The winning mechanism is based on maximum posterior probability, and the updating of weights can be limited to a small neighbourhood around the winner. The second layer accumulates the responses of these local nodes, weighted by the learned mixing parameters. The network has a simple structure and low computational cost, yet yields fast and robust convergence. Experimental results are also presented.
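A rough sketch of the winner-plus-neighbourhood update for 1-D Gaussian components; the step-size schedule and neighbourhood width are illustrative assumptions, not the paper's exact rules.

```python
import numpy as np

def somn_fit(x, k=5, epochs=5, seed=0):
    """Self-organising mixture sketch: the winner is the node with maximum
    posterior probability, and only a small lattice neighbourhood around it
    is updated with stochastic-approximation step sizes (illustrative)."""
    rng = np.random.default_rng(seed)
    mu = np.linspace(x.min(), x.max(), k)   # nodes on a 1-D lattice
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    t = 0
    for _ in range(epochs):
        for xi in rng.permutation(x):
            t += 1
            lr = 1.0 / (100 + t)            # decreasing Robbins-Monro step
            # posterior probability of each node given the sample
            post = w * np.exp(-0.5 * (xi - mu) ** 2 / var) / np.sqrt(var)
            post /= post.sum()
            win = int(np.argmax(post))      # winner = maximum posterior
            # update the winner and its immediate lattice neighbours only
            for j in range(max(0, win - 1), min(k, win + 2)):
                mu[j] += lr * (xi - mu[j])
                var[j] += lr * ((xi - mu[j]) ** 2 - var[j])
            w = (1 - lr) * w
            w[win] += lr                    # mixing weights track win counts
    return mu, var, w
```

On two well-separated clusters, the node means spread so that some nodes settle near each cluster, giving a mixture density estimate without any supervision.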
Probabilistic Point Cloud Modeling via Self-Organizing Gaussian Mixture Models
This letter presents a continuous probabilistic modeling methodology for
spatial point cloud data using finite Gaussian Mixture Models (GMMs), where the
number of components is adapted based on the scene complexity. Few
hierarchical and adaptive methods have been proposed to address the challenge
of balancing model fidelity with size. Instead, state-of-the-art mapping
approaches require tuning parameters for specific use cases, but do not
generalize across diverse environments. To address this gap, we utilize a
self-organizing principle from information-theoretic learning to automatically
adapt the complexity of the GMM based on the relevant information in the
sensor data. The approach is evaluated against existing point cloud modeling
techniques on real-world data with varying degrees of scene complexity.
Comment: 8 pages, 6 figures, to appear in IEEE Robotics and Automation Letters
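One common surrogate for this kind of model-size adaptation is choosing the number of GMM components by an information criterion. The sketch below uses scikit-learn and BIC on a synthetic "point cloud"; both the data and the BIC rule are assumptions standing in for the paper's information-theoretic self-organizing principle.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic point cloud: two thin planar patches plus a compact blob.
cloud = np.vstack([
    np.column_stack([rng.uniform(0, 4, 300), rng.uniform(0, 4, 300),
                     rng.normal(0.0, 0.02, 300)]),              # floor patch
    np.column_stack([rng.normal(0.0, 0.02, 300),
                     rng.uniform(0, 4, 300),
                     rng.uniform(0, 4, 300)]),                  # wall patch
    rng.normal([6.0, 6.0, 6.0], 0.3, (200, 3)),                 # object blob
])

# Adapt model size by scoring candidate component counts with BIC, which
# balances model fidelity against model size.
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(cloud).bic(cloud)
        for k in range(1, 9)}
best_k = min(bics, key=bics.get)
```

With three distinct structures in the cloud, the selected component count lands at three or more, growing with scene complexity rather than being hand-tuned.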
Statistical framework for video decoding complexity modeling and prediction
Video decoding complexity modeling and prediction is an increasingly important issue for efficient resource utilization in a variety of applications, including task scheduling, receiver-driven complexity shaping, and adaptive dynamic voltage scaling. In this paper we present a novel view of this problem based on a statistical framework perspective. We explore the statistical structure (clustering) of the execution time required by each video decoder module (entropy decoding, motion compensation, etc.) in conjunction with complexity features that are easily extractable at encoding time (representing the properties of each module's input source data). For this purpose, we employ Gaussian mixture models (GMMs) and an expectation-maximization algorithm to estimate the joint execution-time/feature probability density function (PDF). A training set of typical video sequences is used for this purpose in an offline estimation process. The obtained GMM representation is used in conjunction with the complexity features of new video sequences to predict the execution time required for the decoding of these sequences. Several prediction approaches are discussed and compared. The potential mismatch between the training set and new video content is addressed by adaptive online joint-PDF re-estimation. An experimental comparison is performed to evaluate the different approaches and compare the proposed prediction scheme with related resource prediction schemes from the literature. The usefulness of the proposed complexity-prediction approaches is demonstrated in an application of rate-distortion-complexity optimized decoding.
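Once a joint GMM over (feature, execution time) is estimated, one natural predictor is the Gaussian-mixture-regression conditional mean. The sketch below assumes a single scalar feature and synthetic training data; the linear feature-to-time relationship is an illustrative assumption, not data from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_conditional_mean(gmm, f):
    """E[time | feature = f] under a 2-D joint GMM over (feature, time):
    the standard Gaussian mixture regression formula."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    var_f = cov[:, 0, 0]
    # responsibility of each component for the observed feature value
    r = w * np.exp(-0.5 * (f - mu[:, 0]) ** 2 / var_f) / np.sqrt(var_f)
    r /= r.sum()
    # per-component conditional means, mixed by responsibility
    cond = mu[:, 1] + cov[:, 0, 1] / var_f * (f - mu[:, 0])
    return float((r * cond).sum())

# Offline training on synthetic (feature, decoding-time) pairs where time
# grows roughly linearly with the feature plus noise (illustrative data).
rng = np.random.default_rng(3)
feat = rng.uniform(0, 10, 1000)
time_ms = 2.0 * feat + 5.0 + rng.normal(0.0, 1.0, 1000)
gmm = GaussianMixture(n_components=3, random_state=0).fit(
    np.column_stack([feat, time_ms]))
```

For a new sequence, evaluating `gmm_conditional_mean` on its extracted feature yields the predicted decoding time, which a scheduler or voltage-scaling policy can then consume.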
Free Energy Methods for Bayesian Inference: Efficient Exploration of Univariate Gaussian Mixture Posteriors
Because of their multimodality, mixture posterior distributions are difficult
to sample with standard Markov chain Monte Carlo (MCMC) methods. We propose a
strategy to enhance the sampling of MCMC in this context, using a biasing
procedure which originates from computational Statistical Physics. The
principle is first to choose a "reaction coordinate", that is, a "direction" in
which the target distribution is multimodal. In a second step, the marginal
log-density of the reaction coordinate with respect to the posterior
distribution is estimated; minus this quantity is called "free energy" in the
computational Statistical Physics literature. To this end, we use adaptive
biasing Markov chain algorithms which adapt their targeted invariant
distribution on the fly, in order to overcome sampling barriers along the
chosen reaction coordinate. Finally, we perform an importance sampling step in
order to remove the bias and recover the true posterior. The efficiency factor
of the importance sampling step can easily be estimated a priori once
the bias is known, and appears to be rather large for the test cases we
considered. A crucial point is the choice of the reaction coordinate. One
standard choice (used for example in the classical Wang-Landau algorithm) is
minus the log-posterior density. We discuss other choices. We show in
particular that the hyper-parameter that determines the order of magnitude of
the variance of each component is both a convenient and an efficient reaction
coordinate. We also show how to adapt the method to compute the evidence
(marginal likelihood) of a mixture model. We illustrate our approach by
analyzing two real data sets.
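The adaptive-biasing idea can be sketched with a minimal Wang-Landau-style sampler on a toy bimodal density, taking the state x itself as the reaction coordinate; the bin width, proposal step, and update schedule are illustrative assumptions, not the paper's full scheme.

```python
import numpy as np

def wang_landau(logpi, bins, n_steps=100_000, step=0.5, seed=4):
    """Wang-Landau-style adaptive biasing along a 1-D reaction coordinate
    (here the state x itself): the accumulated bias estimates the free
    energy of each bin, flattening the barrier between the modes."""
    rng = np.random.default_rng(seed)
    nb = len(bins) - 1
    bias = np.zeros(nb)                  # running free-energy estimate
    visits = np.zeros(nb)
    gamma = 1.0                          # bias increment, slowly decreased

    def bin_of(y):
        return int(np.clip(np.searchsorted(bins, y) - 1, 0, nb - 1))

    x = bins[0] + 0.1
    for _ in range(n_steps):
        y = x + step * rng.normal()
        # Metropolis step on the biased target pi(x) * exp(-bias(bin(x)))
        if np.log(rng.uniform()) < (logpi(y) - bias[bin_of(y)]) - (
                logpi(x) - bias[bin_of(x)]):
            x = y
        k = bin_of(x)
        bias[k] += gamma                 # penalise the currently occupied bin
        visits[k] += 1
        gamma = max(gamma * 0.99995, 1e-3)
    return bias, visits
```

The chain starts deep in one mode but, because the bias keeps rising wherever it lingers, it is pushed over the barrier and visits both modes; reweighting samples from the biased chain by exp(bias(x)) then removes the bias, mirroring the importance sampling step described above.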