Consensus Labeled Random Finite Set Filtering for Distributed Multi-Object Tracking
This paper addresses distributed multi-object tracking over a network of
heterogeneous and geographically dispersed nodes with sensing, communication
and processing capabilities. The main contribution is an approach to
distributed multi-object estimation based on labeled Random Finite Sets (RFSs)
and dynamic Bayesian inference, which enables the development of two novel
consensus tracking filters, namely the Consensus Marginalized δ-Generalized
Labeled Multi-Bernoulli and the Consensus Labeled Multi-Bernoulli tracking
filters. The proposed algorithms provide fully distributed, scalable and
computationally efficient solutions for multi-object tracking. Simulation
experiments with Gaussian mixture implementations confirm the effectiveness of
the proposed approach in challenging scenarios.
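The consensus idea underlying these filters can be illustrated with a minimal scalar sketch (not the authors' labeled-RFS algorithm): each node repeatedly averages with its neighbours until the whole network agrees on a common estimate. The ring topology, step size and initial values below are hypothetical.

```python
def consensus_step(values, neighbors, eps=0.3):
    """One synchronous consensus iteration: each node moves toward
    the average of its neighbors' current values."""
    new = []
    for i, v in enumerate(values):
        new.append(v + eps * sum(values[j] - v for j in neighbors[i]))
    return new

# Hypothetical 4-node ring network; each node holds a local scalar estimate.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
values = [1.0, 3.0, 5.0, 7.0]  # local estimates; the global average is 4.0

for _ in range(50):
    values = consensus_step(values, neighbors)

print(values)  # every entry is now very close to 4.0
```

No node ever sees all measurements, yet all converge to the network-wide average; the paper applies the same principle to full multi-object densities rather than scalars.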
Gaussian Process Prior Variational Autoencoders
Variational autoencoders (VAE) are a powerful and widely-used class of models
to learn complex data distributions in an unsupervised fashion. One important
limitation of VAEs is the prior assumption that latent sample representations
are independent and identically distributed. However, for many important
datasets, such as time-series of images, this assumption is too strong:
accounting for covariances between samples, such as those in time, can yield a
more appropriate model specification and improve performance in downstream
tasks. In this work, we introduce a new model, the Gaussian Process (GP) Prior
Variational Autoencoder (GPPVAE), to specifically address this issue. The
GPPVAE aims to combine the power of VAEs with the ability to model correlations
afforded by GP priors. To achieve efficient inference in this new class of
models, we leverage structure in the covariance matrix, and introduce a new
stochastic backpropagation strategy that allows for computing stochastic
gradients in a distributed and low-memory fashion. We show that our method
outperforms conditional VAEs (CVAEs) and an adaptation of standard VAEs in two
image data applications.
Comment: Accepted at the 32nd Conference on Neural Information Processing
Systems (NIPS 2018), Montréal, Canada
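The core modelling idea, replacing the i.i.d. standard-normal prior with a GP prior over sample (e.g. time) indices, can be sketched in plain Python. The squared-exponential kernel, lengthscale and time stamps below are illustrative assumptions, not the paper's exact setup (which additionally exploits covariance structure for scalable, low-memory inference):

```python
import math
import random

def rbf_kernel(times, lengthscale=2.0, var=1.0, jitter=1e-6):
    """Squared-exponential covariance over sample time stamps (assumed kernel)."""
    n = len(times)
    K = [[var * math.exp(-0.5 * ((times[i] - times[j]) / lengthscale) ** 2)
          for j in range(n)] for i in range(n)]
    for i in range(n):
        K[i][i] += jitter  # small jitter keeps the matrix positive definite
    return K

def cholesky(K):
    """Plain lower-triangular Cholesky factorization, K = L @ L.T."""
    n = len(K)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(K[i][i] - s)
            else:
                L[i][j] = (K[i][j] - s) / L[j][j]
    return L

random.seed(0)
times = list(range(8))                     # e.g. frame indices of an image series
K = rbf_kernel(times)
L = cholesky(K)
z_iid = [random.gauss(0, 1) for _ in times]   # a standard-VAE prior draw
# Correlate the draw through the GP covariance: z_gp ~ N(0, K).
z_gp = [sum(L[i][k] * z_iid[k] for k in range(i + 1)) for i in range(len(times))]
# Nearby time steps now receive correlated latent values, unlike z_iid.
```

In a GPPVAE-style model this correlated prior replaces the factorized one, so temporally adjacent samples share latent structure.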
Trajectory PHD and CPHD filters
This paper presents the probability hypothesis density filter (PHD) and the
cardinality PHD (CPHD) filter for sets of trajectories, which are referred to
as the trajectory PHD (TPHD) and trajectory CPHD (TCPHD) filters. Contrary to
the PHD/CPHD filters, the TPHD/TCPHD filters are able to produce trajectory
estimates from first principles. The TPHD filter is derived by recursively
obtaining the best Poisson multitrajectory density approximation to the
posterior density over the alive trajectories by minimising the
Kullback-Leibler divergence. The TCPHD filter is derived in the same way but
propagating an independent identically distributed (IID) cluster
multitrajectory density approximation. We also propose the Gaussian mixture
implementations of the TPHD and TCPHD recursions, the Gaussian mixture TPHD
(GMTPHD) and the Gaussian mixture TCPHD (GMTCPHD), and the L-scan
computationally efficient implementations, which only update the density of the
trajectory states of the last L time steps.
Comment: MATLAB implementations are provided here:
https://github.com/Agarciafernandez/MT
Bayesian parameter estimation of core collapse supernovae using gravitational wave simulations
Using the latest numerical simulations of rotating stellar core collapse, we
present a Bayesian framework to extract the physical information encoded in
noisy gravitational wave signals. We fit Bayesian principal component
regression models with known and unknown signal arrival times to reconstruct
gravitational wave signals, and subsequently fit known astrophysical parameters
on the posterior means of the principal component coefficients using a linear
model. We predict the ratio of rotational kinetic energy to gravitational
energy of the inner core at bounce by sampling from the posterior predictive
distribution, and find that these predictions are generally very close to the
true parameter values, with credible intervals of differing widths for the
known and unknown arrival time models, respectively. Two
supervised machine learning methods are implemented to classify precollapse
differential rotation, and we find that these methods discriminate rapidly
rotating progenitors particularly well. We also introduce a constrained
optimization approach to model selection to find an optimal number of principal
components in the signal reconstruction step. Using this approach, we select 14
principal components as the most parsimonious model.
Distributed Fusion with Multi-Bernoulli Filter based on Generalized Covariance Intersection
In this paper, we propose a distributed multi-object tracking algorithm
through the use of multi-Bernoulli (MB) filter based on generalized Covariance
Intersection (G-CI). Our analyses show that the G-CI fusion with two MB
posterior distributions does not admit an accurate closed-form expression. To
solve this challenging problem, we firstly approximate the fused posterior as
the unlabeled version of the δ-generalized labeled multi-Bernoulli
(δ-GLMB) distribution, referred to as the generalized multi-Bernoulli (GMB)
distribution. Then, to allow the subsequent fusion with another multi-Bernoulli
posterior distribution, e.g., fusion with a third sensor node in the sensor
network, or fusion in the feedback working mode, we further approximate the
fused GMB posterior distribution as an MB distribution which matches its
first-order statistical moment. The proposed fusion algorithm is implemented
using the sequential Monte Carlo technique and its performance is highlighted
by numerical results.
Comment: 14 pages, 13 figures, under review for IEEE Trans. on Signal
Processing, Vol. 65, Issue 1, Jan. 1, 2017
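For Gaussian densities the G-CI (geometric-mean) fusion rule does admit a closed form, which is precisely what makes the multi-Bernoulli case, where no such closed form exists, the hard part the paper addresses. A minimal scalar sketch of the Gaussian case:

```python
def gci_fuse_gaussian(m1, v1, m2, v2, w=0.5):
    """G-CI fusion of two scalar Gaussian densities N(m1, v1) and N(m2, v2):
    the normalized weighted geometric mean of the densities, which for
    Gaussians amounts to a precision-weighted combination."""
    prec = w / v1 + (1 - w) / v2                      # fused precision
    mean = (w * m1 / v1 + (1 - w) * m2 / v2) / prec   # fused mean
    return mean, 1.0 / prec

m, v = gci_fuse_gaussian(0.0, 1.0, 2.0, 1.0, w=0.5)
print(m, v)  # -> 1.0 1.0
```

Note the conservative behaviour: fusing two equal-variance Gaussians with w = 0.5 does not shrink the variance, which is why G-CI is robust to unknown correlation between nodes.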
Scalable Bayesian Inference for Excitatory Point Process Networks
Networks capture our intuition about relationships in the world. They
describe the friendships between Facebook users, interactions in financial
markets, and synapses connecting neurons in the brain. These networks are
richly structured with cliques of friends, sectors of stocks, and a smorgasbord
of cell types that govern how neurons connect. Some networks, like social
network friendships, can be directly observed, but in many cases we only have
an indirect view of the network through the actions of its constituents and an
understanding of how the network mediates that activity. In this work, we focus
on the problem of latent network discovery in the case where the observable
activity takes the form of a mutually-excitatory point process known as a
Hawkes process. We build on previous work that has taken a Bayesian approach to
this problem, specifying prior distributions over the latent network structure
and a likelihood of observed activity given this network. We extend this work
by proposing a discrete-time formulation and developing a computationally
efficient stochastic variational inference (SVI) algorithm that allows us to
scale the approach to long sequences of observations. We demonstrate our
algorithm on the calcium imaging data used in the ChaLearn neural connectomics
challenge.
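A discrete-time self-exciting process of the kind targeted by the inference algorithm can be simulated in a few lines. The Bernoulli-thinning construction and the parameter values below are illustrative assumptions, not the paper's generative model:

```python
import random

def simulate_discrete_hawkes(T, mu, weight, decay, seed=0):
    """Simulate a discrete-time self-exciting (Hawkes-like) binary event
    sequence: the event probability at step t is a baseline mu plus an
    exponentially decaying contribution from all past events."""
    random.seed(seed)
    events = []
    excitation = 0.0
    for _ in range(T):
        rate = mu + excitation
        n = 1 if random.random() < min(rate, 1.0) else 0
        events.append(n)
        # each event injects `weight` of excitation, which then decays
        excitation = decay * (excitation + weight * n)
    return events

# Subcritical setting: each event triggers weight*decay/(1-decay) = 0.3
# expected offspring, so the process does not explode.
ev = simulate_discrete_hawkes(T=1000, mu=0.05, weight=0.3, decay=0.5)
print(sum(ev))
```

In the latent-network setting, a separate excitation weight per node pair plays the role of the (unobserved) adjacency matrix that the SVI algorithm infers.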
Open TURNS: An industrial software for uncertainty quantification in simulation
The needs to assess robust performances for complex systems and to answer
tighter regulatory processes (security, safety, environmental control, and
health impacts, etc.) have led to the emergence of a new industrial simulation
challenge: to take uncertainties into account when dealing with complex
numerical simulation frameworks. Therefore, a generic methodology has emerged
from the joint effort of several industrial companies and academic
institutions. EDF R&D, Airbus Group and Phimeca Engineering started a
collaboration at the beginning of 2005, joined by IMACS in 2014, for the
development of an Open Source software platform dedicated to uncertainty
propagation by probabilistic methods, named OpenTURNS for Open source Treatment
of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial
challenges attached to uncertainties, which are transparency, genericity,
modularity and multi-accessibility. This paper focuses on OpenTURNS and
presents its main features: OpenTURNS is open-source software under the LGPL
license that presents itself as a C++ library and a Python TUI, and which
works under Linux and Windows environments. All the methodological tools are
described in the different sections of this paper: uncertainty quantification,
uncertainty propagation, sensitivity analysis and metamodeling. A section also
explains the generic wrapper mechanism used to link OpenTURNS to any external
code. The
paper illustrates as much as possible the methodological tools on an
educational example that simulates the height of a river and compares it to the
height of a dyke that protects industrial facilities. Finally, it gives an
overview of the main developments planned for the next few years.
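OpenTURNS itself exposes these methods through its Python TUI; the sketch below instead uses only the standard library, to illustrate the uncertainty-propagation step on the paper's educational flood example. The hydraulic formula follows the classic flooding benchmark, but the input distributions and the dyke level are assumed values, not the paper's:

```python
import math
import random

random.seed(42)

def river_height(Q, Ks, Zm, Zv, B=300.0, L=5000.0):
    """Simplified hydraulic model of the flooding benchmark: water depth
    from flow rate Q, Strickler coefficient Ks and the upstream/downstream
    river-bed levels Zm / Zv, for a channel of width B and length L."""
    slope = (Zm - Zv) / L
    return (Q / (Ks * B * math.sqrt(slope))) ** 0.6

Z_DYKE = 55.5   # hypothetical dyke crest level (metres)
N = 20000
overflows = 0
for _ in range(N):
    Q = random.gammavariate(2.0, 500.0)       # assumed flow-rate distribution
    Ks = max(random.gauss(30.0, 7.5), 1.0)    # guard against nonphysical draws
    Zv = random.gauss(50.0, 1.0)
    Zm = max(random.gauss(55.0, 1.0), Zv + 0.1)  # keep a positive slope
    if Zv + river_height(Q, Ks, Zm, Zv) > Z_DYKE:
        overflows += 1

p_overflow = overflows / N
print(p_overflow)  # Monte Carlo estimate of the overflow probability
```

Propagating input uncertainty through the simulator by plain Monte Carlo is the simplest of the propagation methods the platform offers; OpenTURNS also provides variance-reduction and reliability algorithms for rare overflow events.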
Extrapolating Expected Accuracies for Large Multi-Class Problems
The difficulty of multi-class classification generally increases with the
number of classes. Using data from a subset of the classes, can we predict how
well a classifier will scale with an increased number of classes? Under the
assumptions that the classes are sampled identically and independently from a
population, and that the classifier is based on independently learned scoring
functions, we show that the expected accuracy when the classifier is trained on
k classes is the (k-1)st moment of a certain distribution that can be estimated
from data. We present an unbiased estimation method based on the theory, and
demonstrate its application on a facial recognition example.
Comment: Submitted to JMLR
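The key identity, that the expected k-class accuracy equals the (k-1)st moment of F(S), where S is the true class's score and F the CDF of the other classes' scores, can be checked on synthetic scores. The Gaussian score model below is a hypothetical example, not the paper's facial recognition data:

```python
import bisect
import random

random.seed(1)

# Hypothetical per-class scoring model: the true class scores ~ N(1, 1),
# every other class scores ~ N(0, 1), all independently.
genuine = [random.gauss(1.0, 1.0) for _ in range(5000)]
impostor = sorted(random.gauss(0.0, 1.0) for _ in range(5000))

def F(x):
    """Empirical CDF of the impostor score distribution."""
    return bisect.bisect_right(impostor, x) / len(impostor)

def expected_accuracy(k):
    """Expected top-1 accuracy with k classes, estimated as the (k-1)st
    moment of F(S): the chance the genuine score beats k-1 impostors."""
    return sum(F(s) ** (k - 1) for s in genuine) / len(genuine)

for k in (2, 10, 100):
    print(k, round(expected_accuracy(k), 3))
```

The point of the paper is the direction of use: estimate F and the genuine-score distribution from a small subset of classes, then extrapolate the moment to a much larger k without ever training on it.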
Advanced statistical methods for eye movement analysis and modeling: a gentle introduction
In this Chapter we show that by considering eye movements, and in particular,
the resulting sequence of gaze shifts, a stochastic process, a wide variety of
tools become available for analyses and modelling beyond conventional
statistical methods. Such tools encompass random walk analyses and more complex
techniques borrowed from the pattern recognition and machine learning fields.
After a brief, though critical, probabilistic tour of current computational
models of eye movements and visual attention, we lay down the basis for gaze
shift pattern analysis. To this end, the concepts of Markov Processes, the
Wiener process and related random walks within the Gaussian framework of the
Central Limit Theorem will be introduced. Then, we will deliberately violate
fundamental assumptions of the Central Limit Theorem to elicit a larger
perspective, rooted in statistical physics, for analysing and modelling eye
movements in terms of anomalous, non-Gaussian, random walks and modern foraging
theory.
Eventually, by resorting to machine learning techniques, we discuss how the
analyses of movement patterns can develop into the inference of hidden patterns
of the mind: inferring the observer's task, assessing cognitive impairments,
classifying expertise.
Comment: Draft of a chapter to appear in "An introduction to the scientific
foundations of eye movement research and its applications".
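The contrast drawn here, between CLT-governed Gaussian random walks and anomalous heavy-tailed ones, can be made concrete with a small simulation. The Pareto step-length distribution and its parameters below are illustrative choices, not a fitted gaze model:

```python
import math
import random

random.seed(0)

def walk(n, heavy_tailed=False):
    """Final displacement of a 2-D random walk with either Gaussian step
    lengths (CLT regime) or Pareto step lengths (anomalous, Levy-flight-like
    regime), with uniformly random step directions."""
    x = y = 0.0
    for _ in range(n):
        step = random.paretovariate(1.5) if heavy_tailed else abs(random.gauss(0, 1))
        angle = random.uniform(0, 2 * math.pi)
        x += step * math.cos(angle)
        y += step * math.sin(angle)
    return math.hypot(x, y)

gauss_d = sorted(walk(500) for _ in range(200))
levy_d = sorted(walk(500, heavy_tailed=True) for _ in range(200))
# The heavy-tailed walk produces occasional very large displacements driven
# by single long jumps, while the Gaussian walk stays comparatively compact.
print(gauss_d[-1], levy_d[-1])
```

Fitting which of the two regimes a recorded gaze-shift sequence belongs to is exactly the kind of analysis the chapter builds toward.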
On the Inability of Markov Models to Capture Criticality in Human Mobility
We examine the non-Markovian nature of human mobility by exposing the
inability of Markov models to capture criticality in human mobility. In
particular, the assumed Markovian nature of mobility was used to establish a
theoretical upper bound on the predictability of human mobility (expressed as a
minimum error probability limit), based on temporally correlated entropy. Since
its inception, this bound has been widely used and empirically validated using
Markov chains. We show that recurrent-neural architectures can achieve
significantly higher predictability, surpassing this widely used upper bound.
In order to explain this anomaly, we shed light on several underlying
assumptions in previous research works that have resulted in this bias. By
evaluating the mobility predictability on real-world datasets, we show that
human mobility exhibits scale-invariant long-range correlations, bearing
similarity to a power-law decay. This is in contrast to the initial assumption
that human mobility follows an exponential decay. This assumption of
exponential decay coupled with Lempel-Ziv compression in computing Fano's
inequality has led to an inaccurate estimation of the predictability upper
bound. We show that this approach inflates the entropy, consequently lowering
the upper bound on human mobility predictability. We finally highlight that
this approach tends to overlook long-range correlations in human mobility. This
explains why recurrent-neural architectures that are designed to handle
long-range structural correlations surpass the previously computed upper bound
on mobility predictability.
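The predictability upper bound discussed here comes from inverting Fano's inequality: given an entropy estimate S over N distinct locations, solve H(p) + (1 - p) log2(N - 1) = S for p. A bisection sketch (the entropy value and N below are hypothetical; the paper's point is that a biased S biases this p):

```python
import math

def fano_pi_max(S, N):
    """Solve H(p) + (1 - p) * log2(N - 1) = S for the predictability upper
    bound p (Fano's inequality), by bisection on p in (1/N, 1), where the
    left-hand side is monotonically decreasing."""
    def H(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def f(p):
        return H(p) + (1 - p) * math.log2(N - 1) - S

    lo, hi = 1.0 / N, 1.0 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# e.g. an entropy estimate of 2 bits over N = 50 distinct locations:
print(round(fano_pi_max(2.0, 50), 3))
```

An inflated entropy estimate S (the artifact of exponential-decay assumptions plus Lempel-Ziv compression criticized above) pushes this bound down, which is how recurrent models can legitimately exceed the "theoretical" limit.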