The Neural Particle Filter
The robust estimation of dynamically changing features, such as the position
of prey, is one of the hallmarks of perception. On an abstract, algorithmic
level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing
signals based on the history of observations, provides a mathematical framework
for dynamic perception in real time. Since the general, nonlinear filtering
problem is analytically intractable, particle filters are considered among the
most powerful approaches to approximating the solution numerically. Yet, these
algorithms prevalently rely on importance weights, and thus it remains an
unresolved question how the brain could implement such an inference strategy
with a neuronal population. Here, we propose the Neural Particle Filter (NPF),
a weight-less particle filter that can be interpreted as the neuronal dynamics
of a recurrently connected neural network that receives feed-forward input from
sensory neurons and represents the posterior probability distribution in terms
of samples. Specifically, this algorithm bridges the gap between the
computational task of online state estimation and an implementation that allows
networks of neurons in the brain to perform nonlinear Bayesian filtering. The
model captures not only the properties of temporal and multisensory integration
according to Bayesian statistics, but also allows online learning with a
maximum likelihood approach. With an example from multisensory integration, we
demonstrate that the numerical performance of the model is adequate to account
for both filtering and identification problems. Due to the weightless approach,
our algorithm alleviates the 'curse of dimensionality' and thus outperforms
conventional, weighted particle filters in higher dimensions for a limited
number of particles.
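The weight-less update described in this abstract can be illustrated with a minimal sketch. All concrete model choices below (Ornstein-Uhlenbeck dynamics, identity observation, a fixed scalar gain) are illustrative assumptions, not the paper's exact equations: the key idea is that each particle follows the prior dynamics plus a feedback term driven by the prediction error, instead of carrying an importance weight.

```python
import numpy as np

rng = np.random.default_rng(0)

def npf_step(particles, y, f, h, gain, dt, sigma):
    """One Euler step of a weight-less, feedback-driven particle update.

    Each particle is nudged toward the observation y by a gain term
    instead of being reweighted (illustrative sketch, not the exact NPF).
    """
    drift = f(particles) * dt
    feedback = gain * (y - h(particles)) * dt
    noise = sigma * np.sqrt(dt) * rng.standard_normal(particles.shape)
    return particles + drift + feedback + noise

# Toy 1-D example: Ornstein-Uhlenbeck hidden state, direct noisy observation.
f = lambda x: -x          # mean-reverting drift (assumed dynamics)
h = lambda x: x           # identity observation model (assumed)
particles = rng.standard_normal(500)

x_true, dt = 1.5, 0.01
for _ in range(1000):
    x_true += -x_true * dt + 0.1 * np.sqrt(dt) * rng.standard_normal()
    y = x_true + 0.1 * rng.standard_normal()
    particles = npf_step(particles, y, f, h, gain=5.0, dt=dt, sigma=0.1)

print(abs(particles.mean() - x_true))  # empirical posterior mean tracks the state
```

Because no particle is ever down-weighted to irrelevance, no resampling step is needed, which is what lets this style of filter sidestep weight degeneracy in higher dimensions.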
Enhanced particle PHD filtering for multiple human tracking
PhD Thesis
Video-based single human tracking has found wide application but multiple
human tracking is more challenging and enhanced processing techniques are
required to estimate the positions and number of targets in each frame. In
this thesis, the particle probability hypothesis density (PHD) filter is therefore
the focus due to its ability to estimate both localization and cardinality
information related to multiple human targets. To improve the tracking performance
of the particle PHD filter, a number of enhancements are proposed.
The Student's-t distribution is employed within the state and measurement
models of the PHD filter to replace the Gaussian distribution because
of its heavier tails, thereby better predicting particles with larger amplitudes.
Moreover, the variational Bayesian approach is utilized to estimate
the relationship between the measurement noise covariance matrix and the
state model, and a joint multi-dimensional Student's-t distribution is exploited.
In order to obtain more observable measurements, a backward retrodiction
step is employed to increase the measurement set, building upon the
concept of a smoothing algorithm. To make further improvement, an adaptive
step is used to combine the forward filtering and backward retrodiction
filtering operations through the similarities of measurements achieved over
discrete time. As such, the errors in the delayed measurements generated by
false alarms and environment noise are avoided.
In the final work, information describing human behaviour is employed
to aid particle sampling in the prediction step of the particle PHD filter,
which is captured in a social force model. A novel social force model is
proposed based on the exponential function. Furthermore, a Markov Chain
Monte Carlo (MCMC) step is utilized to resample the predicted particles,
and the acceptance ratio is calculated by the results from the social force
model to achieve more robust prediction. Then, a one class support vector
machine (OCSVM) is applied in the measurement model of the PHD filter,
trained on human features, to mitigate noise from the environment and to
achieve better tracking performance.
The proposed improvements of the particle PHD filters are evaluated
on benchmark datasets such as the CAVIAR, PETS2009 and TUD datasets,
assessed with quantitative and global evaluation measures, and compared
with state-of-the-art techniques to confirm the improvement in multiple
human tracking performance.
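The first enhancement above, swapping Gaussian process noise for a heavier-tailed Student's-t, can be sketched as follows. The degrees of freedom and scale are illustrative values, not the thesis's tuned parameters: the point is that heavy tails occasionally produce large particle jumps, which helps cover fast-moving targets that Gaussian noise would miss.

```python
import numpy as np

rng = np.random.default_rng(42)

def predict_particles(particles, df=3.0, scale=0.5):
    """Propagate particles with Student's-t process noise.

    Small df gives heavy tails, so a few particles make large jumps
    (illustrative sketch of the heavy-tailed prediction step).
    """
    noise = scale * rng.standard_t(df, size=particles.shape)
    return particles + noise

particles = np.zeros(10000)
t_pred = predict_particles(particles)                  # heavy-tailed prediction
g_pred = particles + 0.5 * rng.standard_normal(10000)  # Gaussian baseline

# Fraction of particles jumping beyond three scale units under each model:
print((np.abs(t_pred) > 1.5).mean(), (np.abs(g_pred) > 1.5).mean())
```

With three degrees of freedom, roughly 5-6% of the t-distributed particles land beyond three scale units, versus under 0.3% for the Gaussian, which is the extra reach the thesis exploits for targets with larger motion amplitudes.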
Real-time people tracking in a camera network
Visual tracking is a fundamental key to the recognition and analysis of human behaviour.
In this thesis we present an approach to track several subjects using multiple
cameras in real time. The tracking framework employs a numerical Bayesian estimator,
also known as a particle filter, which has been developed for parallel implementation on
a Graphics Processing Unit (GPU). In order to integrate multiple cameras into a single
tracking unit we represent the human body by a parametric ellipsoid in a 3D world.
The elliptical boundary can be projected rapidly, several hundred times per subject per
frame, onto any image for comparison with the image data within a likelihood model.
Adding variables to encode visibility and persistence into the state vector, we tackle the
problems of distraction and short-period occlusion. However, subjects may also disappear
for longer periods due to blind spots between cameras' fields of view. To recognise
a desired subject after such a long period, we add coloured texture to the ellipsoid surface,
which is learnt and retained during the tracking process. This texture signature
improves the recall rate from 60% to 70-80% when compared to state only data association.
Compared to a standard Central Processing Unit (CPU) implementation, there
is a significant speed-up ratio.
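The rapid projection of the elliptical boundary onto each camera image can be sketched as below. The camera pose, intrinsics, and the choice of sampling the ellipsoid's equatorial cross-section are all illustrative assumptions, not the thesis's exact geometry; the sketch only shows the core operation of mapping points on a 3D ellipsoid through a pinhole model to pixel coordinates for likelihood evaluation.

```python
import numpy as np

def project_ellipsoid(center, radii, K, n=64):
    """Project sample points on an axis-aligned ellipsoid's equator
    into an image with pinhole intrinsics K (hypothetical setup:
    camera at the origin looking along +Z, no rotation)."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Points on the horizontal cross-section of the ellipsoid surface.
    pts = np.stack([
        center[0] + radii[0] * np.cos(theta),
        np.full(n, center[1]),
        center[2] + radii[2] * np.sin(theta),
    ])
    uvw = K @ pts                 # homogeneous image coordinates
    return uvw[:2] / uvw[2]       # perspective divide -> pixel coordinates

# Assumed intrinsics: 800 px focal length, 640x480 principal point.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
# Person modelled as an ellipsoid four metres in front of the camera.
uv = project_ellipsoid(center=np.array([0.0, 0.0, 4.0]),
                       radii=np.array([0.3, 0.9, 0.3]), K=K)
print(uv.shape)  # (2, 64) pixel positions of the projected boundary
```

Because this is a handful of matrix multiplies per particle, it is trivially data-parallel, which is what makes the hundreds of projections per subject per frame feasible on a GPU.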
MEG and fMRI Fusion for Non-Linear Estimation of Neural and BOLD Signal Changes
The combined analysis of magnetoencephalography (MEG)/electroencephalography and functional magnetic resonance imaging (fMRI) measurements can lead to improvement in the description of the dynamical and spatial properties of brain activity. In this paper we empirically demonstrate this improvement using simulated and recorded task-related MEG and fMRI activity. Neural activity estimates were derived using a dynamic Bayesian network with continuous real-valued parameters by means of a sequential Monte Carlo technique. In synthetic data, we show that MEG and fMRI fusion improves estimation of the indirectly observed neural activity and smooths tracking of the blood oxygenation level dependent (BOLD) response. In recordings of task-related neural activity the combination of MEG and fMRI produces a result with a greater signal-to-noise ratio, confirming the expectation arising from the nature of the experiment. The highly non-linear model of the BOLD response poses a difficult inference problem for neural activity estimation; computational requirements are also high due to the time and space complexity. We show that joint analysis of the data improves the system's behavior by stabilizing the differential equation system and by requiring fewer computational resources.
A statistical approach to the inverse problem in magnetoencephalography
Magnetoencephalography (MEG) is an imaging technique used to measure the
magnetic field outside the human head produced by the electrical activity
inside the brain. The MEG inverse problem, identifying the location of the
electrical sources from the magnetic signal measurements, is ill-posed, that
is, there are an infinite number of mathematically correct solutions. Common
source localization methods assume the source does not vary with time and do
not provide estimates of the variability of the fitted model. Here, we
reformulate the MEG inverse problem by considering time-varying locations for
the sources and their electrical moments and we model their time evolution
using a state space model. Based on our predictive model, we investigate the
inverse problem by finding the posterior source distribution given the multiple
channels of observations at each time rather than fitting fixed source
parameters. Our new model is more realistic than common models and allows us to
estimate the variation of the sources' strength, orientation and position. We propose
two new Monte Carlo methods based on sequential importance sampling. Unlike the
usual MCMC sampling scheme, our new methods work in this situation without
needing to tune a high-dimensional transition kernel which has a very high
cost. The dimensionality of the unknown parameters is extremely large and the
size of the data is even larger. We use Parallel Virtual Machine (PVM) to speed
up the computation.

Comment: Published at http://dx.doi.org/10.1214/14-AOAS716 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org)
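The sequential importance sampling that this abstract builds on can be sketched generically as below. The random-walk state model, Gaussian likelihood, and effective-sample-size threshold are all illustrative assumptions; the paper's actual proposals for the high-dimensional MEG source problem are more elaborate. The sketch shows the basic propagate-reweight-resample cycle.

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, weights, y, obs_std=0.2, proc_std=0.1):
    """One sequential importance sampling step with resampling
    (generic SIR sketch, not the paper's MEG-specific samplers)."""
    # Propagate through the (here: random-walk) state model.
    particles = particles + proc_std * rng.standard_normal(particles.shape)
    # Reweight by the observation likelihood p(y | x) and normalize.
    weights = weights * np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    ess = 1.0 / np.sum(weights ** 2)
    if ess < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.standard_normal(1000)
weights = np.full(1000, 1.0 / 1000)
for y in [0.2, 0.25, 0.3, 0.35, 0.4]:
    particles, weights = sir_step(particles, weights, y)
print(np.sum(weights * particles))  # weighted posterior mean of the state
```

The cost of this scheme is exactly the weight degeneracy discussed earlier in this listing: in high dimensions most weights collapse to zero, which is why the paper designs its samplers to avoid tuning a high-dimensional transition kernel.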