Robust Global Localization Using Clustered Particle Filtering
Global mobile robot localization is the problem of determining a robot's pose
in an environment, using sensor data, when the starting position is unknown. A
family of probabilistic algorithms known as Monte Carlo Localization (MCL) is
currently among the most popular methods for solving this problem. MCL
algorithms represent a robot's belief by a set of weighted samples, which
approximate the posterior probability of where the robot is located by using a
Bayesian formulation of the localization problem. This article presents an
extension to the MCL algorithm, which addresses its problems when localizing in
highly symmetrical environments, a situation where MCL is often unable to
correctly track equally probable poses for the robot. The problem arises from
the fact that sample sets in MCL often become impoverished, when samples are
generated according to their posterior likelihood. Our approach incorporates
the idea of clusters of samples and modifies the proposal distribution
considering the probability mass of those clusters. Experimental results are
presented that show that this new extension to the MCL algorithm successfully
localizes in symmetric environments where ordinary MCL often fails. Comment: 6 pages. Proceedings of AAAI-2002 (in press).
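The cluster-based proposal described above can be sketched as follows. The 2-D pose representation, the greedy clustering radius, and the per-cluster sample allocation rule are illustrative assumptions, not the paper's exact algorithm; the point is that each cluster of samples keeps a share of the particle set proportional to its probability mass, so equally probable symmetric hypotheses all survive resampling.

```python
import random

def cluster_particles(particles, radius=1.0):
    """Greedily group (x, y, weight) samples by position (illustrative)."""
    clusters = []
    for p in particles:
        for c in clusters:
            cx, cy = c["centroid"]
            if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2:
                c["members"].append(p)
                break
        else:
            clusters.append({"centroid": (p[0], p[1]), "members": [p]})
    return clusters

def clustered_resample(particles, n, radius=1.0, seed=0):
    """Resample so each cluster keeps samples in proportion to its
    probability mass, preventing one symmetric hypothesis from
    starving the others."""
    rng = random.Random(seed)
    clusters = cluster_particles(particles, radius)
    total = sum(w for _, _, w in particles)
    out = []
    for c in clusters:
        mass = sum(w for _, _, w in c["members"])
        k = max(1, round(n * mass / total))
        out.extend(rng.choices(c["members"],
                               weights=[w for _, _, w in c["members"]], k=k))
    return out
```

With two equally weighted clusters, plain weight-proportional resampling can let one die out by chance; the per-cluster allocation above cannot.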
Unsupervised State-Space Modeling Using Reproducing Kernels
This is the accepted manuscript. The final version is available at http://dx.doi.org/10.1109/TSP.2015.2448527. A novel framework for the design of state-space models (SSMs) is proposed whereby the state-transition function of the model is parametrised using reproducing kernels. The
nature of SSMs requires learning a latent function that resides
in the state space and for which input-output sample pairs are not
available, thus prohibiting the use of gradient-based supervised
kernel learning. To this end, we propose to learn the mixing
weights of the kernel estimate by sampling from their posterior
density using Monte Carlo methods. We first introduce an offline
version of the proposed algorithm, followed by an online version
which performs inference on both the parameters and the hidden
state through particle filtering. The accuracy of the estimation
of the state-transition function is first validated on synthetic
data. Next, we show that the proposed algorithm outperforms
kernel adaptive filters in the prediction of real-world time series,
while also providing probabilistic estimates, a key advantage over
standard methods. Felipe Tobar acknowledges financial support from EPSRC grant number EP/L000776/1.
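The core idea above, writing the state-transition function as a kernel expansion and sampling the mixing weights with Monte Carlo, can be sketched as below. The Gaussian kernel, the flat prior, and the random-walk Metropolis sampler are illustrative stand-ins for the paper's scheme, and the latent trajectory is assumed known here for simplicity (the paper's harder, unsupervised setting infers it jointly).

```python
import math
import random

def gauss_kernel(x, c, ell=1.0):
    return math.exp(-((x - c) ** 2) / (2 * ell ** 2))

def f_hat(x, centres, alpha):
    """State-transition function parametrised as a kernel expansion."""
    return sum(a * gauss_kernel(x, c) for a, c in zip(alpha, centres))

def log_lik(alpha, centres, states, sigma=0.1):
    """Log-likelihood of a trajectory under x_{t+1} = f_hat(x_t) + noise."""
    return sum(-0.5 * ((x1 - f_hat(x0, centres, alpha)) / sigma) ** 2
               for x0, x1 in zip(states, states[1:]))

def sample_weights(centres, states, n_iter=2000, step=0.05, seed=0):
    """Random-walk Metropolis over the mixing weights' posterior
    (flat prior assumed; a simple stand-in for the paper's MC scheme)."""
    rng = random.Random(seed)
    alpha = [0.0] * len(centres)
    ll = log_lik(alpha, centres, states)
    for _ in range(n_iter):
        prop = [a + rng.gauss(0.0, step) for a in alpha]
        ll_p = log_lik(prop, centres, states)
        if ll_p >= ll or rng.random() < math.exp(ll_p - ll):
            alpha, ll = prop, ll_p
    return alpha
```

Because no input-output pairs of the latent function are observed, only such likelihood-driven sampling (not gradient-based supervised kernel learning) is available, which is the motivation stated in the abstract.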
Interacting multiple-models, state augmented Particle Filtering for fault diagnostics
Particle Filtering (PF) is a model-based filtering technique which has drawn the attention of the Prognostics and Health Management (PHM) community due to its applicability to nonlinear models with non-additive and non-Gaussian noise. When multiple physical models can describe the evolution of the degradation of a component, the PF approach can be based on Multiple Swarms (MS) of particles, each one evolving according to a different model, from which to select the most accurate a posteriori distribution. However, MS approaches are highly computationally demanding due to the large number of particles to simulate. In this work, to tackle this problem we have developed a PF approach based on the introduction of an augmented discrete state identifying the physical model that describes the component evolution, which allows detecting the occurrence of abnormal conditions and identifying the degradation mechanism causing them. A crack growth degradation problem has been considered to prove the effectiveness of the proposed method in the detection of crack initiation and the identification of the occurring degradation mechanism. The comparison of the obtained results with those of an MS method from the literature and of an empirical statistical test has shown that the proposed method provides both an early detection of crack initiation and an accurate and early identification of the degradation mechanism. A reduction of the computational cost is also achieved.
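The augmented-state idea can be sketched as a single particle swarm whose particles carry a discrete model index alongside the continuous crack length. The three crack-growth models, noise levels, and switching probability below are illustrative assumptions, not the paper's physical models.

```python
import math
import random

# Candidate degradation models (illustrative crack-growth increments).
MODELS = {0: lambda x: x,          # nominal: no growth
          1: lambda x: x + 0.2,    # mechanism A
          2: lambda x: x + 0.5}    # mechanism B

def augmented_pf(observations, n=500, switch_p=0.05, sigma=0.3, seed=0):
    """Single-swarm particle filter whose state is augmented with a
    discrete model index, so all candidate models are tracked at once
    instead of running one swarm per model."""
    rng = random.Random(seed)
    parts = [(0.0, 0) for _ in range(n)]            # (crack length, model id)
    for z in observations:
        prop = []
        for x, m in parts:
            if rng.random() < switch_p:             # discrete-state transition
                m = rng.choice(list(MODELS))
            prop.append((MODELS[m](x) + rng.gauss(0.0, 0.05), m))
        w = [math.exp(-0.5 * ((z - x) / sigma) ** 2) for x, _ in prop]
        parts = rng.choices(prop, weights=w, k=n)   # resample
    counts = {m: 0 for m in MODELS}
    for _, m in parts:
        counts[m] += 1
    return max(counts, key=counts.get)              # most probable mechanism
```

The posterior over the discrete index is exactly what detects abnormal conditions (index leaves the nominal model) and identifies the mechanism, with one swarm's worth of particles rather than one swarm per model.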
Reliable Monte Carlo Localization for Mobile Robots
Reliability is a key factor for realizing safety guarantee of full autonomous
robot systems. In this paper, we focus on reliability in mobile robot
localization. Monte Carlo localization (MCL) is widely used for mobile robot
localization. However, it is still difficult to guarantee its safety because
there is no method for determining the reliability of the MCL estimate. This paper
presents a novel localization framework that enables robust localization,
reliability estimation, and quick re-localization, simultaneously. The
presented method can be implemented in an estimation manner similar to that of
MCL. The method can increase localization robustness to environment changes by
estimating known and unknown obstacles while performing localization; however,
localization failures can still occur due to unanticipated errors. The method also
includes a reliability estimation function that enables us to know whether
localization has failed. Additionally, the method can seamlessly integrate a
global localization method via importance sampling. Consequently, quick
re-localization from failures can be realized while mitigating the noisy influence
of global localization. Through three types of experiments, we show that
reliable MCL that performs robust localization, self-failure detection, and
quick failure recovery can be realized.
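A minimal sketch of the reliability-plus-recovery loop is given below for a 1-D corridor. The reliability score (mean observation likelihood), the failure threshold, and the rule of re-seeding half the particle set are illustrative choices, not the paper's estimator; the structure, weight, check reliability, inject global samples via importance sampling when the estimate looks failed, follows the abstract.

```python
import math
import random

def mcl_step(particles, z, world_size=10.0, sigma=0.5, thresh=0.1, seed=0):
    """One MCL update on a 1-D corridor: weight the samples, estimate
    reliability as the mean observation likelihood, and inject globally
    drawn samples when the estimate looks failed."""
    rng = random.Random(seed)
    def lik(x):
        return math.exp(-0.5 * ((z - x) / sigma) ** 2)
    w = [lik(x) for x in particles]
    rel = sum(w) / len(w)                     # reliability estimate
    if rel < thresh:                          # likely localization failure
        particles = particles[:]
        for i in range(len(particles) // 2):  # re-seed half the set globally
            particles[i] = rng.uniform(0.0, world_size)
        w = [lik(x) for x in particles]       # re-weight (importance sampling)
    return rng.choices(particles, weights=w, k=len(particles)), rel
```

Because the injected samples are immediately re-weighted by the observation model, their noisy influence is damped: only globally drawn samples consistent with the current measurement survive the resampling step.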
Generalised particle filters
The ability to analyse, interpret and make inferences about evolving dynamical
systems is of great importance in different areas of the world we live in today.
Various examples include the control of engineering systems, data assimilation in
meteorology, volatility estimation in financial markets, computer vision and vehicle
tracking. In general, the dynamical systems are not directly observable; quite often
only partial information, degraded by the presence of noise, is available.
This naturally leads us to the area of stochastic filtering, which is defined as the
estimation of dynamical systems whose trajectory is modelled by a stochastic process
called the signal, given the information accumulated from its partial observation.
A massive scientific and computational effort is dedicated to the development of
various tools for approximating the solution of the filtering problem. Classical PDE
methods can be successful, particularly if the state space has low dimensions (one to
three). In higher dimensions (up to ten), a class of numerical methods called particle
filters has proved the most successful to date. These methods produce
approximations of the posterior distribution of the current state of the signal by
using the empirical distribution of a cloud of particles that explore the signal’s state
space.
In this thesis, we discuss a more general class of numerical methods which involve
generalised particles, that is, particles that evolve through spaces larger than the
signal’s state space. Such generalised particles include Gaussian mixtures, wavelets,
orthonormal polynomials, and finite elements in addition to the classical particle
methods. This thesis contains a rigorous analysis of the approximation of the solution
of the filtering problem using Gaussian mixtures. In particular we deduce
the L2-convergence rate and obtain the central limit theorem for the approximating
system. Finally, the filtering model associated to the Navier-Stokes equation will be
discussed as an example.
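The generalised-particle idea, components richer than point masses, can be illustrated with a Gaussian mixture approximation of the filtering posterior. Each "particle" below is a (mean, variance, weight) triple updated exactly through a 1-D linear signal model; the drift and noise levels are illustrative assumptions, not the thesis's filtering model.

```python
import math

def gm_filter_step(mixture, z, drift=1.0, q=0.1, r=0.5):
    """One predict/update step where the posterior is carried by a
    Gaussian mixture: each 'generalised particle' is a (mean, variance,
    weight) component rather than a point mass."""
    updated = []
    for mu, var, w in mixture:
        mu_p, var_p = mu + drift, var + q      # predict through the dynamics
        s = var_p + r                          # innovation variance
        k = var_p / s                          # per-component Kalman gain
        lik = math.exp(-0.5 * (z - mu_p) ** 2 / s) / math.sqrt(2 * math.pi * s)
        updated.append((mu_p + k * (z - mu_p), (1 - k) * var_p, w * lik))
    total = sum(w for _, _, w in updated)      # renormalise mixture weights
    return [(m, v, w / total) for m, v, w in updated]
```

Unlike a cloud of point particles, each component carries local covariance information, which is what makes the L2-convergence and CLT analysis for the mixture approximation a distinct question from the classical particle case.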
A Combined Stochastic and Greedy Hybrid Estimation Capability for Concurrent Hybrid Models with Autonomous Mode Transitions
Robotic and embedded systems have become increasingly pervasive in applications ranging from space probes and life support systems to robot assistants. In order to act robustly in the physical world, robotic systems must be able to detect changes in operational mode, such as faults, whose symptoms manifest themselves only in the continuous state. In such systems, the state is observed indirectly, and must therefore be estimated in a robust, memory-efficient manner from noisy observations. Probabilistic hybrid discrete/continuous models, such as Concurrent Probabilistic Hybrid Automata (CPHA), are convenient modeling tools for such systems. In CPHA, the hidden state is represented with discrete and continuous state variables that evolve probabilistically. In this paper, we present a novel method for estimating the hybrid state of CPHA that achieves robustness by balancing greedy and stochastic search. The key insight is that stochastic and greedy search methods, taken together, are often particularly effective in practice. To accomplish this, we first develop an efficient stochastic sampling approach for CPHA based on Rao-Blackwellised Particle Filtering. We then propose a strategy for mixing stochastic and greedy search. The resulting method is able to handle three particularly challenging aspects of real-world systems, namely that they 1) exhibit autonomous mode transitions, 2) consist of a large collection of concurrently operating components, and 3) are non-linear. Autonomous mode transitions, that is, discrete transitions that depend on the continuous state, are particularly challenging to address, since they couple the discrete and continuous state evolution tightly.
In this paper we extend the class of autonomous mode transitions that can be handled to arbitrary piecewise polynomial transition distributions. We perform an empirical comparison of the greedy and stochastic approaches to hybrid estimation, and then demonstrate the robustness of the mixed method incorporated with our HME (Hybrid Mode Estimation) capability. We show that this robustness comes at only a small performance penalty.
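The Rao-Blackwellised sampling step can be sketched as follows: only the discrete mode is sampled per particle, and the continuous state is then updated exactly with a per-particle Kalman filter. The two linear modes, noise levels, and the fixed switching probability are illustrative assumptions; the paper's autonomous transitions instead depend on the continuous state itself.

```python
import math
import random

# mode -> (a, b) in x' = a*x + b   (two illustrative linear behaviours)
DYN = {0: (1.0, 0.0), 1: (1.0, 0.5)}

def rbpf_step(particles, z, q=0.01, r=0.1, trans_p=0.05, seed=0):
    """Rao-Blackwellised step: sample the discrete mode, then condition
    on it and update the continuous state exactly with a Kalman filter,
    so each particle is a (mode, mean, variance) triple."""
    rng = random.Random(seed)
    prop, weights = [], []
    for mode, mu, var in particles:
        if rng.random() < trans_p:              # mode transition
            mode = 1 - mode
        a, b = DYN[mode]
        mu_p, var_p = a * mu + b, a * a * var + q
        s = var_p + r                           # innovation variance
        k = var_p / s
        weights.append(math.exp(-0.5 * (z - mu_p) ** 2 / s)
                       / math.sqrt(2 * math.pi * s))
        prop.append((mode, mu_p + k * (z - mu_p), (1 - k) * var_p))
    return rng.choices(prop, weights=weights, k=len(particles))
```

Marginalising the continuous state analytically is what makes this memory-efficient: the particles only explore the discrete mode space, which matters when many concurrent components multiply the number of modes.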
Planning in constraint space for multi-body manipulation tasks
Robots are inherently limited by physical constraints on their link lengths, motor torques, battery
power and structural rigidity. To thrive in circumstances that push these limits, such as in search
and rescue scenarios, intelligent agents can use the available objects in their environment as
tools. Reasoning about arbitrary objects and how they can be placed together to create useful
structures such as ramps, bridges or simple machines is critical to push beyond one's physical
limitations. Unfortunately, the solution space is combinatorial in the number of available
objects, and the configuration space of the chosen objects and of the robot that uses the
structure is high-dimensional.
To address these challenges, we propose using constraint satisfaction as a means to test the
feasibility of candidate structures and adopt search algorithms in the classical planning literature
to find sufficient designs. The key idea is that the interactions between the components of a
structure can be encoded as equality and inequality constraints on the configuration spaces of the
respective objects. Furthermore, constraints that are induced by a broadly defined action, such as
placing an object on another, can be grouped together using logical representations such as Planning
Domain Definition Language (PDDL). Then, a classical planning search algorithm can reason about
which set of constraints to impose on the available objects, iteratively creating a structure that
satisfies the task goals and the robot constraints. To demonstrate the effectiveness of this
framework, we present both simulation and real robot results with static structures such as ramps,
bridges and stairs, and quasi-static structures such as lever-fulcrum simple machines. Ph.D. thesis.
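A toy analogue of "constraint satisfaction as a feasibility test inside a combinatorial search" is sketched below: candidate structures are stacks of objects, the equality constraint is reaching a target height, and the inequality constraints require each object to be no wider than the one beneath it. The brute-force enumeration stands in for the thesis's PDDL-guided classical planning search; all object parameters are made up for illustration.

```python
from itertools import combinations, permutations

def feasible(stack, target, tol=1e-9):
    """Constraint test for a candidate structure: stacked heights must
    equal the target (equality constraint) and each object must be no
    wider than the one beneath it (inequality constraints)."""
    heights = [h for h, _ in stack]
    widths = [w for _, w in stack]
    if abs(sum(heights) - target) > tol:
        return False
    return all(widths[i] >= widths[i + 1] for i in range(len(widths) - 1))

def plan_stack(objects, target):
    """Enumerate candidate structures and return the first satisfying
    all constraints -- a brute-force stand-in for search over sets of
    action-induced constraints."""
    for r in range(1, len(objects) + 1):
        for subset in combinations(objects, r):
            for order in permutations(subset):
                if feasible(order, target):
                    return list(order)
    return None        # no feasible structure from the available objects
```

The separation mirrors the framework: the planner chooses which constraints to impose (which objects, in which arrangement), while a constraint checker decides whether the resulting structure is physically realisable.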
A Genetic Optimization Resampling Based Particle Filtering Algorithm for Indoor Target Tracking
In indoor target tracking based on wireless sensor networks, the particle filtering algorithm has been widely used because of its outstanding performance in coping with highly non-linear problems. Resampling is generally required to address the inherent particle degeneracy problem in the particle filter. However, traditional resampling methods cause the problem of particle impoverishment. This problem degrades positioning accuracy and robustness and sometimes may even result in filtering divergence and tracking failure. In order to mitigate particle impoverishment and improve positioning accuracy, this paper proposes an improved genetic optimization based resampling method. This resampling method optimizes the distribution of the resampled particles using five operators, i.e., selection, roughening, classification, crossover, and mutation. The proposed resampling method is then integrated into the particle filtering framework to form a genetic optimization resampling based particle filtering (GORPF) algorithm. The performance of the GORPF algorithm is tested by a one-dimensional tracking simulation and a three-dimensional indoor tracking experiment. Both test results show that, with the aid of the proposed resampling method, the GORPF has better robustness against particle impoverishment and achieves better positioning accuracy than several existing target tracking algorithms. Moreover, the GORPF algorithm has an affordable computational load for real-time applications.
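The genetic operators can be sketched for 1-D states as below. This is a reduced illustration of the GORPF idea, not its exact five-operator pipeline (it omits the classification step, for instance, and the mutation scale is an arbitrary assumption): parents are selected in proportion to weight, blended by arithmetic crossover, and perturbed by Gaussian mutation so the offspring stay diverse instead of collapsing onto a few copies.

```python
import random

def genetic_resample(particles, weights, n, sigma=0.05, seed=0):
    """Resampling with genetic operators: weight-proportional selection
    of two parents, arithmetic crossover, and Gaussian mutation
    ('roughening') to fight particle impoverishment."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a, b = rng.choices(particles, weights=weights, k=2)  # selection
        u = rng.random()
        child = u * a + (1.0 - u) * b                        # crossover
        child += rng.gauss(0.0, sigma)                       # mutation
        out.append(child)
    return out
```

Where traditional multinomial resampling duplicates high-weight particles verbatim, every offspring here is distinct, which is precisely the impoverishment mitigation the abstract describes.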