Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The
proposed concept is based on a mathematical model, which indicates that in
teach-and-repeat navigation scenarios, mobile robots do not need to perform
explicit localisation. Instead, a mobile robot that repeats a previously
taught path can simply `replay' the learned velocities, using its camera
information only to correct its heading relative to the intended path. To
support our claim, we establish a position error model of a robot that
traverses a taught path by only correcting its heading. We then outline a
mathematical proof showing that this position error does not diverge over
time. Based on the insights from the model, we present a simple monocular
teach-and-repeat navigation method. The method is computationally efficient,
does not require camera calibration, and can learn and autonomously traverse
arbitrarily-shaped paths. In a series of experiments, we demonstrate that the
method can reliably guide mobile robots in realistic indoor and outdoor
conditions, and can cope with imperfect odometry, landmark deficiency,
illumination variations and naturally-occurring environment changes.
Furthermore, we provide the navigation system and the datasets gathered at
http://www.github.com/gestom/stroll_bearnav.
Comment: The paper will be presented at IROS 2018 in Madrid.
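The repeat phase described above (replay taught velocities, correct heading only) can be sketched as a single control step. This is a minimal illustration, not the released stroll_bearnav code; the function name, the pixel-offset input and the gain `k_heading` are hypothetical.

```python
def repeat_step(recorded_v, recorded_omega, feature_shift_px, k_heading=0.001):
    """One control step of a heading-only teach-and-repeat scheme (sketch).

    recorded_v, recorded_omega: velocities stored during the teach phase.
    feature_shift_px: mean horizontal offset (pixels) between currently
    visible image features and those stored for this point of the path.
    The sign convention of the correction depends on the camera/robot frame;
    here a positive shift simply adds a proportional steering correction.
    """
    v = recorded_v  # forward velocity is replayed unchanged
    omega = recorded_omega + k_heading * feature_shift_px  # heading correction
    return v, omega
```

The key point the abstract makes is that this loop needs no metric localisation and no camera calibration: only a relative pixel offset enters the controller.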
Cooperative localization for mobile agents: a recursive decentralized algorithm based on Kalman filter decoupling
We consider cooperative localization techniques for mobile agents with
communication and computation capabilities. We start by providing an overview
of different decentralization strategies in the literature, with a special
focus on how these algorithms maintain an account of the intrinsic
correlations between the state estimates of team members. Then, we present a novel decentralized
cooperative localization algorithm that is a decentralized implementation of a
centralized Extended Kalman Filter for cooperative localization. In this
algorithm, instead of propagating cross-covariance terms, each agent propagates
new intermediate local variables that can be used in an update stage to create
the required propagated cross-covariance terms. Whenever there is a relative
measurement in the network, the algorithm declares the agent making this
measurement as the interim master. By acquiring information from the interim
landmark, i.e. the agent from which the relative measurement is taken, the
interim master can calculate and broadcast a set of intermediate variables
that each robot can then use to update its estimates to match those of a
centralized Extended Kalman Filter for cooperative localization. Once an
update is done, no further communication is needed until the next relative
measurement.
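The centralized EKF update that the decentralized algorithm reproduces can be sketched for a relative position measurement between two 2D agents in a stacked state. The cross-covariance blocks of P that this update creates and consumes are exactly what the paper's intermediate variables reconstruct without explicit propagation. Function and variable names here are illustrative, not the paper's notation.

```python
import numpy as np

def relative_update(x, P, i, j, z, R):
    """Centralized-EKF update for a relative measurement z = x_j - x_i + noise,
    on a stacked 2D state x of n agents (sketch).

    Agent i plays the role of the interim master (it takes the measurement);
    agent j is the interim landmark. Note how the update fills in the
    off-diagonal (cross-covariance) blocks of P, coupling all agents.
    """
    n = len(x) // 2
    H = np.zeros((2, 2 * n))
    H[:, 2 * i:2 * i + 2] = -np.eye(2)  # measuring agent
    H[:, 2 * j:2 * j + 2] = np.eye(2)   # measured agent
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(2 * n) - K @ H) @ P
    return x_new, P_new
```

Between such updates the agents only propagate local quantities, which is what makes the recursive decentralization possible.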
Recent Developments in Monocular SLAM within the HRI Framework
This chapter describes an approach to improve the feature initialization process in delayed inverse-depth feature initialization for monocular Simultaneous Localisation and Mapping (SLAM), using data provided by a robot’s camera plus an additional monocular sensor worn on the head of the human member of a human-robot collaborative exploratory team. The robot and the human deploy a set of sensors that, once combined, provide the data required to localize the secondary camera worn by the human. The approach and its implementation are described, along with experimental results demonstrating their performance. A discussion of the sensors commonly used in the robotics field, especially in SLAM, provides background on the advantages and capabilities of the system implemented in this research.
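For readers unfamiliar with the inverse-depth parameterization mentioned above, the conversion from an inverse-depth landmark to a Euclidean 3D point can be sketched as follows. The directional convention follows the commonly used one (anchor position, azimuth, elevation, inverse depth); the chapter's actual implementation may differ in axis conventions.

```python
import numpy as np

def inverse_depth_to_point(x0, y0, z0, theta, phi, rho):
    """Convert an inverse-depth landmark to a Euclidean 3D point (sketch).

    (x0, y0, z0): camera position when the feature was first observed.
    theta, phi:   azimuth and elevation of the observation ray.
    rho:          inverse depth along that ray (rho = 1/d).
    """
    m = np.array([np.cos(phi) * np.sin(theta),   # unit bearing vector
                  -np.sin(phi),
                  np.cos(phi) * np.cos(theta)])
    return np.array([x0, y0, z0]) + m / rho
```

The appeal of this representation is that distant features (small rho) remain well-behaved in the filter while depth is still uncertain, which is why the delayed initialization scheme builds on it.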
Real-time simulator of collaborative and autonomous vehicles
In recent decades, the emergence of driver-assistance systems has been driven mainly by the development of various technologies and of the associated mathematical tools. This has profoundly affected transport systems and has given rise to the field of intelligent transportation systems (ITS). We are now witnessing the growth of a market for intelligent vehicles equipped with driver-assistance systems and inter-vehicle communication. Intelligent vehicles and infrastructures will change the way we drive on the roads; they could solve a large share of the problems caused by road traffic, such as accidents, congestion, pollution, etc.
However, the correct operation and reliability of new generations of transport systems require complete mastery of the various processes involved in their design, particularly with regard to embedded systems. Clearly, identifying and correcting faults in embedded systems are two essential tasks, both for safeguarding human life and for preserving the integrity of vehicles and urban infrastructure. To this end, real-time numerical simulation is the most suitable approach for testing and validating driving systems and intelligent vehicles. It offers many advantages that make it indispensable for the design of embedded systems.
Consequently, in this project we present a new real-time simulation platform for intelligent and autonomous vehicles in collaborative driving. The project is based on two main components. The first is the product line of OPAL-RT Technologies, notably the RT-LAB ("Real Time LABoratory") software, the Orchestra application, and the simulation machines dedicated to real-time simulation and parallel computing; the second component is Pro-SiVIC, used to simulate vehicle dynamics, the behaviour of on-board sensors, and the infrastructure. This new platform (Pro-SiVIC/RT-LAB) will make it possible to test embedded systems (sensors, actuators, algorithms) as well as inter-vehicle communication, to identify and correct software problems and errors, and finally to validate embedded systems even before prototyping.
An Equivariant Observer Design for Visual Localisation and Mapping
This paper builds on recent work on Simultaneous Localisation and Mapping
(SLAM) in the non-linear observer community, by framing the visual localisation
and mapping problem as a continuous-time equivariant observer design problem on
the symmetry group of a kinematic system. The state-space is a quotient of the
robot pose expressed on SE(3) and multiple copies of real projective space,
used to represent both points in space and bearings in a single unified
framework. An observer with decoupled Riccati-gains for each landmark is
derived, and we show that its error system is almost globally asymptotically
stable and exponentially stable in-the-large.
Comment: 12 pages, 2 figures, published in the 2019 IEEE CDC.
Resilient Perception for Outdoor Unmanned Ground Vehicles
This thesis promotes the development of resilience for perception systems with a focus on Unmanned Ground Vehicles (UGVs) in adverse environmental conditions. Perception is the interpretation of sensor data to produce a representation of the environment that is necessary for subsequent decision making. Long-term autonomy requires perception systems that correctly function in unusual but realistic conditions that will eventually occur during extended missions. State-of-the-art UGV systems can fail when the sensor data are beyond the operational capacity of the perception models. The key to a resilient perception system lies in the use of multiple sensor modalities and the pre-selection of appropriate sensor data to minimise the chance of failure. This thesis proposes a framework based on diagnostic principles to evaluate and pre-select sensor data prior to interpretation by the perception system. Image-based quality metrics are explored and evaluated experimentally using infrared (IR) and visual cameras onboard a UGV in the presence of smoke and airborne dust. A novel quality metric, Spatial Entropy (SE), is introduced and evaluated. The proposed framework is applied to a state-of-the-art Visual-SLAM algorithm combining visual and IR imaging as a real-world example. An extensive experimental evaluation demonstrates that the framework allows for camera-based localisation that is resilient to a range of low-visibility conditions when compared to other methods that use a single sensor or combine sensor data without selection. The proposed framework allows for resilient localisation in adverse conditions using image data, but it also has significant potential to benefit many other perception applications. Employing multiple sensing modalities along with pre-selection of appropriate data is a powerful method to create resilient perception systems by anticipating and mitigating errors.
The development of such resilient perception systems is a requirement for next-generation outdoor UGVs.
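The data pre-selection idea above can be illustrated with a plain histogram-entropy image quality metric. This is a generic baseline, not the thesis's Spatial Entropy (SE) metric, whose exact definition is given in the thesis itself; the function name and threshold usage are illustrative.

```python
import numpy as np

def histogram_entropy(image, bins=256):
    """Shannon entropy of an image's intensity histogram, in bits (sketch).

    A washed-out frame (e.g. one dominated by dense smoke) concentrates its
    intensities in few histogram bins and scores low; a data-selection front
    end could discard such frames before they reach the SLAM pipeline.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()         # normalise to a probability distribution
    p = p[p > 0]                  # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())
```

In a multi-modal setup, one such score per sensor stream (visual, IR) lets the framework pick whichever modality currently carries the most information.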
Swarm underwater acoustic 3D localization: Kalman vs Monte Carlo
Two three-dimensional localization algorithms for a swarm of underwater vehicles are presented. The first is grounded on an extended Kalman filter (EKF) scheme used to fuse proprioceptive data, such as the vessel's speed, with exteroceptive measurements, such as the time-of-flight (TOF) sonar distances to the companion vessels. The second is a Monte Carlo particle filter localization processing the same sensory data suite. The results of several simulations using the two approaches are presented, with a comparison. The case of a supporting surface vessel is also considered, and an analysis of the robustness of the two approaches against some system parameters is given.
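The particle-filter side of the comparison can be sketched as a single reweighting step for one TOF range measurement. This is a generic particle-filter range update under a Gaussian measurement model, not the paper's specific implementation; all names are illustrative.

```python
import numpy as np

def pf_range_update(particles, weights, beacon, z, sigma):
    """Reweight particles given one range measurement (sketch).

    particles: (N, 3) array of hypothesised vehicle positions.
    beacon:    3D position of the companion vessel the range refers to.
    z, sigma:  measured TOF range and its measurement noise std. dev.
    Returns the normalised posterior weights; the EKF instead linearises
    this same range model around a single Gaussian estimate.
    """
    d = np.linalg.norm(particles - beacon, axis=1)        # predicted ranges
    w = weights * np.exp(-0.5 * ((z - d) / sigma) ** 2)   # Gaussian likelihood
    return w / w.sum()
```

Resampling and motion propagation would complete the filter; the robustness comparison in the paper hinges on how these two representations degrade as noise parameters change.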