28 research outputs found

    Simultaneous Parameter Calibration, Localization, and Mapping

    Get PDF
    The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on changes in the environment or on the load of the robot. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the parameters of the platform. The proposed approach estimates the parameters online and is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real-world data using different types of robotic platforms. (C) 2012 Taylor & Francis and The Robotics Society of Japan.
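    For context, here is a minimal sketch (not the authors' implementation) of why such parameters matter: in a differential-drive model the pose update depends directly on the per-wheel scale factors and the wheel baseline, so any error in these values biases every odometry step, which is what motivates re-estimating them online alongside the map and sensor pose. The parameter names k_l, k_r, and b below are illustrative assumptions.

        import numpy as np

        def diff_drive_odometry(ticks_l, ticks_r, pose, params):
            """Integrate one encoder step of a differential-drive robot.

            params = (k_l, k_r, b): metres per encoder tick for the left and
            right wheel, plus the wheel baseline; these are the kind of
            kinematic parameters that drift with load or terrain.
            """
            k_l, k_r, b = params
            d_l, d_r = k_l * ticks_l, k_r * ticks_r   # wheel arc lengths
            d = 0.5 * (d_l + d_r)                     # forward displacement
            d_theta = (d_r - d_l) / b                 # heading change
            x, y, theta = pose
            return np.array([x + d * np.cos(theta + 0.5 * d_theta),
                             y + d * np.sin(theta + 0.5 * d_theta),
                             theta + d_theta])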

    Simultaneous Localization and Odometry Calibration for Mobile Robot

    Get PDF

    Estimating the Odometry Error of a Mobile Robot during Navigation

    Get PDF
    This paper addresses the problem of estimating the odometry error during robot navigation. The robot is equipped with an external sensor (e.g., a laser range finder). For the systematic error, an augmented Kalman Filter is introduced. This filter estimates a state vector containing the robot configuration and the parameters characterizing the systematic component of the odometry error. It uses encoder readings as inputs and the readings from the external sensor as observations. The estimation of the non-systematic component is carried out through another Kalman Filter, whose observations are obtained from two subsequent robot configurations provided by the previous augmented Kalman Filter. Both synchronous-drive and differential-drive systems are considered.
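    A minimal sketch of the augmented-filter idea, assuming a differential-drive model and treating the external sensor as providing a direct pose observation; the state layout [x, y, theta, k_l, k_r, b], the numerical Jacobian, and the omitted angle wrapping are illustrative simplifications rather than the paper's exact formulation.

        import numpy as np

        def numerical_jacobian(f, x, eps=1e-6):
            """Finite-difference Jacobian of f at x (illustrative helper)."""
            fx = f(x)
            J = np.zeros((len(fx), len(x)))
            for i in range(len(x)):
                dx = np.zeros_like(x)
                dx[i] = eps
                J[:, i] = (f(x + dx) - fx) / eps
            return J

        class AugmentedOdometryEKF:
            """State: robot pose (x, y, theta) augmented with the systematic
            odometry-error parameters (k_l, k_r, b)."""

            def __init__(self, state, cov):
                self.x = np.asarray(state, dtype=float)
                self.P = np.asarray(cov, dtype=float)

            def predict(self, ticks_l, ticks_r, Q):
                """Propagate the state with encoder readings (process noise Q)."""
                def f(s):
                    x, y, th, k_l, k_r, b = s
                    d_l, d_r = k_l * ticks_l, k_r * ticks_r
                    d, dth = 0.5 * (d_l + d_r), (d_r - d_l) / b
                    return np.array([x + d * np.cos(th + 0.5 * dth),
                                     y + d * np.sin(th + 0.5 * dth),
                                     th + dth, k_l, k_r, b])
                F = numerical_jacobian(f, self.x)
                self.x = f(self.x)
                self.P = F @ self.P @ F.T + Q

            def update(self, z, R):
                """Correct with a pose (x, y, theta) from the external sensor
                (measurement noise R); angle wrapping omitted for brevity."""
                H = np.hstack([np.eye(3), np.zeros((3, 3))])
                innovation = z - H @ self.x
                S = H @ self.P @ H.T + R
                K = self.P @ H.T @ np.linalg.inv(S)
                self.x = self.x + K @ innovation
                self.P = (np.eye(len(self.x)) - K @ H) @ self.P

    Because the encoder-driven prediction depends on (k_l, k_r, b), repeated corrections from the external sensor gradually pull these parameters toward values consistent with the observed motion.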

    Robot Egomotion from the Deformation of Active Contours

    Get PDF
    Traditional sources of information for image-based computer vision algorithms have been points, lines, corners, and, more recently, SIFT features (Lowe, 2004), which currently seem to represent the state of the art in feature definition. Alternatively, the present work explores the possibility of using tracked contours as informative features, especially in applications no…

    The Khepera IV Mobile Robot: Performance Evaluation, Sensory Data and Software Toolbox

    Get PDF
    Taking distributed robotic system research from simulation to the real world often requires the use of small robots that can be deployed and managed in large numbers. This has led to the development of a multitude of these devices, deployed in the thousands by researchers worldwide. This paper looks at the Khepera IV mobile robot, the latest iteration of the Khepera series. This full-featured differential wheeled robot provides a broad set of sensors in a small, extensible body, making it easy to test new algorithms in compact indoor arenas. We describe the robot and conduct an independent performance evaluation, providing results for all sensors. We also introduce the Khepera IV Toolbox, an open source framework meant to ease application development. In doing so, we hope to help potential users assess the suitability of the Khepera IV for their envisioned applications and reduce the overhead in getting started using the robot.

    DEUX: Active Exploration for Learning Unsupervised Depth Perception

    Full text link
    Depth perception models are typically trained on non-interactive datasets with predefined camera trajectories. However, this often introduces systematic biases into the learning process correlated with the specific camera paths chosen during data acquisition. In this paper, we investigate how data collection affects learning depth completion, from a robot navigation perspective, by leveraging 3D interactive environments. First, we evaluate four depth completion models trained on data collected using conventional navigation techniques. Our key insight is that existing exploration paradigms do not necessarily provide task-specific data points to achieve competent unsupervised depth completion learning. We then find that data collected with respect to photometric reconstruction has a direct positive influence on model performance. As a result, we develop an active, task-informed, depth uncertainty-based motion planning approach for learning depth completion, which we call DEpth Uncertainty-guided eXploration (DEUX). Training with data collected by our approach improves depth completion by an average of more than 18% across four depth completion models compared to existing exploration methods on the MP3D test set. We show that our approach further improves zero-shot generalization, while offering new insights into integrating robot learning-based depth estimation.
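    As a rough sketch of the uncertainty-guided exploration idea (not the DEUX implementation itself): at each step the robot could render candidate views, query the depth-completion model for a predictive uncertainty map, and move toward the viewpoint it is least certain about. The helpers render_view and model.predict_with_uncertainty are hypothetical interfaces assumed for illustration.

        import numpy as np

        def choose_next_waypoint(model, candidate_poses, render_view):
            """Pick the candidate pose whose rendered view yields the highest
            mean predictive depth uncertainty (illustrative sketch only).

            render_view(pose) -> (rgb, sparse_depth) and
            model.predict_with_uncertainty(rgb, sparse_depth) -> (depth, sigma)
            are assumed, hypothetical interfaces."""
            best_pose, best_score = None, -np.inf
            for pose in candidate_poses:
                rgb, sparse_depth = render_view(pose)
                _, sigma = model.predict_with_uncertainty(rgb, sparse_depth)
                score = float(np.mean(sigma))   # mean per-pixel uncertainty
                if score > best_score:
                    best_pose, best_score = pose, score
            return best_pose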

    LiDAR-Based Place Recognition For Autonomous Driving: A Survey

    Full text link
    LiDAR-based place recognition (LPR) plays a pivotal role in autonomous driving, assisting Simultaneous Localization and Mapping (SLAM) systems in reducing accumulated errors and achieving reliable localization. However, existing reviews predominantly concentrate on visual place recognition (VPR) methods. Despite the recent remarkable progress in LPR, to the best of our knowledge, there is no dedicated systematic review in this area. This paper bridges the gap by providing a comprehensive review of place recognition methods employing LiDAR sensors, thus facilitating and encouraging further research. We commence by delving into the problem formulation of place recognition, exploring existing challenges, and describing relations to previous surveys. Subsequently, we conduct an in-depth review of related research, which offers detailed classifications, strengths and weaknesses, and architectures. Finally, we summarize existing datasets, commonly used evaluation metrics, and comprehensive evaluation results from various methods on public datasets. This paper can serve as a valuable tutorial for newcomers entering the field of place recognition and for researchers interested in long-term robot localization. We pledge to maintain an up-to-date project on our website: https://github.com/ShiPC-AI/LPR-Survey (26 pages, 13 figures, 5 tables).
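    To make the problem formulation concrete, a deliberately simple retrieval sketch (not drawn from the survey): each scan is reduced to a global descriptor, and a query is matched to the closest descriptor among previously visited places. Real LPR methods, such as Scan Context or learned descriptors, are far more discriminative; the descriptor below is a toy assumption.

        import numpy as np

        def range_histogram_descriptor(points, bins=40, max_range=80.0):
            """Toy global descriptor for a LiDAR scan (Nx3 array): a normalized
            histogram of point ranges. Purely illustrative."""
            ranges = np.linalg.norm(points[:, :3], axis=1)
            hist, _ = np.histogram(ranges, bins=bins, range=(0.0, max_range))
            return hist / max(hist.sum(), 1)

        def recognize_place(query_desc, map_descs):
            """Return the index of the most similar stored place and its distance."""
            dists = np.linalg.norm(np.asarray(map_descs) - query_desc, axis=1)
            return int(np.argmin(dists)), float(dists.min())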

    Self-Calibration of Multi-Camera Systems for Vehicle Surround Sensing

    Get PDF
    Multi-camera systems are being deployed in a variety of vehicles and mobile robots today. To eliminate the need for cost- and labor-intensive maintenance and calibration, continuous self-calibration is highly desirable. In this book we present such an approach for self-calibration of multi-camera systems for vehicle surround sensing. In an extensive evaluation we assess our algorithm quantitatively using real-world data.

    Visual Odometry and Traversability Analysis for Wheeled Robots in Complex Environments

    Get PDF
    The application of wheeled mobile robots (WMRs) is currently expanding from rather controlled industrial or domestic scenarios into more complex urban or outdoor environments, allowing a variety of new use cases. One of these new use cases is described in this thesis: an intelligent personal mobility assistant based on an electrical rollator. Such a system comes with several requirements: it must be safe, robust, lightweight, and inexpensive, and it should be able to navigate in real time in order to allow direct physical interaction with the user. As these properties are desirable for most WMRs, all methods proposed in this thesis can also be used with other WMR platforms. First, a visual odometry method is presented which is tailored to work with a downward-facing RGB-D camera. It projects the environment onto a ground-plane image and uses an efficient image alignment method to estimate the vehicle motion from consecutive images. As the method is designed for use on a WMR, further constraints can be employed to improve the accuracy of the visual odometry. For a non-holonomic WMR with a known vehicle model, either differential drive, skid steering, or Ackermann steering, the motion parameters of the corresponding kinematic model, instead of generic motion parameters, can be estimated directly from the image data. This significantly improves the accuracy and robustness of the method. Additionally, an outlier rejection scheme is presented that operates in model space, i.e. the motion parameters of the kinematic model, instead of data space, i.e. image pixels. Furthermore, the projection of the environment onto the ground plane can also be used to create an elevation map of the environment. It is investigated whether this map, in conjunction with a detailed vehicle model, can be used to estimate future vehicle poses. By using a common image-based representation of the environment and the vehicle, a very efficient and still highly accurate pose estimation method is proposed. Since the traversability of an area can be determined from the vehicle poses and potential collisions, the pose estimation method is employed to create a novel real-time path planning method. The detailed vehicle model is extended to also represent the vehicle's chassis for collision detection. Guided by an A*-like planner, a search graph is constructed by propagating the vehicle to possible future poses using its kinematic model and calculating a traversability score for each of these poses. The final system performs safe and robust real-time navigation even in challenging indoor and outdoor environments.
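    A rough sketch of the model-constrained alignment idea, assuming grayscale ground-plane images with a known scale px_per_m (pixels per metre) and a differential-drive model: instead of solving for a generic 2D transform, the photometric error is minimized directly over the kinematic parameters (v, omega). The coordinate conventions and the Nelder-Mead search are simplifications for illustration, not the thesis implementation.

        import numpy as np
        from scipy.ndimage import map_coordinates
        from scipy.optimize import minimize

        def motion_to_se2(v, omega, dt):
            """Differential-drive motion (v, omega) over dt as a planar rigid
            transform, assuming constant inputs."""
            theta = omega * dt
            if abs(omega) < 1e-9:
                return theta, v * dt, 0.0
            return theta, (v / omega) * np.sin(theta), (v / omega) * (1.0 - np.cos(theta))

        def photometric_cost(params, img_prev, img_cur, px_per_m, dt):
            """Mean absolute intensity difference between the current image and
            the previous image warped by the candidate motion."""
            theta, tx, ty = motion_to_se2(params[0], params[1], dt)
            h, w = img_cur.shape
            ys, xs = np.mgrid[0:h, 0:w].astype(float)
            # pixel positions in metres, relative to the image centre
            x_m = (xs - w / 2.0) / px_per_m
            y_m = (ys - h / 2.0) / px_per_m
            # inverse warp: where was each current pixel in the previous frame?
            c, s = np.cos(theta), np.sin(theta)
            xp = c * (x_m - tx) + s * (y_m - ty)
            yp = -s * (x_m - tx) + c * (y_m - ty)
            rows = yp * px_per_m + h / 2.0
            cols = xp * px_per_m + w / 2.0
            warped = map_coordinates(img_prev, [rows, cols], order=1, mode="nearest")
            return float(np.mean(np.abs(warped - img_cur)))

        def estimate_motion(img_prev, img_cur, px_per_m, dt, init=(0.0, 0.0)):
            """Estimate (v, omega) so the result is consistent with the kinematic
            model by construction."""
            res = minimize(photometric_cost, np.asarray(init, dtype=float),
                           args=(img_prev, img_cur, px_per_m, dt),
                           method="Nelder-Mead")
            return res.x  # [linear velocity, angular velocity]

    Estimating (v, omega) rather than a generic image transform is what lets the platform's kinematic constraints rule out physically implausible motions, which is the benefit the abstract attributes to model-space estimation.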