
    Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots

    A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) with proprioceptive sensory data from a skid-steer mobile robot to improve accuracy and reduce the variance of the estimated states. The framework comprises two threads: a visual thread that detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation over a region of interest, and an ego-motion thread that uses a slip-aware odometry mechanism to estimate the robot pose with a motion model that accounts for wheel slip. Covariance intersection fuses the pose prediction (from proprioceptive data) with the visual thread so that the updated estimate remains consistent. Experiments on a skid-steer mobile robot confirm that the framework addresses state estimation challenges for indoor/outdoor autonomous mobile robots that experience high slip, uneven torque distribution at each wheel (by the motion planner), or occlusion when observed by an infrastructure-mounted camera. The proposed system is real-time capable and scalable to multiple robots and multiple environmental cameras.
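    The covariance intersection update described above can be sketched in a few lines. This is a minimal illustration, assuming two 2-D position estimates and a fixed weight `omega`; the function name, the example values, and the equal weighting are illustrative, not taken from the paper (in practice `omega` is often chosen to minimize the trace or determinant of the fused covariance):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Fuse two estimates with unknown cross-correlation.

    The fused covariance is consistent for any omega in [0, 1]:
        P^-1 = omega * P1^-1 + (1 - omega) * P2^-1
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P = np.linalg.inv(omega * P1_inv + (1.0 - omega) * P2_inv)
    x = P @ (omega * P1_inv @ x1 + (1.0 - omega) * P2_inv @ x2)
    return x, P

# Hypothetical example: the odometry estimate is confident in x,
# the infrastructure camera estimate is confident in y.
x_odom = np.array([0.0, 0.0]); P_odom = np.diag([1.0, 4.0])
x_cam  = np.array([1.0, 1.0]); P_cam  = np.diag([4.0, 1.0])
x_fused, P_fused = covariance_intersection(x_odom, P_odom, x_cam, P_cam)
```

Unlike a naive Kalman-style fusion, covariance intersection never claims more certainty than either input justifies, which is why the paper can use it even when the cross-correlation between the two threads is unknown.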

    Real-Time Indoor Localization using Visual and Inertial Odometry

    This project encompassed the design of a mobile, real-time localization device for use in an indoor environment. A system was designed and constructed using visual and inertial odometry methods to meet the project requirements. Stereoscopic image features were detected through a C++ Sobel filter implementation and matched. An inertial measurement unit (IMU) provided raw acceleration and rotation coordinates, which were transformed into a global frame of reference. A Kalman filter produced motion approximations from the input data and transmitted the Kalman position state coordinates via a radio transceiver to a remote base station. This station used a graphical user interface to map the incoming coordinates.
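    The fusion pipeline in this abstract — IMU acceleration driving a motion model, visual position fixes correcting it — can be illustrated with a minimal 1-D constant-acceleration Kalman filter. This is a generic sketch, not the project's implementation; the state layout, the noise values, and the simulated straight-line run are all assumptions for illustration:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition for [pos, vel]
B = np.array([0.5 * dt**2, dt])         # maps IMU acceleration into the state
H = np.array([[1.0, 0.0]])              # visual odometry observes position only
Q = np.eye(2) * 1e-5                    # process noise (tuning guess)
R = np.array([[0.05]])                  # measurement noise (tuning guess)

def kf_predict(x, P, accel):
    """Propagate the state with the IMU acceleration as control input."""
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z):
    """Correct the prediction with a visual position measurement."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated straight-line run under constant acceleration, noise-free for clarity.
x, P = np.zeros(2), np.eye(2)
accel = 1.0
for k in range(1, 51):
    t = k * dt
    x, P = kf_predict(x, P, accel)
    x, P = kf_update(x, P, np.array([0.5 * accel * t**2]))
```

In the real device the update step would consume matched stereo feature positions and the predict step would consume transformed IMU readings, but the predict/update structure is the same.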

    Autonomous Navigation in Complex Indoor and Outdoor Environments with Micro Aerial Vehicles

    Micro aerial vehicles (MAVs) are ideal platforms for surveillance and search and rescue in confined indoor and outdoor environments due to their small size, superior mobility, and hover capability. In such missions, it is essential that the MAV is capable of autonomous flight to minimize operator workload. Despite recent successes in the commercialization of GPS-based autonomous MAVs, autonomous navigation in complex and possibly GPS-denied environments gives rise to challenging engineering problems that require an integrated approach to perception, estimation, planning, control, and high-level situational awareness. Among these, state estimation is the first and most critical component for autonomous flight, especially because of the inherently fast dynamics of MAVs and the possibly unknown environmental conditions. In this thesis, we present methodologies and system designs, with a focus on state estimation, that enable a lightweight off-the-shelf quadrotor MAV to autonomously navigate complex unknown indoor and outdoor environments using only onboard sensing and computation. We start by developing laser- and vision-based state estimation methodologies for indoor autonomous flight. We then investigate fusion from heterogeneous sensors to improve robustness and enable operations in complex indoor and outdoor environments. We further propose estimation algorithms for on-the-fly initialization and online failure recovery. Finally, we present planning, control, and environment coverage strategies for integrated high-level autonomy behaviors. Extensive online experimental results are presented throughout the thesis. We conclude by proposing future research opportunities.

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges remain to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keeping track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation, so the role of deep learning in autonomous navigation also deserves the dedicated treatment this paper gives it. Future works and research gaps are also discussed.

    Cooperative simultaneous localization and mapping framework

    This research work contributes to the development of a framework for cooperative simultaneous localization and mapping (SLAM) with multiple heterogeneous mobile robots. It contributes in two respects. First, it provides a mathematical framework for cooperative localization and geometric-feature-based map building. Second, it proposes a software framework for controlling, configuring, and managing a team of heterogeneous mobile robots. Since mapping and pose estimation are closely related, two novel sensor data fusion techniques are also presented; furthermore, various state-of-the-art localization and mapping techniques and mobile robot software frameworks are discussed to give an overview of current developments in this research area. The mathematical cooperative SLAM formulation probabilistically estimates the robot states and the environment features using a Kalman filter. The software framework is an effort toward the ongoing standardization of cooperative mobile robotic systems. To enhance the efficiency of a cooperative mobile robot system, the proposed software framework addresses issues such as different communication protocol structures for mobile robots, different sets of sensors, the organization of sensor data from different robots, and the monitoring and control of robots from a single interface. The present work applies to a number of applications in various domains where an a priori map of the environment is not available and it is not possible to use global positioning devices to find the accurate position of the mobile robot. In such settings the mobile robot(s) must build a map of the environment and use that same map to find its position and orientation relative to the environment.
Exemplary application areas for the proposed SLAM technique include indoor environments such as warehouse management and factory assembly lines, mapping abandoned tunnels, disaster-struck environments for which maps are missing, undersea pipeline inspection, ocean surveying, military applications, planetary exploration, and many others. These applications are only a few of many and are limited only by the imagination.
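    The Kalman-filter SLAM formulation described in this abstract jointly estimates the robot state and the environment features in one state vector. Below is a minimal sketch with a single robot and a single landmark, assuming a linear relative-position measurement for simplicity; real systems use range-bearing observations and an extended Kalman filter, and all names and values here are illustrative:

```python
import numpy as np

# Joint SLAM state: [robot_x, robot_y, landmark_x, landmark_y].
# The robot pose is well localized; the landmark is still uncertain.
x = np.array([0.0, 0.0, 2.0, 1.0])
P = np.diag([0.01, 0.01, 1.0, 1.0])

# Linearized relative-position measurement z = landmark - robot + noise,
# a simplification of the range-bearing sensors used in practice.
H = np.hstack([-np.eye(2), np.eye(2)])
R = np.eye(2) * 0.01

def slam_update(x, P, z):
    """Standard Kalman update over the joint robot-landmark state."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # gain spans robot AND landmark rows
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# One observation of the landmark (true position assumed at (2.5, 1.5)).
z = np.array([2.5, 1.5])
x, P = slam_update(x, P, z)
```

Because the gain spans the whole joint state, a single observation refines the uncertain landmark strongly while barely perturbing the well-localized robot pose; in the cooperative case the state vector simply grows to stack several robots and their shared landmarks.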

    Benchmarking Visual-Inertial Deep Multimodal Fusion for Relative Pose Regression and Odometry-aided Absolute Pose Regression

    Visual-inertial localization is a key problem in computer vision and robotics applications such as virtual reality, self-driving cars, and aerial vehicles. The goal is to estimate an accurate pose of an object when either the environment or the dynamics are known. Recent methods directly regress the pose using convolutional and spatio-temporal networks. Absolute pose regression (APR) techniques predict the absolute camera pose from an image input in a known scene. Odometry methods perform relative pose regression (RPR), predicting the relative pose from known object dynamics (visual or inertial inputs). The localization task can be improved by combining information from both data sources in a cross-modal setup, which is challenging because the two tasks are contradictory. In this work, we conduct a benchmark to evaluate deep multimodal fusion based on pose graph optimization (PGO) and attention networks. Auxiliary and Bayesian learning are integrated for the APR task. We show accuracy improvements for the RPR-aided APR task and for the RPR-RPR task for aerial vehicles and hand-held devices. We conduct experiments on the EuRoC MAV and PennCOSYVIO datasets, and record a novel industry dataset.