
    Collaborative Localization and Mapping for Autonomous Planetary Exploration: Distributed Stereo Vision-Based 6D SLAM in GNSS-Denied Environments

    Mobile robots are a crucial element of present and future scientific missions to explore the surfaces of foreign celestial bodies such as the Moon and Mars. The deployment of teams of robots improves efficiency and robustness in such challenging environments. As long communication round-trip times to Earth render the teleoperation of robotic systems inefficient or even impossible, on-board autonomy is key to success. The robots operate in Global Navigation Satellite System (GNSS)-denied environments and thus have to rely on space-suitable on-board sensors such as stereo camera systems. They need to be able to localize themselves online, to model their surroundings, and to share information about the environment and their position therein. These capabilities constitute the basis for the local autonomy of each system as well as for any coordinated joint action within the team, such as collaborative autonomous exploration. In this thesis, we present a novel approach for stereo vision-based on-board and online Simultaneous Localization and Mapping (SLAM) for multi-robot teams, given the challenges imposed by planetary exploration missions. We combine distributed local and decentralized global estimation methods to get the best of both worlds: a local reference filter on each robot provides real-time local state estimates required for robot control and fast reactive behaviors. We designed a novel graph topology to incorporate these state estimates into an online incremental graph optimization that computes global pose and map estimates serving as input to higher-level autonomy functions. To model the 3D geometry of the environment, we generate dense 3D point clouds and probabilistic voxel-grid maps from noisy stereo data.
    We distribute the computational load and reduce the required communication bandwidth between robots by locally aggregating high-bandwidth vision data into partial maps that are then exchanged between robots and composed into global models of the environment. We developed methods for intra- and inter-robot map matching to recognize previously visited locations in semi- and unstructured environments based on their estimated local geometry, which is largely invariant to light conditions as well as to different sensors and viewpoints in heterogeneous multi-robot teams. A decoupling of observable and unobservable states in the local filter allows us to introduce a novel optimization: by enforcing all submaps to be gravity-aligned, we can reduce the dimensionality of the map matching from 6D to 4D. In addition to map matches, the robots use visual fiducial markers to detect each other. In this context, we present a novel method for modeling the errors of the loop closure transformations that are estimated from these detections. We demonstrate the robustness of our methods by integrating them on five different ground-based and aerial mobile robots that were deployed in a total of 31 real-world experiments for quantitative evaluations in semi- and unstructured indoor and outdoor settings. In addition, we validated our SLAM framework through several demonstrations at four public events in Moon- and Mars-like environments. These include, among others, autonomous multi-robot exploration tests at a Moon-analogue site on top of the volcano Mt. Etna, Italy, as well as the collaborative mapping of a Mars-like environment with a heterogeneous robotic team of flying and driving robots in more than 35 public demonstration runs.
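    The gravity-alignment idea above can be made concrete: once roll and pitch are fixed by the filter's gravity estimate, submap registration only has to recover a yaw angle and a 3D translation. The following is a minimal sketch assuming known point correspondences; the function names are ours, not the thesis':

```python
import numpy as np

def yaw_rotation(yaw):
    """Rotation about the gravity (z) axis only: submaps are gravity-aligned,
    so roll and pitch are fixed at zero and matching reduces to 4 DoF."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def apply_4dof(points, yaw, translation):
    """Transform Nx3 points by a (yaw, x, y, z) submap-to-submap estimate."""
    return points @ yaw_rotation(yaw).T + np.asarray(translation)

def estimate_4dof(src, dst):
    """Closed-form 4-DoF alignment of corresponding point sets: the optimal
    yaw follows from the planar cross-correlation terms, then translation
    aligns the centroids."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    a = np.sum(src_c[:, 0] * dst_c[:, 1] - src_c[:, 1] * dst_c[:, 0])
    b = np.sum(src_c[:, 0] * dst_c[:, 0] + src_c[:, 1] * dst_c[:, 1])
    yaw = np.arctan2(a, b)
    t = dst.mean(axis=0) - yaw_rotation(yaw) @ src.mean(axis=0)
    return yaw, t
```

    Compared with a full 6-DoF search, the reduced parameter space both shrinks the matching search space and avoids estimating the two angles the filter already observes.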

    Submap Matching for Stereo-Vision Based Indoor/Outdoor SLAM

    Autonomous robots operating in semi- or unstructured environments, e.g. during search and rescue missions, require methods for online on-board creation of maps to support path planning and obstacle avoidance. Perception based on stereo cameras is well suited for mixed indoor/outdoor environments. The creation of full 3D maps in GPS-denied areas, however, is still a challenging task for current robot systems, in particular due to depth errors resulting from stereo reconstruction. State-of-the-art 6D SLAM approaches employ graph-based optimization on the relative transformations between keyframes or local submaps. To achieve loop closures, correct data association is crucial, in particular for sensor input received at different points in time. To address this challenge, we propose a novel method for submap matching. It is based on robust keypoints, which we derive from local obstacle classification. By describing geometrical 3D features, we achieve invariance to changing viewpoints and varying light conditions. We performed experiments in indoor, outdoor and mixed environments. In all three scenarios we achieved a final 3D position error of less than 0.23% of the full trajectory length. In addition, we compared our approach with a 3D RBPF SLAM from previous work, achieving an improvement of at least 27% in mean 2D localization accuracy in different scenarios.
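    The keypoint-association step implied above can be illustrated with a generic sketch: before any geometric verification, candidate correspondences between two submaps are commonly filtered by mutual nearest-neighbour matching in descriptor space. This is an illustration of the general technique, not the paper's actual matcher:

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Candidate keypoint correspondences between two submaps: keep only
    pairs that are each other's nearest neighbour in descriptor space,
    a cheap filter applied before geometric verification."""
    # Pairwise Euclidean distances between all descriptor pairs.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_ab = d.argmin(axis=1)  # best match in B for each keypoint in A
    nn_ba = d.argmin(axis=0)  # best match in A for each keypoint in B
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

    The surviving pairs would then feed a robust transform estimation (e.g. with outlier rejection) to produce the loop-closure constraint.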

    Graph-Optimization Based Multi-Sensor Fusion for Robust UAV Pose Estimation

    Obtaining accurate, high-rate pose estimates from proprioceptive and/or exteroceptive measurements is the first step in the development of navigation algorithms for agile mobile robots such as Unmanned Aerial Vehicles (UAVs). In this paper, we propose a decoupled multi-sensor fusion approach that allows the combination of generic 6D visual-inertial (VI) odometry poses and 3D globally referenced positions to infer the global 6D pose of the robot in real time. Our approach casts the fusion as a real-time alignment problem between the local base frame of the VI odometry and the global base frame. The quasi-constant alignment transformation that relates these coordinate systems is continuously updated employing graph-based optimization with a sliding window. We evaluate the presented pose estimation method on both simulated data and large outdoor experiments using a small UAV that is capable of running our system on-board. Results are compared against different state-of-the-art sensor fusion frameworks, revealing that the proposed approach is substantially more accurate than other decoupled fusion strategies. We also demonstrate comparable results in relation to a finely tuned Extended Kalman Filter that fuses visual, inertial and GPS measurements in a coupled way, and show that our approach is generic enough to deal with different input sources, as well as able to run in real time.
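    The core alignment idea can be sketched in a simplified batch form: given time-synchronized local-odometry positions and globally referenced positions, the rigid transform between the two base frames is the least-squares (Kabsch) solution. The paper's sliding-window graph optimization refines this continuously; the closed-form version below is only a stand-in under the assumption of known position pairs:

```python
import numpy as np

def align_frames(local_pts, global_pts):
    """Kabsch-style rigid alignment: estimate the quasi-constant rotation R
    and translation t mapping local-odometry positions (Nx3) onto globally
    referenced positions (Nx3) in the least-squares sense."""
    mu_l = local_pts.mean(axis=0)
    mu_g = global_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (local_pts - mu_l).T @ (global_pts - mu_g)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1).
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_l
    return R, t
```

    In the sliding-window setting, only the most recent pairs would enter this estimate, so the alignment tracks slow drift of the VI odometry frame.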

    Towards Collaborative Simultaneous Localization and Mapping: a Survey of the Current Research Landscape

    Motivated by the tremendous progress we witnessed in recent years, this paper presents a survey of the scientific literature on the topic of Collaborative Simultaneous Localization and Mapping (C-SLAM), also known as multi-robot SLAM. With fleets of self-driving cars on the horizon and the rise of multi-robot systems in industrial applications, we believe that Collaborative SLAM will soon become a cornerstone of future robotic applications. In this survey, we introduce the basic concepts of C-SLAM and present a thorough literature review. We also outline the major challenges and limitations of C-SLAM in terms of robustness, communication, and resource management. We conclude by exploring the area's current trends and promising research avenues.

    Software-in-the-Loop Simulation of a Planetary Rover

    The development of autonomous navigation algorithms for planetary rovers often hinges on access to rover hardware, yet this access is usually very limited. In order to facilitate the continued development of these algorithms even when the hardware is temporarily unavailable, simulations are used. To minimize any additional work, these simulations must integrate tightly with the rover's software infrastructure; they are then called Software-in-the-Loop simulators. In preparation for the 2015 DLR SpaceBot Camp, a simulation of the DLR LRU rover became necessary to ensure timely progress of the navigation algorithm development. This paper presents the Software-in-the-Loop simulator of the LRU, including details on the implementation and application.

    A distributed architecture for unmanned aerial systems based on publish/subscribe messaging and simultaneous localisation and mapping (SLAM) testbed

    A dissertation submitted in fulfilment of the degree of Master of Science, School of Computational and Applied Mathematics, University of the Witwatersrand, Johannesburg, South Africa, November 2017. The increased capabilities and lower cost of Micro Aerial Vehicles (MAVs) open up big opportunities for a rapidly growing number of civilian and commercial applications. Some missions require direct control using a receiver in a point-to-point connection, involving one or very few MAVs. An alternative class of mission is remotely controlled, with the control of the drone automated to a certain extent using mission planning software and autopilot systems. For most emerging missions, there is a need for more autonomous, cooperative control of MAVs, as well as more complex data processing from sensors like cameras and laser scanners. In the last decade, this has given rise to extensive research from both academia and industry. This research direction applies robotics and computer vision concepts to Unmanned Aerial Systems (UASs). However, UASs are often designed for specific hardware and software, thus providing limited integration, interoperability and re-usability across different missions. In addition, there are numerous open issues related to UAS command, control and communication (C3), and multi-MAVs. We argue and elaborate throughout this dissertation that some of the recent standard-based publish/subscribe communication protocols can solve many of these challenges and meet the non-functional requirements of MAV robotics applications. This dissertation assesses the MQTT, DDS and TCPROS protocols in a distributed architecture of a UAS control system and Ground Control Station software. While TCPROS has been the leading robotics communication transport for ROS applications, MQTT and DDS are lightweight enough to be used for data exchange between distributed systems of aerial robots.
    Furthermore, MQTT and DDS are based on industry standards to foster communication interoperability of “things”. Both protocols have been extensively shown to address many of today's needs related to networks based on the Internet of Things (IoT). For example, MQTT has been used to exchange data with space probes, whereas DDS has been employed for aerospace defence and smart-city applications. We designed and implemented a distributed UAS architecture based on each of the publish/subscribe protocols TCPROS, MQTT and DDS. The proposed communication systems were tested with a vision-based Simultaneous Localisation and Mapping (SLAM) system involving three Parrot AR Drone2 MAVs. Within the context of this study, the MQTT and DDS messaging frameworks serve the purpose of abstracting UAS complexity and heterogeneity. Additionally, these protocols are expected to provide low-latency communication and scale up to meet the requirements of real-time remote sensing applications. The most important contribution of this work is the implementation of a complete distributed communication architecture for multi-MAVs. Furthermore, we assess the viability of this architecture and benchmark the performance of the protocols in relation to an autonomous quadcopter navigation testbed composed of a SLAM algorithm, an extended Kalman filter and a PID controller.
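    The decoupling that all three protocols provide can be shown with a toy in-process topic bus; it is not MQTT, DDS or TCPROS, but it illustrates why publishers and subscribers never need to know about each other (topic names and messages are illustrative):

```python
from collections import defaultdict

class TopicBus:
    """Toy in-process publish/subscribe bus illustrating the topic-based
    decoupling that MQTT, DDS and TCPROS all provide: publishers and
    subscribers share only topic names, never direct references."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of this topic; topics with no
        # subscribers silently drop the message.
        for cb in self._subs[topic]:
            cb(message)

# Example: a SLAM node consumes pose messages from any MAV without
# knowing which vehicle produced them.
bus = TopicBus()
poses = []
bus.subscribe("mav/pose", poses.append)
bus.publish("mav/pose", {"id": "drone1", "xyz": (0.0, 1.0, 2.0)})
```

    Real brokers add the qualities of service the dissertation benchmarks on top of this pattern: network transport, delivery guarantees, and discovery.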

    Model-Based Environmental Visual Perception for Humanoid Robots

    The visual perception of a robot should answer two fundamental questions: What? and Where? In order to properly and efficiently answer these questions, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models by sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation by using the sensory information perceived; (iv) localization, as the strategy to estimate the robot position within the spatial map; (v) path planning, as the strategy to find a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into the seven categories described next.

    Mobile Robots

    The objective of this book is to cover advances in mobile robotics and related technologies applied to the design and development of multi-robot systems. The design of the control system is a complex issue, requiring the application of information technologies to link the robots into a single network. The human-robot interface becomes a demanding task, especially when we try to use sophisticated methods for brain signal processing. Generated electrophysiological signals can be used to command different devices, such as cars, wheelchairs or even video games. A number of developments in navigation and path planning, including parallel programming, can be observed. Cooperative path planning, formation control of multi-robot agents, and communication and distance measurement between agents are shown. Training mobile robot operators is also a very difficult task, owing to several factors related to different task executions. The presented improvement relates to environment model generation based on autonomous mobile robot observations.

    Mapping, planning and exploration with Pose SLAM

    This thesis reports research on mapping, path planning, and autonomous exploration. These are classical problems in robotics, typically studied independently, and here we link them by framing them within a common SLAM approach, adopting Pose SLAM as the basic state estimation machinery. The main contribution of this thesis is an approach that allows a mobile robot to plan a path using the map it builds with Pose SLAM and to select the appropriate actions to autonomously construct this map. Pose SLAM is the variant of SLAM where only the robot trajectory is estimated and where landmarks are only used to produce relative constraints between robot poses. In Pose SLAM, observations come in the form of relative-motion measurements between robot poses. With regard to extending the original Pose SLAM formulation, this thesis studies the computation of such measurements when they are obtained with stereo cameras and develops the appropriate noise propagation models for this case. Furthermore, the initial formulation of Pose SLAM assumes poses in SE(2), and in this thesis we extend this formulation to SE(3), parameterizing rotations with either Euler angles or quaternions. We also introduce a loop closure test that exploits the information from the filter using an independent measure of information content between poses. In the application domain, we present a technique to process the 3D volumetric maps obtained with this SLAM methodology, but with laser range scanning as the sensor modality, to derive traversability maps. Aside from these extensions to Pose SLAM, the core contribution of the thesis is an approach for path planning that exploits the modeled uncertainties in Pose SLAM to search for the path in the pose graph with the lowest accumulated robot pose uncertainty, i.e., the path that allows the robot to navigate to a given goal with the least probability of becoming lost.
    An added advantage of the proposed path planning approach is that, since Pose SLAM is agnostic with respect to the sensor modalities used, it can be used in different environments and with different robots, and since the original pose graph may come from a previous mapping session, the paths stored in the map already satisfy constraints not easily modeled in the robot controller, such as the existence of restricted regions or the right of way along paths. The proposed path planning methodology has been extensively tested both in simulation and with a real outdoor robot. Our path planning approach is adequate for scenarios where a robot is initially guided during map construction but autonomous during execution. For other scenarios in which more autonomy is required, the robot should be able to explore the environment without any supervision. The second core contribution of this thesis is an autonomous exploration method that complements the aforementioned path planning strategy. The method selects the appropriate actions to drive the robot so as to maximize coverage and at the same time minimize localization and map uncertainties. An occupancy grid is maintained for the sole purpose of guaranteeing coverage. A significant advantage of the method is that, since the grid is only computed to hypothesize entropy reduction of candidate map posteriors, it can be computed at a very coarse resolution, as it is used neither to maintain the robot localization estimate nor the structure of the environment. Our technique evaluates two types of actions: exploratory actions and place revisiting actions. Action decisions are made based on entropy reduction estimates. By maintaining a Pose SLAM estimate at run time, the technique allows trajectories to be replanned online should a significant change in the Pose SLAM estimate be detected.
    The proposed exploration strategy was tested on a common publicly available dataset, comparing favorably against frontier-based exploration.
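    The entropy-reduction criterion described above can be sketched as follows: each candidate action is scored by how much Shannon entropy its hypothesized posterior grid removes, and the best-scoring action is executed. This is a simplified stand-in for the thesis' joint map-and-path entropy; action names and grids are illustrative:

```python
import numpy as np

def grid_entropy(p):
    """Shannon entropy (bits) of an occupancy grid of cell probabilities;
    unknown cells (p = 0.5) contribute the most, fully decided cells
    contribute almost nothing."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log2(p) - (1 - p) * np.log2(1 - p)))

def select_action(grid, candidate_posteriors):
    """Pick the candidate action whose hypothesized posterior grid yields
    the largest entropy reduction relative to the current grid."""
    h0 = grid_entropy(grid)
    return max(candidate_posteriors.items(),
               key=lambda kv: h0 - grid_entropy(kv[1]))[0]
```

    Because the grid only ranks hypothesized posteriors and never carries the localization estimate, a very coarse resolution suffices, which is the computational advantage the thesis highlights.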