
    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper serves simultaneously as a position paper and as a tutorial for users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions at robotics conferences: Do robots need SLAM? and Is SLAM solved?
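    The "de facto standard formulation" mentioned above is maximum a posteriori (MAP) estimation over a factor graph. As a minimal sketch (the symbols are chosen here for illustration and are not quoted from the paper): the variables \mathcal{X} collect robot poses and landmark positions, each measurement z_k is modelled as z_k = h_k(\mathcal{X}_k) + \epsilon_k with Gaussian noise of information matrix \Omega_k, and the MAP estimate reduces to a nonlinear least-squares problem:

        \mathcal{X}^{\star}
            = \arg\max_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
            = \arg\min_{\mathcal{X}} \sum_{k} \bigl\lVert h_k(\mathcal{X}_k) - z_k \bigr\rVert^{2}_{\Omega_k}

    In practice this objective is minimised iteratively (e.g. with Gauss-Newton or Levenberg-Marquardt) while exploiting the sparsity of the underlying factor graph.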

    Space Science Opportunities Augmented by Exploration Telepresence

    Since the end of the Apollo missions to the lunar surface in December 1972, humanity has exclusively conducted scientific studies on distant planetary surfaces using teleprogrammed robots. Operations and science return for all of these missions are constrained by two issues related to the great distances between terrestrial scientists and their exploration targets: high communication latencies and limited data bandwidth. Despite the proven successes of in-situ science conducted using teleprogrammed robotic assets such as the Spirit, Opportunity, and Curiosity rovers on the surface of Mars, future planetary field research may substantially overcome latency and bandwidth constraints by employing a variety of alternative strategies that could involve: 1) placing scientists/astronauts directly on planetary surfaces, as was done in the Apollo era; 2) developing fully autonomous robotic systems capable of conducting in-situ field science research; or 3) teleoperation of robotic assets by humans sufficiently proximal to the exploration targets to drastically reduce latencies and significantly increase bandwidth, thereby achieving effective human telepresence. This third strategy was the focus of experts in telerobotics, telepresence, planetary science, and human spaceflight during two workshops held October 3–7, 2016, and July 7–13, 2017, at the Keck Institute for Space Studies (KISS). Based on findings from these workshops, this document describes the conceptual and practical foundations of low-latency telepresence (LLT), opportunities for using derivative approaches for scientific exploration of planetary surfaces, and circumstances under which employing telepresence would be especially productive for planetary science. An important finding of these workshops is that there has been limited study of the advantages of planetary science via LLT. A major recommendation from these workshops is that space agencies such as NASA should substantially increase science return by investing more in this promising strategy for the conduct of science at distant exploration sites.
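    As a back-of-the-envelope illustration of the latency constraint discussed above (the distances and the snippet below are illustrative assumptions, not figures from the workshops), round-trip light time alone rules out real-time teleoperation of Mars assets from Earth:

        # Approximate one-way and round-trip light-time latency for Earth-Mars teleoperation.
        C = 299_792_458        # speed of light, m/s
        AU = 1.495978707e11    # astronomical unit, m

        for label, dist_au in [("Mars near closest approach", 0.38),
                               ("Mars near conjunction", 2.67)]:
            one_way_min = dist_au * AU / C / 60
            print(f"{label}: one-way ~{one_way_min:.0f} min, round trip ~{2 * one_way_min:.0f} min")

    By contrast, an operator in Mars orbit would face round-trip light times of milliseconds to a fraction of a second, which is the regime that low-latency telepresence targets.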

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality (MR)-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the use of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation.

    The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot workspace through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques such as 3D printing and laser cutting. Robot Operating System (ROS) and Unity 3D are employed in the development process to enable intuitive control of the robotic system and to ensure immersive and natural human-robot interactive teleoperation.

    This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, enabling spatial velocity-based control of the robot tool center point (TCP); a simplified sketch of this mapping follows the abstract. The proposed system allows precise manipulation of end-effector position and orientation while the corresponding maneuvering velocity can be readily adjusted.

    A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is also presented. The proposed 3D immersive telerobotic schemes provide the user with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot workspace through a velocity-based imitative motion mapping approach.

    Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot workspace. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance path, effectively aligning the welding torch and constraining the welding operation to a collision-free area.

    Overall, this thesis presents a complete telerobotic application that uses mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
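    The velocity-centric motion mapping described in the abstract can be pictured as scaling the operator's per-cycle hand displacement, measured in the MR subspace, into a clamped TCP velocity command. The sketch below is a simplified illustration under assumed parameter names and values (control period, scaling gain, speed limit); it is not code from the thesis:

        import numpy as np

        DT = 0.02      # control period in seconds (assumed 50 Hz teleoperation loop)
        GAIN = 1.5     # assumed motion scaling between hand space and robot workspace
        V_MAX = 0.25   # assumed TCP speed limit in m/s for safe maneuvering

        def tcp_velocity_command(hand_pos, prev_hand_pos):
            """Map the hand displacement over one control cycle to a clamped TCP linear velocity."""
            v = GAIN * (np.asarray(hand_pos) - np.asarray(prev_hand_pos)) / DT
            speed = np.linalg.norm(v)
            if speed > V_MAX:
                v *= V_MAX / speed   # preserve direction, respect the speed limit
            return v                 # commanded TCP linear velocity, m/s

        # Example: the hand moved 1 cm along x during the last 20 ms cycle.
        print(tcp_velocity_command([0.01, 0.0, 0.0], [0.0, 0.0, 0.0]))

    In a ROS/Unity setup such as the one described, a vector like this would typically be sent as the linear part of a velocity (twist) command to the UR5 controller, with orientation handled analogously through angular velocities.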

    Multimodal learning from visual and remotely sensed data

    Autonomous vehicles are often deployed to perform exploration and monitoring missions in unseen environments. In such applications, there is often a compromise between the information richness and the acquisition cost of different sensor modalities. Visual data is usually very information-rich, but requires in-situ acquisition with the robot. In contrast, remotely sensed data has a larger range and footprint, and may be available prior to a mission. In order to effectively and efficiently explore and monitor the environment, it is critical to make use of all of the sensory information available to the robot. One important application is the use of an Autonomous Underwater Vehicle (AUV) to survey the ocean floor. AUVs can take high-resolution in-situ photographs of the sea floor, which can be used to classify different regions into various habitat classes that summarise the observed physical and biological properties. This is known as benthic habitat mapping. However, since AUVs can only image a tiny fraction of the ocean floor, habitat mapping is usually performed with remotely sensed bathymetry (ocean depth) data obtained from shipborne multibeam sonar. With the recent surge in unsupervised feature learning and deep learning techniques, a number of previous works have investigated the concept of multimodal learning: capturing the relationship between different sensor modalities in order to perform classification and other inference tasks. This thesis proposes related techniques for visual and remotely sensed data, applied to the task of autonomous exploration and monitoring with an AUV. Doing so enables more accurate classification of the benthic environment and also assists autonomous survey planning.

    The first contribution of this thesis is to apply unsupervised feature learning techniques to marine data. The proposed techniques are used to extract features from image and bathymetric data separately, and their performance is compared with that of features traditionally used for each sensor modality. The second contribution is the development of a multimodal learning architecture that captures the relationship between the two modalities. The model is robust to missing modalities, which means it can extract better features for large-scale benthic habitat mapping, where only bathymetry is available. The model is used to perform classification with various combinations of modalities, demonstrating that multimodal learning provides a large performance improvement over the baseline case.

    The third contribution is an extension of the standard learning architecture using a gated feature learning model, which enables the model to better capture the ‘one-to-many’ relationship between visual and bathymetric data. This opens up further inference capabilities, with the ability to predict visual features from bathymetric data, which allows image-based queries. Such queries are useful for AUV survey planning, especially when supervised labels are unavailable. The final contribution is the novel derivation of a number of information-theoretic measures to aid survey planning; a simplified sketch follows the abstract. The proposed measures predict the utility of unobserved areas in terms of the amount of expected additional visual information. As such, they are able to produce utility maps over a large region that can be used by the AUV to determine the most informative locations from a set of candidate missions. The models proposed in this thesis are validated through extensive experiments on real marine data.

    Furthermore, the introduced techniques have applications in various other areas within robotics. As such, this thesis concludes with a discussion of the broader implications of these contributions and the future research directions that arise as a result of this work.
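    The information-theoretic utility maps described in the final contribution can be pictured, in a much-simplified form, as a per-cell uncertainty score over habitat-class predictions made from bathymetry alone: cells whose class is least certain are the ones where collecting imagery is expected to add the most visual information. The scoring rule and array shapes below are illustrative assumptions, not the thesis's exact measures:

        import numpy as np

        def predictive_entropy(class_probs, eps=1e-12):
            """Shannon entropy (nats) of per-cell habitat-class probabilities.

            class_probs has shape (rows, cols, n_classes), with each cell's
            probabilities summing to 1, e.g. predicted from bathymetric features.
            Higher entropy marks cells where imagery would be most informative.
            """
            p = np.clip(class_probs, eps, 1.0)
            return -np.sum(p * np.log(p), axis=-1)

        # Toy 2x2 utility map over three habitat classes: the near-uniform cell scores highest.
        probs = np.array([[[0.90, 0.05, 0.05], [0.34, 0.33, 0.33]],
                          [[0.60, 0.30, 0.10], [0.05, 0.90, 0.05]]])
        print(predictive_entropy(probs))   # candidate AUV missions can be ranked by summed utility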
