
    Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality

    We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to keep control of their mobile robots, such as vacuum cleaning or companion robots. We therefore introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as a human-robot interface, in combination with an augmented reality application, to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness, and compared the results with baseline methods based on visual markers and a laser pointer. The experimental results show that our method achieves equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.

    Comment: Accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
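
    The abstract does not describe how a taught border enters the robot's map. Below is a minimal sketch of one plausible mechanism, not the authors' implementation: the border polyline is rasterized into a 2D occupancy grid as lethal cells, so any costmap-based planner refuses to cross it. Names such as `apply_virtual_border` and the `LETHAL` convention are assumptions.

```python
# Illustrative sketch, not the paper's implementation: rasterize a
# user-taught border polyline into a 2D occupancy grid as lethal cells.
import numpy as np

LETHAL = 100  # occupancy value a costmap-based planner treats as an obstacle

def world_to_cell(x, y, origin, resolution):
    """Convert world coordinates (meters) to (col, row) grid indices."""
    return int((x - origin[0]) / resolution), int((y - origin[1]) / resolution)

def apply_virtual_border(grid, border_points, origin, resolution):
    """Mark all cells along the border polyline as lethal obstacles."""
    for (x0, y0), (x1, y1) in zip(border_points, border_points[1:]):
        # Sample each segment densely enough that no cell is skipped.
        steps = max(2, int(2 * np.hypot(x1 - x0, y1 - y0) / resolution))
        for t in np.linspace(0.0, 1.0, steps):
            col, row = world_to_cell(x0 + t * (x1 - x0),
                                     y0 + t * (y1 - y0), origin, resolution)
            if 0 <= row < grid.shape[0] and 0 <= col < grid.shape[1]:
                grid[row, col] = LETHAL
    return grid
```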

    This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer

    We address the problem of controlling the workspace of a 3-DoF mobile robot. In a human-robot shared space, robots should navigate in a human-acceptable way according to the users' demands. For this purpose, we employ virtual borders, i.e. non-physical borders, that allow a user to restrict the robot's workspace. To this end, we propose an interaction method based on a laser pointer to intuitively define virtual borders. This interaction method uses a previously developed framework based on robot guidance to change the robot's navigational behavior. Furthermore, we extend this framework to increase its flexibility by considering different types of virtual borders, i.e. polygons and curves separating an area. We evaluated our method with 15 non-expert users concerning correctness, accuracy and teaching time. The experimental results revealed high accuracy and a teaching time linear in the border length, while the borders were correctly incorporated into the robot's navigational map. Finally, our user study showed that non-expert users can employ our interaction method.

    Comment: Accepted at the 2019 Third IEEE International Conference on Robotic Computing (IRC); supplementary video: https://youtu.be/lKsGp8xtyI
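
    The paper distinguishes polygon borders from curves that merely separate an area. One plausible way (not necessarily the authors') to enforce the latter: after rasterizing the curve into the grid as in the previous sketch, flood-fill the forbidden side from a seed cell the user indicated. The `flood_fill_keep_out` name and grid conventions are assumptions.

```python
# Illustrative sketch: once a separating curve is rasterized as lethal
# cells (see the previous sketch), forbid one side of it by flood-filling
# from a seed cell on that side.
from collections import deque

LETHAL = 100  # same convention as above; existing obstacles also stop the fill

def flood_fill_keep_out(grid, seed):
    """Mark every cell 4-connected to `seed` (row, col) as lethal,
    stopping at the rasterized border and at existing obstacles."""
    rows, cols = grid.shape
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols) or grid[r, c] == LETHAL:
            continue
        grid[r, c] = LETHAL
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return grid
```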

    Adaptive and intelligent navigation of autonomous planetary rovers - A survey

    The application of robotics and autonomous systems in space has increased dramatically. The ongoing Mars rover mission involving the Curiosity rover, along with the success of its predecessors, is a key milestone that showcases the existing capabilities of robotic technology. Nevertheless, these systems still rely heavily on human tele-operators. Reducing the reliance on human experts for navigational tasks on Mars remains a major challenge due to the harsh and complex nature of Martian terrain. Developing a truly autonomous rover capable of navigating effectively in such environments requires intelligent and adaptive methods that fit a system with limited resources. This paper surveys a representative selection of work applicable to autonomous planetary rover navigation, discussing ongoing challenges and promising future research directions from the authors' perspectives.

    Arena-Rosnav 2.0: A Development and Benchmarking Platform for Robot Navigation in Highly Dynamic Environments

    Following up on our previous works, in this paper we present Arena-Rosnav 2.0, an extension of our previous works Arena-Bench and Arena-Rosnav, which adds a variety of additional modules for developing and benchmarking robotic navigation approaches. The platform is fundamentally restructured and provides unified APIs for adding functionalities such as planning algorithms, simulators, or evaluation routines. We have included more realistic simulation and pedestrian behavior and provide thorough documentation to lower the entry barrier. We evaluated the system by first conducting a user study in which we asked experienced researchers as well as new practitioners and students to test it. The feedback was mostly positive, and a large number of participants are using the system in other research endeavors. Finally, we demonstrate the feasibility of our system by integrating two new simulators and a variety of state-of-the-art navigation approaches and benchmarking them against one another. The platform is openly available at https://github.com/Arena-Rosnav.

    Comment: 8 pages, 5 figures
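
    The abstract mentions unified APIs for adding planners and simulators but gives no details; the actual interfaces live in the linked repository. Purely as an illustration of the kind of plugin registry such a platform might expose (all names here are hypothetical, not Arena-Rosnav's API):

```python
# Hypothetical plugin-registry pattern; the real Arena-Rosnav APIs live
# in the linked repository and may look entirely different.
from abc import ABC, abstractmethod

PLANNER_REGISTRY = {}

def register_planner(name):
    """Class decorator that makes a planner discoverable by name."""
    def wrap(cls):
        PLANNER_REGISTRY[name] = cls
        return cls
    return wrap

class BasePlanner(ABC):
    @abstractmethod
    def plan(self, observation):
        """Map a sensor observation to a velocity command."""

@register_planner("dwa_stub")
class DWAStub(BasePlanner):
    def plan(self, observation):
        return {"linear": 0.2, "angular": 0.0}  # placeholder command

planner = PLANNER_REGISTRY["dwa_stub"]()  # look up and instantiate by name
```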

    Navigation, Path Planning, and Task Allocation Framework For Mobile Co-Robotic Service Applications in Indoor Building Environments

    Recent advances in computing and robotics offer significant potential for improved autonomy in the operation and utilization of today’s buildings. Examples of building environment functions that could be improved through automation include: a) building performance monitoring for real-time system control and long-term asset management; and b) assisted indoor navigation for improved accessibility and wayfinding. To enable such autonomy, algorithms related to task allocation, path planning, and navigation are required as fundamental technical capabilities. Existing algorithms in these domains have primarily been developed for outdoor environments. However, key technical challenges prevent the adoption of such algorithms in indoor environments: a) the inability of the widely adopted outdoor positioning method (the Global Positioning System, GPS) to work indoors; and b) the incompleteness of graph networks formed from indoor environments due to physical access constraints not encountered outdoors.

    The objective of this dissertation is to develop general and scalable task allocation, path planning, and navigation algorithms for indoor mobile co-robots that are immune to the aforementioned challenges. The primary contributions of this research are: a) route planning and task allocation algorithms for centrally-located mobile co-robots charged with spatiotemporal tasks in arbitrary built environments; b) path planning algorithms that take preferential and pragmatic constraints (e.g., wheelchair ramps) into consideration to determine optimal accessible paths in building environments; and c) navigation and drift correction algorithms for autonomous mobile robotic data collection in buildings.

    The developed methods and the resulting computational framework have been validated through several simulated experiments and physical deployments in real building environments. Specifically, a scenario analysis is conducted to compare the performance of existing outdoor methods with the developed approach for indoor multi-robot task allocation and route planning. A simulated case study is performed along with a pilot experiment in an indoor built environment to test the efficiency of the path planning algorithm and the performance of the assisted navigation interface, developed with people with physical disabilities (i.e., wheelchair users) in mind as building occupants and visitors. Furthermore, a case study demonstrates the informed retrofit decision-making process with the help of data collected by an intelligent multi-sensor fused robot and subsequently used in an EnergyPlus simulation. The results demonstrate the feasibility of the proposed methods in a range of applications involving constraints on both the environment (e.g., path obstructions) and robot capabilities (e.g., maximum travel distance on a single charge). By focusing on the technical capabilities required for safe and efficient indoor robot operation, this dissertation contributes to the fundamental science that will make mobile co-robots ubiquitous in building environments in the near future.

    PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143969/1/baddu_1.pd
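
    As an illustration of path planning under accessibility constraints such as preferring wheelchair ramps over stairs, the sketch below runs Dijkstra's algorithm over an indoor graph whose edges carry attribute tags and skips edges that fail an accessibility predicate. The graph format and names are assumptions, not the dissertation's implementation.

```python
# Illustrative only: shortest accessible path on an indoor graph whose
# edges carry attribute tags; edges failing the predicate are skipped.
import heapq

def accessible_shortest_path(graph, start, goal, allow):
    """graph: {node: [(neighbor, cost, attrs), ...]};
    allow: predicate on attrs, e.g. lambda a: not a.get("stairs")."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:  # reconstruct the path back to start
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1], d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, cost, attrs in graph.get(u, []):
            if not allow(attrs):
                continue  # e.g. skip stair edges for wheelchair users
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None, float("inf")
```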

    VFH+ based shared control for remotely operated mobile robots

    This paper addresses the problem of safe and efficient navigation for remotely controlled robots operating in hazardous and unstructured environments, or conducting other remote robotic tasks. A shared control method is presented which blends the commands from a VFH+ obstacle avoidance navigation module with the teleoperation commands provided by an operator via a joypad. The presented approach offers several advantages: flexibility, allowing straightforward adaptation of the controller's behaviour and easy integration with variable autonomy systems, as well as the ability to cope with dynamic environments. The advantages of the presented controller are demonstrated in an experimental evaluation in a disaster response scenario. More specifically, the presented evidence shows a clear performance increase in terms of safety and task completion time compared to a pure teleoperation approach, as well as an ability to cope with previously unobserved obstacles.

    Comment: 8 pages, 6 figures
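
    The abstract states that operator and VFH+ commands are blended but does not reproduce the arbitration law. Below is a minimal sketch of linear command blending, a common choice in shared control; the weight `alpha` and the tuple command format are assumptions, not the paper's formulation.

```python
# Minimal sketch of linear command blending for shared control.
# `alpha` is the level of autonomy: 0 = pure teleoperation, 1 = pure VFH+.
def blend_commands(teleop_cmd, vfh_cmd, alpha):
    """Each command is a (linear_velocity, angular_velocity) tuple."""
    lin = (1 - alpha) * teleop_cmd[0] + alpha * vfh_cmd[0]
    ang = (1 - alpha) * teleop_cmd[1] + alpha * vfh_cmd[1]
    return lin, ang

# Example: operator drives straight ahead, VFH+ steers away from an obstacle.
print(blend_commands((0.5, 0.0), (0.3, 0.4), alpha=0.6))  # -> (0.38, 0.24)
```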

    Gridbot: An autonomous robot controlled by a Spiking Neural Network mimicking the brain's navigational system

    The "best" neural network is not necessarily the one with the most "brain-like" behavior. Understanding biological intelligence, however, is a fundamental goal for several distinct disciplines, and translating that understanding to machines is a fundamental problem in robotics. Propelled by new advances in neuroscience, we developed a spiking neural network (SNN) that draws on mounting experimental evidence that a number of individual neurons are associated with spatial navigation. By following the brain's structure, our model assumes no initial all-to-all connectivity, which could inhibit its translation to neuromorphic hardware, and learns uncharted territory by mapping its identified components into a limited number of neural representations, through spike-timing dependent plasticity (STDP). In our ongoing effort to employ a bioinspired SNN-controlled robot in real-world spatial mapping applications, we demonstrate here how an SNN can robustly control an autonomous robot in mapping and exploring an unknown environment, while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input.

    Comment: 8 pages, 3 figures, International Conference on Neuromorphic Systems (ICONS 2018)
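
    STDP is the one concrete learning rule the abstract names. The textbook pair-based exponential form is sketched below; the parameter values are illustrative and not taken from the paper.

```python
# Textbook pair-based STDP rule (parameters illustrative): potentiate when
# the presynaptic spike precedes the postsynaptic spike, depress otherwise.
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """dt = t_post - t_pre in ms; returns the synaptic weight change."""
    if dt > 0:    # pre before post -> long-term potentiation
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:  # post before pre -> long-term depression
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

print(stdp_dw(5.0))   # small positive weight change (LTP)
print(stdp_dw(-5.0))  # small negative weight change (LTD)
```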

    Automation and robotics for the Space Exploration Initiative: Results from Project Outreach

    A total of 52 submissions were received in the Automation and Robotics (A&R) area during Project Outreach. About half of the submissions (24) contained concepts that were judged to have high utility for the Space Exploration Initiative (SEI) and were analyzed further by the robotics panel. These 24 submissions are analyzed here. Three types of robots were proposed in the high-scoring submissions: structured task robots (STRs), teleoperated robots (TORs), and surface exploration robots. Several advanced TOR control interface technologies were proposed in the submissions. Many A&R concepts or potential standards were presented or alluded to by the submitters, but few specific technologies or systems were suggested.