Routing Unmanned Vehicles in GPS-Denied Environments
Most routing algorithms for unmanned vehicles that arise in data gathering
and monitoring applications in the literature rely on Global Positioning
System (GPS) information for localization. However, intentional or
unintentional disruption of GPS signals can render these algorithms
inapplicable. In this article, we present a novel method to
address this difficulty by combining methods from cooperative localization and
routing. In particular, the article formulates a fundamental combinatorial
optimization problem to plan routes for an unmanned vehicle in a GPS-restricted
environment while enabling localization for the vehicle. We also develop
algorithms to compute optimal paths for the vehicle using the proposed
formulation. Extensive simulation results are also presented to corroborate the
effectiveness and performance of the proposed formulation and algorithms.
Comment: Published in International Conference on Unmanned Aerial System
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
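The de-facto standard formulation referenced in the abstract above is maximum a posteriori (MAP) estimation over a factor graph; the sketch below uses illustrative symbols, not notation taken from the paper:

```latex
% MAP estimation: find the robot/map variables X that best explain
% the measurements Z = {z_k}
\mathcal{X}^{*} = \arg\max_{\mathcal{X}} \; p(\mathcal{X} \mid \mathcal{Z})
                = \arg\max_{\mathcal{X}} \; p(\mathcal{X}) \prod_{k} p(z_k \mid \mathcal{X}_k)

% Under Gaussian noise models, this reduces to nonlinear least squares:
\mathcal{X}^{*} = \arg\min_{\mathcal{X}} \sum_{k}
    \bigl\| h_k(\mathcal{X}_k) - z_k \bigr\|_{\Omega_k}^{2}
```

Here \(h_k\) denotes the measurement model for the subset of variables \(\mathcal{X}_k\) involved in factor \(k\), and \(\Omega_k\) its information matrix.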
Jointly Optimizing Placement and Inference for Beacon-based Localization
The ability of robots to estimate their location is crucial for a wide
variety of autonomous operations. In settings where GPS is unavailable,
measurements of transmissions from fixed beacons provide an effective means of
estimating a robot's location as it navigates. The accuracy of such a
beacon-based localization system depends both on how beacons are distributed in
the environment, and how the robot's location is inferred based on noisy and
potentially ambiguous measurements. We propose an approach for making these
design decisions automatically and without expert supervision, by explicitly
searching for the placement and inference strategies that, together, are
optimal for a given environment. Since this search is computationally
expensive, our approach encodes beacon placement as a differentiable neural layer
that interfaces with a neural network for inference. This formulation allows us
to employ standard techniques for training neural networks to carry out the
joint optimization. We evaluate this approach on a variety of environments and
settings, and find that it is able to discover designs that enable high
localization accuracy.
Comment: Appeared at 2017 International Conference on Intelligent Robots and
Systems (IROS)
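The abstract above couples beacon placement with learned inference. As a point of reference for the classical inference half of that problem, here is a minimal sketch of estimating a 2D position from noisy range measurements to fixed beacons via Gauss-Newton least squares; the 2D setup and all names are illustrative and not taken from the paper:

```python
import math

def estimate_position(beacons, ranges, guess=(0.0, 0.0), iters=50):
    """Gauss-Newton least squares: recover a 2D position from range
    measurements to known beacon locations."""
    x, y = guess
    for _ in range(iters):
        # Accumulate the 2x2 normal equations J^T J * delta = -J^T r by hand.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (bx, by), z in zip(beacons, ranges):
            d = math.hypot(x - bx, y - by)
            if d < 1e-9:
                continue
            r = d - z                                 # range residual
            jx, jy = (x - bx) / d, (y - by) / d       # Jacobian row
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * r;   b2 += jy * r
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-12:
            break
        dx = (-b1 * a22 + b2 * a12) / det             # solve 2x2 system
        dy = (-b2 * a11 + b1 * a12) / det
        x, y = x + dx, y + dy
        if math.hypot(dx, dy) < 1e-12:
            break
    return x, y

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
ranges = [math.hypot(true_pos[0] - bx, true_pos[1] - by) for bx, by in beacons]
est = estimate_position(beacons, ranges, guess=(1.0, 1.0))
```

With noiseless ranges the iteration converges to the true position; the paper's contribution is to replace both the hand-placed beacons and this hand-derived estimator with jointly trained components.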
Scalable underwater assembly with reconfigurable visual fiducials
We present a scalable combined localization infrastructure deployment and
task planning algorithm for underwater assembly. Infrastructure is autonomously
modified to suit the needs of manipulation tasks based on an uncertainty model
of the infrastructure's positional accuracy. Our uncertainty model can be
combined with the noise characteristics from multiple devices. For the task
planning problem, we propose a layer-based clustering approach that completes
the manipulation tasks one cluster at a time. We employ movable visual fiducial
markers as infrastructure and an autonomous underwater vehicle (AUV) for
manipulation tasks. The proposed task planning algorithm is computationally
simple, and we implement it on the AUV without any offline computation
requirements. Combined hardware experiments and simulations over large datasets
show that the proposed technique is scalable to large areas.
Comment: Submitted to ICRA 202
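The layer-based clustering idea in the abstract above can be illustrated with a toy planner: tasks are clustered by depth layer and each cluster is completed before moving to the next, using a nearest-neighbor sweep within a layer. This is a simplification under assumed inputs, not the authors' exact algorithm:

```python
from collections import defaultdict

def plan_layered(tasks, layer_height=1.0, start=(0.0, 0.0, 0.0)):
    """Toy layer-based clustering planner: group (x, y, z) tasks into
    depth layers, then finish one layer (cluster) at a time, visiting
    the nearest remaining task within the current layer."""
    layers = defaultdict(list)
    for t in tasks:
        layers[int(t[2] // layer_height)].append(t)   # cluster by depth (z)
    order, pos = [], start
    for key in sorted(layers):                        # one cluster at a time
        remaining = layers[key][:]
        while remaining:
            nxt = min(remaining,
                      key=lambda t: sum((a - b) ** 2 for a, b in zip(t, pos)))
            remaining.remove(nxt)
            order.append(nxt)
            pos = nxt
    return order

tasks = [(0, 0, 2.5), (1, 0, 0.2), (0, 1, 2.7), (2, 2, 0.8)]
route = plan_layered(tasks)
# Shallow-layer tasks are completed before any deep-layer task is visited.
```

Because each step is a grouping or a nearest-neighbor lookup, the planner runs comfortably onboard, which is the kind of computational simplicity the abstract emphasizes.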
Navigation, Path Planning, and Task Allocation Framework For Mobile Co-Robotic Service Applications in Indoor Building Environments
Recent advances in computing and robotics offer significant potential for improved autonomy in the operation and utilization of today’s buildings. Examples of such building environment functions that could be improved through automation include: a) building performance monitoring for real-time system control and long-term asset management; and b) assisted indoor navigation for improved accessibility and wayfinding. To enable such autonomy, algorithms related to task allocation, path planning, and navigation are required as fundamental technical capabilities. Existing algorithms in these domains have primarily been developed for outdoor environments. However, key technical challenges that prevent the adoption of such algorithms to indoor environments include: a) the inability of the widely adopted outdoor positioning method (Global Positioning System - GPS) to work indoors; and b) the incompleteness of graph networks formed based on indoor environments due to physical access constraints not encountered outdoors.
The objective of this dissertation is to develop general and scalable task allocation, path planning, and navigation algorithms for indoor mobile co-robots that are immune to the aforementioned challenges. The primary contributions of this research are: a) route planning and task allocation algorithms for centrally-located mobile co-robots charged with spatiotemporal tasks in arbitrary built environments; b) path planning algorithms that take preferential and pragmatic constraints (e.g., wheelchair ramps) into consideration to determine optimal accessible paths in building environments; and c) navigation and drift correction algorithms for autonomous mobile robotic data collection in buildings.
The developed methods and the resulting computational framework have been validated through several simulated experiments and physical deployments in real building environments. Specifically, a scenario analysis is conducted to compare the performance of existing outdoor methods with the developed approach for indoor multi-robotic task allocation and route planning. A simulated case study is performed along with a pilot experiment in an indoor built environment to test the efficiency of the path planning algorithm and the performance of the assisted navigation interface, developed with people with physical disabilities (i.e., wheelchair users) in mind as building occupants and visitors. Furthermore, a case study is performed to demonstrate the informed retrofit decision-making process with the help of data collected by an intelligent multi-sensor fused robot that is subsequently used in an EnergyPlus simulation. The results demonstrate the feasibility of the proposed methods in a range of applications involving constraints on both the environment (e.g., path obstructions) and robot capabilities (e.g., maximum travel distance on a single charge). By focusing on the technical capabilities required for safe and efficient indoor robot operation, this dissertation contributes to the fundamental science that will make mobile co-robots ubiquitous in building environments in the near future.
PHD; Civil Engineering; University of Michigan, Horace H. Rackham School of Graduate Studies; https://deepblue.lib.umich.edu/bitstream/2027.42/143969/1/baddu_1.pd
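The accessibility-aware path planning described in the dissertation abstract above can be sketched as an ordinary shortest-path search that filters out edges a wheelchair user cannot traverse (e.g., stairs) while keeping ramps. The graph, edge tags, and names below are hypothetical:

```python
import heapq

def accessible_shortest_path(edges, start, goal, wheelchair=False):
    """Dijkstra over a building graph of (u, v, cost, kind) edges;
    edges tagged 'stairs' are skipped when planning for a wheelchair user."""
    graph = {}
    for u, v, w, kind in edges:
        graph.setdefault(u, []).append((v, w, kind))
        graph.setdefault(v, []).append((u, w, kind))
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue                                  # stale queue entry
        for v, w, kind in graph.get(u, []):
            if wheelchair and kind == "stairs":
                continue                              # pragmatic constraint
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if goal not in dist:
        return None                                   # no accessible route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

edges = [("lobby", "hall", 5, "corridor"),
         ("hall", "office", 2, "stairs"),
         ("hall", "ramp", 4, "ramp"),
         ("ramp", "office", 3, "ramp")]
fastest = accessible_shortest_path(edges, "lobby", "office")
accessible = accessible_shortest_path(edges, "lobby", "office", wheelchair=True)
```

The two calls return different routes: the unconstrained path takes the stairs, while the wheelchair query detours via the ramp, mirroring how the same graph serves different occupant profiles.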
3D Perception Based Lifelong Navigation of Service Robots in Dynamic Environments
Lifelong navigation of mobile robots is the ability to reliably operate over extended periods of time in dynamically changing environments. Historically, computational capacity and sensor capability have been the constraining factors on the richness of the internal representation of the environment that a mobile robot could use for navigation tasks. With affordable contemporary sensing technology available that provides rich 3D information of the environment, and with increased computational power, we can increasingly make use of more semantic environmental information in navigation-related tasks.
A navigation system has many subsystems, such as perception, localization, and path planning, that must operate in real time while competing for computational resources. The main thesis proposed in this work is that we can utilize 3D information from the environment in our systems to increase navigational robustness without making trade-offs in any of the real-time subsystems. To support these claims, this dissertation presents robust, real-world 3D perception based navigation systems in the domains of indoor doorway detection and traversal, sidewalk-level outdoor navigation in urban environments, and global localization in large-scale indoor warehouse environments.
The discussion of these systems includes methods of 3D point cloud based object detection to find the respective objects of semantic interest for the given navigation tasks, as well as the use of 3D information in the navigation systems for purposes such as localization and dynamic obstacle avoidance. Experimental results for each of these applications demonstrate the effectiveness of the techniques for robust long-term autonomous operation.
Active Mapping and Robot Exploration: A Survey
Simultaneous localization and mapping responds to the problem of building a map of the environment without any prior information, based on the data obtained from one or more sensors. In most situations, the robot is driven by a human operator, but some systems are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively calculating the trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research work developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics.
This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.
A stacked LSTM based approach for reducing semantic pose estimation error
Achieving high estimation accuracy is significant for semantic simultaneous localization and mapping (SLAM) tasks. Yet, the estimation process is vulnerable to several sources of error, including limitations of the instruments used to perceive the environment, shortcomings of the employed algorithm, environmental conditions, or other unpredictable noise. In this article, a novel stacked long short-term memory (LSTM)-based error reduction approach is developed to enhance the accuracy of semantic SLAM in the presence of such error sources. Training and testing data sets were constructed through simulated and real-time experiments. The effectiveness of the proposed approach was demonstrated by its ability to capture and reduce semantic SLAM estimation errors in the training and testing data sets. Quantitative performance measurement was carried out using the absolute trajectory error (ATE) metric. The proposed approach was compared with vanilla and bidirectional LSTM networks, shallow and deep neural networks, and support vector machines. It outperforms all of these structures and was able to significantly improve the accuracy of semantic SLAM. To further verify its applicability, the proposed approach was tested on real-time sequences from the TUM RGB-D data set, where it was able to improve the estimated trajectories.
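The ATE metric used for evaluation in the abstract above can be sketched as follows. This minimal version assumes time-synchronized trajectories already expressed in a common frame; the full metric as usually defined also includes a rigid-body alignment step before the error is computed:

```python
import math

def ate_rmse(est_traj, gt_traj):
    """Absolute trajectory error: RMSE of the per-pose translational
    error between an estimated and a ground-truth trajectory, assuming
    the two are synchronized and already aligned."""
    assert len(est_traj) == len(gt_traj)
    sq_errors = [sum((e - g) ** 2 for e, g in zip(p, q))
                 for p, q in zip(est_traj, gt_traj)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

gt  = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
est = [(0, 0, 0), (1, 0.3, 0), (2, -0.3, 0)]
err = ate_rmse(est, gt)
```

A lower ATE means the estimated trajectory tracks the ground truth more closely, which is why a reduction in ATE is the headline result for the error-correction network described above.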