5,472 research outputs found

    Routing Unmanned Vehicles in GPS-Denied Environments

    Full text link
    Most of the routing algorithms for unmanned vehicles that arise in data-gathering and monitoring applications in the literature rely on Global Positioning System (GPS) information for localization. However, disruption of GPS signals, whether intentional or unintentional, could render these algorithms inapplicable. In this article, we present a novel method to address this difficulty by combining methods from cooperative localization and routing. In particular, the article formulates a fundamental combinatorial optimization problem to plan routes for an unmanned vehicle in a GPS-restricted environment while enabling localization for the vehicle. We also develop algorithms to compute optimal paths for the vehicle using the proposed formulation. Extensive simulation results are presented to corroborate the effectiveness and performance of the proposed formulation and algorithms. Comment: Published in the International Conference on Unmanned Aerial Systems.
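
    The abstract does not spell out the formulation, so the sketch below is only a toy illustration of the coupling between routing and localization: among all visiting orders, keep the tours whose every leg stays within range of some landmark usable for localization, and pick the shortest. The waypoints, landmarks, range limit, and brute-force enumeration are all assumptions standing in for the paper's actual formulation and algorithms.

```python
# Hypothetical sketch: pick a visiting order for waypoints while keeping every
# leg of the tour within range of at least one known landmark (a stand-in for
# the localization requirement described in the abstract). Brute force over
# permutations; a real solver would use integer programming or branch-and-cut.
from itertools import permutations
import math

waypoints = [(0, 0), (4, 1), (2, 5), (6, 4)]   # targets to visit (assumed)
landmarks = [(1, 1), (5, 3)]                   # features usable for localization (assumed)
comm_range = 4.0                               # max distance at which a landmark helps (assumed)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def leg_localizable(p, q, samples=10):
    """Check that points sampled along the leg p->q stay near some landmark."""
    for i in range(samples + 1):
        t = i / samples
        x = (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
        if not any(dist(x, lm) <= comm_range for lm in landmarks):
            return False
    return True

best_route, best_len = None, float("inf")
for order in permutations(waypoints[1:]):
    route = [waypoints[0], *order, waypoints[0]]   # closed tour starting at the depot
    if all(leg_localizable(p, q) for p, q in zip(route, route[1:])):
        length = sum(dist(p, q) for p, q in zip(route, route[1:]))
        if length < best_len:
            best_route, best_len = route, length

print(best_route, best_len)
```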

    Monocular navigation for long-term autonomy

    Get PDF
    We present a reliable and robust monocular navigation system for an autonomous vehicle. The proposed method is computationally efficient, requires only off-the-shelf equipment, and does not need any additional infrastructure such as radio beacons or GPS. Contrary to traditional localization algorithms, which use advanced mathematical methods to determine vehicle position, our method uses a more practical approach: an image-feature-based monocular vision technique determines only the heading of the vehicle, while the vehicle's odometry is used to estimate the distance traveled. We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded. The experiments demonstrate that the method can cope with variable illumination, lighting deficiency, and both short- and long-term environment changes. This makes the method especially suitable for deployment in scenarios that require long-term autonomous operation.
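
    As a rough, hedged illustration of the principle above (not the authors' implementation), the sketch below corrects only the robot's heading from the horizontal shift of matched image features, while the traveled distance comes from odometry; the gain, pixel coordinates, and data layout are assumptions.

```python
# Minimal sketch of the heading-plus-odometry idea: the camera only corrects
# heading, and dead reckoning uses the odometer for distance.
import math

def heading_correction(matched_pairs, gain=0.002):
    """matched_pairs: list of (u_map, u_current) horizontal pixel positions of
    features remembered during mapping vs. seen now. Returns a steering value."""
    if not matched_pairs:
        return 0.0
    mean_shift = sum(u_map - u_cur for u_map, u_cur in matched_pairs) / len(matched_pairs)
    return gain * mean_shift          # steer so the mean shift is pushed toward zero

def advance(pose, odometry_distance, steering):
    """Dead-reckon forward by the odometer reading after applying the correction."""
    x, y, theta = pose
    theta += steering
    return (x + odometry_distance * math.cos(theta),
            y + odometry_distance * math.sin(theta),
            theta)

pose = (0.0, 0.0, 0.0)
pose = advance(pose, 0.5, heading_correction([(320, 300), (310, 305)]))
print(pose)
```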

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available

    Submap Matching for Stereo-Vision Based Indoor/Outdoor SLAM

    Get PDF
    Autonomous robots operating in semi- or unstructured environments, e.g. during search and rescue missions, require methods for online, on-board creation of maps to support path planning and obstacle avoidance. Perception based on stereo cameras is well suited for mixed indoor/outdoor environments. The creation of full 3D maps in GPS-denied areas, however, is still a challenging task for current robot systems, in particular due to depth errors resulting from stereo reconstruction. State-of-the-art 6D SLAM approaches employ graph-based optimization on the relative transformations between keyframes or local submaps. To achieve loop closures, correct data association is crucial, in particular for sensor input received at different points in time. To address this challenge, we propose a novel method for submap matching. It is based on robust keypoints, which we derive from local obstacle classification. By describing geometrical 3D features, we achieve invariance to changing viewpoints and varying light conditions. We performed experiments in indoor, outdoor, and mixed environments. In all three scenarios we achieved a final 3D position error of less than 0.23% of the full trajectory. In addition, we compared our approach with a 3D RBPF SLAM from previous work, achieving an improvement of at least 27% in mean 2D localization accuracy across the different scenarios.
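
    The alignment step can be sketched under the assumption that corresponding keypoints between two submaps are already available (the paper derives them from local obstacle classification and geometric 3D descriptors): a least-squares (Kabsch) fit then recovers the relative rigid transform. A complete pipeline would add descriptor matching and outlier rejection, which are omitted here.

```python
# Illustrative sketch only: estimate the relative rigid transform between two
# submaps from corresponding 3D keypoints using a closed-form Kabsch fit.
import numpy as np

def rigid_transform(src, dst):
    """Find R, t minimizing ||R @ src_i + t - dst_i|| over corresponding 3D points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy correspondence set: submap B is submap A rotated and shifted.
A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 2., 0.], [0., 0., 3.]])
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
B = A @ R_true.T + np.array([0.5, -1.0, 2.0])

R, t = rigid_transform(A, B)
print(np.allclose(R, R_true), t)
```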

    PHALANX: Expendable Projectile Sensor Networks for Planetary Exploration

    Get PDF
    Technologies enabling long-term, wide-ranging measurement in hard-to-reach areas are a critical need for planetary science inquiry. Phenomena of interest include flows or variations in volatiles, gas composition or concentration, particulate density, or even simply temperature. Improved measurement of these processes enables understanding of exotic geologies and distributions or correlating indicators of trapped water or biological activity. However, such data is often needed in unsafe areas such as caves, lava tubes, or steep ravines not easily reached by current spacecraft and planetary robots. To address this capability gap, we have developed miniaturized, expendable sensors which can be ballistically lobbed from a robotic rover or static lander - or even dropped during a flyover. These projectiles can perform sensing during flight and after anchoring to terrain features. By augmenting exploration systems with these sensors, we can extend situational awareness, perform long-duration monitoring, and reduce utilization of primary mobility resources, all of which are crucial in surface missions. We call the integrated payload that includes a cold gas launcher, smart projectiles, planning software, network discovery, and science sensing PHALANX. In this paper, we introduce the mission architecture for PHALANX and describe an exploration concept that pairs projectile sensors with a rover mothership. Science use cases explored include reconnaissance using ballistic cameras, volatiles detection, and building time-lapse maps of temperature and illumination conditions. Strategies to autonomously coordinate constellations of deployed sensors to self-discover and localize with peer ranging (i.e. a local GPS) are summarized, thus providing communications infrastructure beyond line-of-sight (BLOS) of the rover. Capabilities were demonstrated through both simulation and physical testing with a terrestrial prototype. The approach to developing a terrestrial prototype is discussed, including design of the launching mechanism, projectile optimization, micro-electronics fabrication, and sensor selection. Results from early testing and characterization of commercial-off-the-shelf (COTS) components are reported. Nodes were subjected to successful burn-in tests over 48 hours at full logging duty cycle. Integrated field tests were conducted in the Roverscape, a half-acre planetary analog environment at NASA Ames, where we tested up to 10 sensor nodes simultaneously coordinating with an exploration rover. Ranging accuracy has been demonstrated to be within +/- 10 cm over 20 m using commodity radios when compared against ground truth from a high-resolution laser scanner. Evolution of the design, including progressive miniaturization of the electronics and iterated modifications of the enclosure housing for streamlining and optimized radio performance, is described. Finally, lessons learned to date, gaps toward eventual flight mission implementation, and continuing future development plans are discussed.
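
    The peer-ranging "local GPS" idea can be illustrated with a standard linearized multilateration sketch: given ranges to a few already-localized nodes, a small least-squares problem yields the position. The anchor layout and noise level below are invented for illustration and are not PHALANX flight code.

```python
# Hedged sketch of peer-ranging localization: estimate a node's 2D position
# from noisy ranges to anchors at known positions via linearized least squares.
import numpy as np

anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0], [20.0, 15.0]])  # deployed nodes (m)
true_pos = np.array([7.0, 9.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.1, 4)  # ~10 cm noise

# Subtract the first range equation from the rest to remove the quadratic terms:
#   ||x - a_i||^2 - ||x - a_0||^2 = r_i^2 - r_0^2   becomes linear in x.
A = 2 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print(estimate)   # close to true_pos given centimeter-level ranging noise
```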

    Positional estimation techniques for an autonomous mobile robot

    Get PDF
    Techniques for positional estimation of a mobile robot navigating in an outdoor environment are described. A comprehensive review of the various positional estimation techniques studied in the literature is first presented. The techniques are divided into four different types, and each of them is discussed briefly. Two different kinds of environments are considered for positional estimation: mountainous natural terrain and an urban, man-made environment with polyhedral buildings. In both cases, the robot is assumed to be equipped with a single visual camera that can be panned and tilted, and a 3-D description (world model) of the environment is given. Such a description could be obtained from a stereo pair of aerial images or from the architectural plans of the buildings. Techniques for positional estimation using the camera input and the world model are presented.
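
    One concrete instance of such a technique can be sketched as a Perspective-n-Point problem: if the world model supplies 3D landmark coordinates (e.g. building corners) and the camera detects their image projections, the robot pose follows directly. The landmark positions, intrinsics, and pose below are made up, and OpenCV is used only for convenience; this is not the specific method of the report.

```python
# Hypothetical PnP sketch: simulate detections by projecting world-model
# landmarks from a known pose, then recover that pose with cv2.solvePnP.
import numpy as np
import cv2

object_points = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0],
                          [0, 3, 0], [0, 0, 2], [4, 0, 2]], dtype=np.float64)  # world model (m)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])                        # assumed camera intrinsics

rvec_true = np.array([0.1, -0.2, 0.05])                # "unknown" pose used only to simulate detections
tvec_true = np.array([-2.0, 1.0, 8.0])
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec                          # camera (robot) position in world coordinates
print(ok, camera_position.ravel())
```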

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Get PDF
    Simultaneous Localization and Mapping (SLAM) consists of the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? and Is SLAM solved?
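
    The de-facto standard formulation referred to above is maximum-a-posteriori estimation over a factor graph of robot poses and measurements. A minimal pose-graph sketch using the GTSAM Python bindings (one common library choice, not prescribed by the survey; the poses and noise values are illustrative) is shown below.

```python
# Minimal 2D pose-graph example: a prior on the first pose plus odometry
# "between" factors, optimized with Levenberg-Marquardt. Values are illustrative.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, np.pi / 2), odom_noise))

initial = gtsam.Values()                      # deliberately perturbed initial guesses
initial.insert(1, gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(2, gtsam.Pose2(2.2, 0.1, -0.05))
initial.insert(3, gtsam.Pose2(4.1, 0.1, np.pi / 2 + 0.1))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose2(3))                      # MAP estimate of the last pose
```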