
    Appearance-based localization for mobile robots using digital zoom and visual compass

    This paper describes a localization system for mobile robots moving in dynamic indoor environments, which uses probabilistic integration of visual appearance and odometry information. The approach is based on a novel image matching algorithm for appearance-based place recognition that integrates digital zooming, to extend the area of application, and a visual compass. Ambiguous information used for recognizing places is resolved with multiple hypothesis tracking and a selection procedure inspired by Markov localization. This enables the system to deal with perceptual aliasing or the absence of reliable sensor data. It has been implemented on a robot operating in an office scenario, and the robustness of the approach has been demonstrated experimentally.
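    The multi-hypothesis selection described above can be illustrated with a minimal sketch of a Markov-localization-style belief update over a discrete set of places. The place set, odometry-derived transition matrix, and appearance-matching scores below are hypothetical placeholders for illustration, not the paper's implementation.

        # Minimal sketch: Markov-localization-style update of place hypotheses.
        import numpy as np

        def update_beliefs(belief, transition, appearance_scores):
            """One belief update over a discrete set of places.

            belief            -- current probability per place, shape (n_places,)
            transition        -- odometry-based P(place_j | place_i), shape (n_places, n_places)
            appearance_scores -- image-matching likelihood per place, shape (n_places,)
            """
            predicted = transition.T @ belief           # prediction step from odometry
            posterior = predicted * appearance_scores   # correction step from appearance matching
            return posterior / posterior.sum()          # renormalize

        # Toy example: three places, with places 0 and 2 looking alike (perceptual aliasing).
        belief = np.array([0.4, 0.3, 0.3])
        transition = np.array([[0.7, 0.2, 0.1],
                               [0.1, 0.8, 0.1],
                               [0.1, 0.2, 0.7]])
        scores = np.array([0.5, 0.1, 0.5])
        print(update_beliefs(belief, transition, scores))

    When two places score equally well on appearance, as in the toy example, the odometry-driven prediction step is what resolves the ambiguity, which is the role the Markov-inspired selection procedure plays in the system described.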

    A Spectral Learning Approach to Range-Only SLAM

    We present a novel spectral learning algorithm for simultaneous localization and mapping (SLAM) from range data with known correspondences. This algorithm is an instance of a general spectral system identification framework, from which it inherits several desirable properties, including statistical consistency and no local optima. Compared with popular batch optimization or multiple-hypothesis tracking (MHT) methods for range-only SLAM, our spectral approach offers guaranteed low computational requirements and good tracking performance. Compared with popular extended Kalman filter (EKF) or extended information filter (EIF) approaches, and many MHT ones, our approach does not need to linearize a transition or measurement model; such linearizations can cause severe errors in EKFs and EIFs, and to a lesser extent in MHT, particularly for the highly non-Gaussian posteriors encountered in range-only SLAM. We provide a theoretical analysis of our method, including finite-sample error bounds. Finally, we demonstrate on a real-world robotic SLAM problem that our algorithm is not only theoretically justified but also works well in practice: in a comparison of multiple methods, the lowest errors come from a combination of our algorithm with batch optimization, but our method alone produces nearly as good a result at far lower computational cost.
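    The spectral flavor can be sketched generically. The toy below rests on an assumption of ours (squared ranges to known-correspondence landmarks are linear in a lifted robot state), so it illustrates the general idea rather than the authors' algorithm: a truncated SVD of the observation matrix recovers a low-dimensional state trajectory, and a linear transition is then fitted by least squares, with no iterative optimization and hence no local optima.

        # Sketch: subspace recovery from squared ranges via truncated SVD.
        import numpy as np

        def spectral_states(sq_ranges, dim):
            """sq_ranges: (T, n_landmarks) squared range measurements.
            Returns a (T, dim) latent-state trajectory from a truncated SVD."""
            Y = sq_ranges - sq_ranges.mean(axis=0)      # center the observations
            U, S, Vt = np.linalg.svd(Y, full_matrices=False)
            return U[:, :dim] * S[:dim]                 # principal subspace as states

        def fit_transition(states):
            """Least-squares linear transition A with states[t+1] ~ A @ states[t]."""
            X, Xn = states[:-1], states[1:]
            A, *_ = np.linalg.lstsq(X, Xn, rcond=None)
            return A.T

        # Toy data: a robot circling four landmarks and observing squared ranges.
        T = 200
        t = np.linspace(0, 4 * np.pi, T)
        traj = np.stack([3 * np.cos(t), 3 * np.sin(t)], axis=1)
        landmarks = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, 0.0], [0.0, -5.0]])
        sq_ranges = ((traj[:, None, :] - landmarks[None]) ** 2).sum(-1)

        states = spectral_states(sq_ranges, dim=3)
        A = fit_transition(states)
        print("one-step prediction error:",
              np.linalg.norm(states[1:] - states[:-1] @ A.T) / np.sqrt(T - 1))

    Because the whole pipeline is an SVD followed by a linear solve, there is nothing to initialize and nothing to linearize, which is the property the abstract contrasts with EKF, EIF, and MHT approaches.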

    Real-Time Vision-Based Robot Localization

    In this article we describe an algorithm for robot localization using visual landmarks. This algorithm determines both the correspondence between observed landmarks (in this case, vertical edges in the environment) and a pre-loaded map, and the location of the robot from those correspondences. The primary advantages of this algorithm are its use of a single geometric tolerance to describe observation error, its ability to recognize ambiguous sets of correspondences, its ability to compute bounds on the localization error, and its fast performance. The current version of the algorithm has been implemented and tested on a mobile robot system. In several hundred trials the algorithm has never failed, and it computes the robot's location to within a centimeter in less than half a second.
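    As a rough illustration of correspondence search under a single geometric tolerance, the sketch below scores candidate robot poses by counting observed bearings to vertical-edge landmarks that fall within one angular threshold of the map's predictions. The tolerance value, map, and coarse candidate grid are assumptions for the example, not the article's algorithm.

        # Sketch: pose scoring by bearing correspondences within a single tolerance.
        import numpy as np

        TOLERANCE = np.deg2rad(2.0)   # single angular tolerance (assumed value)

        def match_count(pose, bearings, landmarks):
            """pose = (x, y, heading); bearings: observed edge directions (radians);
            landmarks: (n, 2) map of vertical-edge positions. Returns matches found."""
            x, y, th = pose
            predicted = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x) - th
            matches = 0
            for b in bearings:
                diff = np.abs(np.angle(np.exp(1j * (predicted - b))))  # wrap to [-pi, pi]
                if diff.min() < TOLERANCE:
                    matches += 1
            return matches

        # Example: pick the best pose from a coarse grid of candidates.
        landmarks = np.array([[2.0, 0.0], [0.0, 3.0], [-1.5, -1.0]])
        obs = np.arctan2(landmarks[:, 1] - 0.1, landmarks[:, 0] - 0.2) - 0.3  # pose (0.2, 0.1, 0.3)
        candidates = [(x, y, th) for x in (0.0, 0.2) for y in (0.0, 0.1) for th in (0.0, 0.3)]
        best = max(candidates, key=lambda p: match_count(p, obs, landmarks))
        print("best candidate pose:", best)

    In this sketch, ambiguity would appear as several candidates tied at the same match count; the article's algorithm additionally recognizes such ambiguous correspondence sets and bounds the resulting localization error.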

    PHALANX: Expendable Projectile Sensor Networks for Planetary Exploration

    Technologies enabling long-term, wide-ranging measurement in hard-to-reach areas are a critical need for planetary science inquiry. Phenomena of interest include flows or variations in volatiles, gas composition or concentration, particulate density, or even simply temperature. Improved measurement of these processes enables understanding of exotic geologies and distributions or correlating indicators of trapped water or biological activity. However, such data is often needed in unsafe areas such as caves, lava tubes, or steep ravines not easily reached by current spacecraft and planetary robots. To address this capability gap, we have developed miniaturized, expendable sensors that can be ballistically lobbed from a robotic rover or static lander, or even dropped during a flyover. These projectiles can perform sensing during flight and after anchoring to terrain features. By augmenting exploration systems with these sensors, we can extend situational awareness, perform long-duration monitoring, and reduce utilization of primary mobility resources, all of which are crucial in surface missions. We call the integrated payload, which includes a cold gas launcher, smart projectiles, planning software, network discovery, and science sensing, PHALANX. In this paper, we introduce the mission architecture for PHALANX and describe an exploration concept that pairs projectile sensors with a rover mothership. Science use cases explored include reconnaissance using ballistic cameras, volatiles detection, and building time-lapse maps of temperature and illumination conditions. Strategies to autonomously coordinate constellations of deployed sensors to self-discover and localize with peer ranging (i.e., a local GPS) are summarized, thus providing communications infrastructure beyond line-of-sight (BLOS) of the rover. Capabilities were demonstrated through both simulation and physical testing with a terrestrial prototype. The approach to developing the terrestrial prototype is discussed, including design of the launching mechanism, projectile optimization, micro-electronics fabrication, and sensor selection. Results from early testing and characterization of commercial-off-the-shelf (COTS) components are reported. Nodes were subjected to successful burn-in tests over 48 hours at full logging duty cycle. Integrated field tests were conducted in the Roverscape, a half-acre planetary analog environment at NASA Ames, where we tested up to 10 sensor nodes simultaneously coordinating with an exploration rover. Ranging accuracy has been demonstrated to be within +/- 10 cm over 20 m using commodity radios when compared to high-resolution laser scanner ground truth. Evolution of the design, including progressive miniaturization of the electronics and iterated modifications of the enclosure housing for streamlining and optimized radio performance, is described. Finally, lessons learned to date, gaps toward eventual flight mission implementation, and continuing future development plans are discussed.
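    The peer-ranging localization mentioned above can be sketched with a generic linear least-squares multilateration. This is a textbook formulation under our own assumptions (known anchor positions, range-only measurements), not the PHALANX flight software: a deployed node estimates its position from ranges to anchors whose positions are already known, such as the rover or previously localized projectiles.

        # Sketch: multilateration of a node from ranges to known anchors.
        import numpy as np

        def multilaterate(anchors, ranges):
            """anchors: (n, 2) known positions; ranges: (n,) measured distances.
            Differencing against the first anchor's range equation removes the
            quadratic terms, leaving a linear least-squares problem."""
            A = 2 * (anchors[1:] - anchors[0])
            b = (ranges[0] ** 2 - ranges[1:] ** 2
                 + (anchors[1:] ** 2).sum(axis=1) - (anchors[0] ** 2).sum())
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        # Example: four anchors on a 20 m square, ranges with ~10 cm noise
        # (the accuracy scale reported for the commodity radios).
        rng = np.random.default_rng(0)
        anchors = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 20.0], [20.0, 20.0]])
        node = np.array([7.5, 12.0])
        ranges = np.linalg.norm(anchors - node, axis=1) + rng.normal(0.0, 0.1, 4)
        print("estimated node position:", multilaterate(anchors, ranges))

    With well-spread anchors, range noise at the +/- 10 cm scale propagates to a position error of roughly the same order in this toy setup, which is consistent with using peer ranging as a local GPS for nodes beyond the rover's line of sight.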

    Learning cognitive maps: Finding useful structure in an uncertain world

    In this chapter we will describe the central mechanisms that influence how people learn about large-scale space. We will focus particularly on how these mechanisms enable people to cope effectively both with the uncertainty inherent in a constantly changing world and with the high information content of natural environments. The major lessons are that humans get by with a less-is-more approach to building structure, and that they are able to adapt quickly to environmental changes thanks to a range of general-purpose mechanisms. By looking at abstract principles, instead of concrete implementation details, it is shown that the study of human learning can provide valuable lessons for robotics. Finally, these issues are discussed in the context of an implementation on a mobile robot. © 2007 Springer-Verlag Berlin Heidelberg