    Concurrent Initialization for Bearing-Only SLAM

    Simultaneous Localization and Mapping (SLAM) is perhaps the most fundamental problem to solve in robotics in order to build truly autonomous mobile robots. The choice of sensors has a large impact on the algorithm used for SLAM. Early SLAM approaches focused on range sensors such as sonar rings or lasers. However, cameras have become increasingly popular because they yield a lot of information and are well suited to embedded systems: they are light, cheap, and power-efficient. Unlike range sensors, which provide both range and angular information, a camera is a projective sensor that measures the bearing of image features; depth information (range) therefore cannot be obtained in a single step. This fact has prompted the emergence of a new family of SLAM algorithms, the bearing-only SLAM methods, which mainly rely on special feature-initialization techniques to enable the use of bearing sensors (such as cameras) in SLAM systems. In this work a novel and robust method, called Concurrent Initialization, is presented, inspired by combining the complementary advantages of the undelayed and delayed methods that represent the two most common approaches to the problem. The key is to use two kinds of feature representation concurrently, one for the undelayed and one for the delayed stage of the estimation. Simulation results show that the proposed method surpasses the performance of previous schemes.
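
    As background to the undelayed/delayed distinction above, the sketch below shows the inverse-depth point parameterization commonly used by undelayed bearing-only methods. It is a minimal illustration in Python, assuming the standard convention of anchoring a feature at the camera position where it was first observed, with the azimuth/elevation of the observed ray and an inverse depth that remains numerically well-behaved at low parallax; the names are ours, not the paper's.

        import numpy as np

        def ray(theta, phi):
            """Unit direction vector from azimuth/elevation angles."""
            return np.array([np.cos(phi) * np.cos(theta),
                             np.cos(phi) * np.sin(theta),
                             np.sin(phi)])

        def inverse_depth_to_euclidean(c, theta, phi, rho):
            """Recover the Euclidean point p = c + (1/rho) * ray(theta, phi)
            from an inverse-depth feature anchored at camera position c."""
            return np.asarray(c, dtype=float) + ray(theta, phi) / rho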

    Recent Developments in Monocular SLAM within the HRI Framework

    This chapter describes an approach to improve the feature initialization process in delayed inverse-depth feature initialization for monocular Simultaneous Localisation and Mapping (SLAM), using data provided by a robot's camera plus an additional monocular sensor deployed in the headwear of the human member of a human-robot collaborative exploratory team. The robot and the human deploy a set of sensors that, once combined, provide the data required to localize the secondary camera worn by the human. The approach and its implementation are described along with experimental results demonstrating its performance. A discussion of the sensors commonly used in robotics, and in SLAM in particular, provides background to the advantages and capabilities of the system implemented in this research.

    An Audio-visual Solution to Sound Source Localization and Tracking with Applications to HRI

    Robot audition is an emerging and growing branch of robotics and is necessary for natural Human-Robot Interaction (HRI). In this paper, we propose a framework that integrates advances from Simultaneous Localization And Mapping (SLAM), bearing-only target tracking, and robot audition techniques into a unified system for sound source identification, localization, and tracking. Indoors, acoustic observations are often highly noisy, corrupted by reverberation, robot ego-motion, and background noise, and possibly discontinuous in nature. Therefore, in everyday interaction scenarios, the system must accommodate outliers, perform robust data association, and appropriately manage the landmarks, i.e. the sound sources. We solve the robot self-localization and environment representation problems using an RGB-D SLAM algorithm, and sound source localization and tracking using recursive Bayesian estimation in the form of the extended Kalman filter with unknown data associations and an unknown number of landmarks. The experimental results show that the proposed system performs well in a medium-sized cluttered indoor environment.
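
    To make the estimation step concrete, here is a minimal sketch of a single EKF update for one sound-source landmark observed as a bearing from a known robot pose. It illustrates recursive Bayesian bearing-only tracking in general, not the authors' implementation; the names are our own, and data association and landmark management are omitted.

        import numpy as np

        def bearing_update(m, P, robot_pose, z, R):
            """One EKF update of a 2-D landmark mean m (covariance P)
            from a measured bearing z (rad) with variance R, taken
            from a known robot pose (x, y, yaw)."""
            x, y, yaw = robot_pose
            dx, dy = m[0] - x, m[1] - y
            q = dx * dx + dy * dy
            z_hat = np.arctan2(dy, dx) - yaw              # predicted bearing
            H = np.array([[-dy / q, dx / q]])             # Jacobian wrt landmark
            innov = np.arctan2(np.sin(z - z_hat),
                               np.cos(z - z_hat))         # wrap to [-pi, pi]
            S = H @ P @ H.T + R                           # innovation covariance
            K = P @ H.T / S                               # Kalman gain (S is 1x1)
            return m + (K * innov).ravel(), (np.eye(2) - K @ H) @ P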

    Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach

    In this thesis we give new means for a machine to understand complex and dynamic visual scenes in real time. In particular, we solve the problem of simultaneously reconstructing a representation of the world's geometry, the observer's trajectory, and the moving objects' structures and trajectories, with the aid of exteroceptive vision sensors. We proceed by dividing the problem into three main steps. First, we give a solution to the Simultaneous Localization And Mapping (SLAM) problem for monocular vision that is able to perform adequately in the most ill-conditioned situations: those where the observer approaches the scene in a straight line. Second, we incorporate full 3D instantaneous observability by duplicating the vision hardware while keeping monocular algorithms. This allows us to avoid some of the inherent drawbacks of classic stereo systems, notably their limited range of 3D observability and the need for frequent mechanical calibration. Third, we add detection and tracking of moving objects by making use of this full 3D observability, whose necessity we judge almost inevitable. We choose a sparse, point-based representation of both the world and the moving objects in order to alleviate the computational load of the image processing algorithms, which are required to extract the necessary geometric information from the images. This alleviation is further supported by active feature detection and search mechanisms that focus attention on the image regions of highest interest. This focusing is achieved by extensive exploitation of the current knowledge available on the system (all the mapped information), something that we finally highlight as the ultimate key to success.

    Large-area visually augmented navigation for autonomous underwater vehicles

    Submitted to the Joint Program in Applied Ocean Science & Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, June 2005. This thesis describes a vision-based, large-area, simultaneous localization and mapping (SLAM) algorithm that respects the low-overlap imagery constraints typical of autonomous underwater vehicles (AUVs) while exploiting the inertial sensor information that is routinely available on such platforms. We adopt a systems-level approach exploiting the complementary aspects of inertial sensing and visual perception from a calibrated pose-instrumented platform. This systems-level strategy yields a robust solution to underwater imaging that overcomes many of the unique challenges of a marine environment (e.g., unstructured terrain, low-overlap imagery, moving light source). Our large-area SLAM algorithm recursively incorporates relative-pose constraints using a view-based representation that exploits exact sparsity in the Gaussian canonical form. This sparsity allows for efficient O(n) update complexity in the number of images composing the view-based map by utilizing recent multilevel relaxation techniques. We show that our algorithmic formulation is inherently sparse, unlike other feature-based canonical SLAM algorithms, which impose sparseness via pruning approximations. In particular, we investigate the sparsification methodology employed by sparse extended information filters (SEIFs) and offer new insight as to why, and how, its approximation can lead to inconsistencies in the estimated state errors. Lastly, we present a novel algorithm for efficiently extracting from the information matrix consistent marginal covariances useful for data association. In summary, this thesis advances the state of the art in underwater visual navigation by demonstrating end-to-end automatic processing of the largest visually navigated dataset to date, using data collected from a survey of the RMS Titanic (path length over 3 km and 3100 m² of mapped area). This accomplishment embodies the summed contributions of this thesis to several current SLAM research issues, including scalability, 6-degree-of-freedom motion, unstructured environments, and visual perception. This work was funded in part by the CenSSIS ERC of the National Science Foundation under grant EEC-9986821, in part by the Woods Hole Oceanographic Institution through a grant from the Penzance Foundation, and in part by an NDSEG Fellowship awarded through the Department of Defense.
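
    The exact-sparsity claim can be illustrated in a few lines: in the Gaussian canonical (information) form, fusing a relative-pose constraint between poses i and j touches only the corresponding blocks of the information matrix, so a view-based delayed-state map never fills in. The sketch below is our own illustration of that bookkeeping under assumed block ordering, not the thesis code.

        import numpy as np

        def add_relative_pose_constraint(Lam, eta, i, j, J_i, J_j, W, r):
            """Accumulate one relative-pose constraint into the normal
            equations of the linearized problem, in place: Lam collects
            J^T W J terms and eta collects J^T W r terms, where r is the
            constraint residual at the current linearization point.
            Only the (i,i), (i,j), (j,i), (j,j) blocks are modified."""
            d = J_i.shape[1]                      # pose block size, e.g. 6 DOF
            si, sj = slice(i*d, (i+1)*d), slice(j*d, (j+1)*d)
            Lam[si, si] += J_i.T @ W @ J_i
            Lam[si, sj] += J_i.T @ W @ J_j
            Lam[sj, si] += J_j.T @ W @ J_i
            Lam[sj, sj] += J_j.T @ W @ J_j
            eta[si] += J_i.T @ W @ r
            eta[sj] += J_j.T @ W @ r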

    Range-only SLAM schemes exploiting robot-sensor network cooperation

    Simultaneous localization and mapping (SLAM) is a key problem in robotics: a robot with no previous knowledge of the environment builds a map of that environment and localizes itself in the map. Range-only SLAM is a particularization of the SLAM problem which uses only the information provided by range sensors. This PhD thesis describes the design, integration, evaluation and validation of a set of schemes for accurate and efficient range-only simultaneous localization and mapping exploiting the cooperation between robots and sensor networks. It proposes a general architecture for range-only simultaneous localization and mapping (RO-SLAM) with cooperation between robots and sensor networks. The adopted architecture has two main characteristics. First, it exploits the sensing, computational and communication capabilities of sensor network nodes: both the robot and the beacons actively participate in the execution of the RO-SLAM filter. Second, it integrates not only robot-beacon measurements but also range measurements between two different beacons, the so-called inter-beacon measurements. Most reported RO-SLAM methods are executed in a centralized manner in the robot: all tasks, including measurement gathering, integration of measurements and the Prediction stage, run on the robot. These fully centralized RO-SLAM methods impose a high computational burden on the robot and scale very poorly. This thesis proposes three different schemes that work under the aforementioned architecture; they exploit the advantages of cooperation between robots and sensor networks while minimizing its drawbacks. The first scheme is a RO-SLAM scheme with dynamically configurable measurement gathering. Integrating inter-beacon measurements in RO-SLAM significantly improves map estimation but involves high resource consumption: the energy required to gather and transmit measurements, the bandwidth required by the measurement collection protocol and the computational burden necessary to integrate the larger number of measurements. The objective of this scheme is to reduce the increase in resource consumption resulting from the integration of inter-beacon measurements by adopting a centralized mechanism, running in the robot, that adapts measurement gathering. The second scheme is a distributed RO-SLAM scheme based on the Sparse Extended Information Filter (SEIF). It reduces the increase in resource consumption by adopting a distributed SLAM filter in which each beacon is responsible for gathering its measurements to the robot and to other beacons, and for computing the SLAM Update stage in order to integrate its measurements in SLAM. Moreover, it inherits the scalability of the SEIF. The third scheme is a resource-constrained RO-SLAM scheme based on the distributed SEIF previously presented. It combines the two mechanisms developed in the previous contributions (measurement gathering control and distribution of the RO-SLAM Update stage between beacons) in order to reduce the increase in resource consumption resulting from the integration of inter-beacon measurements. This scheme exploits robot-beacon cooperation to improve SLAM accuracy and efficiency while meeting a given resource consumption bound. The resource consumption bound is expressed in terms of the maximum number of measurements that can be integrated in SLAM per iteration: the sensing channel capacity used, the beacon energy consumed and the computational capacity employed, among others, are proportional to the number of measurements gathered and integrated in SLAM. The performance of the proposed schemes has been analyzed and compared with each other and with existing works, and the schemes have been validated in real experiments with aerial robots. This thesis proves that cooperation between robots and sensor networks provides many advantages for solving the RO-SLAM problem. Resource consumption is an important constraint in sensor networks; the proposed architecture allows the advantages of cooperation to be exploited, while the proposed schemes address the resource limitation without degrading performance.
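
    As an illustration of how such a per-iteration bound might be enforced (the ranking criterion here is our assumption, not necessarily the mechanism used in the thesis), measurement gathering can be capped by keeping only the candidate range measurements expected to be most informative:

        def select_measurements(candidates, budget):
            """candidates: list of (measurement, predicted_innovation_var)
            pairs; budget: max measurements integrated this iteration.
            Keeps the measurements with the largest predicted innovation
            variance, i.e. those expected to correct the estimate most."""
            ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
            return [meas for meas, _ in ranked[:budget]]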

    Localization and Mapping from Shore Contours and Depth

    This work examines the problem of solving SLAM in aquatic environments using an unmanned surface vessel under conditions that restrict global knowledge of the robot's pose. These conditions refer specifically to the absence of a global positioning system to estimate position, a poor vehicle motion model, and the absence of a usable magnetic field to estimate absolute heading. Such conditions arise in terrestrial environments where GPS satellite reception is occluded by surrounding structures and magnetic interference affects compass measurements. Similar conditions are anticipated in extra-terrestrial environments such as Titan, which lacks the infrastructure necessary for traditional positioning sensors and whose unstable magnetic core renders compasses useless. This work develops a solution to the SLAM problem that utilizes shore features coupled with information about the depth of the water column. The approach is validated experimentally using an autonomous surface vehicle equipped with omnidirectional video and SONAR, and results are compared to GPS ground truth.

    Map building fusing acoustic and visual information using autonomous underwater vehicles

    Author Posting. © The Author(s), 2012. This is the author's version of the work. It is posted here by permission of John Wiley & Sons for personal use, not for redistribution. The definitive version was published in Journal of Field Robotics 30 (2013): 763–783, doi:10.1002/rob.21473. We present a system that automatically builds 3-D maps of underwater terrain by fusing visual data from a single camera with range data from multibeam sonar. The six-degree-of-freedom location of the camera relative to the navigation frame is derived as part of the mapping process, as are the attitude offsets of the multibeam head and the on-board velocity sensor. The system uses pose graph optimization and the square root information smoothing and mapping framework to simultaneously solve for the robot's trajectory, the map, and the camera location in the robot's frame. Matched visual features are treated within the pose graph as images of 3-D landmarks, while multibeam bathymetry submap matches are used to impose relative-pose constraints linking robot poses from distinct tracklines of the dive trajectory. The navigation and mapping system presented works under a variety of deployment scenarios, on robots with diverse sensor suites. Results of using the system to map the structure and appearance of a section of coral reef are presented using data acquired by the Seabed autonomous underwater vehicle. The work described herein was funded by the National Science Foundation CenSSIS ERC under grant number EEC-9986821, and by the National Oceanic and Atmospheric Administration under grant number NA090AR4320129.
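
    In our own notation (a sketch, not the paper's exact formulation), the underlying nonlinear least-squares problem combines reprojection factors on 3-D landmarks with relative-pose factors from submap matching:

        \min_{\{x_i\},\,\{\ell_k\},\,T_c}\;
        \sum_{(i,k)} \big\| \pi(x_i \oplus T_c,\; \ell_k) - u_{ik} \big\|_{\Sigma_{ik}}^{2}
        \;+\; \sum_{(i,j)} \big\| h(x_i, x_j) - \hat{z}_{ij} \big\|_{\Lambda_{ij}}^{2}

    where the x_i are robot poses, T_c is the camera-to-robot transform solved alongside the map, the ℓ_k are 3-D landmarks imaged at pixels u_ik through the projection π, and h predicts the relative pose ẑ_ij measured by matching bathymetry submaps.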

    Image-Aided Navigation Using Cooperative Binocular Stereopsis

    This thesis proposes a novel method for cooperatively estimating the positions of two vehicles in a global reference frame based on synchronized image and inertial information. The proposed technique - cooperative binocular stereopsis - leverages the ability of one vehicle to reliably localize itself relative to the other using image data, which enables motion estimation from tracking the three-dimensional positions of common features. Unlike popular simultaneous localization and mapping (SLAM) techniques, the method proposed in this work does not require that the positions of features be carried forward in memory. Instead, the optimal vehicle motion over a single time interval is estimated from the positions of common features using a modified bundle adjustment algorithm and is used as a measurement in a delayed-state extended Kalman filter (EKF). The developed system achieves improved motion estimation compared to previous work and is a potential alternative to map-based SLAM algorithms.
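
    The delayed-state idea can be stated compactly (our notation, assumed rather than taken from the thesis): the filter state retains the previous vehicle pose so that the bundle-adjusted relative motion serves directly as the measurement,

        z_k = h(x_{k-1}, x_k) + v_k = \ominus x_{k-1} \oplus x_k + v_k,
        \qquad v_k \sim \mathcal{N}(0, R_k),

    where ⊕ and ⊖ denote pose composition and inversion; the EKF update then constrains the two poses jointly instead of maintaining a feature map.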

    Cooperative Navigation for Low-bandwidth Mobile Acoustic Networks.

    This thesis reports on the design and validation of estimation and planning algorithms for underwater vehicle cooperative localization. While attitude and depth are easily instrumented with bounded error, autonomous underwater vehicles (AUVs) have no internal sensor that directly observes XY position. The global positioning system (GPS) and other radio-based navigation techniques are not available because of the strong attenuation of electromagnetic signals in seawater. The navigation algorithms presented herein fuse local body-frame rate and attitude measurements with range observations between vehicles within a decentralized architecture. The acoustic communication channel is both unreliable and low-bandwidth, precluding many state-of-the-art terrestrial cooperative navigation algorithms. We exploit the underlying structure of a post-process centralized estimator in order to derive two real-time decentralized estimation frameworks. First, the origin state method enables a client vehicle to exactly reproduce the corresponding centralized estimate within a server-to-client vehicle network. Second, a graph-based navigation framework produces an approximate reconstruction of the centralized estimate onboard each vehicle. Finally, we present a method to plan a locally optimal server path to localize a client vehicle along a desired nominal trajectory. The planning algorithm introduces a probabilistic channel model into prior Gaussian belief space planning frameworks. In summary, cooperative localization reduces XY position error growth within underwater vehicle networks. Moreover, these methods remove the reliance on static beacon networks, which do not scale to large vehicle networks and limit the range of operations. Each proposed localization algorithm was validated in full-scale AUV field trials. The planning framework was evaluated through numerical simulation.
    PhD, Mechanical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113428/1/jmwalls_1.pd
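
    For concreteness, the sketch below linearizes a single inter-vehicle range observation, the core measurement fused by these algorithms; the function and variable names are our own illustration, not the thesis code.

        import numpy as np

        def predicted_range(p_client, p_server):
            """Range between client and server XY positions, plus the
            Jacobian of the range with respect to the client position:
            the unit row vector pointing from server to client. The row
            vector is the H used in a Kalman-style update of the client
            position estimate."""
            d = np.asarray(p_client, dtype=float) - np.asarray(p_server, dtype=float)
            r = np.linalg.norm(d)
            return r, (d / r).reshape(1, -1)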