
    Simultaneous localization and map-building using active vision

    No full text
    An active approach to sensing can provide the focused measurement capability over a wide field of view which allows correctly formulated Simultaneous Localization and Map-Building (SLAM) to be implemented with vision, permitting repeatable long-term localization using only naturally occurring, automatically-detected features. In this paper, we present the first example of a general system for autonomous localization using active vision, enabled here by a high-performance stereo head, addressing such issues as uncertainty-based measurement selection, automatic map-maintenance, and goal-directed steering. We present varied real-time experiments in a complex environment. Published version
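The uncertainty-based measurement selection mentioned in this abstract can be illustrated with a common active-vision heuristic: among candidate features, measure the one whose predicted innovation covariance is largest, since that measurement is expected to be the most informative. The sketch below is illustrative only (the determinant criterion and function name are assumptions, not necessarily the paper's exact formulation):

```python
import numpy as np

def select_measurement(innovation_covs):
    """Pick the candidate feature whose predicted innovation covariance
    has the largest determinant, i.e. the measurement expected to reduce
    map/pose uncertainty the most (one common active-vision criterion)."""
    scores = [np.linalg.det(S) for S in innovation_covs]
    return int(np.argmax(scores))

# Three hypothetical 2x2 image-plane innovation covariances.
cands = [np.diag([1.0, 1.0]), np.diag([4.0, 2.0]), np.diag([0.5, 0.5])]
print(select_measurement(cands))  # -> 1 (the most uncertain feature)
```

In an active head, the selected index would then drive where the stereo head fixates next.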

    Perception-aware Path Planning

    Full text link
    In this paper, we give a double twist to the problem of planning under uncertainty. State-of-the-art planners seek to minimize the localization uncertainty by only considering the geometric structure of the scene. In this paper, we argue that motion planning for vision-controlled robots should be perception aware in that the robot should also favor texture-rich areas to minimize the localization uncertainty during a goal-reaching task. Thus, we describe how to optimally incorporate the photometric information (i.e., texture) of the scene, in addition to the geometric one, to compute the uncertainty of vision-based localization during path planning. To avoid the caveats of feature-based localization systems (i.e., dependence on feature type and user-defined thresholds), we use dense, direct methods. This allows us to compute the localization uncertainty directly from the intensity values of every pixel in the image. We also describe how to compute trajectories online, considering also scenarios with no prior knowledge about the map. The proposed framework is general and can easily be adapted to different robotic platforms and scenarios. The effectiveness of our approach is demonstrated with extensive experiments in both simulated and real-world environments using a vision-controlled micro aerial vehicle. Comment: 16 pages, 20 figures, revised version. Conditionally accepted for IEEE Transactions on Robotics
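The idea of scoring scene texture for dense, direct localization can be illustrated with a simple proxy: the sum of squared image gradients, which grows with photometric texture and relates to the Fisher information of direct image alignment. This is a hedged stand-in for the paper's full uncertainty propagation, not its actual implementation:

```python
import numpy as np

def texture_information(img):
    """Approximate localization information carried by an image patch:
    the sum of squared intensity gradients. Textured regions score high,
    textureless regions score near zero (an illustrative proxy for the
    Fisher information of dense, direct alignment)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx**2 + gy**2))

flat = np.full((32, 32), 10.0)                              # textureless wall
textured = np.tile([[0.0, 20.0], [20.0, 0.0]], (16, 16))    # checkerboard
print(texture_information(flat), texture_information(textured))
```

A perception-aware planner would weight candidate viewpoints by such a score in addition to geometric visibility.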

    Analysis of Different Feature Selection Criteria Based on a Covariance Convergence Perspective for a SLAM Algorithm

    Get PDF
    This paper introduces several non-arbitrary feature selection techniques for a Simultaneous Localization and Mapping (SLAM) algorithm. The feature selection criteria are based on determining the most significant features from a SLAM convergence perspective. The SLAM algorithm implemented in this work is a sequential EKF (Extended Kalman Filter) SLAM. The feature selection criteria are applied at the correction stage of the SLAM algorithm, restricting the correction to the most significant features. This restriction also reduces the processing time of the SLAM algorithm. Several experiments with a mobile robot are shown in this work. The experiments concern map reconstruction and compare the performance of the different proposed techniques. The experiments were carried out in an outdoor environment composed of trees, although the results shown herein are not restricted to a specific type of feature.
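The restricted correction stage can be sketched as a standard EKF update paired with a per-feature significance score, here taken to be the expected reduction in the trace of the state covariance. The score is one illustrative criterion; the paper proposes several of its own:

```python
import numpy as np

def ekf_update(x, P, z, h, H, R):
    """Standard EKF correction with measurement z, prediction h,
    Jacobian H, and measurement noise R."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - h)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def covariance_reduction(P, H, R):
    """Expected drop in trace(P) if this feature were used to correct
    the filter -- one possible 'significance' score for ranking features
    so that only the top-ranked ones enter the correction stage."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return float(np.trace(K @ H @ P))

P = 2.0 * np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
print(covariance_reduction(P, H, R))  # -> 1.333... for this toy state
```

Correcting with only the highest-scoring features is what shortens the update loop.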

    Robot Collaboration for Simultaneous Map Building and Localization

    Get PDF

    Vision-based navigation for autonomous underwater vehicles

    Get PDF
    This thesis investigates the use of vision sensors in Autonomous Underwater Vehicle (AUV) navigation, which is typically performed using a combination of dead-reckoning and external acoustic positioning systems. Traditional dead-reckoning sensors such as Doppler Velocity Logs (DVLs) or inertial systems are expensive and result in drifting trajectory estimates. Acoustic positioning systems can be used to correct dead-reckoning drift, however they are time consuming to deploy and have a limited range of operation. Occlusion and multipath problems may also occur when a vehicle operates near the seafloor, particularly in environments such as reefs, ridges and canyons, which are the focus of many AUV applications. Vision-based navigation approaches have the potential to improve the availability and performance of AUVs in a wide range of applications. Visual odometry may replace expensive dead-reckoning sensors in small and low-cost vehicles. Using onboard cameras to correct dead-reckoning drift will allow AUVs to navigate accurately over long distances, without the limitations of acoustic positioning systems. This thesis contains three principal contributions. The first is an algorithm to estimate the trajectory of a vehicle by fusing observations from sonar and monocular vision sensors. The second is a stereo-vision motion estimation approach that can be used on its own to provide odometry estimation, or fused with additional sensors in a Simultaneous Localisation And Mapping (SLAM) framework. The third is an efficient SLAM algorithm that uses visual observations to correct drifting trajectory estimates. Results of this work are presented in simulation and using data collected during several deployments of underwater vehicles in coral reef environments. Trajectory estimation is demonstrated for short transects using the sonar and vision fusion and stereo-vision approaches.
Navigation over several kilometres is demonstrated using the SLAM algorithm, where stereo-vision is shown to improve the estimated trajectory produced by a DVL.
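The stereo-vision motion estimation contribution rests on a classical building block: recovering the least-squares rigid transform between two sets of triangulated 3-D feature matches. A minimal sketch using the Horn/Kabsch SVD solution follows (illustrative only; the thesis pipeline additionally handles triangulation, outlier rejection, and sensor fusion):

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid motion (R, t) with B ~= A @ R.T + t, solved
    via SVD (Horn/Kabsch). A and B are (N, 3) arrays of matched 3-D
    points triangulated from successive stereo frames."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

Chaining such frame-to-frame transforms yields the visual odometry that the SLAM layer later corrects.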

    Efficient and Featureless Approaches to Bathymetric Simultaneous Localisation and Mapping

    Get PDF
    This thesis investigates efficient forms of Simultaneous Localization and Mapping (SLAM) that do not require explicit identification, tracking or association of map features. The specific application considered here is subsea robotic bathymetric mapping. In this context, SLAM allows a GPS-denied robot operating near the sea floor to create a self-consistent bathymetric map. This is accomplished using a Rao-Blackwellized Particle Filter (RBPF) whereby each particle maintains a hypothesis of the current vehicle state and map that is efficiently maintained using Distributed Particle Mapping. Through particle weighting and resampling, successive observations of the seafloor structure are used to improve the estimated trajectory and resulting map by enforcing map self-consistency. The main contributions of this thesis are two novel map representations, either of which can be paired with the RBPF to perform SLAM. The first is a grid-based 2D depth map that is efficiently stored by exploiting redundancies between different maps. The second is a trajectory map representation that, instead of directly storing estimates of seabed depth, records the trajectory of each particle and synchronises it to a common log of bathymetric observations. Upon detecting a loop closure, each particle is weighted by matching new observations to the current predictions. For the grid map approach, this is done by extracting the predictions stored in the observed cells. For the trajectory map approach, predictions are instead generated from a local reconstruction of the particle's map using Gaussian Process Regression. While the former allows for faster map access, the latter requires less memory and fully exploits the spatial correlation in the environment, allowing predictions of seabed depth to be generated in areas that were not directly observed previously.
In this case, particle resampling therefore not only enforces self-consistency in overlapping sections of the map but additionally enforces self-consistency between neighboring map borders. Both approaches are validated using multibeam sonar data collected from several missions of varying scale by a variety of different Unmanned Underwater Vehicles. These trials demonstrate how the corrections provided by both approaches improve the trajectory and map when compared to dead reckoning fused with Ultra Short Baseline or Long Baseline observations. Furthermore, results are compared with a pre-existing state-of-the-art bathymetric SLAM technique, confirming that similar results can be achieved at a fraction of the computational cost. Lastly, the added capabilities of the trajectory map are validated using two different bathymetric datasets. These demonstrate how navigation and mapping corrections can still be achieved when only sparse bathymetry is available (e.g. from a four beam Doppler Velocity Log sensor) or in missions where map overlap is minimal or even non-existent.
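The trajectory-map weighting step can be sketched in miniature: reconstruct local depth predictions with Gaussian Process Regression, then weight each particle by the likelihood of new bathymetric observations under its prediction. All function names and kernel parameters below are illustrative assumptions, not the thesis's actual values:

```python
import numpy as np

def gp_predict(X, y, Xq, ls=2.0, noise=1e-4):
    """GP regression with an RBF kernel: predict seabed depth at query
    positions Xq (M, 2) from a particle's logged observations X (N, 2),
    y (N,). A minimal stand-in for the local map reconstruction step."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter for numerical stability
    return k(Xq, X) @ np.linalg.solve(K, y)

def particle_weight(pred, obs, sigma=0.5):
    """Gaussian likelihood of new bathymetric observations under a
    particle's map prediction (loop-closure weighting before resampling)."""
    r = np.asarray(obs) - np.asarray(pred)
    return float(np.exp(-0.5 * np.sum(r**2) / sigma**2))

X = np.array([[0.0, 0], [1, 0], [2, 0], [3, 0]])   # logged positions
y = np.array([0.0, 1.0, 2.0, 3.0])                 # logged depths
mu = gp_predict(X, y, np.array([[2.0, 0.0]]))
print(mu, particle_weight(mu, [2.0]))
```

Because the GP exploits spatial correlation, a particle can still be scored where its trajectory never directly overlapped earlier coverage.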