83 research outputs found

    AUV SLAM and experiments using a mechanical scanning forward-looking sonar

    Navigation is one of the most important challenges for autonomous underwater vehicles (AUVs) operating in complex undersea environments. The ability to localize a robot and accurately map its surroundings at the same time, namely the simultaneous localization and mapping (SLAM) problem, is a key prerequisite for truly autonomous robots. In this paper, a modified-FastSLAM algorithm is proposed and used for the navigation of our C-Ranger research platform, an open-frame AUV. A mechanical scanning imaging sonar is chosen as the active sensor for the AUV. The modified-FastSLAM performs its update relying on the on-board sensors of the C-Ranger. The algorithm also employs a data association scheme that combines the single-particle maximum-likelihood method with a modified negative-evidence method, and uses rank-based resampling to overcome the particle-depletion problem. To verify the feasibility of the proposed methods, both simulation experiments and sea trials of the C-Ranger are conducted. The experimental results show that the modified-FastSLAM employed for the navigation of the C-Ranger AUV is considerably more effective and accurate than the traditional methods.
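    The abstract mentions rank-based resampling as a remedy for particle depletion. The following minimal numpy sketch illustrates that idea by resampling in proportion to weight rank rather than raw weight; the linear ranking scheme and the particle representation are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def rank_based_resample(particles, weights, rng=None):
    """Resample particles using rank-based selection probabilities.

    Instead of drawing particles in proportion to their raw (often highly
    skewed) importance weights, particles are sorted by weight and drawn in
    proportion to a linear ranking, which keeps low-weight particles alive
    longer and mitigates particle depletion.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    order = np.argsort(weights)            # ascending: lowest weight first
    ranks = np.empty(n)
    ranks[order] = np.arange(1, n + 1)     # rank n = best particle
    probs = ranks / ranks.sum()            # linear ranking probabilities
    idx = rng.choice(n, size=n, replace=True, p=probs)
    return [particles[i] for i in idx]

# toy usage: 5 particles with very skewed weights
particles = [{"pose": np.zeros(3), "map": {}} for _ in range(5)]
weights = np.array([1e-6, 1e-5, 1e-4, 0.1, 0.9])
resampled = rank_based_resample(particles, weights)
```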

    Learning to Fly by Crashing

    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with so little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself. We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data, in conjunction with positive data sampled from the same trajectories, to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective at navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For a supplementary video see: https://youtu.be/u151hJaGKU
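    The policy described in the abstract boils down to classifying image regions as flyable or not. The PyTorch sketch below shows how such a binary crash classifier could drive a simple steering rule; the network architecture, input size, and left/center/right cropping scheme are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class CrashNet(nn.Module):
    """Tiny CNN that predicts the probability that flying toward an
    image crop leads to a crash (1 = crash, 0 = free space)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))

def steering_policy(model, image):
    """Split the camera image into left/center/right crops and steer
    toward the crop with the lowest predicted crash probability."""
    _, _, h, w = image.shape
    crops = [image[:, :, :, :w // 3],
             image[:, :, :, w // 3:2 * w // 3],
             image[:, :, :, 2 * w // 3:]]
    with torch.no_grad():
        probs = [model(c).item() for c in crops]
    return ["left", "straight", "right"][int(torch.tensor(probs).argmin())]

# toy usage with a random frame
model = CrashNet()
frame = torch.rand(1, 3, 96, 128)
print(steering_policy(model, frame))
```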

    Autonomous Navigation for Autonomous Underwater Vehicles Based on Information Filters and Active Sensing

    This paper addresses an autonomous navigation method for the autonomous underwater vehicle (AUV) C-Ranger applying information-filter-based simultaneous localization and mapping (SLAM), and its sea-trial experiments in Tuandao Bay (Shandong Province, P.R. China). Weak links in the information matrix of an extended information filter (EIF) can be pruned to obtain an efficient approach, the sparse EIF SLAM algorithm (SEIF-SLAM). All the basic update formulae can be implemented in constant time irrespective of the size of the map; hence the computational complexity is significantly reduced. A mechanical scanning imaging sonar is chosen as the active sensing device for the underwater vehicle, and a compensation method based on feedback of the AUV pose is presented to overcome distortion of the acoustic images due to vehicle motion. To verify the feasibility of the navigation methods proposed for the C-Ranger, a sea trial was conducted in Tuandao Bay. Experimental results and analysis show that the proposed navigation approach based on SEIF-SLAM improves navigation accuracy compared with the conventional method; moreover, the algorithm has a low computational cost compared with EKF-SLAM.
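    A minimal numpy sketch of the link-pruning idea behind sparse information filters: weak off-diagonal entries of the information matrix are dropped so the matrix stays sparse. The threshold rule and normalization here are assumptions for illustration, not the SEIF-SLAM sparsification used in the paper.

```python
import numpy as np

def prune_weak_links(info_matrix, threshold=1e-3):
    """Zero out weak off-diagonal entries of an information matrix.

    In information-form SLAM the off-diagonal blocks encode links between
    the robot pose and landmarks (or between landmarks). Dropping links
    whose normalized strength falls below a threshold keeps the matrix
    sparse, which is what enables near constant-time updates.
    """
    L = info_matrix.copy()
    d = np.sqrt(np.abs(np.diag(L)))
    # normalized link strength, similar to a correlation coefficient
    strength = np.abs(L) / np.outer(d, d)
    mask = strength >= threshold
    np.fill_diagonal(mask, True)          # never remove diagonal entries
    return L * mask

# toy 4x4 information matrix with one very weak link
A = np.array([[4.0,    0.5,    0.0001, 0.0],
              [0.5,    3.0,    0.2,    0.0],
              [0.0001, 0.2,    2.0,    0.1],
              [0.0,    0.0,    0.1,    1.0]])
print(prune_weak_links(A))
```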

    Autonomous navigation with constrained consistency for C-Ranger

    Autonomous underwater vehicles (AUVs) have become the most widely used tools for undertaking complex exploration tasks in marine environments. The ability to localize autonomously while concurrently building a map of the environment, in other words simultaneous localization and mapping (SLAM), is considered a pivotal requirement for truly autonomous navigation. However, the consistency of the SLAM system has been largely ignored over the past decades. In this paper, a consistency-constrained extended Kalman filter (EKF) SLAM algorithm, applying the idea of local consistency, is proposed and applied to the autonomous navigation of the C-Ranger AUV, which was developed as our experimental platform. The concept of local consistency (LC) is introduced after an explicit theoretical derivation of the EKF-SLAM system. We then present a locally consistency-constrained EKF-SLAM design, LC-EKF, in which the landmark estimates used for linearization are fixed at the beginning of each local time period, rather than evaluated at the latest landmark estimates. Finally, the proposed LC-EKF algorithm is verified experimentally, both in simulations and in sea trials. The experimental results show that the LC-EKF performs well with regard to consistency, accuracy and computational efficiency.
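    The key idea is that the measurement Jacobian is evaluated at a landmark estimate frozen at the start of each local period rather than at the latest estimate. The sketch below, using a hypothetical 2D range-bearing model, only illustrates that choice of linearization point and is not the paper's full LC-EKF.

```python
import numpy as np

def range_bearing_jacobian_wrt_landmark(robot_xy, landmark_xy):
    """Jacobian of a 2D range-bearing measurement with respect to the
    landmark position, evaluated at the given linearization point."""
    dx, dy = landmark_xy - robot_xy
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    return np.array([[dx / r, dy / r],
                     [-dy / q, dx / q]])

# Standard EKF-SLAM linearizes at the *latest* landmark estimate:
robot = np.array([0.0, 0.0])
landmark_latest = np.array([5.1, 2.2])
H_standard = range_bearing_jacobian_wrt_landmark(robot, landmark_latest)

# The LC idea: within a local time period, keep using the landmark estimate
# stored when the period started, so repeated relinearization does not
# inject spurious information and degrade consistency.
landmark_frozen = np.array([5.0, 2.0])   # estimate saved at period start
H_local = range_bearing_jacobian_wrt_landmark(robot, landmark_frozen)
```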

    Trajectory planning for multiple robots in bearing-only target localisation

    This paper provides a solution to the optimal trajectory planning problem in target localisation for multiple heterogeneous robots with bearing-only sensors. The objective is to find robot trajectories that maximise the accuracy of the target location estimates at a prescribed terminal time. The trajectory planning is formulated as an optimal control problem for a nonlinear system with a gradually identified model and then solved using nonlinear Model Predictive Control (MPC). The solution to the MPC optimisation problem is computed through Exhaustive Expansion Tree Search (EETS) combined with Sequential Quadratic Programming (SQP). Simulations were conducted using the proposed methods. Results show that EETS alone runs considerably faster than EETS+SQP with only minor differences in information gain, and that a centralised approach outperforms a decentralised one in terms of information gain. We show that centralised EETS provides a near-optimal solution, and we demonstrate the significance of using an information matrix to represent the information gathered.
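    The sketch below conveys the flavour of an exhaustive expansion tree search: candidate heading sequences are scored by the log-determinant of the accumulated Fisher information of a bearing-only measurement of a single target. The motion model, noise level, and scoring are assumptions for illustration, not the paper's formulation.

```python
import itertools
import numpy as np

def bearing_fim(robot_xy, target_xy, sigma=0.05):
    """Fisher information contributed by one bearing measurement of a
    static target, as a 2x2 matrix in target coordinates."""
    dx, dy = target_xy - robot_xy
    q = dx * dx + dy * dy
    # gradient of atan2(dy, dx) with respect to the target position
    g = np.array([-dy / q, dx / q])
    return np.outer(g, g) / sigma**2

def eets_plan(robot_xy, target_xy, horizon=3, step=1.0):
    """Exhaustively expand all heading sequences over the horizon and pick
    the one maximising the log-det of the accumulated information."""
    headings = np.deg2rad([0, 45, 90, 135, 180, 225, 270, 315])
    best_score, best_seq = -np.inf, None
    for seq in itertools.product(headings, repeat=horizon):
        pos, info = robot_xy.copy(), np.eye(2) * 1e-6   # small prior
        for h in seq:
            pos = pos + step * np.array([np.cos(h), np.sin(h)])
            info = info + bearing_fim(pos, target_xy)
        score = np.linalg.slogdet(info)[1]
        if score > best_score:
            best_score, best_seq = score, seq
    return np.rad2deg(best_seq), best_score

headings, score = eets_plan(np.array([0.0, 0.0]), np.array([5.0, 3.0]))
print(headings, score)
```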

    Location utility-based map reduction

    Maps used for navigation often include a database of location descriptions for place recognition (loop closing), which permits bounded-error performance. A standard pose-graph SLAM system adds a new entry to the location database for every new pose, so the database grows linearly and without bound over time and eventually becomes unsustainable. To address this issue, in this paper we propose a new map-reduction approach that pre-constructs a fixed-size place-recognition database amenable to the limited storage and processing resources of the vehicle by exploiting the high-level structure of the environment as well as the vehicle motion. In particular, we introduce the concept of location utility, which encapsulates the visitation probability of a location and its spatial distribution relative to nearby locations in the database, as a measure of the value of potential loop-closure events at that location. While finding the optimal reduced location database is NP-hard, we develop an efficient greedy algorithm to sort all the locations in a map by their relative utility without access to sensor measurements or the vehicle trajectory. This enables pre-determination of a generic, limited-size place-recognition database containing the N best locations in the environment. To validate the proposed approach, we develop an open-source street-map simulator using real city-map data and show that an accurate map (pose-graph) can be attained even when using a place-recognition database with only 1% of the entries of the corresponding full database.
    Sponsorship: Charles Stark Draper Laboratory (Fellowship)
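    A toy numpy sketch in the spirit of the location-utility idea: locations are greedily selected by visitation probability weighted by spacing from already-selected entries. The specific utility surrogate used here is an assumption for illustration, not the paper's definition.

```python
import numpy as np

def greedy_reduce(locations, visit_prob, n_keep):
    """Greedily pick n_keep locations that are likely to be revisited and
    well spread out, as a stand-in for a location-utility ranking.

    locations  : (N, 2) array of 2D positions
    visit_prob : (N,) array of visitation probabilities
    """
    selected = []
    remaining = list(range(len(locations)))
    while remaining and len(selected) < n_keep:
        best, best_score = None, -np.inf
        for i in remaining:
            if selected:
                spread = min(np.linalg.norm(locations[i] - locations[j])
                             for j in selected)
            else:
                spread = 1.0
            score = visit_prob[i] * spread   # assumed utility surrogate
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
locs = rng.uniform(0, 100, size=(50, 2))
probs = rng.uniform(0, 1, size=50)
print(greedy_reduce(locs, probs, n_keep=5))
```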

    Hybrid Contact Preintegration for Visual-Inertial-Contact State Estimation Using Factor Graphs

    The factor graph framework is a convenient modeling technique for robotic state estimation, where states are represented as nodes and measurements are modeled as factors. When designing a sensor-fusion framework for legged robots, one often has access to visual, inertial, joint-encoder, and contact sensors. While visual-inertial odometry has been studied extensively in this framework, the addition of a preintegrated contact factor for legged robots has only recently been proposed. This allowed encoder and contact measurements to be integrated into existing factor graphs; however, new nodes had to be added to the graph every time contact was made or broken. In this work, to cope with the problem of switching contact frames, we propose a hybrid contact preintegration theory that allows contact information to be integrated through an arbitrary number of contact switches. The proposed hybrid modeling approach reduces the number of variables in the nonlinear optimization problem by requiring new states to be added only alongside camera or selected keyframes. The method is evaluated using real experimental data collected from a Cassie-series robot, where the trajectory produced by a motion capture system is used as a proxy for ground truth. The evaluation shows that including the proposed preintegrated hybrid contact factor alongside a visual-inertial navigation system improves estimation accuracy as well as robustness to vision failure, while its generalization makes it more accessible for legged platforms.
    Comment: Detailed derivations are provided in the supplementary material document listed under "Ancillary files".
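    As a rough illustration of the preintegration idea, many high-rate relative-motion increments are composed into a single factor between two keyframes, regardless of how many contact switches occur in between. The planar SE(2) sketch below uses an assumed pose representation and toy increments; it is not the paper's Lie-group formulation of hybrid contact preintegration.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def preintegrate(increments):
    """Compose a list of high-rate relative-pose increments into a single
    relative transform. In a factor graph this composed transform becomes
    one factor between two keyframe states, so no intermediate nodes are
    needed even if the contact frame switches several times in between."""
    delta = np.eye(3)
    for (dx, dy, dth) in increments:
        delta = delta @ se2(dx, dy, dth)
    return delta

# toy stream of kinematic increments spanning two contact switches
increments = [(0.02, 0.0, 0.01)] * 50 + [(0.015, 0.001, -0.005)] * 50
print(preintegrate(increments))
```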