    Feature coordinate estimation by time differenced vision/GPS/INS

    This paper proposes a multi-baseline method that estimates the absolute coordinates of point clouds and camera attitude parameters from feature points in successive images. Conventionally, 3D map generation has been based on images acquired by aerial or land vehicles: pixels corresponding to known landmarks are first identified manually, and coordinates are then geo-referenced and allocated to pixels automatically using high-quality Global Positioning System (GPS) and Inertial Navigation System (INS) data. With low-cost GPS and INS, however, it is difficult to obtain accurate coordinates by the conventional methodology. Given camera positions and attitude parameters, image-based point clouds can be compensated accurately. A simulation was carried out to evaluate the performance of the proposed method.
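    The geometric core of such a pipeline is multi-view triangulation: with camera poses supplied by GPS/INS, each image observation of a feature adds linear constraints on its 3D position, and multiple baselines make the solution well conditioned. The sketch below is a minimal linear (DLT-style) multi-baseline triangulator in Python; it illustrates the general technique, not the paper's exact method, and the toy intrinsics and poses in the demo are assumptions.

```python
import numpy as np

def triangulate_point(projections, pixels):
    """Linear multi-view (multi-baseline) triangulation.

    projections: 3x4 camera projection matrices, one per view (from GPS/INS poses).
    pixels: matching (u, v) observations of the same feature.
    Returns the feature's 3D coordinates in the world frame.
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view gives two linear constraints on the homogeneous point X:
        #   u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # The solution is the right singular vector of A with smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    K = np.eye(3)  # toy intrinsics (assumption for illustration)
    # Two poses one unit apart along x, as if from successive GPS/INS fixes.
    P1 = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [0.0]])])
    P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    X_true = np.array([0.5, 0.2, 4.0])
    obs = []
    for P in (P1, P2):
        p = P @ np.append(X_true, 1.0)
        obs.append((p[0] / p[2], p[1] / p[2]))
    print(triangulate_point([P1, P2], obs))  # ~ [0.5  0.2  4.0]
```

    With noisy pixels and more than two views, the same least-squares system simply gains rows, which is where the multi-baseline redundancy pays off.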

    GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests

    Autonomous navigation of unmanned vehicles in forests is a challenging task. In such environments, the tree canopies can degrade or even block information from Global Navigation Satellite Systems (GNSS). Because of the large number of obstacles, a detailed prior map of the environment is also impractical. In this paper, we solve the complete navigation problem of an aerial robot in a sparse forest, where there is enough space for flight and GNSS signals can be detected sporadically. For localization, we propose a state estimator that merges information from GNSS, an Attitude and Heading Reference System (AHRS), and odometry based on Light Detection and Ranging (LiDAR) sensors. In our LiDAR-based odometry solution, tree trunks are used in a feature-based scan matching algorithm to estimate the relative motion of the vehicle. Our method employs a robust adaptive fusion algorithm based on the unscented Kalman filter. For motion control, we adopt a strategy that integrates a vector field, which imposes the main direction of motion for the robot, with an optimal probabilistic planner responsible for obstacle avoidance. Experiments with a quadrotor equipped with a planar LiDAR in an actual forest environment are used to illustrate the effectiveness of our approach.
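    The relative-motion step of a trunk-based scan matcher reduces to estimating a rigid 2D transform between matched trunk centers from consecutive scans. The Python sketch below shows that least-squares alignment (the standard Kabsch/Umeyama procedure); it is a minimal illustration of the scan-matching core, not the authors' implementation, and the trunk correspondences are assumed to be given.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares rigid 2D transform between matched point sets.

    src, dst: (N, 2) arrays of matched tree-trunk centers from scans k and k+1.
    Returns (R, t) such that dst ~ src @ R.T + t, i.e. the relative vehicle motion.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

    In a full pipeline, the resulting (R, t) increments would feed the adaptive unscented Kalman filter as the LiDAR odometry measurement, alongside GNSS fixes whenever they get through the canopy.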

    Long Distance GNSS-Denied Visual Inertial Navigation for Autonomous Fixed Wing Unmanned Air Vehicles: SO(3) Manifold Filter based on Virtual Vision Sensor

    This article proposes a visual inertial navigation algorithm intended to diminish the horizontal position drift experienced by autonomous fixed wing UAVs (Unmanned Air Vehicles) in the absence of GNSS (Global Navigation Satellite System) signals. In addition to accelerometers, gyroscopes, and magnetometers, the proposed navigation filter relies on the accurate incremental displacement outputs generated by a VO (Visual Odometry) system, denoted here as a Virtual Vision Sensor or VVS, which processes images of the Earth's surface taken by an onboard camera and is itself assisted by the filter's inertial estimations. Although not a full replacement for a GNSS receiver, since its position observations are relative instead of absolute, the proposed system enables major reductions in GNSS-denied attitude and position estimation errors. To minimize the accumulation of errors in the absence of absolute observations, the filter is implemented on the manifold of rigid body rotations, SO(3). Stochastic high-fidelity simulations of two representative scenarios involving the loss of GNSS signals are employed to evaluate the results. The authors release the C++ implementation of both the visual inertial navigation filter and the high-fidelity simulation as open-source software.
    Comment: 27 pages, 14 figures. arXiv admin note: substantial text overlap with arXiv:2205.1324
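    Filtering on SO(3) means attitude is propagated by group composition with the exponential map rather than by adding Euler-angle increments, which keeps the estimate a valid rotation and avoids singularities. The Python sketch below shows that mechanism in isolation; it is a generic illustration of manifold attitude propagation, not the article's filter (whose C++ source the authors release).

```python
import numpy as np

def skew(w):
    """Map a 3-vector to its skew-symmetric matrix (an element of so(3))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)  # first-order expansion near the identity
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate_attitude(R, gyro, dt):
    """Multiplicative attitude propagation with a body-frame gyro reading.

    The state lives on the SO(3) manifold, so integration is a group
    composition (matrix product), never a vector addition of angles.
    """
    return R @ so3_exp(gyro * dt)
```

    An error-state filter built this way keeps a small rotation-vector error in the tangent space and retracts it onto the manifold after each update, one standard way to limit attitude error growth during long GNSS-denied stretches.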

    A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors

    State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control, and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must handle aggressive 6-DOF (Degree of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System) or even GPS-denied situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric, and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator, derived and discussed in detail, which loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry). The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation, which uses both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes EKF information (the IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the state estimation update. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive, intermittent-GPS, and high-altitude MAV flight.
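    Stochastic cloning is what lets a relative measurement such as a visual-odometry displacement be fused consistently: the state is augmented with a copy ("clone") of the pose at the last keyframe, so the covariance carries the correlation between the two poses that the relative measurement connects. The Python toy below shows the mechanism on a one-dimensional position state; it is a hedged sketch of the technique under simplifying assumptions, not the paper's estimator.

```python
import numpy as np

class StochasticCloningEKF:
    """1-D toy showing stochastic cloning for relative measurements."""

    def __init__(self, x0, p0):
        self.x = np.array([x0, x0])        # [current pose, cloned keyframe pose]
        self.P = np.full((2, 2), p0)       # clone starts fully correlated

    def clone(self):
        """Copy the current pose into the clone slot at a new keyframe."""
        J = np.array([[1.0, 0.0],
                      [1.0, 0.0]])
        self.x = J @ self.x
        self.P = J @ self.P @ J.T

    def predict(self, u, q):
        """Propagate the current pose (e.g. IMU integral); the clone is
        static, so process noise q grows only the current-pose variance."""
        self.x[0] += u
        self.P[0, 0] += q

    def update_relative(self, z, r):
        """Fuse a relative measurement z ~ current - clone (visual odometry)."""
        H = np.array([[1.0, -1.0]])
        S = H @ self.P @ H.T + r           # innovation variance (1x1)
        K = self.P @ H.T / S               # Kalman gain (2x1)
        y = z - (H @ self.x)[0]            # innovation
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```

    A typical cycle would call clone() at a stereo keyframe, predict() at IMU rate, and update_relative() when the visual-odometry displacement for that keyframe pair arrives; absolute measurements (GPS, barometer) would update the current pose directly.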