
    Sky-GVINS: a Sky-segmentation Aided GNSS-Visual-Inertial System for Robust Navigation in Urban Canyons

    Integrating Global Navigation Satellite Systems (GNSS) into Simultaneous Localization and Mapping (SLAM) systems has attracted increasing attention as a route to global and continuous localization. Nonetheless, in dense urban environments, GNSS-based SLAM systems suffer from Non-Line-Of-Sight (NLOS) measurements, which can sharply degrade localization results. In this paper, we propose detecting the sky area in images from an up-looking camera to improve GNSS measurement reliability and thus position estimation. We present Sky-GVINS, a sky-aware GNSS-visual-inertial system built on the recent GVINS framework. Specifically, we adopt a global thresholding method to segment sky and non-sky regions in the sky-pointing fish-eye image, project the satellites into the image using the geometric relationship between the satellites and the camera, and then reject satellites that fall in non-sky regions to eliminate NLOS signals. We investigated various segmentation algorithms for sky detection and found that the Otsu algorithm achieved the highest classification rate and computational efficiency, despite its simplicity and ease of implementation. To evaluate the effectiveness of Sky-GVINS, we built a ground robot and conducted extensive real-world experiments on campus. Experimental results show that our method improves localization accuracy in both open areas and dense urban environments compared to the baseline method. Finally, we conduct a detailed analysis and point out possible directions for future research. For more information, visit our project website at https://github.com/SJTU-ViSYS/Sky-GVINS
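    The NLOS-rejection pipeline described above (Otsu thresholding, satellite projection, sky-mask lookup) is compact enough to sketch. The snippet below is a minimal illustration rather than the authors' implementation: it assumes an 8-bit grayscale fish-eye image, an equidistant camera model with principal point (cx, cy) and focal scale f, and satellite azimuth/elevation already expressed relative to the camera; all function names are mine.

```python
import cv2
import numpy as np

def segment_sky(gray_fisheye):
    """Binary sky mask from an 8-bit grayscale fish-eye image using
    Otsu's global threshold (assumes sky pixels are the brighter class)."""
    _, mask = cv2.threshold(gray_fisheye, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def project_satellite(az, el, cx, cy, f):
    """Project a satellite at azimuth/elevation (radians) into an
    up-looking fish-eye image, equidistant model: r = f * (pi/2 - el)."""
    theta = np.pi / 2.0 - el               # angle off the optical axis
    r = f * theta
    u = cx + r * np.sin(az)
    v = cy - r * np.cos(az)
    return int(round(u)), int(round(v))

def keep_satellite(mask, az, el, cx, cy, f):
    """Keep a satellite only if its projection lands on a sky pixel;
    otherwise its signal is treated as NLOS and rejected."""
    u, v = project_satellite(az, el, cx, cy, f)
    h, w = mask.shape
    return 0 <= u < w and 0 <= v < h and mask[v, u] > 0
```

    Satellites failing the mask test would simply be dropped before their measurements enter the estimator.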

    GNSS/LiDAR-Based Navigation of an Aerial Robot in Sparse Forests

    Autonomous navigation of unmanned vehicles in forests is a challenging task. In such environments, the tree canopies can degrade or even block information from Global Navigation Satellite Systems (GNSS). Moreover, because of the large number of obstacles, a detailed prior map of the environment is impractical. In this paper, we solve the complete navigation problem for an aerial robot in a sparse forest, where there is enough space for flight and GNSS signals can be sporadically detected. For localization, we propose a state estimator that merges information from GNSS, Attitude and Heading Reference Systems (AHRS), and odometry based on Light Detection and Ranging (LiDAR) sensors. In our LiDAR-based odometry solution, tree trunks are used in a feature-based scan-matching algorithm to estimate the relative motion of the vehicle. Our method employs a robust adaptive fusion algorithm based on the unscented Kalman filter. For motion control, we adopt a strategy that integrates a vector field, which imposes the main direction of movement for the robot, with an optimal probabilistic planner responsible for obstacle avoidance. Experiments with a quadrotor equipped with a planar LiDAR in an actual forest environment illustrate the effectiveness of our approach.
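    The motion-control strategy couples a vector field, which imposes the main direction of travel, with a probabilistic planner for obstacle avoidance. As a rough illustration of the vector-field half only, the sketch below (a minimal construction of mine, not the paper's field) steers a 2-D point toward and then along a straight reference line; the convergence gain k and the line endpoints are assumed parameters.

```python
import numpy as np

def line_vector_field(p, a, b, k=0.5):
    """Unit guidance direction for following the line through a -> b.

    Sums a tangential term (progress along the line) and a convergence
    term (motion back toward the line), weighted by the gain k.
    """
    t = (b - a) / np.linalg.norm(b - a)   # unit tangent of the line
    e = (p - a) - np.dot(p - a, t) * t    # perpendicular error vector
    v = t - k * e                         # desired direction of motion
    return v / np.linalg.norm(v)

# Robot at (2, 3) following the segment from (0, 0) to (10, 0):
direction = line_vector_field(np.array([2.0, 3.0]),
                              np.array([0.0, 0.0]),
                              np.array([10.0, 0.0]))
print(direction)  # ~[0.55, -0.83]: forward and back toward the line
```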

    Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV

    We present a modular and extensible approach to integrating noisy measurements from multiple heterogeneous sensors, which yield either absolute or relative observations at different and varying time intervals, to provide smooth and globally consistent position estimates in real time for autonomous flight. We describe the development of the algorithms and software architecture for a new 1.9 kg MAV platform equipped with an IMU, laser scanner, stereo cameras, pressure altimeter, magnetometer, and GPS receiver, with state estimation and control performed onboard on an Intel NUC with a 3rd-generation Core i3 processor. We illustrate the robustness of our framework in large-scale indoor-outdoor autonomous aerial navigation experiments involving traversals of over 440 meters at average speeds of 1.5 m/s, with winds around 10 mph, while entering and exiting buildings.
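    The fusion pattern at the heart of this framework, relative observations that propagate the state and grow its uncertainty versus absolute observations that correct it, each arriving at its own rate, can be shown with a deliberately tiny 1-D filter. This is a toy sketch of the pattern only, not the paper's onboard estimator; the class name and noise values are illustrative.

```python
class Fuser1D:
    """Minimal 1-D fusion of relative odometry increments (propagation)
    with absolute GPS fixes (correction) at different rates."""

    def __init__(self, x0=0.0, p0=1.0):
        self.x = x0   # position estimate
        self.P = p0   # estimate variance

    def propagate_with_odometry(self, dx, odom_var):
        # A relative observation shifts the state and inflates uncertainty.
        self.x += dx
        self.P += odom_var

    def correct_with_gps(self, z, gps_var):
        # An absolute observation pulls the estimate toward the fix
        # and shrinks the variance (standard Kalman update).
        k = self.P / (self.P + gps_var)
        self.x += k * (z - self.x)
        self.P *= 1.0 - k

f = Fuser1D()
for _ in range(10):                       # e.g. 10 odometry steps ...
    f.propagate_with_odometry(0.15, odom_var=0.01)
f.correct_with_gps(1.4, gps_var=0.25)     # ... between two GPS fixes
print(f.x, f.P)                           # estimate snaps toward the fix
```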

    Improving the Robustness of Monocular Vision-Aided Navigation for Multirotors through Integrated Estimation and Guidance

    Multirotors could be used to autonomously perform tasks in search-and-rescue, reconnaissance, or infrastructure-monitoring applications. In these environments, the vehicle may have limited or degraded GPS access. Researchers have investigated methods for simultaneous localization and mapping (SLAM) using onboard vision sensors, allowing vehicles to navigate in GPS-denied environments. In particular, SLAM solutions based on a monocular camera offer low-cost, low-weight, and accurate navigation indoors and outdoors without explicit range limitations. However, a monocular camera is a bearing-only sensor: additional sensors are required to achieve metric pose estimation, and the structure of a scene can only be recovered through camera motion. Because of these challenges, the performance of monocular navigation solutions is typically very sensitive to the environment and the vehicle's trajectory. This work proposes an integrated estimation and guidance approach for improving the robustness of monocular SLAM to environmental uncertainty, intended specifically for a multirotor carrying a monocular camera, a downward-facing rangefinder, and an inertial measurement unit (IMU). A guidance maneuver is proposed that takes advantage of the metric rangefinder measurements: when the environmental uncertainty is high, the vehicle simply moves up and down, initializing features with a confident and accurate baseline. To demonstrate this technique, a vision-aided navigation solution is implemented that includes a unique approach to feature covariance initialization based on consider least squares. Features are initialized only if there is enough information to accurately triangulate their position, providing an indirect metric of environmental uncertainty that could be used to trigger the guidance maneuver. The navigation filter is validated using hardware and simulated data. Finally, simulations show that the proposed initialization maneuver is a simple, practical, and effective way to improve the robustness of monocular vision-aided navigation and could increase the autonomy achievable by GPS-denied multirotors.
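    One concrete way to gate initialization on "enough information to accurately triangulate" is to require a minimum parallax angle between two bearing observations before triangulating. The sketch below pairs such a gate with midpoint triangulation; it is a minimal stand-in of mine, not the paper's consider-least-squares covariance initialization, and the 2-degree threshold is an assumed value.

```python
import numpy as np

def try_initialize(p1, b1, p2, b2, min_parallax_deg=2.0):
    """Midpoint triangulation of a feature seen from camera centers
    p1, p2 along unit bearing vectors b1, b2 (world frame). Returns
    None when the parallax is too small for a well-conditioned depth."""
    cosang = np.clip(np.dot(b1, b2), -1.0, 1.0)
    if np.degrees(np.arccos(cosang)) < min_parallax_deg:
        return None                       # baseline too short: skip
    # Solve p1 + s*b1 ~= p2 + t*b2 for s, t in the least-squares sense.
    A = np.stack([b1, -b2], axis=1)       # 3x2 system matrix
    st, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    x1 = p1 + st[0] * b1                  # closest point on ray 1
    x2 = p2 + st[1] * b2                  # closest point on ray 2
    return 0.5 * (x1 + x2)                # midpoint estimate
```

    Under a gate like this, the proposed up-and-down maneuver supplies a rangefinder-measured vertical baseline, after which most tracked features clear the parallax threshold.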

    Design and Modeling of Smartphone Controlled Vehicle

    Hybrid aerial vehicles, which integrate two or more operating configurations, have become increasingly widespread because of their expanded flight range and adaptability. Whenever two or more flight modes are present, the transition phases between them are critically important. While many researchers have studied the transitions of the more popular hybrid configurations, in this paper we explore a novel multi-mode hybrid Unmanned Aerial Vehicle (UAV). To fully exploit the vehicle's propulsion equipment and aerodynamic surfaces in both a horizontal cruising configuration and a vertical hovering configuration, we combine a tailless fixed wing with a four-wing monocopter. Sharing the structure across the whole operational range lowers drag and parasitic mass in both flight modes. The transformation between the two flight states can be carried out in midair using only the vehicle's existing flight actuators and sensors. Through a ground controller, the vehicle can be operated from an Android device.