
    Discussion on event-based cameras for dynamic obstacles recognition and detection for UAVs in outdoor environments

    To safely navigate and avoid obstacles in a complex dynamic environment, autonomous drones need a reaction time of less than 10 milliseconds. Event-based cameras have therefore become increasingly widespread in academic research on dynamic obstacle detection and avoidance for UAVs, as they outperform their frame-based counterparts in terms of latency. Several publications have shown significant results using these sensors; however, most of the experiments relied on indoor data. After a short introduction explaining the features of an event-based camera and how it differs from a traditional RGB camera, this work explores the limits of state-of-the-art event-based algorithms for obstacle recognition and detection by extending their results from indoor experiments to real-world outdoor experiments. Indeed, this paper shows that event-based recognition algorithms become inaccurate due to the insufficient number of events generated, and that event-based obstacle detection algorithms become inefficient due to the high ratio of noise
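    A minimal sketch (not from the paper; the sensor resolution, window length, and field layout are assumptions) of what an event stream looks like in practice: each event is a tuple (x, y, timestamp, polarity), and a common preprocessing step is to accumulate events over a few milliseconds into a count image that a detector can operate on. When the scene generates too few events, the image stays nearly empty, which is the recognition failure mode described above.

```python
import numpy as np

WIDTH, HEIGHT = 346, 260          # assumed sensor resolution (illustrative)
WINDOW_S = 0.005                  # 5 ms accumulation window, within the ~10 ms budget

def accumulate_events(events, t_start, window=WINDOW_S, shape=(HEIGHT, WIDTH)):
    """Build a signed event-count image from events inside [t_start, t_start + window).

    `events` is an (N, 4) array of rows (x, y, t, polarity) with polarity in {-1, +1}.
    """
    x, y = events[:, 0].astype(int), events[:, 1].astype(int)
    t, p = events[:, 2], events[:, 3]
    mask = (t >= t_start) & (t < t_start + window)
    img = np.zeros(shape, dtype=np.float32)
    # np.add.at handles repeated pixel coordinates correctly
    np.add.at(img, (y[mask], x[mask]), p[mask])
    return img

# Usage with synthetic events: a sparse scene yields a nearly empty count image.
rng = np.random.default_rng(0)
fake_events = np.column_stack([
    rng.integers(0, WIDTH, 1000), rng.integers(0, HEIGHT, 1000),
    rng.uniform(0.0, 0.01, 1000), rng.choice([-1, 1], 1000),
])
frame = accumulate_events(fake_events, t_start=0.0)
print(frame.shape, np.count_nonzero(frame))
```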

    Reliable Navigation for SUAS in Complex Indoor Environments

    Indoor environments pose a particular challenge for Unmanned Aerial Vehicles (UAVs). Effective navigation through these GPS-denied environments requires alternative localization systems, as well as methods of sensing and avoiding obstacles while remaining on-task. Additionally, the relatively small clearances and human presence characteristic of indoor spaces necessitate a higher level of precision and adaptability than is common in traditional UAV flight planning and execution. This research blends the optimization of individual technologies, such as state estimation and environmental sensing, with system integration and high-level operational planning. The combination of AprilTag visual markers, multi-camera Visual Odometry, and IMU data can be used to create a robust state estimator that describes the position, velocity, and rotation of a multicopter within an indoor environment. However, these data sources have unique, nonlinear characteristics that should be understood to use them effectively in an automated environment. The research described herein begins by analyzing the unique characteristics of these data streams in order to create a highly accurate, fault-tolerant state estimator. Upon this foundation, the system built, tested, and described herein uses visual markers as navigation anchors and visual odometry for motion estimation and control, and then uses depth sensors to maintain an up-to-date map of the UAV's immediate surroundings. It develops and continually refines navigable routes through a novel combination of pre-defined and sensory environmental data. Emphasis is placed on the real-world development and testing of the system, through discussion of computational resource management and risk reduction
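    A minimal sketch (an assumption about the general technique, not the thesis's actual estimator; all noise values are illustrative) of the kind of loosely-coupled fusion the abstract describes: a constant-velocity Kalman filter per axis, predicted with IMU acceleration and corrected by absolute position fixes from AprilTag detections or noisier visual-odometry updates.

```python
import numpy as np

class AxisKF:
    """One-axis constant-velocity Kalman filter fusing IMU, AprilTag, and VO fixes."""

    def __init__(self, q_accel=0.5, r_tag=0.02, r_vo=0.10):
        self.x = np.zeros(2)                  # state: [position, velocity]
        self.P = np.eye(2)                    # state covariance
        self.q_accel, self.r_tag, self.r_vo = q_accel, r_tag, r_vo

    def predict(self, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel       # IMU acceleration drives the prediction
        Q = self.q_accel * np.outer(B, B)     # process noise from accelerometer noise
        self.P = F @ self.P @ F.T + Q

    def update_position(self, z, r):
        H = np.array([[1.0, 0.0]])            # position-only measurement
        S = H @ self.P @ H.T + r
        K = (self.P @ H.T) / S
        self.x = self.x + (K * (z - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

# Usage: predict at IMU rate, correct with a low-noise tag fix and a noisier VO fix.
kf = AxisKF()
kf.predict(accel=0.1, dt=0.01)
kf.update_position(z=0.05, r=kf.r_tag)
kf.update_position(z=0.07, r=kf.r_vo)
print(kf.x)
```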

    From Monocular SLAM to Autonomous Drone Exploration

    Micro aerial vehicles (MAVs) are strongly limited in their payload and power capacity. To implement autonomous navigation, algorithms that rely on sensing equipment which is as small, lightweight, and power-efficient as possible are therefore desirable. In this paper, we propose a method for autonomous MAV navigation and exploration using a low-cost consumer-grade quadrocopter equipped with a monocular camera. Our vision-based navigation system builds on LSD-SLAM which estimates the MAV trajectory and a semi-dense reconstruction of the environment in real-time. Since LSD-SLAM only determines depth at high gradient pixels, texture-less areas are not directly observed, so previous exploration methods that assume dense map information cannot directly be applied. We propose an obstacle mapping and exploration approach that takes the properties of our semi-dense monocular SLAM system into account. In experiments, we demonstrate our vision-based autonomous navigation and exploration system with a Parrot Bebop MAV
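    A minimal sketch (my illustration of the underlying issue, not the paper's mapper; grid size and resolution are assumptions) of why semi-dense depth complicates exploration: only high-gradient pixels carry depth, so grid cells never hit by a reconstructed point must stay "unknown" rather than be treated as free space.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1
RES = 0.1                                      # metres per cell (illustrative)
GRID = np.full((200, 200), UNKNOWN, dtype=np.int8)
ORIGIN = np.array([100, 100])                  # grid cell of the world origin

def insert_points(grid, points_xy, sensor_xy):
    """Mark cells containing semi-dense points as occupied and ray-trace free space."""
    for p in points_xy:
        cell = (p / RES).astype(int) + ORIGIN
        # coarse ray-trace of free cells between the sensor and the point
        for s in np.linspace(0.0, 1.0, 20)[:-1]:
            c = ((sensor_xy + s * (p - sensor_xy)) / RES).astype(int) + ORIGIN
            if grid[c[1], c[0]] == UNKNOWN:
                grid[c[1], c[0]] = FREE
        grid[cell[1], cell[0]] = OCCUPIED
    return grid

# Texture-less regions generate no points, so their cells remain UNKNOWN and an
# exploration planner must treat them as frontiers rather than free space.
pts = np.array([[1.0, 0.5], [1.2, 0.4], [1.1, -0.3]])   # sparse high-gradient points
GRID = insert_points(GRID, pts, sensor_xy=np.array([0.0, 0.0]))
print((GRID == UNKNOWN).mean())
```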

    Depth-Camera-Aided Inertial Navigation Utilizing Directional Constraints.

    This paper presents a practical yet effective solution for integrating an RGB-D camera and an inertial sensor to handle the depth dropouts that frequently happen in outdoor environments due to the short detection range and sunlight interference. In depth-dropout conditions, only partial 5-degrees-of-freedom pose information (attitude and position with an unknown scale) is available from the RGB-D sensor. To enable continuous fusion with the inertial solutions, the scale-ambiguous position is cast into a directional constraint on the vehicle motion, which is, in essence, an epipolar constraint in multi-view geometry. Unlike other visual navigation approaches, this can effectively reduce the drift in the inertial solutions without delay or under small parallax motion. If a depth image is available, a window-based feature map is maintained to compute the RGB-D odometry, which is then fused with inertial outputs in an extended Kalman filter framework. Flight results from indoor and outdoor environments, as well as public datasets, demonstrate the improved navigation performance of the proposed approach
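    A minimal sketch (my interpretation of the stated idea, not the paper's filter; variable names are assumptions) of the directional constraint: with depth dropped out, the camera yields only a unit direction d of the translation between frames, so a residual can penalize the component of the INS-predicted displacement that is perpendicular to d.

```python
import numpy as np

def directional_residual(delta_p_ins, d_cam):
    """Return the part of the INS displacement orthogonal to the camera direction.

    delta_p_ins : predicted displacement between the two camera frames, shape (3,)
    d_cam       : unit direction of translation recovered from the images, shape (3,)
    """
    d = d_cam / np.linalg.norm(d_cam)
    # Projecting onto d removes the (unknown-scale) component along the direction;
    # what remains is an epipolar-style error usable as a filter measurement residual.
    return delta_p_ins - np.dot(delta_p_ins, d) * d

dp = np.array([0.52, 0.11, -0.02])            # INS-predicted motion (illustrative)
d = np.array([0.98, 0.17, 0.0])               # scale-free direction from the camera
print(directional_residual(dp, d))            # near zero when the two agree
```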

    Autonomous 3D mapping and surveillance of mines with MAVs

    A dissertation submitted to the Faculty of Science, University of the Witwatersrand, Johannesburg, for the degree of Master of Science. 12 July 2017. The mapping of mines, both operational and abandoned, is a long, difficult, and occasionally dangerous task, especially in the latter case. Recent developments in active and passive consumer-grade sensors, as well as quadcopter drones, present the opportunity to automate these challenging tasks, providing cost and safety benefits. The goal of this research is to develop an autonomous vision-based mapping system that employs quadrotor drones to explore and map sections of mine tunnels. The system is equipped with inexpensive structured-light depth cameras in place of traditional laser scanners, making the quadrotor setup more viable to produce in bulk. A modified version of Microsoft's Kinect Fusion algorithm is used to construct 3D point clouds in real time as the agents traverse the scene. Finally, the generated and merged point clouds from the system are compared with those produced by current LiDAR scanners.
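    A minimal sketch (an assumption about how such a comparison could be done, not the dissertation's evaluation method) of comparing a merged depth-camera reconstruction against a reference LiDAR scan: the mean nearest-neighbour distance from each reconstructed point to the reference cloud is a simple cloud-to-cloud error metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_error(reconstruction, reference):
    """Mean distance from each reconstructed point to its nearest reference point."""
    tree = cKDTree(reference)
    distances, _ = tree.query(reconstruction)
    return distances.mean()

# Usage with synthetic clouds standing in for the LiDAR scan and the reconstruction.
rng = np.random.default_rng(1)
lidar = rng.uniform(0.0, 5.0, size=(5000, 3))
kinect = lidar[:2000] + rng.normal(0.0, 0.02, size=(2000, 3))
print(f"mean cloud-to-cloud error: {cloud_to_cloud_error(kinect, lidar):.3f} m")
```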