
    Quadrotor UAV indoor localization using embedded stereo camera

    Localization of small-size Unmanned Air Vehicles (UAVs), such as quadrotors, in Global Positioning System (GPS)-denied environments such as indoors has been performed using various techniques. Most indoor experiments that require UAV localization have used cameras or ultrasonic sensors installed in the environment, or have modified the indoor space by patching infrared (IR) and visual markers. While these systems achieve high localization accuracy, they are expensive and less practical in real situations. In this paper, a system consisting of a stereo camera embedded on a quadrotor UAV (QUAV) is proposed for indoor localization. The optical flow data from the stereo camera are fused with attitude and acceleration data from the onboard sensors to obtain a better estimate of the quadrotor's position. The quadrotor altitude is estimated using Scale Invariant Feature Transform (SIFT) feature stereo matching, in addition to the altitude computed from optical flow. To avoid latency due to computation time, image processing and quadrotor control are run in separate threads with dedicated core allocation. The altitude estimation of our QUAV outperforms that of single-camera QUAVs thanks to stereo triangulation, which in turn improves the x-y position estimate when it is fused with the optical flow.
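    A minimal sketch of the kind of fusion described above, assuming a downward-facing stereo rig with known focal length and baseline; the function names, the complementary-filter weighting, and the numbers in the example are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def stereo_altitude(disparity_px, focal_px, baseline_m):
    # Stereo triangulation for a downward-facing rig: Z = f * B / d
    return focal_px * baseline_m / disparity_px

def flow_to_velocity(flow_px_per_s, altitude_m, focal_px):
    # Scale image-plane optical flow (px/s) to metric x-y velocity using the altitude estimate
    return np.asarray(flow_px_per_s) * altitude_m / focal_px

def complementary_fuse(v_flow, v_imu, alpha=0.9):
    # Blend flow-derived velocity with an IMU-derived velocity; alpha weights the flow term
    return alpha * np.asarray(v_flow) + (1.0 - alpha) * np.asarray(v_imu)

# Hypothetical values: 12 px disparity, 525 px focal length, 10 cm baseline
z = stereo_altitude(12.0, 525.0, 0.10)            # ~4.4 m
v_xy = flow_to_velocity([30.0, -5.0], z, 525.0)   # ~[0.25, -0.04] m/s
v_est = complementary_fuse(v_xy, [0.24, -0.03])   # fused x-y velocity estimate
```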

    Toward an Autonomous Lunar Landing Based on Low-Speed Optic Flow Sensors

    For the last few decades, growing interest has returned to the quite challenging task of autonomous lunar landing. Soft landing of payloads on the lunar surface requires the development of new means of ensuring safe descent with stringent final conditions and aerospace-related constraints in terms of mass, cost and computational resources. In this paper, a two-phase approach is presented: first, a biomimetic method inspired by the neuronal and sensory system of flying insects is presented as a solution for performing safe lunar landing. In order to design an autopilot relying only on optic flow (OF) and inertial measurements, an estimation method based on a two-sensor setup is introduced: these sensors allow us to accurately estimate the orientation of the velocity vector, which is mandatory to control the lander's pitch in a quasi-optimal way with respect to fuel consumption. Secondly, a new low-speed Visual Motion Sensor (VMS) inspired by insects' visual systems, performing local angular 1-D speed measurements ranging from 1.5°/s to 25°/s and weighing only 2.8 g, is presented. It was tested under free-flying outdoor conditions over various fields onboard an 80 kg unmanned helicopter. These preliminary results show that, despite the complex disturbances encountered, the measured optic flow closely matched the ground-truth optic flow.
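    As a rough illustration of the two quantities such a sensor and autopilot work with, the sketch below computes a local 1-D angular speed from the time a contrast takes to travel between two photoreceptors, and the ventral optic flow seen by a vehicle translating above a surface; the inter-receptor angle and the flight values are assumed for the example, not taken from the paper:

```python
import math

def angular_speed_from_delay(delta_phi_deg, delta_t_s):
    # Local 1-D angular speed (deg/s) from the time of travel of a contrast
    # between two photoreceptors separated by delta_phi_deg
    return delta_phi_deg / delta_t_s

def ventral_optic_flow(ground_speed_mps, height_m):
    # Ventral optic flow magnitude (deg/s) for translation parallel to the surface: omega = v / h
    return math.degrees(ground_speed_mps / height_m)

# A contrast crossing an assumed 4 deg inter-receptor angle in 0.2 s -> 20 deg/s
print(angular_speed_from_delay(4.0, 0.2))
# A vehicle at 5 m/s and 100 m height sees roughly 2.9 deg/s of ventral flow,
# inside the 1.5-25 deg/s range quoted above
print(ventral_optic_flow(5.0, 100.0))
```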

    Advanced trajectory tracking for UAVs using combined feedforward/feedback control design

    Trajectory tracking is a major challenge for UAVs. The more complex the trajectory, the more accurate the tracking must be, with minimum divergence from the trajectory. Apart from active trajectory tracking mechanisms, current solutions to accurate trajectory tracking in narrow areas require low-speed motion. This paper presents a systematic design methodology using a centralised feedforward/feedback control architecture for advanced trajectory tracking without compromising the speed of the vehicle. Using the norm as a measure for the design criteria, the proposed method achieves fast tracking with no overshoot and lower actuator energy compared with a single degree-of-freedom feedback control method. The results are verified using simulations of two systems: a tri-rotor VTOL UAV (a fully actuated system) and a quadrotor trainer (an over-actuated system).
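    A toy sketch of a two degree-of-freedom (feedforward plus feedback) tracking loop for a single axis, assuming a simple point-mass model; the gains, the mass and the PID structure are placeholders chosen for illustration and are not the controller designed in the paper:

```python
def feedforward(ref_accel, mass, gravity=9.81):
    # Model-inversion feedforward: nominal thrust to follow the reference acceleration
    return mass * (ref_accel + gravity)

def feedback(err, err_int, err_dot, kp=4.0, ki=0.5, kd=2.0):
    # PID feedback term correcting model mismatch and disturbances
    return kp * err + ki * err_int + kd * err_dot

def control_step(ref_pos, ref_vel, ref_accel, pos, vel, err_int, dt, mass=1.2):
    # Combined command: feedforward from the reference trajectory plus feedback on the error
    err = ref_pos - pos
    err_int += err * dt
    u = feedforward(ref_accel, mass) + feedback(err, err_int, ref_vel - vel)
    return u, err_int
```

    The point of such an architecture is that the feedforward term does most of the work along the known reference trajectory, so the feedback gains can stay moderate; this is one way combined designs keep tracking fast without overshoot.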

    Integrating Millimeter Wave Radar with a Monocular Vision Sensor for On-Road Obstacle Detection Applications

    This paper presents a systematic scheme for fusing millimeter wave (MMW) radar and a monocular vision sensor for on-road obstacle detection. As a whole, a three-level fusion strategy based on the visual attention mechanism and the driver's visual consciousness is provided for MMW radar and monocular vision fusion so as to obtain better overall performance. An experimental method for radar-vision point alignment is then put forward that is easy to operate and requires neither the radar's reflection intensity nor special tools. Furthermore, a region-searching approach for potential target detection is derived in order to decrease the image processing time. An adaptive thresholding algorithm based on a new understanding of shadows in the image is adopted for obstacle detection, and edge detection is used to assist in determining the boundaries of obstacles. The proposed fusion approach is verified through real experimental examples of on-road vehicle/pedestrian detection. The experimental results show that the proposed method is simple and feasible.
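    A compact sketch of the projection-and-search idea behind such radar-vision fusion, assuming radar returns (range, azimuth) in a frame already aligned with the camera and a known intrinsic matrix K; the region-sizing rule and the shadow threshold below are simplified stand-ins for the paper's alignment procedure and adaptive thresholding, not reproductions of them:

```python
import numpy as np

def radar_to_pixel(range_m, azimuth_rad, K, mount_height_m=0.0):
    # Project a radar return on the road plane into the image with a pinhole model
    x = range_m * np.sin(azimuth_rad)        # lateral offset (m)
    z = range_m * np.cos(azimuth_rad)        # forward distance (m)
    p = K @ np.array([x, mount_height_m, z])
    return int(p[0] / p[2]), int(p[1] / p[2])

def search_region(u, v, range_m, img_w, img_h, scale_px=2000.0):
    # Region of interest around the projected point; its size shrinks with range
    half = int(scale_px / max(range_m, 1.0))
    return max(u - half, 0), max(v - half, 0), min(u + half, img_w), min(v + half, img_h)

def shadow_mask(gray_roi):
    # Keep pixels darker than mean - std as candidate under-vehicle shadow
    t = gray_roi.mean() - gray_roi.std()
    return (gray_roi < t).astype(np.uint8)
```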