16,812 research outputs found

    PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision

    We describe a novel quadrotor Micro Air Vehicle (MAV) system that is designed to use computer vision algorithms within the flight control loop. The main contribution is a MAV system that is able to run both vision-based flight control and stereo-vision-based obstacle detection in parallel on an embedded computer onboard the MAV. The system design features the integration of a powerful onboard computer and the synchronization of IMU and vision measurements by hardware timestamping, which allows tight integration of IMU measurements into the computer vision pipeline. We evaluate the accuracy of marker-based visual pose estimation for flight control and demonstrate marker-based autonomous flight including obstacle detection using stereo vision. We also show the benefits of our IMU-vision synchronization for egomotion estimation in additional experiments, where we use the synchronized measurements for pose estimation using the 2pt+gravity formulation of the PnP problem.
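    Once IMU samples and camera frames share a common hardware clock, fusing them reduces to aligning each frame with the IMU stream. A minimal sketch of that alignment step, assuming linear interpolation between timestamped samples (the values below are made up for illustration; the PIXHAWK system triggers and stamps in hardware):

```python
def interpolate_imu(imu_samples, t_frame):
    """imu_samples: list of (timestamp, value) pairs sorted by timestamp.
    Returns the value linearly interpolated at the camera frame time
    t_frame, so a frame can be paired with an IMU reading even though
    the two sensors sample at different rates."""
    for (t0, v0), (t1, v1) in zip(imu_samples, imu_samples[1:]):
        if t0 <= t_frame <= t1:
            w = (t_frame - t0) / (t1 - t0)
            return v0 + w * (v1 - v0)
    raise ValueError("frame timestamp outside IMU sample range")

# Example: 200 Hz IMU roll-angle samples bracketing a camera frame
samples = [(0.000, 0.10), (0.005, 0.12), (0.010, 0.16)]
roll_at_frame = interpolate_imu(samples, 0.0075)  # ~0.14 rad
```

    Without a shared clock this interpolation is meaningless, which is why the hardware timestamping the abstract describes matters: software-side arrival times can be off by several milliseconds under load.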

    Oncoming Vehicle Detection with Variable-Focus Liquid Lens

    Computer vision plays an important role in the autonomous vehicle, robotics and manufacturing fields. Depth perception in computer vision requires either stereo vision or fusing a single camera with other depth sensors such as radar and lidar. Depth from focus using an adjustable lens has not previously been applied to autonomous vehicles. The goal of this paper is to investigate the application of depth from focus to oncoming vehicle detection. A liquid lens is used to adjust the optical power while acquiring images with the camera. The distance of the oncoming vehicle can be estimated by measuring the vehicle’s sharpness in images taken with known lens settings. The results show the system detecting oncoming vehicles at ±2 m and ±4 m using the depth-from-focus technique. The distance of oncoming vehicles beyond 4 m can be estimated by analysing the relative size of the detected vehicle.
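    The core of depth from focus is a sharpness score computed per lens setting: the setting that maximises sharpness tells you which focus distance the target sits at. A minimal sketch using the variance of a discrete Laplacian as the focus measure (a common choice; the paper's exact measure and calibration are not specified here, and the distances below are illustrative):

```python
import numpy as np

def sharpness(patch):
    """Focus measure: variance of a discrete Laplacian response.
    In-focus patches have stronger high-frequency content, so the
    Laplacian response varies more."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

def depth_from_focus(patches, focus_distances_m):
    """patches: crops of the target, one per liquid-lens setting.
    focus_distances_m: the distance (metres) each setting focuses at.
    Returns the distance whose patch scores sharpest."""
    scores = [sharpness(p) for p in patches]
    return focus_distances_m[int(np.argmax(scores))]
```

    In practice the lens sweep and sharpness evaluation must run faster than the closing speed of the oncoming vehicle, which is what makes an electrically tunable liquid lens attractive over a mechanical focus ring.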

    Integration of a stereo vision system into an autonomous underwater vehicle for pipe manipulation tasks

    Underwater object detection and recognition using computer vision are challenging tasks due to the poor lighting conditions of submerged environments. For intervention missions requiring grasping and manipulation of submerged objects, a vision system must provide an Autonomous Underwater Vehicle (AUV) with object detection, localization and tracking capabilities. In this paper, we describe the integration of a vision system into the MARIS intervention AUV and its configuration for detecting cylindrical pipes, a typical artifact of interest in underwater operations. Pipe edges are tracked using an alpha-beta filter to achieve robustness and return a reliable pose estimate even in the case of partial pipe visibility. Experiments in an outdoor water pool under different lighting conditions show that the adopted algorithmic approach allows detection of target pipes and provides a sufficiently accurate estimate of their pose even when they become partially visible, thereby supporting the AUV in several successful pipe grasping operations.
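    The alpha-beta filter the abstract mentions is a fixed-gain predictor-corrector: predict the edge position from the current velocity estimate, then blend in the measurement when one is available, or coast on the prediction when the pipe is occluded. A minimal one-dimensional sketch (gain values are illustrative, not the paper's tuning):

```python
def alpha_beta_step(x, v, z, dt, alpha=0.85, beta=0.05):
    """One alpha-beta filter update for a scalar track.
    x: position estimate, v: rate estimate, z: measurement.
    If z is None (e.g. the pipe edge is occluded), coast on the
    prediction, which is what gives robustness to partial visibility."""
    x_pred = x + v * dt
    if z is None:
        return x_pred, v
    r = z - x_pred                       # measurement residual
    return x_pred + alpha * r, v + beta * r / dt
```

    With fixed gains there is no covariance to propagate, so the update is a handful of arithmetic operations per edge per frame, cheap enough to run alongside detection on the AUV's onboard computer.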

    FPGA-based stereo vision system for autonomous driving

    The project consists of the design and implementation of a real-time stereo vision image sensor for autonomous driving systems using an FPGA. The function of this sensor is to output a real-time depth image from an input of two grayscale luminance images, which makes further processing much easier and faster. The final objective of the project is to develop a standalone prototype for deployment on an autonomous vehicle, but it will be developed on an existing FPGA platform to prove its viability. Two low-cost digital cameras will be used as input sensors, and the output image will be transmitted to a PC.
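    The depth image such a sensor produces comes from two steps: finding, for each left-image pixel, the horizontal disparity to its best match in the right image, then triangulating depth as Z = f·B/d. A toy software sketch of both steps (the camera parameters below are hypothetical; an FPGA pipeline would evaluate the candidate disparities for many pixels in parallel):

```python
import numpy as np

def disparity_sad(left_row, right_row, x, win=3, max_d=16):
    """Block matching on one rectified scanline pair: slide a window
    over candidate disparities and keep the one with the smallest sum
    of absolute differences (SAD)."""
    patch = left_row[x:x + win]
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_d, x) + 1):
        cost = float(np.abs(patch - right_row[x - d:x - d + win]).sum())
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Triangulation: Z = f * B / d, with focal length f in pixels,
    baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        return float("inf")              # zero disparity: point at infinity
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 21 px disparity
depth_m = disparity_to_depth(21.0, 700.0, 0.12)  # ~4 m
```

    SAD block matching maps well to an FPGA precisely because it is regular fixed-point arithmetic with no data-dependent branching, which is the viability argument the project sets out to prove.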

    Sensor Augmented Virtual Reality Based Teleoperation Using Mixed Autonomy

    A multimodal teleoperation interface is introduced, featuring an integrated virtual reality (VR) based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The proposed virtual reality interface fuses an existing VR model with live video feed and prediction states, thereby creating a multimodal control interface. VR addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, allowing the operator to navigate in a continuous fashion. The vehicle incorporates an onboard computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-time state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle’s teleoperated state. The system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The VR-based multimodal teleoperation interface is expected to be more adaptable and intuitive than other interfaces.
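    At its core, the mixed-autonomy behaviour described above is a mode switch: hand control to the vehicle when an obstacle blocks the teleoperated path, and hand it back once the local avoidance manoeuvre completes. A minimal sketch of such a switch (the mode names and trigger conditions are assumptions for illustration, not the paper's architecture):

```python
from enum import Enum, auto

class Mode(Enum):
    TELEOP = auto()
    AUTONOMOUS = auto()

def next_mode(mode, obstacle_ahead, obstacle_cleared):
    """Hypothetical mixed-autonomy transition rule: switch to autonomous
    local navigation when stereo vision reports an obstacle on the
    teleoperated path; automatically re-establish teleoperation once
    the avoidance manoeuvre reports the obstacle cleared."""
    if mode is Mode.TELEOP and obstacle_ahead:
        return Mode.AUTONOMOUS
    if mode is Mode.AUTONOMOUS and obstacle_cleared:
        return Mode.TELEOP
    return mode
```

    Keeping the transition rule this explicit matters for the operator: the VR interface can display the current mode and the condition that will return control, which is part of what makes the interface predictable.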

    Intelligent automatic overtaking system using vision for vehicle detection

    There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. We here describe our approach to automating one of the riskiest autonomous manœuvres involving vehicles – overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manœuvre. To this end, a fuzzy-logic-based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU). Its output is action on the vehicle’s actuators, i.e., the steering wheel and the throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles – a motorbike, a car, and a truck – with encouraging results.
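    A fuzzy-logic controller of this kind evaluates overlapping linguistic rules ("if the vehicle is offset left, steer right") via membership functions and blends their outputs into one crisp command. A deliberately tiny single-input sketch of the mechanism (triangular memberships, weighted-average defuzzification; the rule base, variables and gains are invented for illustration and are not the paper's controller):

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steering(lateral_err):
    """Toy controller: maps lateral offset (m, negative = left of lane
    centre) to a steering command in [-1, 1] (positive = steer right)."""
    # Degree to which the error is 'left', 'centred', 'right'
    left   = tri(lateral_err, -2.0, -1.0, 0.0)
    centre = tri(lateral_err, -1.0,  0.0, 1.0)
    right  = tri(lateral_err,  0.0,  1.0, 2.0)
    # Saturate outside the universe of discourse
    if lateral_err <= -1.0:
        left = 1.0
    if lateral_err >= 1.0:
        right = 1.0
    # Rules: left -> steer right (+1), centred -> straight (0),
    # right -> steer left (-1); weighted-average defuzzification
    num = left * 1.0 + centre * 0.0 + right * -1.0
    den = left + centre + right
    return num / den if den else 0.0
```

    Because the rule activations overlap, the output varies smoothly between rules rather than switching abruptly, which is what lets a fuzzy controller approximate the gradual way a human driver turns the wheel.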