
    Al-Robotics team: A cooperative multi-unmanned aerial vehicle approach for the Mohamed Bin Zayed International Robotic Challenge

    The Al-Robotics team was selected as one of the 25 finalist teams, out of 143 applications received, to participate in the first edition of the Mohamed Bin Zayed International Robotic Challenge (MBZIRC), held in 2017. In particular, one of the competition challenges offered us the opportunity to develop a cooperative approach with multiple unmanned aerial vehicles (UAVs) searching for, picking up, and dropping static and moving objects. This paper presents the approach that our team Al-Robotics followed to address Challenge 3 of the MBZIRC. First, we give an overview of the overall system architecture and the different modules involved. Second, we describe the procedure we followed to design the aerial platforms, as well as all their onboard components. Then, we explain the techniques we used to develop the software functionalities of the system. Finally, we discuss our experimental results and the lessons we learned before and during the competition. The cooperative approach was validated with fully autonomous missions in experiments prior to the actual competition. We also analyze the results obtained during the competition trials.
    Funding: European Union H2020 73166

    Autonomous Systems: Indoor Drone Navigation

    Drones are a promising technology for autonomous data collection and indoor sensing. In situations where human-controlled UAVs may not be practical or dependable, such as in uncharted or dangerous locations, autonomous UAVs offer flexibility, cost savings, and reduced risk. The system creates a simulated quadcopter capable of autonomously travelling in an indoor environment using the Gazebo simulation tool and the ROS navigation framework known as Navigation2 (Nav2). While Nav2 has successfully demonstrated autonomous navigation in terrestrial robots and vehicles, the same has not yet been accomplished with unmanned aerial vehicles. The goal is to use the SLAM Toolbox for ROS and the Nav2 navigation framework to construct a simulated drone that can move autonomously in an indoor (GPS-denied) environment.
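    The core of the grid-based global planning that Nav2 performs can be illustrated with a minimal A*-style search over an occupancy grid. This is a simplified sketch of the general technique, not Nav2's actual planner implementation; the grid, start, and goal below are invented for illustration.

    ```python
    import heapq
    import itertools

    def astar(grid, start, goal):
        """A* search over a 2D occupancy grid (0 = free, 1 = occupied).

        `grid` is a list of rows; `start` and `goal` are (row, col) tuples.
        Returns the path as a list of cells, or None if the goal is unreachable.
        """
        rows, cols = len(grid), len(grid[0])
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
        tie = itertools.count()  # tiebreaker so the heap never compares cells or parents
        open_set = [(h(start), 0, next(tie), start, None)]
        came_from, g_cost = {}, {start: 0}
        while open_set:
            _, g, _, cell, parent = heapq.heappop(open_set)
            if cell in came_from:              # already expanded at lower cost
                continue
            came_from[cell] = parent
            if cell == goal:                   # reconstruct by walking parents back
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = came_from[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    ng = g + 1
                    if ng < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = ng
                        heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cell))
        return None

    # A wall across row 1 forces the path around the right-hand side.
    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0]]
    path = astar(grid, (0, 0), (2, 0))
    ```

    In Nav2 the costmap additionally encodes inflation around obstacles, but the search structure is the same idea.
    
    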

    Quadrotor UAV Interface and Localization Design

    Our project's task was to assist Lincoln Laboratory in preparation for the future automation of a quadrotor UAV system. We created an interface between the quadrotor and ROS to allow for computerized control of the UAV. Tests of our system indicated that our solution could be feasible with further research. In the next phase of the project, we created a localization system to automate take-off and landing in future mission environments by altering the augmented reality library ARToolKit to work with ROS. We performed accuracy, range, update rate, lighting, and tag occlusion tests on our modified code to determine its viability in real-world conditions. We concluded that our current system would not be feasible due to inconsistencies in tag detection, but that it merits further research.

    Quadrotor UAV Interface and Localization Design

    MIT Lincoln Laboratory has expressed growing interest in projects involving quadrotor Unmanned Aerial Vehicles (UAVs). Our tasks were to develop a system providing computerized remote control of the provided UAV, as well as a high-accuracy localization system. We integrated the UAV's control system with standard Robot Operating System (ROS) software tools. We measured the reliability of the control system and determined its performance characteristics. We found our control scheme to be usable pending minor improvements. To enable localization, we explored machine vision, ultimately altering the Augmented Reality library ARToolKit to interface with ROS. After several tests, we determined that ARToolKit is not currently a feasible alternative to standard localization techniques.
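    The geometric core of ARToolKit-style tag localization is recovering the camera pose from the four corners of a known planar marker. A minimal sketch of that computation (our own illustration using a direct linear transform on the plane z = 0, not ARToolKit's actual implementation; the intrinsics and marker size below are invented):

    ```python
    import numpy as np

    def marker_pose(K, obj_xy, img_uv):
        """Estimate camera pose from a planar square marker lying in the z = 0 plane.

        K: 3x3 camera intrinsics; obj_xy: 4x2 marker corners in metres;
        img_uv: 4x2 detected pixel corners. Returns (R, t) such that
        X_cam = R @ X_marker + t.
        """
        # DLT: two linear equations per correspondence, solve the homography H up to scale.
        A = []
        for (x, y), (u, v) in zip(obj_xy, img_uv):
            A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        H = np.linalg.svd(np.asarray(A))[2][-1].reshape(3, 3)
        B = np.linalg.inv(K) @ H
        # Scale so the first two columns (rotation columns) have unit norm on average.
        s = 1.0 / np.sqrt(np.linalg.norm(B[:, 0]) * np.linalg.norm(B[:, 1]))
        if B[2, 2] * s < 0:              # keep the marker in front of the camera
            s = -s
        r1, r2, t = s * B[:, 0], s * B[:, 1], s * B[:, 2]
        R = np.column_stack([r1, r2, np.cross(r1, r2)])
        U, _, Vt = np.linalg.svd(R)      # snap to the nearest proper rotation
        return U @ Vt, t
    ```

    With noisy corner detections the orthonormalisation step absorbs small errors; inconsistent tag detection of the kind reported above shows up directly as jitter in the recovered pose.
    
    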

    GNSS-Free Localization for UAVs in the Wild

    Considering the accelerated development of Unmanned Aerial Vehicle (UAV) applications in both industrial and research scenarios, there is an increasing need to localize these aerial systems in non-urban environments using GNSS-free, vision-based methods. This project studies three different image feature matching techniques and proposes a final implementation of a vision-based localization algorithm that uses deep features to compute the geographical coordinates of a UAV flying in the wild. The method is based on matching salient features of RGB photographs captured by the drone camera against sections of a pre-built map consisting of georeferenced open-source satellite images. Experimental results show that vision-based localization achieves accuracy comparable to traditional GNSS-based methods, which serve as ground truth.
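    Once a drone image has been matched against the georeferenced map, turning the matched map pixel into geographical coordinates is an affine transform of pixel indices. A minimal sketch, assuming a GDAL-style six-parameter geotransform for the satellite tiles (the paper's exact map format is not specified, and the numbers below are invented):

    ```python
    def pixel_to_geo(gt, col, row):
        """Map a pixel (col, row) in a georeferenced tile to (lon, lat).

        `gt` is a GDAL-style geotransform:
        (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height),
        where pixel_height is negative for north-up imagery.
        """
        lon = gt[0] + col * gt[1] + row * gt[2]
        lat = gt[3] + col * gt[4] + row * gt[5]
        return lon, lat

    # Hypothetical north-up tile with its top-left corner at (24.40 E, 61.45 N)
    # and roughly 11 m per pixel at this latitude.
    gt = (24.40, 1e-4, 0.0, 61.45, 0.0, -1e-4)
    lon, lat = pixel_to_geo(gt, 500, 300)
    ```

    The UAV's position estimate is then the geographical coordinate of the map location matched to the centre of the drone image.
    
    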

    Multiple Targets Geolocation Using SIFT and Stereo Vision on Airborne Video Sequences

    We propose a robust and accurate method for multi-target geolocation from airborne video. Our approach differs from others in the literature in four ways: 1) it does not require gimbal control of the camera or any particular path-planning control for the UAV; 2) it can instantaneously geolocate multiple targets even if they were not previously observed by the camera; 3) it does not require a georeferenced terrain database or an altimeter for estimating the UAV's and the targets' altitudes; and 4) it requires only one camera, but it employs a multi-stereo technique over the image sequence for increased accuracy in target geolocation. The only requirements of our approach are that the intrinsic parameters of the camera be known, that the UAV be equipped with a global positioning system (GPS) receiver and an inertial measurement unit (IMU), and that enough feature points can be extracted from the surroundings of the target. Since the first two constraints are easily satisfied, the only real requirement concerns the feature points. However, as we explain later, this last constraint can also be alleviated if the ground is approximately planar. The result is a method that can reach a few meters of accuracy for a UAV flying a few hundred meters above the ground. Such performance is demonstrated by computer simulation, in-scale data using a model city, and real airborne video with ground truth.
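    The multi-stereo idea of intersecting target bearing rays from different camera positions along the flight path can be sketched for two views as a closest-point triangulation. This is our own minimal illustration, assuming the camera centres and bearing directions are already known from GPS/IMU and the calibrated intrinsics; the geometry below is synthetic.

    ```python
    import numpy as np

    def triangulate_rays(o1, d1, o2, d2):
        """Return the midpoint of the shortest segment between two 3D rays.

        o1, o2: camera centres (e.g. from GPS/IMU);
        d1, d2: bearing vectors toward the target (pixel back-projected
        through the intrinsics and rotated by the IMU attitude).
        """
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        # Solve for ray parameters (s, t) minimising |o1 + s*d1 - (o2 + t*d2)|.
        b = o2 - o1
        A = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        s, t = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
        return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
    ```

    With more than two views, the same least-squares idea extends to all rays at once, which is what gives the multi-stereo formulation its accuracy advantage over a single pair.
    
    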

    Real-Time Panoramic Tracking for Event Cameras

    Event cameras are a paradigm shift in camera technology. Instead of full frames, the sensor captures a sparse set of events caused by intensity changes. Since only the changes are transferred, these cameras can capture quick movements of objects in the scene or of the camera itself. In this work we propose a novel method to perform camera tracking of event cameras in a panoramic setting with three degrees of freedom. We propose a direct camera tracking formulation, similar to the state of the art in visual odometry. We show that the minimal information needed for simultaneous tracking and mapping is the spatial position of events, without using the appearance of the imaged scene point. We verify robustness to fast camera movements and dynamic objects in the scene on a recently proposed dataset and on self-recorded sequences.
    Comment: Accepted to the International Conference on Computational Photography 201
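    The core geometric step in rotation-only (three-degree-of-freedom) event tracking is mapping each event's pixel position onto a panoramic map given the current camera orientation. A minimal sketch of that projection, assuming an equirectangular panorama (our own illustration, not the paper's implementation; the intrinsics and map size below are invented):

    ```python
    import numpy as np

    def event_to_panorama(u, v, K, R, pano_w, pano_h):
        """Project an event at pixel (u, v) into equirectangular panorama coords.

        K: 3x3 camera intrinsics; R: current camera-to-world rotation.
        The panorama maps longitude to x and latitude to y.
        """
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # back-project the pixel
        ray = R @ (ray / np.linalg.norm(ray))            # rotate into the world frame
        lon = np.arctan2(ray[0], ray[2])                 # in [-pi, pi)
        lat = np.arcsin(np.clip(ray[1], -1.0, 1.0))      # in [-pi/2, pi/2]
        px = (lon / (2 * np.pi) + 0.5) * pano_w
        py = (lat / np.pi + 0.5) * pano_h
        return px, py
    ```

    Because only event positions are needed, tracking amounts to finding the rotation R that makes incoming events land on previously mapped event locations in the panorama, which is what makes the appearance-free formulation possible.
    
    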