522 research outputs found

    On-Board Vision-Based 3D Relative Localization System for Multiple Quadrotors

    Get PDF
    This work proposes a novel relative localization system, based on active markers and an on-board camera, for tracking multiple quadrotors in a limited field of view. The system extracts the 3D poses of the markers, including one that, by pulsating at a predefined frequency, provides a unique platform ID. We discuss how the camera field of view can be explored in the presence of multiple targets, and which visibility conditions lead to the establishment of bidirectional sensing between robots with similar sensing capabilities. A visibility analysis shows that the developed relative localization system meets these requirements, and a closed-loop experiment validates its performance under these conditions. Finally, its performance is compared with other results from the literature, and a metric is established to map different design solutions, facilitating design choices under different requirements.
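    As a rough illustration of the frequency-keyed ID described above (a minimal sketch under assumed parameters, not the authors' implementation; the frame rate and the ID-to-frequency table below are hypothetical), the dominant frequency of the marker's intensity signal over a short window can be matched against the set of predefined frequencies:

        # Sketch only: decode a platform ID from a marker pulsating at one of
        # several predefined frequencies. Frame rate and frequency table are
        # assumed values, not taken from the paper.
        import numpy as np

        FRAME_RATE_HZ = 120.0                                      # assumed camera frame rate
        ID_FREQUENCIES_HZ = {1: 5.0, 2: 10.0, 3: 15.0, 4: 20.0}    # hypothetical ID -> blink frequency

        def identify_marker(intensity_samples) -> int:
            """Return the ID whose predefined blink frequency is closest to the
            dominant frequency of the observed marker intensity signal."""
            samples = np.asarray(intensity_samples, dtype=float)
            samples = samples - samples.mean()                     # drop the DC component
            spectrum = np.abs(np.fft.rfft(samples))
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / FRAME_RATE_HZ)
            dominant = freqs[np.argmax(spectrum)]
            return min(ID_FREQUENCIES_HZ, key=lambda i: abs(ID_FREQUENCIES_HZ[i] - dominant))

        if __name__ == "__main__":
            t = np.arange(0, 1.0, 1.0 / FRAME_RATE_HZ)                    # one second of frames
            blink = (np.sin(2 * np.pi * 10.0 * t) > 0).astype(float)      # marker blinking at 10 Hz
            print(identify_marker(blink))                                 # -> 2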

    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    Full text link
    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight into hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments. Comment: pre-peer-reviewed version of the article accepted in the Journal of Field Robotics.

    Cooperative Virtual Sensor for Fault Detection and Identification in Multi-UAV Applications

    Get PDF
    This paper considers the problem of fault detection and identification (FDI) in applications carried out by a group of unmanned aerial vehicles (UAVs) with visual cameras. In many cases, the UAVs have cameras mounted onboard for other applications, and these cameras can be used as bearing-only sensors to estimate the relative orientation of another UAV. The idea is to exploit the redundant information provided by these sensors onboard each of the UAVs to increase safety and reliability, detecting faults in UAV internal sensors that cannot be detected by the UAVs themselves. Fault detection is based on the generation of residuals that compare the expected position of a UAV, considered as the target, with the measurements taken by one or more UAVs acting as observers that track the target UAV with their cameras. Depending on the number of available observers and the way they are used, a set of strategies and policies for fault detection is defined. When the target UAV is being visually tracked by two or more observers, it is possible to obtain an estimate of its 3D position that could replace damaged sensors. The accuracy and reliability of this vision-based cooperative virtual sensor (CVS) have been evaluated experimentally in a multivehicle indoor testbed with quadrotors, injecting faults into the data to validate the proposed fault detection methods. Funding: Comisión Europea H2020 644271; Comisión Europea FP7 288082; Ministerio de Economia, Industria y Competitividad DPI2015-71524-R; Ministerio de Economia, Industria y Competitividad DPI2014-5983-C2-1-R; Ministerio de Educación, Cultura y Deporte FP
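    A minimal sketch of the residual idea, not the paper's exact formulation (the threshold and all quantities below are assumptions): an observer compares the bearing predicted from the target's self-reported position with the bearing its camera actually measures, and flags a fault when the angular residual exceeds a threshold.

        # Sketch only: bearing residual between a target UAV's reported position
        # and the bearing measured by an observer's camera. Threshold is assumed.
        import numpy as np

        ANGLE_THRESHOLD_RAD = np.deg2rad(5.0)   # hypothetical residual threshold

        def bearing_residual(observer_pos, reported_pos, measured_bearing_unit):
            """Angle between the bearing predicted from the reported position and
            the unit bearing actually measured by the observer's camera."""
            predicted = np.asarray(reported_pos, dtype=float) - np.asarray(observer_pos, dtype=float)
            predicted /= np.linalg.norm(predicted)
            cos_angle = np.clip(np.dot(predicted, measured_bearing_unit), -1.0, 1.0)
            return np.arccos(cos_angle)

        def is_faulty(observer_pos, reported_pos, measured_bearing_unit) -> bool:
            return bearing_residual(observer_pos, reported_pos, measured_bearing_unit) > ANGLE_THRESHOLD_RAD

        if __name__ == "__main__":
            observer = np.array([0.0, 0.0, 1.0])
            reported = np.array([2.0, 1.0, 1.5])              # target's own (possibly faulty) estimate
            measured = np.array([2.0, 1.6, 1.5]) - observer   # bearing toward where the camera really sees it
            measured /= np.linalg.norm(measured)
            print(is_faulty(observer, reported, measured))    # -> True: report disagrees with the camera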

    Navigation, localization and stabilization of formations of unmanned aerial and ground vehicles

    Get PDF
    A leader-follower formation driving algorithm developed for the control of heterogeneous groups of unmanned micro aerial and ground vehicles, stabilized under top-view relative localization, is presented in this paper. The core of the proposed method lies in a novel avoidance function, in which the entire 3D formation is represented by a convex hull projected along a desired path to be followed by the group. Such a representation of the formation provides collision-free trajectories for the robots and respects the requirement of direct visibility between team members in environments with static as well as dynamic obstacles, which is crucial for the top-view localization. The algorithm is suited to a simple yet stable vision-based navigation of the group (referred to as GeNav), which together with the on-board relative localization enables the deployment of large teams of micro-scale robots in environments without any available global localization system. We formulate a novel Model Predictive Control (MPC) based concept that enables the formation to respond to a changing environment and provides a robust solution that tolerates the failure of team members. The performance of the proposed method is verified in numerical and hardware experiments inspired by reconnaissance and surveillance missions.
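    The convex-hull footprint can be illustrated with a small sketch (assumptions only; this is not the paper's avoidance function): take the hull of the robots' positions, translate it along sampled waypoints of the desired path, and reject waypoints at which any obstacle point falls inside the translated hull.

        # Sketch only: sweep the formation's 2D convex-hull footprint along path
        # waypoints and test whether obstacles end up inside it. Illustrative, not
        # the paper's method; Delaunay is used here only for the point-in-hull test.
        import numpy as np
        from scipy.spatial import Delaunay

        def swept_hull_collides(formation_xy, path_xy, obstacles_xy) -> bool:
            formation_xy = np.asarray(formation_xy, dtype=float)
            offsets = formation_xy - formation_xy.mean(axis=0)   # footprint relative to its centroid
            obstacles = np.asarray(obstacles_xy, dtype=float)
            for waypoint in np.asarray(path_xy, dtype=float):
                footprint = Delaunay(offsets + waypoint)         # footprint translated to the waypoint
                if np.any(footprint.find_simplex(obstacles) >= 0):
                    return True                                  # some obstacle lies inside the footprint
            return False

        if __name__ == "__main__":
            formation = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]         # three robots in a triangle
            path = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
            obstacles = [(1.1, 1.0), (5.0, 5.0)]
            print(swept_hull_collides(formation, path, obstacles))   # -> True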

    A practical multirobot localization system

    Get PDF
    We present fast and precise vision-based software intended for multiple robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and its computational complexity is independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm is able to process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which allows one to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make its source code public at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotic problems.
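    A back-of-the-envelope version of such a precision model, derived from the pinhole camera model rather than taken from the paper (the focal length and pattern diameter below are assumed values): the distance to a circular pattern of known size follows from its apparent diameter in pixels, and the depth error caused by a one-pixel diameter error grows roughly quadratically with distance.

        # Sketch only: pinhole-model distance and per-pixel depth error for a
        # circular pattern of known diameter. Parameter values are assumptions.
        FOCAL_LENGTH_PX = 700.0     # assumed focal length expressed in pixels
        PATTERN_DIAMETER_M = 0.12   # assumed physical pattern diameter in meters

        def distance_from_diameter(diameter_px: float) -> float:
            """Pinhole-model distance to a pattern of known physical size."""
            return FOCAL_LENGTH_PX * PATTERN_DIAMETER_M / diameter_px

        def depth_error_per_pixel(distance_m: float) -> float:
            """Approximate depth error for a one-pixel error in the measured diameter."""
            return distance_m ** 2 / (FOCAL_LENGTH_PX * PATTERN_DIAMETER_M)

        if __name__ == "__main__":
            for d_px in (80, 40, 20):
                z = distance_from_diameter(d_px)
                print(f"{d_px:3d} px -> {z:.2f} m, ~{depth_error_per_pixel(z) * 1000:.0f} mm per pixel")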

    Towards the development of a smart flying sensor: illustration in the field of precision agriculture

    Get PDF
    Sensing is an important element for quantifying productivity and product quality and for making decisions. Applications such as mapping, surveillance, exploration, and precision agriculture require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. The novelty lies in: (1) the development of a position-estimation method with time-delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology.
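    One common way to realize the time-delay compensation mentioned in item (1) is sketched below (the paper describes the method only at a high level; the buffer length, rates, and the use of velocity estimates derived from the IMU are assumptions): recent velocity samples are buffered, and a delayed vision-based position fix is shifted forward by the motion accumulated during the processing latency.

        # Sketch only: shift a delayed vision measurement forward in time using
        # buffered velocity estimates (assumed to come from IMU integration).
        from collections import deque
        import numpy as np

        class DelayCompensator:
            def __init__(self, buffer_seconds: float = 1.0, imu_rate_hz: float = 200.0):
                self.dt = 1.0 / imu_rate_hz
                self.samples = deque(maxlen=int(buffer_seconds * imu_rate_hz))  # (timestamp, velocity)

            def add_velocity(self, stamp: float, velocity_xyz) -> None:
                self.samples.append((stamp, np.asarray(velocity_xyz, dtype=float)))

            def compensate(self, vision_pos, vision_stamp: float, now: float) -> np.ndarray:
                """Shift a delayed vision position to 'now' by integrating the
                buffered velocities recorded since the image was captured."""
                displacement = sum((v * self.dt for t, v in self.samples
                                    if vision_stamp <= t <= now), np.zeros(3))
                return np.asarray(vision_pos, dtype=float) + displacement

        if __name__ == "__main__":
            comp = DelayCompensator()
            for k in range(40):                                   # 0.2 s of samples at 200 Hz
                comp.add_velocity(k * comp.dt, [1.0, 0.0, 0.0])   # moving at 1 m/s along x
            print(comp.compensate([5.0, 2.0, 10.0], vision_stamp=0.0, now=0.2))   # -> ~[5.2, 2.0, 10.0]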

    Comparative Study of Indoor Navigation Systems for Autonomous Flight

    Get PDF
    Recently, Unmanned Aerial Vehicles (UAVs) have attracted the attention of society and researchers due to their ability to operate in economic, scientific, and emergency scenarios, and they are being employed in a large number of applications, especially in hostile environments. They can operate autonomously in both indoor and outdoor applications, including search and rescue, manufacturing, forest fire tracking, and remote sensing. In both environments, precise localization plays a critical role in achieving high-performance flight and interacting with surrounding objects. However, in indoor areas where Global Navigation Satellite System (GNSS) coverage is degraded or denied, it becomes challenging to control a UAV autonomously, especially where obstacles are unidentified. A large number of techniques based on various technologies have been proposed to overcome these limitations. This paper provides a comparison of the existing solutions and technologies available for this purpose, together with their strengths and limitations. Further, a summary of the current research status, unresolved issues, and opportunities is provided to give research directions to researchers with similar interests.