679 research outputs found

    High-frequency MAV state estimation using low-cost inertial and optical flow measurement units

    The paper has supplementary downloadable material available at http://ieeexplore.ieee.org, provided by the authors. The material includes a video of the state estimation presented in the paper. This paper develops a new method for 3D, high-rate vehicle state estimation, specially designed for free-flying Micro Aerial Vehicles (MAVs). We fuse observations from low-cost inertial and optical flow measurement units, and extend the current use of these optical sensors from hovering to odometry estimation. Two Kalman filters, in extended and error-state versions, are developed and benchmarked alongside a large number of algorithm variations, using both simulations and real experiments with precise ground truth. In contrast to state-of-the-art visual-inertial odometry methods, the proposed solution does not require image processing in the main CPU. Instead, the measurement correction takes advantage of recently available optical flow sensors, which directly provide metric information about the MAV motion. We thus reduce the computational load of the main processing unit and obtain an accurate estimate of the vehicle state at a high update rate.
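    As a hedged illustration of the fusion idea, the sketch below shows a Kalman-style correction of a position/velocity state with a metric body-frame velocity reported by an optical-flow unit (flow rate scaled by range). The 6-state layout and function name are assumptions for illustration, not the paper's extended or error-state filters.

```python
import numpy as np

def flow_velocity_update(x, P, v_meas_body, R_wb, R_meas):
    """Correct a [p(3), v(3)] world-frame state with a metric body-frame
    velocity measurement from the optical-flow unit. R_wb rotates body
    vectors into the world frame (taken from the attitude estimate)."""
    H = np.zeros((3, 6))
    H[:, 3:6] = R_wb.T                 # predicted body velocity = R_wb^T v_world
    y = v_meas_body - H @ x            # innovation
    S = H @ P @ H.T + R_meas
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new
```

In an error-state variant, the same innovation would instead correct a small error state around a nominal trajectory propagated with the IMU.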

    Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data

    In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
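    The abstract does not specify the network layout, so the following is only a hedged sketch of a visual-inertial pose regressor in PyTorch: a small CNN encodes the image, a GRU encodes the IMU window, and an MLP regresses translation plus a unit quaternion. All layer sizes and names (VisualInertialPoseNet, imu_dim, hidden) are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VisualInertialPoseNet(nn.Module):
    """Illustrative visual-inertial pose regressor: CNN image encoder,
    GRU IMU encoder, MLP head predicting translation (3) + quaternion (4)."""
    def __init__(self, imu_dim=6, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())          # -> (B, 32)
        self.imu_rnn = nn.GRU(imu_dim, hidden, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(32 + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7))                           # t(3) + q(4)

    def forward(self, image, imu_seq):
        vis = self.cnn(image)                               # (B, 32)
        _, h = self.imu_rnn(imu_seq)                        # h: (1, B, hidden)
        fused = torch.cat([vis, h[-1]], dim=1)
        out = self.head(fused)
        t, q = out[:, :3], out[:, 3:]
        q = q / q.norm(dim=1, keepdim=True)                 # unit quaternion
        return t, q
```

A pose loss would then typically combine a translation error term with a quaternion (orientation) error term, weighted against each other.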

    A Continuous-Time Nonlinear Observer for Estimating Structure from Motion from Omnidirectional Optic Flow

    Various insect species utilize certain types of self-motion to perceive structure in their local environment, a process known as active vision. This dissertation presents the development of a continuous-time observer for estimating structure from motion that emulates the biological phenomenon of active vision. To emulate the wide field of view of compound eyes and the neurophysiology of insects, the observer utilizes an omnidirectional optic flow field. Exponential stability of the observer is assured provided the persistency of excitation condition is met; this condition is satisfied by altering the direction of motion sufficiently quickly. An equal convergence rate over the entire viewable area can be achieved by executing certain prototypical maneuvers. Practical implementation of the observer is accomplished both in simulation and on an actual flying quadrotor testbed vehicle. Furthermore, this dissertation presents the vehicular implementation of a complementary navigation methodology known as wide-field integration of the optic flow field. The implementation of the developed insect-inspired navigation methodologies on the physical testbed vehicles utilized in this research required the development of many subsystems that comprise a control and navigation suite, including avionics development and state sensing, model development via system identification, feedback controller design, and state estimation strategies. These requisite subsystems and their development are discussed.
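    To make the structure-from-motion idea concrete, here is a hedged, discrete-time sketch of an inverse-depth gradient observer for a single viewing direction, assuming normalized pinhole coordinates and known ego-motion. The dissertation's continuous-time, omnidirectional formulation is more general; the gain and function names here are illustrative.

```python
import numpy as np

def rotational_flow(x, y, omega):
    """Optic flow induced purely by rotation at normalized image point (x, y)."""
    wx, wy, wz = omega
    return np.array([x * y * wx - (1.0 + x * x) * wy + y * wz,
                     (1.0 + y * y) * wx - x * y * wy - x * wz])

def translational_flow_dir(x, y, v):
    """Translational flow per unit inverse depth (the parallax direction)."""
    vx, vy, vz = v
    return np.array([x * vz - vx, y * vz - vy])

def update_inverse_depth(rho_hat, flow_meas, x, y, v, omega, gain, dt):
    """One gradient-observer step: rho_hat tracks 1/Z as long as the
    parallax direction stays non-zero (persistency of excitation)."""
    b = translational_flow_dir(x, y, v)
    residual = (flow_meas - rotational_flow(x, y, omega)) - b * rho_hat
    return rho_hat + gain * dt * (b @ residual)
```

The persistency-of-excitation requirement appears here directly: if the parallax direction b is zero for a given viewing direction, the residual carries no depth information and rho_hat cannot converge, which is why the observer relies on sufficiently rapid changes in the direction of motion.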

    Multi-Antenna Vision-and-Inertial-Aided CDGNSS for Micro Aerial Vehicle Pose Estimation

    A system is presented for multi-antenna carrier-phase differential GNSS (CDGNSS)-based pose (position and orientation) estimation aided by monocular visual measurements and a smartphone-grade inertial sensor. The system is designed for micro aerial vehicles, but can be applied generally for low-cost, lightweight, high-accuracy, geo-referenced pose estimation. Visual and inertial measurements enable robust operation despite GNSS degradation by constraining uncertainty in the dynamics propagation, which improves fixed-integer CDGNSS availability and reliability in areas with limited sky visibility. No prior work has demonstrated an increased CDGNSS integer fixing rate when incorporating visual measurements with smartphone-grade inertial sensing. A central pose estimation filter receives measurements from separate CDGNSS position and attitude estimators, visual feature measurements based on the ROVIO measurement model, and inertial measurements. The filter's pose estimates are fed back as a prior for CDGNSS integer fixing. A performance analysis under both simulated and real-world GNSS degradation shows that visual measurements greatly increase the availability and accuracy of low-cost inertial-aided CDGNSS pose estimation.
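    A central filter of the kind described above can be pictured as one common EKF correction shared by all incoming measurement streams. The sketch below is a hedged illustration of that routing only; the 9-state layout, stream names and noise values are assumptions, not the paper's filter.

```python
from collections import namedtuple
import numpy as np

# Each stream (CDGNSS position, CDGNSS attitude, ROVIO-style visual features,
# IMU-derived corrections) supplies its own measurement model; the central
# filter applies one common EKF update.
Measurement = namedtuple("Measurement", "kind z")
Model = namedtuple("Model", "h H R")

def ekf_update(x, P, z, model):
    """Generic EKF measurement update shared by all sensor streams."""
    y = z - model.h(x)                          # innovation
    S = model.H @ P @ model.H.T + model.R
    K = P @ model.H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ model.H) @ P
    return x, P

# Example: a position-only CDGNSS update on a [p(3), v(3), att_err(3)] state.
H_pos = np.hstack([np.eye(3), np.zeros((3, 6))])
models = {"cdgnss_pos": Model(h=lambda x: H_pos @ x, H=H_pos, R=0.01 * np.eye(3))}

def process(x, P, meas, models):
    return ekf_update(x, P, meas.z, models[meas.kind])
```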

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics became mature enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, using visual information, that provide such vehicles with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods based on on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which becomes a handicap for vehicles where size, payload, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, lightweight and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability, and increase arm operability. The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks, specific to aerial manipulators, during flight. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
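    As a concrete illustration of contribution (3), the sketch below shows the classical task-priority redundancy-resolution recursion on which hierarchical control laws of this kind are typically built. The task list, dimensions and function names are illustrative assumptions, not the thesis's exact control law.

```python
import numpy as np

def hierarchical_qdot(tasks, n_dof):
    """Task-priority redundancy resolution: 'tasks' is an ordered list of
    (J_i, xdot_i) pairs, highest priority first. Each lower-priority task
    acts only in the null space left by the tasks above it."""
    q_dot = np.zeros(n_dof)
    N = np.eye(n_dof)                      # null-space projector of tasks so far
    for J, xdot in tasks:
        JN = J @ N
        JN_pinv = np.linalg.pinv(JN)
        q_dot = q_dot + JN_pinv @ (xdot - J @ q_dot)
        N = N - JN_pinv @ JN               # shrink the remaining null space
    return q_dot

# Example with two tasks on a 10-DoF aerial manipulator (values illustrative):
# a 6-DoF visual-servo task on the end effector, then a 1-DoF task keeping
# one arm joint near a neutral value.
J1, x1 = np.random.randn(6, 10), np.zeros(6)
J2, x2 = np.eye(10)[4:5], np.array([0.1])
q_dot = hierarchical_qdot([(J1, x1), (J2, x2)], n_dof=10)
```

A typical ordering for an aerial manipulator places flight-critical tasks (e.g., keeping the arm's center of mass over the platform) above the visual-servo task, with joint-limit or manipulability tasks below it.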

    Biologically Inspired Monocular Vision Based Navigation and Mapping in GPS-Denied Environments

    This paper presents an in-depth theoretical study of a bio-vision-inspired feature extraction and depth perception method integrated with vision-based simultaneous localization and mapping (SLAM). We incorporate key functions of the developed visual cortices of several advanced species, including humans, for depth perception and pattern recognition. Our navigation strategy assumes a GPS-denied man-made environment consisting of orthogonal walls, corridors and doors. By exploiting these indoor architectural features, we introduce a method for gathering useful landmarks from a monocular camera for SLAM use, with absolute range information and without using active ranging sensors. Experimental results show that the system is limited only by the capabilities of the camera and the availability of good corners. The proposed methods are experimentally validated with our self-contained MAV inside a conventional building.
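    The absolute-range idea can be illustrated with a simple pinhole-model calculation: a vertical architectural feature of known physical height (say, a standard door frame) yields metric depth from its apparent pixel height. The focal length, door height and function name below are assumed for illustration and are not taken from the paper.

```python
def depth_from_known_height(f_pixels, known_height_m, pixel_height):
    """Pinhole-model range to a vertical landmark of known physical height
    (e.g., a standard door frame) observed as pixel_height pixels tall."""
    return f_pixels * known_height_m / pixel_height

# Example: 600-px focal length, 2.0 m door frame seen as 150 px tall -> 8.0 m.
z = depth_from_known_height(600.0, 2.0, 150.0)
```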

    Aerial Vehicles

    This book contains 35 chapters written by experts in developing techniques for making aerial vehicles more intelligent, more reliable, more flexible in use, and safer in operation. It will also serve as an inspiration for further improvement of the design and application of aerial vehicles. The advanced techniques and research described here may also be applicable to other high-tech areas such as robotics, avionics, vetronics, and space.