Effective Target Aware Visual Navigation for UAVs
In this paper we propose an effective vision-based navigation method that
allows a multirotor vehicle to simultaneously reach a desired goal pose in the
environment while constantly facing a target object or landmark. Standard
techniques such as Position-Based Visual Servoing (PBVS) and Image-Based Visual
Servoing (IBVS) in some cases (e.g., while the multirotor is performing fast
maneuvers) cannot constantly maintain the line of sight with a target
of interest. Instead, we compute the optimal trajectory by solving a non-linear
optimization problem that minimizes the target re-projection error while
meeting the UAV's dynamic constraints. The desired trajectory is then tracked
by means of a real-time Non-linear Model Predictive Controller (NMPC), which
implicitly allows the multirotor to satisfy both sets of constraints. We
successfully evaluate the proposed approach in many real and simulated
experiments, making an exhaustive comparison with a standard approach.
Comment: Conference paper at the European Conference on Mobile Robotics (ECMR) 201
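The paper above computes trajectories by minimizing the target re-projection error subject to the UAV's dynamics. A minimal sketch of such a cost term, assuming a standard pinhole camera model (the `reprojection_error` helper and the intrinsics are illustrative, not the authors' implementation):

```python
import numpy as np

def reprojection_error(cam_pos, cam_R, target_pos, K):
    """Pixel distance between the projected target and the image center.

    cam_pos:    (3,) camera position in the world frame
    cam_R:      (3,3) world-to-camera rotation
    target_pos: (3,) target position in the world frame
    K:          (3,3) camera intrinsics (pinhole model)
    """
    p_cam = cam_R @ (target_pos - cam_pos)   # target in camera frame
    assert p_cam[2] > 0, "target must be in front of the camera"
    uvw = K @ p_cam
    uv = uvw[:2] / uvw[2]                    # pixel coordinates
    center = np.array([K[0, 2], K[1, 2]])    # principal point
    return float(np.linalg.norm(uv - center))
```

A trajectory optimizer would sum this error over candidate poses along the trajectory and minimize it together with dynamic-feasibility penalties.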
Hybrid visual servoing with hierarchical task composition for aerial manipulation
© 2016 IEEE. In this paper, a hybrid visual servoing scheme with a hierarchical task-composition control framework is described for aerial manipulation, i.e., for the control of an aerial vehicle endowed with a robot arm. The proposed approach combines the main benefits of both image-based and position-based control schemes into a unique hybrid-control framework. Moreover, the underactuation of the aerial vehicle is explicitly taken into account in a general formulation, together with a dynamic smooth activation mechanism. Both simulation case studies and experiments are presented to demonstrate the performance of the proposed technique.
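Hierarchical task composition is commonly implemented with null-space projection, so a secondary task can never disturb the primary one. A generic two-task sketch in that spirit (textbook task-priority formulation, not the paper's exact controller):

```python
import numpy as np

def task_priority_velocity(J1, dx1, J2, dx2):
    """Two-task hierarchical composition.

    J1, dx1: Jacobian and desired task velocity of the primary task
    J2, dx2: Jacobian and desired task velocity of the secondary task
    The secondary task acts only in the null space of the primary one.
    """
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1          # null-space projector
    # Secondary task resolved in the remaining degrees of freedom
    q_dot = J1_pinv @ dx1 + N1 @ np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ J1_pinv @ dx1)
    return q_dot
```

When the tasks are compatible, both are satisfied exactly; when they conflict, the primary task always wins.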
Learning Pose Estimation for UAV Autonomous Navigation and Landing Using Visual-Inertial Sensor Data
In this work, we propose a robust network-in-the-loop control system for autonomous navigation and landing of an Unmanned Aerial Vehicle (UAV). To estimate the UAV's absolute pose, we develop a deep neural network (DNN) architecture for visual-inertial odometry, which provides a robust alternative to traditional methods. We first evaluate the accuracy of the estimation by comparing the predictions of our model to traditional visual-inertial approaches on the publicly available EuRoC MAV dataset. The results indicate a clear improvement in pose-estimation accuracy of up to 25% over the baseline. Finally, we integrate the data-driven estimator into the closed-loop flight control system of AirSim, a simulator available as a plugin for Unreal Engine, and we provide simulation results for autonomous navigation and landing.
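Accuracy comparisons on EuRoC-style benchmarks are typically reported as absolute trajectory error (ATE) after rigid alignment. A sketch of that metric, assuming the common Kabsch (rotation + translation, no scale) alignment; the function name is illustrative:

```python
import numpy as np

def absolute_trajectory_error(est, gt):
    """Translational RMSE after rigidly aligning the estimated
    trajectory (est, Nx3) to ground truth (gt, Nx3) via Kabsch."""
    est_c = est - est.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    H = est_c.T @ gt_c                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    aligned = est_c @ R.T
    return float(np.sqrt(np.mean(np.sum((aligned - gt_c) ** 2, axis=1))))
```

A "25% improvement over the baseline" would then be a 25% reduction of this RMSE on the benchmark sequences.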
Output Feedback Image-Based Visual Servoing of Rotorcrafts
© 2018, Springer Nature B.V. This paper presents an improved output-feedback image-based visual servoing (IBVS) law for rotorcraft unmanned aerial vehicles (RUAVs). The control law enables an RUAV with a minimal set of sensors, i.e., an inertial measurement unit (IMU) and a single downward-facing camera, to regulate its position and heading relative to a planar visual target consisting of multiple points. Compared to our previous work, a twofold improvement is made. First, the desired value of the image feature controlling the vertical motion of the RUAV is a function of the other image features rather than a constant. This modification helps keep the visual target in the camera's field of view by indirectly adjusting the vehicle's height. Second, the proposed approach simplifies our previous output-feedback law by reducing the dimension of the observer's state space while retaining the same asymptotic stability result. Both simulation and experimental results are presented to demonstrate the performance of the proposed controller.
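For context, the classic full-state IBVS law for point features (the textbook scheme this paper's output-feedback variant builds on, not the paper's own observer-based law) maps the feature error to a camera twist through the interaction matrix:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z, relating the camera twist to the feature velocity."""
    return np.array([
        [-1 / Z,      0, x / Z,       x * y, -(1 + x**2),  y],
        [     0, -1 / Z, y / Z,  1 + y**2,       -x * y,  -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classic IBVS law: camera twist v = -lam * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

The paper's contribution is precisely to avoid needing the full state (e.g., linear velocity) that such a law implicitly assumes, replacing it with an observer driven by IMU and image measurements.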
Visual Servoing Approach for Autonomous UAV Landing on a Moving Vehicle
We present a method to autonomously land an Unmanned Aerial Vehicle on a
moving vehicle with a circular (or elliptical) pattern on the top. A visual
servoing controller approaches the ground vehicle using velocity commands
calculated directly in image space. The control laws generate velocity commands
in all three dimensions, eliminating the need for a separate height controller.
The method has demonstrated the ability to approach and land on the moving deck
in simulation and in indoor and outdoor environments, and it has provided the
fastest landing approach among comparable methods. It does not
rely on additional external setup, such as RTK, motion capture system, ground
station, offboard processing, or communication with the vehicle, and it
requires only a minimal set of hardware and localization sensors. The videos
and source code can be accessed from http://theairlab.org/landing-on-vehicle.
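A simplified stand-in for a controller that issues velocity commands directly in image space (the gains, image size, and descent logic here are illustrative assumptions, not the paper's tuning): lateral velocities from the pattern-center offset, descent rate from the apparent pattern size.

```python
import numpy as np

def landing_velocity(center_px, radius_px, img_size=(640, 480),
                     target_radius_px=200, gains=(0.002, 0.002, 0.001)):
    """Velocity command (vx, vy, vz) from the detected circular pattern.

    center_px: (u, v) pixel center of the detected pattern
    radius_px: apparent pattern radius in pixels (proxy for height)
    """
    kx, ky, kz = gains
    ex = center_px[0] - img_size[0] / 2   # horizontal pixel error
    ey = center_px[1] - img_size[1] / 2   # vertical pixel error
    ez = target_radius_px - radius_px     # descend until pattern fills frame
    return np.array([-kx * ex, -ky * ey, -kz * ez])
```

Because all three axes are driven from the image, no separate height controller (or external localization) is needed, matching the claim in the abstract.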
Homography-based pose estimation to guide a miniature helicopter during 3D-trajectory tracking
This work proposes a pose-based visual servoing control, using planar homography, to estimate the position and orientation of a miniature helicopter relative to a known pattern. Given the current flight information, the nonlinear underactuated controller presented in one of our previous works, which covers all flight phases, is used to guide the rotorcraft during a 3D-trajectory tracking task. Subsequently, the simulation framework and the results obtained with it are presented and discussed, validating the proposed controller when a visual system is used to determine the helicopter's pose.
Authors: Alexandre Brandão (Universidade Federal do Espírito Santo, Brazil); Jorge Antonio Sarapura (CONICET / Universidad Nacional de San Juan, Argentina); Mario Sarcinelli Filho (Universidade Federal do Espírito Santo, Brazil); Ricardo Oscar Carelli Albarracín (CONICET / Universidad Nacional de San Juan, Argentina)
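For a planar pattern in the z = 0 plane viewed by a calibrated camera, the homography factors as H = K [r1 r2 t], so pose can be recovered directly (a standard Zhang-style decomposition, sketched here as an assumption about how such a pipeline works, not the authors' code):

```python
import numpy as np

def pose_from_planar_homography(H, K):
    """Decompose H = K [r1 r2 t] (planar target in z = 0) into the
    rotation R and translation t of the camera w.r.t. the pattern."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])   # fix the homography's scale
    r1 = lam * A[:, 0]
    r2 = lam * A[:, 1]
    r3 = np.cross(r1, r2)                 # complete the rotation basis
    t = lam * A[:, 2]
    R = np.column_stack([r1, r2, r3])
    U, _, Vt = np.linalg.svd(R)           # re-orthonormalize to a proper rotation
    return U @ Vt, t
```

In practice H would come from matched pattern points (e.g., a robust homography fit), and the recovered (R, t) feeds the flight controller as the helicopter's pose estimate.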
Visual guidance of unmanned aerial manipulators
The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms.
The main objective of this thesis is to formalize the concept of aerial manipulator and present guidance methods, using visual information, to provide them with autonomous functionalities.
A key competence to control an aerial manipulator is the ability to localize it in the environment.
Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap in vehicles where size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity, and acceleration) by means of on-board, low-cost, lightweight, high-rate sensors.
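The thesis does not spell out its estimator here, but low-cost, high-rate fusion of this kind is often illustrated with a complementary filter: integrate the fast-but-drifting gyro and correct it with the slow-but-unbiased accelerometer angle. A minimal one-axis sketch (gains and structure are illustrative assumptions):

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step of a complementary attitude filter.

    angle:       current estimate (rad)
    gyro_rate:   gyroscope angular rate (rad/s), high-rate but drifting
    accel_angle: angle inferred from the accelerometer (rad), noisy but unbiased
    """
    # High-pass the integrated gyro, low-pass the accelerometer angle
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Run per IMU sample, this costs a handful of multiplications per axis, which is exactly the kind of computational budget the thesis targets.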
Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they offer the possibility to satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, and improve platform stability or arm operability.
The main contributions of this research are threefold: (1) a localization technique to enable autonomous navigation, specifically designed for aerial platforms with size, load, and computational restrictions; (2) control commands to drive the vehicle using visual information (visual servoing); and (3) the integration of the visual-servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided.
All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.