
    On-board real-time pose estimation for UAVs using deformable visual contour registration

    Presented at ICRA 2014, held in Hong Kong from 31 May to 7 June. We present a real-time algorithm for estimating the pose of non-planar objects on which we have placed a visual marker. It is designed to overcome the limitations of small aerial robots, such as slow CPUs, low image resolution and the geometric distortions produced by wide-angle lenses or viewpoint changes. The method initially registers the shape of a known marker to the contours extracted from an image. For this purpose, and in contrast to the state of the art, we do not seek to match textured patches or points of interest. Instead, we optimize a geometric alignment cost computed directly from raw polygonal representations of the observed regions using very simple and efficient clipping algorithms. Further speed is achieved by performing the optimization in the polygon representation space, avoiding the need for 2D image processing operations. Deformation modes are easily included in the optimization scheme, allowing accurate registration of different markers attached to curved surfaces using a single deformable prototype. Once this initial registration is solved, the object pose is retrieved using a standard PnP approach. As a result, the method achieves accurate object pose estimation in real time, which is very important for interactive UAV tasks, for example short-distance surveillance or bar assembly. We present experiments where our method yields, at about 30 Hz, an average error of less than 5 mm in estimating the position of a 19 × 19 mm marker placed 0.7 m from the camera. This work has been partially funded by the Spanish Ministry of Economy and Competitiveness under project TaskCoop DPI2010-17112, by the ERA-Net CHIST-ERA project ViSen PCIN-2013-047 and by the EU project ARCAS FP7-ICT-2011-28761. A. Ruiz is supported by FEDER funds under grant TIN2012-38341-C04-03. Peer Reviewed
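    The geometric alignment cost described above can be sketched in a few lines: for two overlapping polygons, the symmetric-difference (XOR) area is A + B − 2·(A ∩ B), and for convex regions the intersection can be computed with a simple Sutherland–Hodgman clip. A minimal stdlib-only Python sketch, assuming convex polygons with counter-clockwise vertex order (the coordinates are illustrative, not from the paper):

```python
def shoelace_area(poly):
    """Absolute area of a simple polygon given as [(x, y), ...]."""
    s = 0.0
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def clip_convex(subject, clipper):
    """Sutherland-Hodgman clipping of `subject` against a convex,
    counter-clockwise `clipper` polygon."""
    output = list(subject)
    m = len(clipper)
    for i in range(m):
        ax, ay = clipper[i]
        bx, by = clipper[(i + 1) % m]
        dx, dy = bx - ax, by - ay          # clip-edge direction
        inp, output = output, []
        for j in range(len(inp)):
            sx, sy = inp[j - 1]            # previous vertex (closed loop)
            ex, ey = inp[j]                # current vertex
            s_in = dx * (sy - ay) - dy * (sx - ax) >= 0
            e_in = dx * (ey - ay) - dy * (ex - ax) >= 0
            if s_in != e_in:
                # Intersection of segment s-e with the clip-edge line.
                fx, fy = ex - sx, ey - sy
                t = -(dx * (sy - ay) - dy * (sx - ax)) / (dx * fy - dy * fx)
                output.append((sx + t * fx, sy + t * fy))
            if e_in:
                output.append((ex, ey))
        if not output:
            break
    return output

def xor_area(poly_a, poly_b):
    """Symmetric-difference (XOR) area used as the alignment cost."""
    inter = clip_convex(poly_a, poly_b)
    inter_area = shoelace_area(inter) if len(inter) >= 3 else 0.0
    return shoelace_area(poly_a) + shoelace_area(poly_b) - 2.0 * inter_area

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = [(0.5, 0), (1.5, 0), (1.5, 1), (0.5, 1)]
print(xor_area(square, shifted))  # → 1.0 (two 0.5-area non-overlap strips)
```

    An optimizer can then minimize this cost over the marker's projective (and deformation) parameters; the cost goes to zero only when the projected model polygon coincides with the observed contour.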

    Precise localization for aerial inspection using augmented reality markers

    The final publication is available at link.springer.com. This chapter presents a method for precise localization using augmented reality markers. The method achieves a position precision better than 5 mm at a distance of 0.7 m using a 17 mm × 17 mm visual marker, and it can be used by a controller while the aerial robot performs a manipulation task. The localization method is based on optimizing the alignment of deformable contours in texture-less images, working directly from the raw vertices of the observed contour. The algorithm optimizes the alignment of the XOR area, computed by means of computer graphics clipping techniques, and runs at 25 frames per second. Peer Reviewed. Postprint (author's final draft)
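    A back-of-the-envelope pinhole-camera check makes the reported figures plausible: a marker of physical width X at depth Z subtends roughly f·X/Z pixels, and the depth sensitivity is dZ = Z²·d_px/(f·X), so sub-pixel contour registration is what brings the error down to millimetres. A rough stdlib-only Python sketch (the 600 px focal length and 0.1 px fit accuracy are assumed values, not from the chapter):

```python
def projected_width_px(focal_px, size_m, depth_m):
    """Pinhole model: apparent width in pixels of an object of
    physical width `size_m` seen at distance `depth_m`."""
    return focal_px * size_m / depth_m

def depth_error_m(focal_px, size_m, depth_m, pixel_err):
    """First-order depth sensitivity: dZ = Z^2 * d_px / (f * X)."""
    return depth_m ** 2 * pixel_err / (focal_px * size_m)

f = 600.0                                 # assumed focal length in pixels
w = projected_width_px(f, 0.017, 0.7)     # ~14.6 px across the marker
dz = depth_error_m(f, 0.017, 0.7, 0.1)    # depth error for a 0.1-px fit
```

    With these assumed numbers the depth error comes out just under 5 mm, consistent with the precision the chapter reports, which is only achievable because the contour fit is sub-pixel accurate.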

    Efficient monocular pose estimation using complex 3D models

    This document presents a robust and efficient method for estimating the pose of a camera. The proposed method assumes prior knowledge of a 3D model of the environment, and compares a new input image only against a small set of similar images previously selected by an algorithm … Peer Reviewed. Postprint (author's final draft)

    On-board real-time pose estimation for UAVs using deformable visual contour registration

    We present a real-time method for pose estimation of objects from a UAV, using visual markers placed on non-planar surfaces. It is designed to overcome the constraints of small aerial robots, such as slow CPUs, low-resolution cameras and image deformations due to distortions introduced by the lens or by the viewpoint changes produced during flight. The method consists of shape registration against contours extracted from an image. Instead of working with dense image patches or corresponding image features, we optimize a geometric alignment cost computed directly from the raw polygonal representations of the observed regions using efficient clipping algorithms. Moreover, instead of performing 2D image processing operations, the optimization is carried out in the polygon representation space, allowing real-time projective matching. Deformation modes are easily included in the optimization scheme, allowing accurate registration of different markers attached to curved surfaces using a single deformable prototype. As a result, the method achieves accurate object pose estimation in real time, which is very important for interactive UAV tasks, for example short-distance surveillance or bar assembly. We describe the main algorithmic components of the method and present experiments where our method yields an average error of less than 5 mm in position at a distance of 0.7 m, using a 19 mm × 19 mm visual marker. Finally, we compare these results with current state-of-the-art computer vision systems. Peer Reviewed. Postprint (published version)


    Visual odometry correction based on loop-closure detection

    An essential requirement in the field of robotics and automation is to know the position of a mobile robot over time, as well as the trajectory it describes, using on-board sensors. Several methods currently exist for accomplishing this goal. In this work we propose a novel approach focused on the use of cameras as sensors for perceiving the environment, which makes it possible to perform robust visual odometry, applying correction algorithms based on loop-closure detection for localization in long-term situations. To satisfy these conditions, we carry out a methodological improvement of some classic computer vision techniques, and new algorithms are implemented with the aim of correcting the drift produced in the visual odometry estimate along the traversed path. The main objective is to obtain an accurate estimate of the position, orientation and trajectory followed by a vehicle. Sequences of images acquired by an on-board stereo camera system are analyzed without any previous knowledge of the real environment, and the necessary correction techniques are applied once the vehicle traverses a previously visited area. Degree in Industrial Electronics and Automation Engineering
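    The simplest form of the correction step described above can be sketched as follows: once a loop closure reveals where the final pose should actually lie, the accumulated residual is distributed linearly over the poses since the closure point. A toy stdlib-only Python sketch (2D positions only, no orientation; a real system would optimize a full pose graph):

```python
def correct_drift(trajectory, closure_index, observed_pose):
    """Distribute the loop-closure residual linearly over the poses
    between `closure_index` and the end of the trajectory.

    trajectory    : list of (x, y) estimated positions
    closure_index : index of the revisited pose
    observed_pose : where the final pose should actually be
    """
    ex = observed_pose[0] - trajectory[-1][0]   # residual in x
    ey = observed_pose[1] - trajectory[-1][1]   # residual in y
    n = len(trajectory) - 1 - closure_index
    corrected = list(trajectory[:closure_index + 1])
    for k in range(1, n + 1):
        w = k / n  # linear weight: 0 at the closure, 1 at the end
        x, y = trajectory[closure_index + k]
        corrected.append((x + w * ex, y + w * ey))
    return corrected

# A trajectory drifting upward; the loop closure says the last pose
# should coincide with (3, 0).
traj = [(0, 0), (1, 0), (2, 0.1), (3, 0.3)]
fixed = correct_drift(traj, 0, (3, 0))
```

    Early poses are barely moved while later ones absorb most of the correction, which matches the intuition that odometric drift accumulates with distance travelled.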

    A Control Architecture for Unmanned Aerial Vehicles Operating in Human-Robot Team for Service Robotic Tasks

    In this thesis a control architecture for an Unmanned Aerial Vehicle (UAV) is presented. The aim of the thesis is to address the problem of controlling a flying robot operating in a human-robot team at different levels of abstraction. For this purpose, three layers were considered in the design of the architecture: the high-level, middle-level and low-level layers. The special case of a UAV operating in service robotics tasks, and in particular in Search & Rescue missions in an alpine scenario, is considered. Different methodologies for each layer are presented, with simulated or real-world experimental validation.
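    The three-layer decomposition described above can be illustrated with a minimal skeleton: the high level plans a mission, the middle level feeds plan steps to the controller, and the low level tracks each setpoint. A hypothetical Python sketch (the class and method names are illustrative, not from the thesis):

```python
class HighLevel:
    """Mission layer: turns an abstract task into an ordered waypoint list."""
    def plan(self, task):
        return task["waypoints"]

class LowLevel:
    """Control layer: would run the flight controller; here a stub."""
    def track(self, setpoint):
        return ("reached", setpoint)

class MiddleLevel:
    """Executive layer: sequences the plan through the low-level controller."""
    def __init__(self, low):
        self.low = low
    def execute(self, waypoints):
        return [self.low.track(wp) for wp in waypoints]

mission = {"waypoints": [(0, 0, 1), (5, 0, 2)]}
log = MiddleLevel(LowLevel()).execute(HighLevel().plan(mission))
```

    The point of the layering is that each level can be replaced independently, e.g. swapping the stub low level for a real attitude/position controller without touching the planner.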

    Simulation of autonomous UAV navigation with collision avoidance and spatial awareness.

    The goal of this thesis is to design a collision-free autonomous UAV navigation system with spatial awareness within a comprehensive simulation framework. The navigation system is required to find a collision-free trajectory to a randomly assigned 3D target location without any prior map information. The implemented navigation system contains four main components: mapping, localisation, cognition and control. The cognition system issues execution commands based on the position information about obstacles and the UAV itself perceived by the mapping and localisation systems, respectively, and the control system is responsible for executing those commands. The implementation of the cognition system is split into three case studies drawn from real-life scenarios: restricted-area avoidance, static obstacle avoidance and dynamic obstacle avoidance. Experiments were conducted for all three cases, and the UAV is capable of determining a collision-free trajectory in each environment. All simulated components were designed to be analogous to their real-world counterparts; ideally, the simulated navigation framework can be transferred to a real UAV without any changes. The simulation framework provides a platform for future robotics research. As it is implemented in a modular way, it is easy to debug, so the system has good reliability, as well as good readability, maintainability and extendability. PhD in Manufacturing
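    The cognition step described above — choosing the next motion command from the perceived obstacle and self positions — is often realised with an attractive/repulsive potential field; whether this thesis uses that exact scheme is not stated, so the following is a generic stdlib-only Python sketch with illustrative gain values:

```python
import math

def next_step(pos, target, obstacles,
              k_att=1.0, k_rep=0.5, influence=2.0, step=0.2):
    """One potential-field step in 3D: attracted towards the target,
    repelled by obstacles closer than `influence` metres."""
    # Attractive component: unit vector towards the target.
    d = [t - p for p, t in zip(pos, target)]
    dist = math.sqrt(sum(c * c for c in d)) or 1e-9
    force = [k_att * c / dist for c in d]
    # Repulsive component from each obstacle inside the influence radius.
    for ob in obstacles:
        v = [p - o for p, o in zip(pos, ob)]
        r = math.sqrt(sum(c * c for c in v)) or 1e-9
        if r < influence:
            gain = k_rep * (1.0 / r - 1.0 / influence) / r ** 2
            force = [f + gain * c / r for f, c in zip(force, v)]
    # Move a fixed step length along the resulting direction.
    mag = math.sqrt(sum(f * f for f in force)) or 1e-9
    return [p + step * f / mag for p, f in zip(pos, force)]

free = next_step([0, 0, 0], [1, 0, 0], [])                 # straight ahead
dodge = next_step([0, 0, 0], [1, 0, 0], [[0.5, 0.1, 0]])   # deflected
```

    With no obstacles the command points straight at the target; an obstacle slightly off the line pushes the commanded step to the opposite side, which is the behaviour the restricted-area and obstacle-avoidance case studies require. (Pure potential fields can get stuck in local minima, which is one reason full navigation stacks pair them with a planner.)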