Visual-inertial self-calibration on informative motion segments
Environmental conditions and external effects, such as shocks, have a
significant impact on the calibration parameters of visual-inertial sensor
systems. Thus long-term operation of these systems cannot fully rely on factory
calibration. Since the observability of certain parameters is highly dependent
on the motion of the device, using short data segments at device initialization
may yield poor results. When such systems are additionally subject to energy
constraints, it is also infeasible to run full-batch approaches on large datasets, and careful selection of the data becomes essential. In this paper,
we present a novel approach for resource-efficient self-calibration of
visual-inertial sensor systems. This is achieved by casting the calibration as
a segment-based optimization problem that can be run on a small subset of
informative segments. Consequently, the computational burden is limited as only
a predefined number of segments is used. We also propose an efficient
information-theoretic selection to identify such informative motion segments.
In evaluations on a challenging dataset, we show that our approach significantly outperforms the state of the art in terms of computational burden while maintaining comparable accuracy.
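The segment-based formulation can be illustrated as a greedy subset-selection problem: each motion segment contributes a Gauss-Newton information matrix, and segments are added until a predefined budget is reached. The sketch below is a hedged illustration, not the paper's exact algorithm; the function names, the D-optimality (log-determinant) criterion, and the weak prior are all assumptions:

```python
import numpy as np

def segment_information(J):
    """Fisher information contributed by one motion segment, given its
    stacked measurement Jacobian w.r.t. the calibration parameters
    (Gauss-Newton approximation, unit measurement noise)."""
    return J.T @ J

def select_informative_segments(jacobians, k, prior=1e-6):
    """Greedily pick k segments maximizing the log-determinant of the
    accumulated information matrix (a D-optimality criterion)."""
    n = jacobians[0].shape[1]
    acc = prior * np.eye(n)           # weak prior keeps the log-det finite
    chosen, remaining = [], list(range(len(jacobians)))
    for _ in range(k):
        gains = []
        for i in remaining:
            _, logdet = np.linalg.slogdet(acc + segment_information(jacobians[i]))
            gains.append(logdet)
        best = remaining[int(np.argmax(gains))]
        chosen.append(best)
        remaining.remove(best)
        acc += segment_information(jacobians[best])
    return chosen
```

Because only the selected segments enter the optimization, the computational burden stays bounded by the budget k regardless of session length.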
Sampling-based Motion Planning for Active Multirotor System Identification
This paper reports on an algorithm for planning trajectories that allow a
multirotor micro aerial vehicle (MAV) to quickly identify a set of unknown
parameters. In many problems, such as self-calibration or model parameter identification, some states are only observable under specific motions. These
motions are often hard to find, especially for inexperienced users. Therefore,
we consider system model identification in an active setting, where the vehicle
autonomously decides what actions to take in order to quickly identify the
model. Our algorithm approximates the belief dynamics of the system around a
candidate trajectory using an extended Kalman filter (EKF). It uses
sampling-based motion planning to explore the space of possible beliefs and
find a maximally informative trajectory within a user-defined budget. We
validate our method in simulation and on a real system showing the feasibility
and repeatability of the proposed approach. Our planner creates trajectories
which reduce model parameter convergence time and uncertainty by a factor of
four.
Comment: Published at ICRA 2017. Video available at https://www.youtube.com/watch?v=xtqrWbgep5
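The core of the planner described above, approximating belief dynamics with an EKF, can be sketched as follows: propagate the covariance of the augmented state (vehicle state plus unknown parameters) along each candidate trajectory's linearization, and prefer the candidate with the smallest final uncertainty. The names and the trace criterion are illustrative assumptions; the paper couples this evaluation with sampling-based exploration of candidate trajectories:

```python
import numpy as np

def score_trajectory(P0, F_seq, H_seq, Q, R):
    """Propagate an EKF covariance along one candidate trajectory
    (sequences of linearized dynamics F_k and measurement Jacobians H_k)
    and return the final uncertainty (trace). Lower = more informative."""
    P = P0.copy()
    for F, H in zip(F_seq, H_seq):
        P = F @ P @ F.T + Q                     # predict
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        P = (np.eye(P.shape[0]) - K @ H) @ P    # update
    return np.trace(P)

def best_trajectory(P0, candidates, Q, R):
    """Pick the candidate (F_seq, H_seq) minimizing the final uncertainty."""
    scores = [score_trajectory(P0, F, H, Q, R) for F, H in candidates]
    return int(np.argmin(scores))
```

A trajectory whose measurements never excite the unknown parameters (H near zero) leaves the covariance to grow with process noise, so it scores poorly, which is exactly why informative motions shorten parameter convergence time.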
Encoderless Gimbal Calibration of Dynamic Multi-Camera Clusters
Dynamic Camera Clusters (DCCs) are multi-camera systems where one or more
cameras are mounted on actuated mechanisms such as a gimbal. Existing methods
for DCC calibration rely on joint angle measurements to resolve the
time-varying transformation between the dynamic and static camera. This
information is usually provided by motor encoders; however, joint angle
measurements are not always readily available on off-the-shelf mechanisms. In
this paper, we present an encoderless approach for DCC calibration which
simultaneously estimates the kinematic parameters of the transformation chain
as well as the unknown joint angles. We also demonstrate the integration of an
encoderless gimbal mechanism with a state-of-the-art VIO algorithm, and show
the extensions required in order to perform simultaneous online estimation of
the joint angles and vehicle localization state. The proposed calibration
approach is validated both in simulation and on a physical DCC composed of a
2-DOF gimbal mounted on a UAV. Finally, we show the experimental results of the
calibrated mechanism integrated into the OKVIS VIO package, and demonstrate
successful online joint angle estimation while maintaining localization
accuracy that is comparable to a standard static multi-camera configuration.
Comment: ICRA 201
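A toy version of the joint estimation problem makes the idea concrete: for a planar 1-DOF "gimbal", both a static offset (a kinematic parameter) and the per-frame joint angles are recovered from camera observations alone, with no encoder readings. This is a deliberately simplified Gauss-Newton sketch with assumed names and model, not the paper's full calibration chain; a rough initial guess for the angles is assumed to break the degeneracy at the all-zero starting point:

```python
import numpy as np

def rot(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

def residuals(x, obs, lever):
    """x = [t_x, t_y, theta_1..theta_K]: static offset plus the unknown
    joint angle of each frame. Model: p_k = t + R(theta_k) @ lever."""
    t, thetas = x[:2], x[2:]
    return np.concatenate([obs[k] - (t + rot(th) @ lever)
                           for k, th in enumerate(thetas)])

def calibrate(obs, lever, theta_guess, iters=50):
    """Jointly estimate offset and per-frame joint angles by Gauss-Newton
    with a numerical Jacobian (toy planar 1-DOF mechanism)."""
    x = np.concatenate([np.zeros(2), np.asarray(theta_guess, dtype=float)])
    eps = 1e-6
    for _ in range(iters):
        r0 = residuals(x, obs, lever)
        J = np.zeros((r0.size, x.size))
        for j in range(x.size):            # forward-difference Jacobian
            dx = np.zeros_like(x); dx[j] = eps
            J[:, j] = (residuals(x + dx, obs, lever) - r0) / eps
        step, *_ = np.linalg.lstsq(J, -r0, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-10:
            break
    return x[:2], x[2:]
```

The real DCC problem estimates a full SE(3) transformation chain, but the structure is the same: kinematic parameters shared across frames, one unknown joint angle per frame.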
Observability-aware Self-Calibration of Visual and Inertial Sensors for Ego-Motion Estimation
External effects such as shocks and temperature variations affect the
calibration of visual-inertial sensor systems and thus they cannot fully rely
on factory calibrations. Re-calibrations performed on short user-collected
datasets might yield poor performance since the observability of certain
parameters is highly dependent on the motion. Additionally, on
resource-constrained systems (e.g., mobile phones), full-batch approaches over
longer sessions quickly become prohibitively expensive.
In this paper, we approach the self-calibration problem by introducing
information theoretic metrics to assess the information content of trajectory
segments, thus allowing selection of the most informative parts of a dataset for
calibration purposes. With this approach, we are able to build compact
calibration datasets either (a) by selecting segments from a long session with limited exciting motion, or (b) by combining multiple short sessions, where a single session does not necessarily excite all modes sufficiently. Real-world
experiments in four different environments show that the proposed method
achieves performance comparable to a batch calibration approach, yet at a constant computational complexity that is independent of the session duration.
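The motion dependence of observability mentioned above can be checked numerically: the smallest eigenvalue of the Gauss-Newton information matrix J^T J reveals how well the least-excited parameter combination is constrained by a given trajectory segment, and a near-zero value flags unobservable parameters. A minimal sketch (function name and numbers are illustrative assumptions):

```python
import numpy as np

def weakest_direction(J):
    """Observability check for a segment: return the smallest eigenvalue
    of J^T J and the corresponding parameter direction. A near-zero
    eigenvalue means that combination of calibration parameters is not
    constrained by this motion."""
    w, V = np.linalg.eigh(J.T @ J)   # eigh returns ascending eigenvalues
    return w[0], V[:, 0]
```

For example, a segment whose measurements only ever depend on the sum of two parameters leaves their difference unconstrained, while an exciting motion constrains both.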
Visual guidance of unmanned aerial manipulators
The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map generation tasks. Yet it was only in recent years that research in aerial robotics was mature enough to allow active interactions with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms.
The main objective of this thesis is to formalize the concept of aerial manipulator and present guidance methods, using visual information, to provide them with autonomous functionalities.
A key competence to control an aerial manipulator is the ability to localize it in the environment.
Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Furthermore, localization methods with on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which becomes a handicap in vehicles where size, load,
and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight and high-rate sensors.
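As an illustration of the kind of low-cost, high-rate estimation targeted here, a one-axis complementary filter fuses gyroscope integration with the accelerometer's gravity direction. This is a generic textbook sketch rather than the thesis's estimator, which also recovers position, velocity, and acceleration:

```python
import math

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Minimal 1-axis attitude estimator for a low-cost, high-rate IMU:
    integrate the gyro for smoothness and correct its drift with the
    accelerometer's gravity direction (illustrative sketch)."""
    angle = 0.0
    out = []
    for w, (ay, az) in zip(gyro, accel):
        angle_acc = math.atan2(ay, az)                       # gravity-based roll
        angle = alpha * (angle + w * dt) + (1 - alpha) * angle_acc
        out.append(angle)
    return out
```

The blend weight alpha trades gyro smoothness against accelerometer drift correction; the whole update is a handful of arithmetic operations, which is what makes such filters viable at high rates on small on-board computers.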
Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, and improve platform stability or increase arm operability.
The main contributions of this research work are threefold: (1) a localization technique to enable autonomous navigation, specifically designed for aerial platforms with size, load and computational restrictions; (2) control commands to drive the vehicle using visual information (visual servoing); and (3) the integration of the visual-servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided.
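Hierarchical, redundancy-exploiting control of this kind is commonly realized with null-space projection: a secondary task is projected into the null space of the primary task's Jacobian, so it can never disturb the primary objective. A minimal two-task sketch of the general technique (names and the pseudoinverse-based formulation are assumptions, not the thesis's exact control law):

```python
import numpy as np

def task_priority_velocities(J1, dx1, J2, dx2):
    """Two-task hierarchical resolution for a redundant robot: the
    secondary task (e.g., arm operability) acts only in the null space
    of the primary task (e.g., the visual-servo command)."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1       # null-space projector of task 1
    q1 = J1p @ dx1                            # primary-task velocities
    q2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ q1)
    return q1 + q2                            # q2 already lies in N1's range
```

When the two tasks conflict, J2 @ N1 loses rank and the pseudoinverse simply drops the unachievable part of the secondary task, which is the prioritization behavior described above.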
All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
Optimal Sensing and Actuation Policies for Networked Mobile Agents in a Class of Cyber-Physical Systems
The main purpose of this dissertation is to define and solve problems on optimal sensing and actuation policies in Cyber-Physical Systems (CPSs). Cyber-physical system is a term that was introduced recently to capture the increasing complexity of the interactions between computational hardware and its physical environment. The problem of designing the "cyber" part may not be trivial, but it can be solved from scratch. However, the "physical" part, usually a natural physical process, is inherently given and has to be identified in order to propose an appropriate "cyber" part. Therefore, one of the first steps in designing a CPS is to identify its "physical" part. The "physical" part can belong to a large array of system classes. Among the possible candidates, we focus our interest on Distributed Parameter Systems (DPSs), whose dynamics can be modeled by Partial Differential Equations (PDEs). DPSs are by nature very challenging to observe, as their states are distributed throughout the spatial domain of interest. Therefore, systematic approaches have to be developed to obtain the optimal locations of sensors to optimally estimate the parameters of a given DPS. In this dissertation, we first review the recent methods from the literature as the foundations of our contributions. Then, we define new research problems within the above optimal parameter estimation framework. Two different yet important problems considered are optimal mobile sensor trajectory planning and the accuracy effects and allocation of heterogeneous sensors. Under the remote sensing setting, we are able to determine the optimal trajectories of remote sensors. The problem of optimal robust estimation is then introduced and solved using an interlaced "online" or "real-time" scheme.
Actuation policies are introduced into the framework to improve the estimation by providing the best stimulation of the DPS for optimal parameter identification, where the trajectories of both sensors and actuators are optimized simultaneously. We also introduce a new methodology for solving fractional-order optimal control problems, with which we demonstrate that we can solve optimal sensing policy problems when sensors move in complex media displaying fractional dynamics. We consider and solve the problem of optimal scale reconciliation using satellite imagery, ground measurements, and Unmanned Aerial Vehicle (UAV)-based personal remote sensing. Finally, to provide the reader with all the necessary background, the appendices contain important concepts and theorems from the literature, as well as the Matlab codes used to numerically solve some of the described problems.
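The optimal-sensing idea can be illustrated on the first mode of a 1-D heat equation: the Fisher information a stationary sensor collects about the diffusivity depends on where it sits, and D-optimal placement maximizes that information (for a scalar parameter, the determinant is the information itself). This is a toy sketch with assumed names; the dissertation treats far richer DPS models and mobile sensors:

```python
import math

def sensitivity(x, t, theta):
    """du/dtheta for u(x, t) = exp(-theta*pi^2*t) * sin(pi*x), the first
    mode of the 1-D heat equation with diffusivity theta on [0, 1]."""
    return -math.pi**2 * t * math.exp(-theta * math.pi**2 * t) * math.sin(math.pi * x)

def fisher_information(x, times, theta):
    """Scalar Fisher information collected by one sensor at position x
    sampling at the given times (unit measurement-noise variance)."""
    return sum(sensitivity(x, t, theta) ** 2 for t in times)

def best_sensor_location(candidates, times, theta):
    """D-optimal placement: with one scalar parameter, simply maximize
    the Fisher information over the candidate locations."""
    return max(candidates, key=lambda x: fisher_information(x, times, theta))
```

For this mode the sensitivity peaks at the center of the domain, so the optimal sensor sits at x = 0.5; with multiple parameters the same recipe generalizes to maximizing the determinant of the Fisher information matrix.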
Obstacle detection and avoidance using hybrid convolutional and recurrent neural networks
[ES] The terms "sense and avoid" refer to the essential requirement for a pilot to "see and avoid" air-to-air collisions. To introduce UAVs into everyday use, this pilot function must be replicated by the UAV. Small UAVs, such as those intended for package delivery, face limiting constraints on size, weight and power, so cooperative systems such as TCAS or ADS-B cannot be used; instead, other systems such as electro-optical cameras are potential candidates for effective solutions. In this kind of application, the solution must avoid not only other aircraft but also other obstacles that may be present near the surface, where the vehicle will most likely operate most of the time.
In this project, hybrid neural networks have been used, with convolutional neural networks as a first stage to classify objects and recurrent neural networks afterwards to determine the sequence of events and act accordingly. This type of neural network is very recent and has not been investigated extensively to date, so the main objective of the project is to study whether such networks could be applied in sense-and-avoid systems. Openly available algorithms have been merged and improved to create a new model capable of working in this kind of application.
Besides the detection and tracking algorithm, the collision-avoidance part was also developed. An extended Kalman filter was used to estimate the relative range between an obstacle and the UAV. To decide on the possibility of conflict, a stochastic approach was considered. Finally, a geometric avoidance manoeuvre was designed for use when necessary. This second part was evaluated in a simulation that was also created for the project.
Additionally, an experimental test was carried out to integrate the two parts of the algorithm. Measurement-noise data were obtained experimentally, and it was verified that collisions could be satisfactorily avoided with that value.
The main conclusions were that this new type of network runs faster than methods based on more common neural networks, so further research into it is recommended. With the designed technique, multiple design parameters are available that can be adapted to different circumstances and factors. The main limitations found concern obstacle detection and relative-range estimation, so future research is suggested in these directions.
[EN] A Sense and Avoid technique has been developed in this master thesis. A special method for small UAVs which use only an electro-optical camera as the sensor has been considered. This method is based on a sophisticated processing solution using hybrid Convolutional and Recurrent Neural Networks. The aim is to study the feasibility of this kind of neural networks in Sense and Avoid applications.
First, the detection and tracking part of the algorithm is presented. Two models were used for this purpose: a Convolutional Neural Network called YOLO and a hybrid Convolutional and Recurrent Neural Network called Re3.
After that, the collision avoidance part was designed. This consisted of the obstacle relative range estimation using an Extended Kalman Filter, the conflict probability calculation using an analytical approach and the geometric avoidance manoeuvre generation.
Both parts were assessed separately by videos and simulations respectively, and then an experimental test was carried out to integrate them. Measurement noise was experimentally tested and simulations were performed again to check that collisions were avoided with the considered detection and tracking approach.
Results showed that the considered approach can track objects faster than the most common computer-vision methods based on neural networks. Furthermore, the conflict was successfully avoided with the proposed technique. Design parameters allow speed and manoeuvres to be adjusted according to the expected environment or the required level of safety.
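The geometric side of the avoidance logic can be sketched with a closest-point-of-approach test on the estimated relative state. Here a deterministic miss-distance threshold stands in for the thesis's stochastic conflict-probability calculation; names and thresholds are illustrative assumptions:

```python
import math

def closest_point_of_approach(rel_pos, rel_vel):
    """Time and miss distance of the closest point of approach between
    the UAV and an obstacle, from their 2-D relative position/velocity
    (e.g., as estimated by an EKF)."""
    vv = rel_vel[0] ** 2 + rel_vel[1] ** 2
    if vv == 0.0:                                  # no relative motion
        return 0.0, math.hypot(rel_pos[0], rel_pos[1])
    t = -(rel_pos[0] * rel_vel[0] + rel_pos[1] * rel_vel[1]) / vv
    t = max(t, 0.0)                                # CPA in the past: diverging
    dx = rel_pos[0] + rel_vel[0] * t
    dy = rel_pos[1] + rel_vel[1] * t
    return t, math.hypot(dx, dy)

def conflict(rel_pos, rel_vel, safety_radius):
    """Declare a conflict when the predicted miss distance falls inside
    the safety volume; an avoidance manoeuvre would then be triggered."""
    _, miss = closest_point_of_approach(rel_pos, rel_vel)
    return miss < safety_radius
```

With noisy EKF estimates the hard threshold becomes a probability that the miss distance violates the safety radius, which is the stochastic resolution the thesis adopts.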
The main conclusion was that this kind of neural network could be successfully applied to Sense and Avoid systems.
Vidal Navarro, D. (2018). Sense and avoid using hybrid convolutional and recurrent neural networks. Universitat Politècnica de València. http://hdl.handle.net/10251/142606