5 research outputs found

    Catadioptric panoramic stereovision for humanoid robots

    This paper proposes a novel design for a reconfigurable humanoid robot head that mimics human appearance so that the robot can interact naturally with people in everyday tasks. The proposed head has a modular, adaptive structural design built from three main modules: the frame, the neck motion system and the omnidirectional stereovision system. The omnidirectional stereovision module is the distinguishing contribution with respect to the vision systems implemented in earlier humanoids, and it opens new research possibilities for achieving human-like behaviour. A real-time catadioptric stereovision system is presented, including the stereo geometry used to rectify the system configuration and to estimate depth. The methodology for an initial approach to visual servoing is divided into two phases: first, the robust detection of moving objects together with their depth estimation and position calculation; second, the development of attention-based control strategies. The resulting perception capabilities allow 3D information to be extracted over a wide field of view in uncontrolled dynamic environments, and the results are illustrated through a number of experiments.
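    As a rough, hypothetical illustration of the depth-estimation step mentioned in this abstract, the sketch below triangulates the range of a scene point from the elevation angles at which it is seen by the two vertically separated viewpoints of a coaxial catadioptric stereo rig. The baseline value, the angle convention and the function names are illustrative assumptions, not the configuration described in the paper.

```python
import math

# Hypothetical vertical baseline (m) between the two effective viewpoints of a
# coaxial catadioptric stereo rig; the real value depends on the mirror design.
BASELINE = 0.12

def triangulate_range(phi_lower, phi_upper, baseline=BASELINE):
    """Triangulate a scene point from the elevation angles (radians, measured
    from the horizontal plane) at which it is seen from the lower and upper
    viewpoints of a vertical-baseline omnistereo pair.  Returns
    (horizontal_radius, height, euclidean_range) in metres, relative to the
    lower viewpoint."""
    disparity = math.tan(phi_lower) - math.tan(phi_upper)
    if abs(disparity) < 1e-9:
        raise ValueError("Zero angular disparity: point too far to triangulate")
    r = baseline / disparity        # horizontal distance to the common axis
    z = r * math.tan(phi_lower)     # height above the lower viewpoint
    return r, z, math.hypot(r, z)

# Example: a point seen 5.0 degrees above the horizon from the lower viewpoint
# and 1.5 degrees above it from the upper viewpoint.
print(triangulate_range(math.radians(5.0), math.radians(1.5)))
```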

    Design of a training tool for improving the use of hand-held detectors in humanitarian demining

    Purpose – The purpose of this paper is to introduce the design of a training tool intended to improve deminers' technique during close-in detection tasks. Design/methodology/approach – Following an introduction that highlights the impact of mines and improvised explosive devices (IEDs) and the importance of training for enhancing deminers' safety and efficiency, this paper considers the use of a sensory tracking system to study the skill of expert hand-held detector operators. From the compiled information, critical performance variables can be extracted, assessed and quantified, so that they can be used afterwards as reference values for the training task. In a second stage, the sensory tracking system is used for analysing the trainees' skills. The experimentation phase aims to test the effectiveness of the elements that compose the sensory system for tracking the hand-held detector during training sessions. Findings – The proposed training tool will be able to evaluate deminers' efficiency during scanning tasks and will provide important information for improving their competences. Originality/value – This paper highlights the need to introduce emerging technologies to enhance current training techniques for deminers and proposes a sensory tracking system that can be successfully utilised for evaluating trainees' performance with hand-held detectors. © Emerald Group Publishing Limited. The authors acknowledge funding from the European Community's Seventh Framework Programme (FP7/2007-2013 TIRAMISU) under Grant Agreement No. 284747 and partial funding under Robocity2030 S-0505/DPI-0176 and FORTUNA A1/039883/11 (Agencia Española de Cooperación Internacional para el Desarrollo – AECID). Dr Roemi Fernández acknowledges support from CSIC under grant JAE-DOC. Dr Héctor Montes acknowledges support from Universidad Tecnológica de Panamá and from CSIC under grant JAE-DOC. Peer Reviewed
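    The abstract does not specify which performance variables are extracted from the tracking data, so the following sketch only illustrates the general idea: given timestamped 3D positions of the detector head, it computes sweep-speed and sweep-height statistics that a trainer could compare against values recorded from expert operators. The function name, the reference thresholds and the synthetic data are assumptions made for illustration.

```python
import numpy as np

def sweep_metrics(timestamps, positions, max_speed=0.5, max_height=0.05):
    """Compute simple performance variables from a tracked detector head.

    timestamps : (N,) sample times in seconds
    positions  : (N, 3) head positions in metres (x, y lateral/forward, z height)
    max_speed  : illustrative reference sweep speed (m/s)
    max_height : illustrative reference height above ground (m)
    Returns summary statistics for comparison with expert reference values."""
    t = np.asarray(timestamps, dtype=float)
    p = np.asarray(positions, dtype=float)
    dt = np.diff(t)
    speeds = np.linalg.norm(np.diff(p, axis=0), axis=1) / dt   # instantaneous speed
    heights = p[:, 2]                                          # detector-head height
    return {
        "mean_speed": float(speeds.mean()),
        "fraction_too_fast": float((speeds > max_speed).mean()),
        "mean_height": float(heights.mean()),
        "fraction_too_high": float((heights > max_height).mean()),
    }

# Example with synthetic data: a 2 s lateral sweep sampled at 50 Hz.
t = np.linspace(0.0, 2.0, 101)
xyz = np.stack([0.4 * np.sin(np.pi * t), 0.01 * t, 0.03 + 0.01 * np.sin(8 * t)], axis=1)
print(sweep_metrics(t, xyz))
```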

    Method for automatic image registration based on distance-dependent planar projective transformations, oriented to images without common features

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Arquitectura de Computadores y Automática, defended on 18-12-2015. Multisensory data fusion oriented to image-based applications improves the accuracy, quality and availability of the data, and consequently the performance of robotic systems, by combining information about a scene acquired from multiple, heterogeneous sources into a unified representation of the 3D world that is more informative for the subsequent image processing: reliability improves through the use of redundant information, and capability improves by exploiting complementary information. Image registration is one of the most relevant steps in image fusion. This procedure aims at the geometrical alignment of two or more images and normally relies on feature-matching techniques, which is a drawback when combining sensors that cannot deliver common features. For instance, when combining ToF and RGB cameras, robust feature matching is not reliable. Typically, the fusion of these two sensors has been addressed by computing the cameras' calibration parameters and using them to transform coordinates between the two, which yields a low-resolution colour depth map. To improve the resolution of these maps and reduce the loss of colour information, extrapolation techniques are adopted. A crucial issue for computing high-quality, accurate dense maps is the noise in the depth measurements from the ToF camera, which is normally reduced by means of sensor calibration and filtering. However, the filtering methods used for data extrapolation and denoising tend to over-smooth the data, which in turn reduces the accuracy of the registration procedure...
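    As a hedged sketch of the registration idea summarised above (not the thesis' actual method), the code below maps ToF pixels into the RGB image using assumed calibration parameters: each depth pixel is back-projected, transformed into the RGB camera frame and reprojected, so the planar mapping between the two images effectively depends on the measured distance rather than on common image features. The intrinsics, extrinsics and function names are illustrative assumptions.

```python
import numpy as np

# Assumed (illustrative) calibration: intrinsics of the ToF and RGB cameras and
# the rigid transform (R, t) from the ToF frame to the RGB frame.  Real values
# must come from an offline calibration of the actual rig.
K_TOF = np.array([[200.0, 0.0, 100.0],
                  [0.0, 200.0,  80.0],
                  [0.0,   0.0,   1.0]])
K_RGB = np.array([[1000.0, 0.0, 640.0],
                  [0.0, 1000.0, 480.0],
                  [0.0,    0.0,   1.0]])
R = np.eye(3)                       # assumed parallel cameras
t = np.array([0.05, 0.0, 0.0])      # assumed 5 cm horizontal offset

def tof_pixels_to_rgb(u, v, depth):
    """Map ToF pixels (u, v) with measured depth (m) into RGB image coordinates.

    Each ToF pixel is back-projected to a 3D point, moved into the RGB frame
    and reprojected; the resulting mapping changes with the measured distance,
    which is why no common features between the two images are needed."""
    uv1 = np.stack([u, v, np.ones_like(u)], axis=0).astype(float)    # 3 x N
    rays = np.linalg.inv(K_TOF) @ uv1                                # normalised rays
    pts_tof = rays * depth                                           # 3D points, ToF frame
    pts_rgb = R @ pts_tof + t[:, None]                               # into RGB frame
    proj = K_RGB @ pts_rgb
    return proj[0] / proj[2], proj[1] / proj[2]

# Example: three ToF pixels at different measured depths.
u = np.array([50.0, 100.0, 150.0])
v = np.array([40.0,  80.0, 120.0])
d = np.array([1.0,   2.0,   4.0])
print(tof_pixels_to_rgb(u, v, d))
```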

    Contribution to omnidirectional stereovision and catadioptric image processing: application to autonomous systems

    Computer vision and digital image processing are two disciplines that aim to endow computers with a sense of perception and image analysis similar to that of humans. Artificial visual perception can be greatly enhanced when a large field of view is available. This thesis deals with the use of omnidirectional cameras as a means of expanding the field of view of computer vision systems. The visual perception of depth (3D) by means of omnistereo configurations, and special processing algorithms adapted to catadioptric images, are the main subjects studied. First, a survey of 3D omnidirectional vision systems is conducted; it highlights the main approaches for obtaining depth information and provides valuable indications for choosing a configuration according to the application requirements. The design of an omnistereo sensor is then addressed: we present a new configuration of the proposed sensor, formed by a single catadioptric camera and dedicated to robotic applications, and an experimental investigation of depth estimation accuracy was conducted to validate it. Digital images acquired by catadioptric cameras present special geometrical properties, such as non-uniform resolution and severe radial distortions, and the performance of conventional algorithms applied to such images is limited. For that reason, new algorithms adapted to the spherical geometry of catadioptric images have been developed. The omnidirectional computer vision techniques gathered in the thesis were finally used in two real applications: the first concerns the integration of catadioptric cameras into a mobile robot, and the second focuses on the design of a solar tracker based on a catadioptric camera. The results confirm that the adoption of such sensors in autonomous systems offers more performance and flexibility than conventional sensors.
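    One common preprocessing step for the kind of catadioptric images discussed above is to unwrap the annular image into a cylindrical panorama before further processing. The sketch below shows a minimal polar-to-Cartesian remapping under assumed values for the image centre and mirror radii; it is not the algorithm developed in the thesis, and the function name and default output size are illustrative.

```python
import numpy as np

def unwrap_catadioptric(img, center, r_inner, r_outer, out_w=720, out_h=180):
    """Unwrap an annular catadioptric image into a rectangular panorama.

    img     : (H, W) or (H, W, 3) array
    center  : (cx, cy) pixel coordinates of the mirror/image centre (assumed known)
    r_inner : radius of the mirror's inner dead zone, in pixels
    r_outer : radius of the mirror's outer rim, in pixels
    Each panorama column corresponds to an azimuth angle and each row to a
    radius; nearest-neighbour sampling keeps the sketch short."""
    cx, cy = center
    theta = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)   # azimuth per column
    radius = np.linspace(r_inner, r_outer, out_h)                  # radius per row
    rr, tt = np.meshgrid(radius, theta, indexing="ij")             # (out_h, out_w)
    src_x = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    src_y = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    return img[src_y, src_x]

# Example with a synthetic 480x640 grayscale image.
synthetic = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
panorama = unwrap_catadioptric(synthetic, center=(320, 240), r_inner=60, r_outer=220)
print(panorama.shape)   # (180, 720)
```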