
    Design of a training tool for improving the use of hand-held detectors in humanitarian demining

    Purpose – The purpose of this paper is to introduce the design of a training tool intended to improve deminers' technique during close-in detection tasks. Design/methodology/approach – Following an introduction that highlights the impact of mines and improvised explosive devices (IEDs) and the importance of training for enhancing the safety and efficiency of deminers, the paper considers the use of a sensory tracking system to study the skill of expert hand-held detector operators. From the compiled information, critical performance variables can be extracted, assessed, and quantified, and then used as reference values for the training task. In a second stage, the sensory tracking system is used to analyse trainee skills. The experimentation phase tests the effectiveness of the elements that compose the sensory system for tracking the hand-held detector during training sessions. Findings – The proposed training tool can evaluate deminers' efficiency during scanning tasks and provides important information for improving their competences. Originality/value – The paper highlights the need to introduce emerging technologies to enhance current training techniques for deminers and proposes a sensory tracking system that can be successfully used to evaluate trainees' performance with hand-held detectors. © Emerald Group Publishing Limited. The authors acknowledge funding from the European Community's Seventh Framework Programme (FP7/2007-2013 TIRAMISU) under Grant Agreement No. 284747 and partial funding under Robocity2030 S-0505/DPI-0176 and FORTUNA A1/039883/11 (Agencia Española de Cooperación Internacional para el Desarrollo – AECID). Dr Roemi Fernández acknowledges support from CSIC under grant JAE-DOC. Dr Héctor Montes acknowledges support from Universidad Tecnológica de Panamá and from CSIC under grant JAE-DOC. Peer Reviewed

    The flow of baseline estimation using a single omnidirectional camera

    The baseline is the distance between two cameras, information that cannot be obtained directly from a single camera. The baseline is one of the important parameters for finding the depth of objects in stereo image triangulation. A flow of baselines is produced by moving the camera along the horizontal axis from its original location. Using baseline estimation, the depth of an object can be determined with only an omnidirectional camera. This research focuses on determining the flow of the baseline before calculating the disparity map. To estimate the flow and track the object, we use three and four points on the surface of an object from two previously chosen data sets (panoramic images). By moving the camera horizontally, we obtain their tracks, which are visually similar. Each track represents the coordinates of one tracking point. Two of the four tracks have a graphical representation similar to a second-order polynomial.
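    As a minimal sketch of the triangulation relation the abstract relies on (the symbols f, B, d and the example values below are illustrative, not taken from the paper):

    ```python
    def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
        """Depth Z = f * B / d for a rectified stereo pair.

        Moving a single camera horizontally by a baseline B between two
        captures yields the same geometry as a two-camera rig, which is
        why a baseline can be estimated from one moving camera.
        """
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_length_px * baseline_m / disparity_px

    # Illustrative values: f = 500 px, B = 0.10 m, d = 25 px  ->  Z = 2.0 m
    print(depth_from_disparity(500.0, 0.10, 25.0))
    ```

    The same relation explains why the baseline must be known before the disparity map can be turned into depth.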

    OmniSCV: An omnidirectional synthetic image generator for computer vision

    Omnidirectional and 360º images are becoming widespread in industry and in consumer society, causing omnidirectional computer vision to gain attention. Their wide field of view allows a great amount of information about the environment to be gathered from a single image. However, the distortion of these images requires the development of specific algorithms for their treatment and interpretation. Moreover, a high number of images is essential for correctly training learning-based computer vision algorithms. In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information. These images are synthesized from a set of captures acquired in a realistic virtual environment for Unreal Engine 4 through an interface plugin. We cover a variety of well-known projection models such as equirectangular and cylindrical panoramas, different fish-eye lenses, catadioptric systems, and empiric models. Furthermore, we include in our tool photorealistic non-central projection systems such as non-central panoramas and non-central catadioptric systems. As far as we know, this is the first reported tool for generating photorealistic non-central images in the literature. Moreover, since the omnidirectional images are rendered virtually, we provide pixel-wise information about semantics and depth as well as perfect knowledge of the calibration parameters of the cameras. This allows the creation of pixel-precise ground-truth information for training learning algorithms and testing 3D vision approaches. To validate the proposed tool, different computer vision algorithms are tested, such as line extraction from dioptric and catadioptric central images, 3D layout recovery and SLAM using equirectangular panoramas, and 3D reconstruction from non-central panoramas.
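    As a hedged illustration of one projection model the tool covers, here is the standard equirectangular mapping from a pixel to a unit viewing ray (this is not OmniSCV's actual API; the function name is invented):

    ```python
    import math

    def equirect_pixel_to_ray(u, v, width, height):
        """Convert pixel (u, v) in an equirectangular panorama to a unit 3D ray.

        Columns span the full 360° of longitude, rows span 180° of latitude,
        which is what gives these images their complete field of view.
        """
        lon = (u / width) * 2.0 * math.pi - math.pi      # longitude in [-pi, pi)
        lat = math.pi / 2.0 - (v / height) * math.pi     # latitude in [-pi/2, pi/2]
        x = math.cos(lat) * math.sin(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.cos(lon)
        return (x, y, z)

    # The image centre looks straight ahead along +z.
    print(equirect_pixel_to_ray(512, 256, 1024, 512))
    ```

    Per-pixel rays like these are also what makes pixel-wise depth ground truth directly usable for 3D reconstruction.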

    Modeling the environment with egocentric vision systems

    Autonomous systems, whether robots or assistance systems, are increasingly present in our daily lives. These systems interact with their environment, and to do so they need a model of it. Depending on the tasks they must perform, the information or level of detail required in the model varies: from detailed 3D models for autonomous navigation systems to semantic models that include information important to the user, such as the type of area or which objects are present. These models are built from the readings of the sensors available on the system. Nowadays, thanks to their small size, low price, and the rich information they can capture, cameras are sensors included in all autonomous systems. The goal of this thesis is to develop and study new methods for building environment models at different semantic levels and with different levels of accuracy. Two key points characterize the work developed in this thesis: - The use of cameras with an egocentric or first-person point of view, whether on a robot or on a wearable system carried by the user. In this kind of system, the cameras move rigidly with the mobile platform on which they are mounted. In recent years many wearable vision systems have appeared, used for a multitude of applications, from leisure to personal assistance. - The use of omnidirectional vision systems, distinguished by their wide field of view, which capture much more information in each image than conventional cameras. However, they pose new difficulties due to distortions and more complex projection models. This thesis studies different types of environment models: - Metric models: the goal of these models is to create detailed representations of the environment in which the autonomous system can be located precisely. This thesis focuses on adapting these models to the use of omnidirectional vision, which captures more information in each image and improves localization results. - Topological models: these models structure the environment into nodes connected by edges. This representation is less precise than the metric one, but it offers a higher level of abstraction and can model the environment more richly. This thesis focuses on building topological models with additional information about the type of area of each node and connection (corridor, room, doors, stairs...). - Semantic models: this work also contributes new semantic models, focused on applications in which the system interacts with or assists a person. These models represent the environment through concepts close to those used by people. In particular, this thesis develops techniques to obtain and propagate semantic information about the environment in image sequences.
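    A minimal sketch of a topological model as a labeled graph, as described above; the node names, area types, and connection labels are invented examples, not data from the thesis:

    ```python
    # Nodes tagged with an area type; edges tagged with a connection type.
    nodes = {
        "n1": "corridor",
        "n2": "room",
        "n3": "stairs",
    }

    edges = [
        ("n1", "n2", "door"),        # corridor connects to room through a door
        ("n1", "n3", "transition"),  # corridor leads to stairs
    ]

    def neighbors(node, edge_list):
        """Return (neighbor, connection_type) pairs reachable from `node`."""
        result = []
        for a, b, label in edge_list:
            if a == node:
                result.append((b, label))
            elif b == node:
                result.append((a, label))
        return result
    ```

    Such a graph keeps the abstraction the abstract describes: coarse connectivity plus area and connection semantics, without metric precision.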

    04251 -- Imaging Beyond the Pinhole Camera

    From 13.06.04 to 18.06.04, the Dagstuhl Seminar 04251 "Imaging Beyond the Pinhole Camera: 12th Seminar on Theoretical Foundations of Computer Vision" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    "FlyVIZ": A Novel Display Device to Provide Humans with 360° Vision by Coupling a Catadioptric Camera with an HMD

    Have you ever dreamed of having eyes in the back of your head? In this paper we present a novel display device called FlyVIZ which enables humans to experience a real-time 360° view of their surroundings for the first time. To do so, we combine a panoramic image acquisition system (positioned on top of the user's head) with a head-mounted display (HMD). The omnidirectional images are transformed to fit the characteristics of HMD screens. As a result, the user can see his or her surroundings, in real time, with 360° images mapped into the HMD field of view. We foresee potential applications in fields where augmented human capacity (an extended field of view) could be beneficial, such as surveillance, security, or entertainment. FlyVIZ could also be used in novel perception and neuroscience studies.
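    A hedged sketch of the core remapping idea (the paper's exact transform is not reproduced here; sizes and the function name are illustrative): precompute, for every HMD pixel, which panorama pixel to display, so the full 360° surroundings fit the HMD's limited screen.

    ```python
    def build_hmd_lookup(pano_w, pano_h, hmd_w, hmd_h):
        """Per-pixel lookup table mapping each HMD pixel to a panorama pixel.

        Uniformly squeezing the panorama raster into the HMD raster keeps
        every direction visible at once, at the cost of angular compression.
        """
        lut = []
        for y in range(hmd_h):
            row = []
            for x in range(hmd_w):
                src_x = min(x * pano_w // hmd_w, pano_w - 1)
                src_y = min(y * pano_h // hmd_h, pano_h - 1)
                row.append((src_x, src_y))
            lut.append(row)
        return lut
    ```

    Because the table is computed once, each video frame only needs a cheap per-pixel copy, which is what makes real-time operation plausible.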