
    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that applies a semi-autonomous robotic system to inspect the ground for safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.
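    As an illustration of the two command levels mentioned above (high-level task requests versus low-level commands sent to robot nodes), a minimal sketch follows. The message format, field names and transport are assumptions for illustration only; the abstract does not specify the VIEW-FINDER protocol.

```python
# Hypothetical sketch of the two command levels described above: a high-level
# task request ("inspect an area") versus a low-level command ("set velocity").
# Message names and fields are illustrative, not the VIEW-FINDER protocol.
import json
import socket

def send_message(host: str, port: int, message: dict) -> None:
    """Serialise a command as JSON and push it to a robot node over TCP."""
    payload = json.dumps(message).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)

# High-level task request: the robot plans and navigates autonomously.
inspect_task = {
    "type": "task_request",
    "task": "inspect_area",
    "area_id": "sector_B",
    "sensors": ["camera", "chemical"],
}

# Low-level command: a direct teleoperation override from the base station.
velocity_cmd = {
    "type": "low_level_command",
    "command": "set_velocity",
    "linear": 0.4,   # m/s
    "angular": 0.1,  # rad/s
}

if __name__ == "__main__":
    send_message("robot-node.local", 9000, inspect_task)
    send_message("robot-node.local", 9000, velocity_cmd)
```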

    AltURI: a thin middleware for simulated robot vision applications

    Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we have developed a thin, high-performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab or web formats and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications: landing using a reinforcement learning algorithm and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot as well as very easy to set up and use.
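    A minimal sketch of the kind of control loop described above follows: a client polls a simulator image server for frames, decodes them with OpenCV, and derives a crude vertical-motion estimate of the sort used for altitude control. The endpoint URL, image format and motion-detection method are assumptions; this is not the AltURI API.

```python
# Minimal sketch of a vision-based control loop against a simulator image
# server, in the spirit of the middleware described above. The endpoint URL,
# image format and command handling are assumptions, not the AltURI interface.
import urllib.request

import cv2
import numpy as np

IMAGE_URL = "http://localhost:8080/camera.jpg"  # hypothetical image-server endpoint

def fetch_frame(url: str = IMAGE_URL) -> np.ndarray:
    """Fetch one camera frame from the image server and decode it with OpenCV."""
    raw = urllib.request.urlopen(url, timeout=1.0).read()
    return cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_GRAYSCALE)

def vertical_motion(prev: np.ndarray, curr: np.ndarray) -> float:
    """Crude elementary-motion estimate: mean optical flow along the vertical axis."""
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.mean(flow[..., 1]))

if __name__ == "__main__":
    prev = fetch_frame()
    while True:
        curr = fetch_frame()
        dy = vertical_motion(prev, curr)
        # A real controller would send a thrust command back through the
        # middleware's command processor; here we only print the estimate.
        print(f"vertical image motion: {dy:+.2f} px/frame")
        prev = curr
```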

    Teleoperated visual inspection and surveillance with unmanned ground and aerial vehicles

    Abstract—This paper introduces our robotic system named UGAV (Unmanned Ground-Air Vehicle), consisting of two semi-autonomous robot platforms, an Unmanned Ground Vehicle (UGV) and an Unmanned Aerial Vehicle (UAV). The paper focuses on three topics of inspection with the combined UGV and UAV: (A) teleoperated control by means of cell or smart phones, with a new concept of automatic configuration of the smart phone based on an RKI-XML description of the vehicle's control capabilities, (B) the camera and vision system, with a focus on real-time feature extraction, e.g. for tracking of the UAV, and (C) the architecture and hardware of the UAV.
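    A hedged sketch of the automatic-configuration idea in (A) follows: an XML description of a vehicle's control capabilities is parsed and turned into widget descriptions for the phone interface. The element and attribute names below are invented for illustration; the actual RKI-XML schema is not given in the abstract.

```python
# Hedged sketch of auto-configuration: read an XML description of a vehicle's
# control capabilities and build a matching set of UI controls. The element
# and attribute names are hypothetical, not the RKI-XML schema.
import xml.etree.ElementTree as ET

EXAMPLE_RKI_XML = """
<vehicle name="UGV-1">
  <control id="drive" type="joystick" axes="2"/>
  <control id="camera_pan" type="slider" min="-90" max="90"/>
  <control id="lights" type="toggle"/>
</vehicle>
"""

def build_ui_controls(xml_text: str) -> list[dict]:
    """Turn each <control> element into a widget description for the phone UI."""
    root = ET.fromstring(xml_text)
    widgets = []
    for ctrl in root.findall("control"):
        widgets.append({
            "id": ctrl.get("id"),
            "widget": ctrl.get("type"),
            "params": {k: v for k, v in ctrl.attrib.items()
                       if k not in ("id", "type")},
        })
    return widgets

if __name__ == "__main__":
    for widget in build_ui_controls(EXAMPLE_RKI_XML):
        print(widget)
```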

    Development of an autonomous mobile robot with planning and location in a structured environment

    Double-degree Master's programme with UTFPR - Universidade Tecnológica Federal do Paraná. With the advance of technology, mobile robots have been increasingly applied in industry, performing repetitive work with high performance and working in environments that pose risks to human health. The present work plans and develops a mobile robot platform for the micromouse competition. The micromouse is a small autonomous mobile robot that, when placed in an unknown labyrinth, is able to map it, search for the best path between the starting point and the goal, and travel it in the shortest possible time. To accomplish these tasks, the robot must be able to self-locate, map the maze as it traverses it, and plan paths based on the map obtained. The developed self-localization method is based on odometry, on the laser sensors present in the robot, and on prior knowledge of the start point and the configuration of the environment. Several methodologies for locomotion in an unknown environment and for route planning are analyzed in order to obtain the combination with the best performance. To verify the results, the work is carried out in a real environment, in 3D simulation, and with a hardware-in-the-loop capability. Labyrinths from previous competitions are used as the basis for comparing methodologies and validating results. Finally, the algorithm capable of fulfilling all the requirements of the micromouse competition is presented together with the results of its evaluation run.
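    The route planning described above is commonly implemented in micromouse with a flood-fill (breadth-first) distance map over the maze cells; a generic sketch is given below. This is illustrative only and is not necessarily the planner developed in this work.

```python
# Illustrative flood-fill (breadth-first) planner of the kind commonly used in
# micromouse: each cell gets its distance to the goal, and the robot always
# moves to a neighbour with a smaller value. Generic sketch, not the thesis code.
from collections import deque

def flood_fill(walls, goal):
    """walls[r][c] is a set of blocked directions ('N','S','E','W') for that cell.
    Returns a grid of distances from every cell to the goal."""
    rows, cols = len(walls), len(walls[0])
    dist = [[None] * cols for _ in range(rows)]
    gr, gc = goal
    dist[gr][gc] = 0
    queue = deque([goal])
    moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
    while queue:
        r, c = queue.popleft()
        for d, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            # Skip moves that leave the maze or pass through a wall.
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            if d in walls[r][c]:
                continue
            if dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    return dist

if __name__ == "__main__":
    # 3x3 toy maze with a single wall between cells (0,0) and (0,1).
    walls = [[set() for _ in range(3)] for _ in range(3)]
    walls[0][0].add("E")
    walls[0][1].add("W")
    for row in flood_fill(walls, goal=(2, 2)):
        print(row)
```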

    10371 Abstracts Collection -- Dynamic Maps

    From September 12th to 17th, 2010, the Dagstuhl Seminar 10371 "Dynamic Maps" was held in Schloss Dagstuhl - Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Towards real-time 3D sound sources mapping with linear microphone arrays

    © 2017 IEEE. In this paper, we present a method for real-time 3D sound source mapping using an off-the-shelf robotic perception sensor equipped with a linear microphone array. Conventional approaches to mapping sound sources in 3D scenarios use dedicated 3D microphone arrays, as this type of array provides two-degree-of-freedom (DOF) observations. Our method addresses the problem of 3D sound source mapping using a linear microphone array, which only provides one-DOF observations, making the estimation of the sound source locations more challenging. In the proposed method, multi-hypothesis tracking is combined with a new sound source parametrisation to provide a good initial guess for an online optimisation strategy. A joint optimisation is carried out to estimate the 6 DOF sensor poses together with the 3 DOF landmarks representing the sound source locations. Additionally, a dedicated sensor model is proposed to accurately model the noise of the Direction of Arrival (DOA) observation when using a linear microphone array. Comprehensive simulation and experimental results show the effectiveness of the proposed method. In addition, a real-time implementation of our method has been made available as open source software for the benefit of the community.
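    To make the one-DOF nature of the observation concrete: a linear array measures only the angle between the source direction and the array axis, which constrains the source to a cone rather than a full bearing. A generic least-squares residual of that form is sketched below; the parametrisation and noise handling are assumptions, not the sensor model proposed in the paper.

```python
# Hedged sketch of a one-DOF DOA residual for a linear microphone array: the
# measurement constrains only the angle between the source direction and the
# array axis. Generic least-squares term, not the paper's sensor model.
import numpy as np

def doa_residual(sensor_pos, array_axis, source_pos, measured_angle):
    """Difference between the predicted and measured angle to the array axis.

    sensor_pos, source_pos : (3,) positions in the world frame
    array_axis             : (3,) vector along the microphone array
    measured_angle         : DOA reported by the array, in radians
    """
    direction = source_pos - sensor_pos
    direction = direction / np.linalg.norm(direction)
    axis = array_axis / np.linalg.norm(array_axis)
    predicted_angle = np.arccos(np.clip(np.dot(direction, axis), -1.0, 1.0))
    return predicted_angle - measured_angle

if __name__ == "__main__":
    r = doa_residual(
        sensor_pos=np.array([0.0, 0.0, 0.0]),
        array_axis=np.array([1.0, 0.0, 0.0]),
        source_pos=np.array([1.0, 1.0, 0.0]),
        measured_angle=np.deg2rad(45.0),
    )
    print(f"residual: {r:+.4f} rad")
```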