
    Enhancing Mobile Object Classification Using Geo-referenced Maps and Evidential Grids

    Evidential grids have recently shown interesting properties for mobile object perception. They are a generalisation of Bayesian occupancy grids based on Dempster-Shafer theory and, in particular, can handle partial information efficiently. The novelty of this article is a perception scheme enhanced by geo-referenced maps, used as an additional source of information and fused with a sensor grid. The paper presents the key stages of this data fusion process. An adaptation of the conjunctive combination rule is presented to refine the analysis of conflicting information. The method uses temporal accumulation to distinguish between stationary and mobile objects, and applies contextual discounting to model information obsolescence. As a result, it is able to better characterise occupied cells by differentiating, for instance, moving objects, parked cars, urban infrastructure and buildings. Experiments carried out on real-world data illustrate the benefits of the approach. (Comment: 6 pp. arXiv admin note: substantial text overlap with arXiv:1207.101)
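
    As a rough, hedged illustration of the kind of per-cell fusion described above (a minimal sketch on the frame {F, O} for free/occupied, not the authors' implementation; the example mass values are invented), the unnormalised conjunctive rule multiplies the masses of intersecting focal sets, and the mass pushed onto the empty set measures the conflict between the sensor grid and the geo-referenced map:

```python
# Minimal sketch (not the paper's code): unnormalised conjunctive combination
# of two mass functions on a single grid cell with frame {F, O} (free, occupied).
# Focal sets are frozensets; the mass on frozenset() is the conflict.

from itertools import product

OMEGA = frozenset({"F", "O"})

def conjunctive(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) per cell."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b                      # intersection of focal elements
        out[inter] = out.get(inter, 0.0) + ma * mb
    return out

# Example: sensor grid says "probably occupied", map prior says "probably free".
m_sensor = {frozenset({"O"}): 0.6, OMEGA: 0.4}
m_map    = {frozenset({"F"}): 0.7, OMEGA: 0.3}
m_fused  = conjunctive(m_sensor, m_map)
print(m_fused[frozenset()])   # conflict mass: 0.42 in this toy example
```

    In such a scheme, a persistent conflict in a cell that the map declares free is a natural cue for a mobile object, which is the kind of signal the temporal accumulation stage can then exploit.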

    Perception of the urban environment and navigation using robotic vision: design and implementation applied to an autonomous vehicle

    Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica.

    Abstract: The development of autonomous vehicles capable of driving on urban roads can provide important benefits by reducing accidents, increasing quality of life and cutting costs. Intelligent vehicles often base their decisions on observations obtained from various sensors such as LiDAR, GPS and cameras. Camera sensors in particular have received much attention because they are cheap, easy to deploy and provide rich data.

    Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex, objects such as trees, bicycles and cars may generate partial observations, and these observations are often noisy or even missing due to heavy occlusions. The perception process therefore needs to be able to deal with uncertainty in the knowledge of the world around the car. While highway navigation and autonomous driving with prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analysed for driving in inner-city environments, together with the capacity to perform a safe displacement based on the decision-making process of autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without prior knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel machine-learning method is proposed to extract the semantic context from a pair of stereo images; this context is merged into an evidential grid that models the uncertainties of an unknown urban environment using Dempster-Shafer theory. For decision-making in path planning, the virtual-tentacle approach is applied to generate possible paths starting from the ego-referenced car, and two new strategies are built on it: first, a strategy to select the correct path so as to better avoid obstacles and follow the local task in the context of hybrid navigation; second, a closed-loop control based on visual odometry and the virtual tentacle for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real conditions using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors. (Doctorate in Solid Mechanics and Mechanical Design; Doctor of Mechanical Engineering.)
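
    As a hedged illustration of the virtual-tentacle idea mentioned above (a sketch, not the thesis code: the arc length, grid resolution, origin and the occupancy-sum scoring rule are all assumptions), candidate constant-curvature paths can be sampled in the ego frame and scored against an occupancy grid:

```python
# Illustrative sketch only: generate circular-arc "virtual tentacles" from the
# ego pose and pick the one that crosses the least occupancy.

import numpy as np

def tentacle(curvature, length=15.0, step=0.25):
    """Sample (x, y) points along a constant-curvature arc starting at the ego pose."""
    s = np.arange(0.0, length, step)
    if abs(curvature) < 1e-6:                       # straight-ahead tentacle
        return np.stack([s, np.zeros_like(s)], axis=1)
    r = 1.0 / curvature
    return np.stack([r * np.sin(s / r), r * (1.0 - np.cos(s / r))], axis=1)

def score(points, occ_grid, origin, res):
    """Sum the occupancy values traversed by the tentacle; lower is safer."""
    idx = np.floor((points - origin) / res).astype(int)
    valid = (idx[:, 0] >= 0) & (idx[:, 0] < occ_grid.shape[1]) & \
            (idx[:, 1] >= 0) & (idx[:, 1] < occ_grid.shape[0])
    cells = idx[valid]
    return occ_grid[cells[:, 1], cells[:, 0]].sum()

occ = np.zeros((200, 200))                          # toy grid: 0 = free, 1 = occupied
curvatures = np.linspace(-0.2, 0.2, 41)             # candidate steering curvatures
best = min(curvatures,
           key=lambda k: score(tentacle(k), occ, np.array([-25.0, -25.0]), 0.25))
```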

    Collaborative Grid Mapping for Moving Object Tracking Evaluation

    Perception of other road users is a crucial task for intelligent vehicles. Perception systems can rely on on-board sensors only or cooperate with other vehicles or with roadside units. In any case, their performance has to be evaluated against ground-truth data, which is a particularly tedious task requiring numerous manual operations. In this article, we propose a novel semi-automatic method for pseudo ground-truth estimation. The principle is to carry out experiments with several vehicles equipped with LiDAR sensors and with fixed perception systems located at the roadside, in order to collaboratively build reference dynamic data. The method is based on grid mapping and, in particular, on the elaboration of a background map that holds relevant information remaining valid throughout a dataset sequence. Data from all agents are converted into time-stamped observation grids. A data fusion method that manages uncertainties combines the background map with the observations to produce dynamic reference information at each instant. Several datasets have been acquired with three experimental vehicles and a roadside unit. An evaluation of the method is finally provided in comparison to a handmade ground truth.
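
    A minimal sketch of how a background map could be compared with a time-stamped observation grid to isolate mobile cells (the label values and the 0.5 thresholds are assumptions, not the paper's fusion rule, which additionally manages uncertainties):

```python
# Hedged sketch, not the paper's pipeline: flag cells that are occupied in the
# current observation but free in the accumulated background map as mobile.

import numpy as np

def classify_cells(background, observation, occ_thr=0.5):
    """
    background:  per-cell probability that the cell is statically occupied,
                 accumulated over a whole dataset sequence.
    observation: per-cell occupancy from one time-stamped scan.
    Returns an integer label grid: 0 = free, 1 = static, 2 = mobile.
    """
    labels = np.zeros(background.shape, dtype=np.uint8)
    occupied_now = observation > occ_thr
    static_bg = background > occ_thr
    labels[occupied_now & static_bg] = 1     # consistent with the background map
    labels[occupied_now & ~static_bg] = 2    # occupied now, free in background -> mobile
    return labels
```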

    Improved dynamic object detection within evidential grids framework

    The deployment of autonomous robots and vehicles is increasing in several domains. To perform tasks properly, a robot must have good perception of its environment while detecting dynamic obstacles. Recently, evidential grids have attracted growing interest for environment perception since they allow more effective uncertainty handling. The latest studies on evidential grids relied on thresholds for information management, e.g., a threshold on the conflict, characterised by the mass of the empty set, to detect dynamic objects. Nevertheless, the mass of the empty set alone is not consistent in some cases, and the thresholds used were chosen either arbitrarily or tuned manually without any computational method. In this paper, the conflict is first represented by two parameters instead of the mass of the empty set alone, and dynamic object detection is performed by thresholding the evolution of this conflict pair. Secondly, the paper introduces a general threshold, along with a mathematical derivation to compute it, which can be used in different dynamic object detection cases. A real-time experiment is performed using the RB1-BASE robot equipped with an RGB-D camera and a laser scanner.
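
    A speculative sketch of the conflict-pair idea (the paper's exact pair of parameters, its threshold derivation and the value of tau below are not reproduced here; this only illustrates tracking two directed conflict components per cell):

```python
# Speculative sketch: per-cell conflict split into two directed components when
# fusing the previous cell state with a new observation, with a simple threshold
# on its evolution between time steps. 'tau' is an assumed, illustrative value.

def directed_conflicts(m_prev, m_obs):
    """Return (free->occupied, occupied->free) conflict masses for one cell."""
    c_fo = m_prev.get("F", 0.0) * m_obs.get("O", 0.0)   # was free, now seen occupied
    c_of = m_prev.get("O", 0.0) * m_obs.get("F", 0.0)   # was occupied, now seen free
    return c_fo, c_of

def is_dynamic(conflict_prev, conflict_now, tau=0.3):
    """Flag a cell as dynamic when either conflict component jumps by more than tau."""
    d_fo = conflict_now[0] - conflict_prev[0]
    d_of = conflict_now[1] - conflict_prev[1]
    return max(d_fo, d_of) > tau
```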