
    Perception and intelligent localization for autonomous driving

    Mestrado em Engenharia de Computadores e Telemática (Master's in Computer and Telematics Engineering)
    Computer vision and sensor fusion are relatively recent subjects, yet they are widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the context of autonomous driving. Using cameras to this end is a rather complex process. Unlike classic sensors, which always provide the same type of precise information obtained deterministically, the successive images acquired by a camera are filled with the most varied information, all of it ambiguous and extremely difficult to extract. Cameras are the closest robotic sensing comes to the sense of greatest importance in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology, and physics. The platform supporting the study developed in this thesis is ROTA (RObô Triciclo Autónomo), together with all the elements that make up its environment. In this context, the thesis describes the approaches introduced to solve the challenges the robot faces in its environment: detection of road lines and the consequent perception of the road, and detection of obstacles, traffic lights, the crosswalk zone, and the roadworks zone. It also describes a calibration system and the application of perspective removal to the image, developed to map the perceived elements to real-world distances. Building on the perception system, it further addresses self-localization integrated in a distributed architecture that includes navigation with intelligent planning. All the work developed in the course of this dissertation is essentially centered on robotic perception in the context of autonomous driving.
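
    The perspective-removal and calibration step described above is commonly realized as a planar homography between image pixels and a metric ground plane. The following is a minimal sketch of that idea, assuming OpenCV; the four point correspondences and the image filename are hypothetical placeholders, not the values used on ROTA.

        # Minimal inverse-perspective-mapping sketch (hypothetical calibration).
        import cv2
        import numpy as np

        # Pixel coordinates of four ground-plane points in the camera image,
        # measured once during calibration (placeholder values).
        src = np.float32([[420, 480], [860, 480], [1100, 700], [180, 700]])

        # The same four points in a metric ground frame (e.g. centimetres),
        # so pixels in the warped image correspond to real-world distances.
        dst = np.float32([[0, 0], [300, 0], [300, 500], [0, 500]])

        H = cv2.getPerspectiveTransform(src, dst)  # 3x3 homography

        def to_ground(u, v):
            """Map an image pixel (u, v) onto the metric ground plane."""
            p = H @ np.array([u, v, 1.0])
            return p[0] / p[2], p[1] / p[2]

        # A full frame can likewise be warped to a top-down view, which
        # simplifies lane-line, crosswalk, and obstacle-distance reasoning.
        frame = cv2.imread("frame.png")  # placeholder input image
        topdown = cv2.warpPerspective(frame, H, (300, 500))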

    Belief Space-Guided Navigation for Robots and Autonomous Vehicles

    Navigating through the environment is a fundamental capability for mobile robots, yet it remains very challenging today. Most robotic applications these days, such as mining, disaster response, and agriculture, require robots to move and perform tasks in a variety of environments that are stochastic and sometimes even unpredictable. A robot often cannot directly observe its current state; instead it estimates a distribution over the set of possible states based on sensor measurements that are both noisy and partial. The actual robot position differs from its prediction after applying a motion command, due to actuation noise. Classic navigation algorithms must be adapted to settings where the behavior of the environment is stochastic and the execution of motions is highly uncertain. To solve such challenging problems, we propose to guide the robot's navigation in the belief space. Belief space-guided navigation differs fundamentally from planning without uncertainty, where the state of the robot is always assumed to be known precisely. The robot senses its environment, estimates its current state under perception uncertainty, and decides whether a new (or a priori) action is appropriate. Based on that determination, it actuates its sensors and moves, with motion uncertainty, in the environment. This inspires us to connect robot perception and motion planning and to reason about the uncertainty to improve the quality of the plan, so that the robot can follow a collision-free, kinodynamically feasible, and task-optimal trajectory. In this dissertation, we explore belief space-guided robotic navigation problems, including belief space-based scene understanding for autonomous vehicles, and introduce belief space-guided robotic planning. We first investigate how belief space can facilitate scene understanding in the context of lane marking quality assessment for autonomous driving. We pose a new problem: measuring the quality of roads to ensure they are ready for autonomous driving. We focus on developing three quality metrics for lane markings (LMs): a correctness metric, a shape metric, and a visibility metric, together with algorithms to assess LM quality and facilitate scene understanding. As another example of using belief space for better scene understanding, we utilize crowdsourced images from multiple vehicles to help verify LMs for high-definition (HD) map maintenance. An LM is consistent if belief functions from the map and the image satisfy statistical hypothesis testing. We further extend the Bayesian belief model into a sequential belief update using crowdsourced images, as sketched below. LMs with a higher probability of existence are kept in the HD map, whereas those with a lower probability of existence are removed. Belief space can also help us tightly connect perception and motion planning. As an example, we develop a motion planning strategy for autonomous vehicles. Named the virtual lane boundary approach, this framework considers obstacle avoidance, trajectory smoothness (to satisfy vehicle kinodynamic constraints), trajectory continuity (to avoid sudden movements), global positioning system (GPS) following quality (to execute the global plan), and lane following or partial direction following (to meet human expectations). Consequently, vehicle motion is more human-compatible than with existing approaches. 
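
    The sequential belief update over lane-marking existence mentioned above can be pictured as a simple Bayes filter; the detection and false-alarm rates below are hypothetical placeholders, not values from the dissertation.

        # Illustrative sequential Bayesian update of a lane marking's
        # existence probability from crowdsourced detections.
        def update_existence(prior, detected, p_detect=0.9, p_false=0.1):
            """One Bayes step: returns P(LM exists | observation)."""
            if detected:
                l_exists, l_not = p_detect, p_false
            else:
                l_exists, l_not = 1.0 - p_detect, 1.0 - p_false
            num = l_exists * prior
            return num / (num + l_not * (1.0 - prior))

        belief = 0.5  # uninformative prior for a mapped lane marking
        for observed in [True, True, False, True]:  # successive vehicle reports
            belief = update_existence(belief, observed)

        # Keep the LM in the HD map only while its existence belief stays
        # high; the 0.5 threshold is an arbitrary illustrative choice.
        keep_in_map = belief > 0.5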
As another example of how belief space can guide robots in different tasks, we propose to use it for the probabilistic boundary coverage of unknown target fields (UTFs). We employ Gaussian processes as a local belief function to approximate the field boundary distribution in an ellipse-shaped local region. The local belief function allows us to predict UTF boundary trends and establish an adjacent ellipse for further exploration. The exploration is governed by a depth-first search until the UTF is approximately enclosed by connected ellipses, at which point the boundary coverage process ends. We formally prove that our boundary coverage process guarantees enclosure above a given coverage ratio with a preset probability threshold.
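
    The local belief function over the boundary can be illustrated with a small Gaussian process regression; the sketch below uses a plain NumPy RBF kernel with hypothetical hyperparameters and toy boundary samples, not the authors' formulation.

        # Toy Gaussian-process belief over a field boundary: predict the
        # boundary trend beyond the current ellipse from local samples.
        import numpy as np

        def rbf(a, b, length=1.0, var=1.0):
            d = a[:, None] - b[None, :]
            return var * np.exp(-0.5 * (d / length) ** 2)

        s = np.array([0.0, 0.5, 1.0, 1.5])   # arc-length positions (toy data)
        y = np.array([0.0, 0.1, 0.15, 0.3])  # lateral boundary offsets

        K = rbf(s, s) + 1e-3 * np.eye(len(s))  # kernel + observation noise
        s_new = np.array([2.0, 2.5])           # candidate next-ellipse region
        K_star = rbf(s_new, s)

        mean = K_star @ np.linalg.solve(K, y)  # predicted boundary trend
        cov = rbf(s_new, s_new) - K_star @ np.linalg.solve(K, K_star.T)
        std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

        # The predictive mean suggests where to place the adjacent ellipse;
        # the predictive variance expresses how confident the local belief is.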

    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. State-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. Special attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analyses, and perspectives on future research directions in this area.

    Semantic Visual Localization

    Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.
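
    At query time, descriptor-based localization of this kind typically reduces to nearest-neighbour matching of a query descriptor against a database of mapped descriptors; the sketch below shows only that generic retrieval step with random stand-in vectors, not the paper's learned 3D semantic descriptors.

        # Generic descriptor retrieval step for localization: match a query
        # descriptor against a database of descriptors from mapped locations.
        import numpy as np

        rng = np.random.default_rng(0)
        db = rng.normal(size=(1000, 128))  # stand-in mapped descriptors
        db /= np.linalg.norm(db, axis=1, keepdims=True)

        query = rng.normal(size=128)       # stand-in query descriptor
        query /= np.linalg.norm(query)

        scores = db @ query                # cosine similarities
        best = int(np.argmax(scores))      # most similar mapped location
        print(f"matched map entry {best} with similarity {scores[best]:.3f}")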