
    Pushbroom Stereo for High-Speed Navigation in Cluttered Environments

    We present a novel stereo vision algorithm capable of obstacle detection on a mobile-CPU processor at 120 frames per second. Our system performs a subset of standard block-matching stereo processing, searching for obstacles at only a single depth. Using an onboard IMU and state estimator, we recover the positions of obstacles at all other depths, building and updating a full depth map at framerate. We describe both the algorithm and our implementation on a high-speed, small UAV flying at over 20 MPH (9 m/s) close to obstacles. The system requires no external sensing or computation and is, to the best of our knowledge, the first high-framerate stereo detection system running onboard a small UAV.
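    The core trick, testing a single disparity instead of running a full disparity search, can be sketched as follows (the function name, block size, and SAD threshold are illustrative assumptions, not taken from the authors' implementation):

```python
import numpy as np

def single_disparity_obstacles(left, right, d=10, block=5, sad_thresh=200):
    """Flag pixels whose left/right blocks match at one fixed disparity d.

    Minimal sketch of the single-depth idea: instead of searching all
    disparities, test only disparity d, so a match means "obstacle at the
    one depth corresponding to d". All parameter values are illustrative.
    """
    h, w = left.shape
    half = block // 2
    mask = np.zeros((h, w), dtype=bool)
    for y in range(half, h - half):
        for x in range(half + d, w - half):
            patch_l = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
            patch_r = right[y-half:y+half+1, x-d-half:x-d+half+1].astype(np.int32)
            sad = np.abs(patch_l - patch_r).sum()  # block-matching cost
            if sad < sad_thresh:
                mask[y, x] = True  # good match at disparity d -> obstacle
    return mask
```

    Pixels flagged in `mask` lie at the single tested depth; the paper's system then propagates older detections through the state estimate to cover the remaining depths.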

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make this feasible on UAVs through several contributions: a novel coupling of perception and control via relevant and diverse multiple interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our pipeline in real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
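    Receding horizon control over monocular depth predictions can be sketched as a cost minimization over a fixed trajectory library (a simplified stand-in for the paper's pipeline; the cost function and clearance parameter are assumptions):

```python
import numpy as np

def pick_trajectory(depth_map, trajectories, min_clearance=2.0):
    """Select the trajectory with the lowest predicted collision cost.

    Illustrative sketch, not the paper's implementation: each trajectory is
    a list of (row, col) cells of the predicted depth image it sweeps
    through, and is penalized where predicted depth falls below
    min_clearance. The winner is executed briefly, then the choice is
    recomputed from the next frame (the receding-horizon loop).
    """
    best, best_cost = None, float("inf")
    for i, traj in enumerate(trajectories):
        cost = sum(max(0.0, min_clearance - depth_map[r, c]) for r, c in traj)
        if cost < best_cost:
            best, best_cost = i, cost
    return best, best_cost
```

    In the paper the depth map itself comes from fast non-linear regression on monocular features; here it is simply an input array.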

    Embedded visual perception system applied to safe navigation of vehicles

    Advisors: Douglas Eduardo Zampieri and Isabelle Fantoni-Coichot. Doctoral thesis (Doctorate in Mechanical Engineering, Solid Mechanics and Mechanical Design), Universidade Estadual de Campinas (UNICAMP), Faculdade de Engenharia Mecânica, in cotutelle with the Université de Technologie de Compiègne (UTC, Heudiasyc laboratory); defended on 26 August 2011.
    Abstract: This thesis addresses the problem of obstacle avoidance for semi-autonomous and autonomous terrestrial platforms in dynamic, unknown environments. Based on monocular vision, it proposes a set of tools that continuously monitor the road ahead of the vehicle, providing appropriate road information in real time. A robust horizon-finding algorithm was developed to remove the sky: it generates the region of interest with a dynamic threshold search, so that only a small portion of the image ahead of the vehicle needs to be examined for road and obstacle detection. The obstacle-free navigable area is then represented as a multimodal 2D drivability image, from which a level of safety can be selected according to the environment and operational context. To reduce processing time, an automatic image-discarding criterion is also proposed: exploiting the temporal coherence between consecutive frames, a new Dynamic Power Management methodology is applied to the visual perception system, including a new environment-observer method that optimizes the energy consumed by the visual machine. These proposals were tested on different image textures (road surfaces) in tasks that include free-area detection, reactive navigation, and time-to-collision estimation. A remarkable characteristic of these methodologies is their independence from the image acquisition system and from the vehicle itself. The real-time perception system was evaluated on several test benches and on real data obtained from two intelligent platforms. In semi-autonomous tasks, tests were conducted at speeds above 100 km/h; autonomous reactive displacements, in open loop, were also carried out successfully.
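    The automatic image-discarding criterion based on temporal coherence can be sketched as follows (the mean-absolute-difference test and its threshold are illustrative assumptions, not the thesis's exact criterion):

```python
import numpy as np

class FrameDiscarder:
    """Skip near-duplicate frames to save computation and power.

    Hedged sketch of the image-discarding idea: a new frame is processed
    only if its mean absolute difference from the last *processed* frame
    exceeds a threshold; otherwise it is discarded, under the temporal
    coherence assumption that little has changed on the road ahead.
    """
    def __init__(self, thresh=8.0):
        self.thresh = thresh
        self.last = None

    def should_process(self, frame):
        if self.last is None:          # always process the first frame
            self.last = frame.copy()
            return True
        diff = np.abs(frame.astype(np.float32)
                      - self.last.astype(np.float32)).mean()
        if diff > self.thresh:         # scene changed enough: process it
            self.last = frame.copy()
            return True
        return False                   # near-duplicate: discard
```

    Every discarded frame is perception work (and, via the Dynamic Power Management scheme, energy) that the system never spends.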

    Attention and Anticipation in Fast Visual-Inertial Navigation

    We study a Visual-Inertial Navigation (VIN) problem in which a robot must estimate its state using an on-board camera and an inertial sensor, without any prior knowledge of the external environment. We consider the case in which the robot can allocate only limited resources to VIN, due to tight computational constraints, and therefore ask: under limited resources, what are the most relevant visual cues for maximizing visual-inertial navigation performance? Our approach has four key ingredients. First, it is task-driven: the selection of visual cues is guided by a metric quantifying VIN performance. Second, it exploits anticipation: a simplified model forward-simulates the robot dynamics, predicting the utility of a set of visual cues over a future time horizon. Third, it is efficient and easy to implement, since it leads to a greedy algorithm for selecting the most relevant visual cues. Fourth, it provides formal performance guarantees: we leverage submodularity to prove that the greedy selection cannot be far from the optimal (combinatorial) selection. Simulations and real experiments on agile drones show that our approach ensures state-of-the-art VIN performance while maintaining a lean processing time. In easy scenarios, our approach outperforms appearance-based feature selection in terms of localization error; in the most challenging scenarios, it enables accurate visual-inertial navigation while appearance-based feature selection fails to track the robot's motion during aggressive maneuvers.
    Comment: 20 pages, 7 figures, 2 tables
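    The greedy, submodularity-backed selection of visual cues can be sketched generically as follows (the gain function here is a placeholder for the paper's task-driven VIN performance metric, which this sketch does not reproduce):

```python
def greedy_select(candidates, k, gain):
    """Greedily pick k candidates for a monotone submodular objective.

    Schematic version of the selection step: at each round, add the
    candidate with the largest marginal gain over the current set. For
    monotone submodular objectives, the classic result guarantees the
    greedy set achieves at least (1 - 1/e) of the optimal value.
    """
    selected = []
    remaining = list(candidates)
    for _ in range(min(k, len(remaining))):
        # marginal gain of adding c to the current selection
        best = max(remaining, key=lambda c: gain(selected + [c]) - gain(selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```

    With a coverage-style gain this reproduces the familiar greedy set-cover behavior; in the paper the gain instead scores how much a visual cue is predicted to improve state estimation over the anticipation horizon.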