
    Efficient vanishing point detection method in unstructured road environments based on dark channel prior

    Vanishing point detection is a key technique in fields such as road detection, camera calibration and visual navigation. This study presents a new vanishing point detection method, which achieves efficiency by using a dark channel prior‐based segmentation method and an adaptive straight-line search mechanism in the road region. First, the dark channel prior information is used to segment the image into a series of regions. Then straight lines are extracted from the region contours, and the straight lines in the road region are estimated using a vertical envelope and a perspective quadrilateral constraint. The vertical envelope roughly divides the whole image into a sky region, a vertical region and a road region. The perspective quadrilateral constraint, as defined by the authors herein, eliminates interference from vertical lines inside the road region so that the approximate straight lines of the road region can be extracted. Finally, the vanishing point is estimated by mean-shift clustering of candidate points computed with the proposed grouping strategies and intersection principles. Experiments have been conducted with a large number of road images under different environmental conditions, and the results demonstrate that the authors' proposed algorithm can estimate the vanishing point accurately and efficiently in unstructured road scenes.
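
    The abstract names three reusable building blocks: a dark channel prior map, straight-line extraction, and mean-shift clustering of line intersections. The sketch below illustrates those pieces under stated assumptions (OpenCV and scikit-learn, guessed patch size, Canny/Hough thresholds and bandwidth, a hypothetical road.jpg input); it is not the authors' full pipeline, which additionally applies the vertical envelope and perspective quadrilateral constraint.

```python
# Minimal sketch of: dark channel prior map -> line extraction -> mean-shift voting.
# All thresholds and file names are illustrative assumptions, not the paper's values.
import cv2
import numpy as np
from itertools import combinations
from sklearn.cluster import MeanShift

def dark_channel(img_bgr, patch=15):
    """Dark channel prior: per-pixel minimum over B, G, R, then a local min filter."""
    min_rgb = img_bgr.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)            # erosion == local minimum filter

def line_intersections(lines):
    """Pairwise intersections of line segments given as (x1, y1, x2, y2)."""
    pts = []
    for (x1, y1, x2, y2), (x3, y3, x4, y4) in combinations(lines, 2):
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(d) < 1e-6:                        # near-parallel pair, skip
            continue
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
        pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return np.array(pts)

img = cv2.imread("road.jpg")                     # hypothetical input image
dark = dark_channel(img)                         # prior map used for segmentation
edges = cv2.Canny(dark, 50, 150)
segs = cv2.HoughLinesP(edges, 1, np.pi / 180, 60,
                       minLineLength=40, maxLineGap=10)
if segs is not None:
    pts = line_intersections([s[0] for s in segs[:80]])   # cap pairs for the sketch
    if len(pts):
        ms = MeanShift(bandwidth=30).fit(pts)
        labels, counts = np.unique(ms.labels_, return_counts=True)
        vp = ms.cluster_centers_[labels[np.argmax(counts)]]  # densest cluster of votes
        print("estimated vanishing point:", vp)
```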

    Pedestrian lane detection in unstructured scenes for assistive navigation

    Automatic detection of the pedestrian lane in a scene is an important task in assistive and autonomous navigation. This paper presents a vision-based algorithm for pedestrian lane detection in unstructured scenes, where lanes vary significantly in color, texture, and shape and are not indicated by any painted markers. In the proposed method, a lane appearance model is constructed adaptively from a sample image region, which is identified automatically from the image vanishing point. This paper also introduces a fast and robust vanishing point estimation method based on the color tensor and dominant orientations of color edge pixels. The proposed pedestrian lane detection method is evaluated on a new benchmark dataset that contains images from various indoor and outdoor scenes with different types of unmarked lanes. Experimental results are presented that demonstrate its efficiency and robustness in comparison with several existing methods.
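
    As a rough illustration of the color tensor idea mentioned above, the following sketch sums per-channel gradient products into a 2x2 structure tensor per pixel and reads off a dominant orientation and an edge-energy measure; the exact formulation, the percentile threshold and the lane.jpg input are assumptions, not the paper's implementation.

```python
# Minimal color structure tensor sketch (illustrative, not the paper's exact method).
import cv2
import numpy as np

def color_tensor_orientation(img_bgr):
    img = img_bgr.astype(np.float32) / 255.0
    jxx = np.zeros(img.shape[:2], np.float32)
    jyy = np.zeros_like(jxx)
    jxy = np.zeros_like(jxx)
    for c in range(3):                                   # accumulate over B, G, R
        gx = cv2.Sobel(img[:, :, c], cv2.CV_32F, 1, 0, ksize=3)
        gy = cv2.Sobel(img[:, :, c], cv2.CV_32F, 0, 1, ksize=3)
        jxx += gx * gx
        jyy += gy * gy
        jxy += gx * gy
    # Dominant local orientation and edge energy derived from the tensor.
    theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)       # radians, per pixel
    energy = jxx + jyy
    return theta, energy

img = cv2.imread("lane.jpg")                             # hypothetical input image
theta, energy = color_tensor_orientation(img)
strong = energy > np.percentile(energy, 95)              # keep the strongest color edges
print("edge pixels available for vanishing-point voting:", strong.sum())
```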

    Learning to See the Wood for the Trees: Deep Laser Localization in Urban and Natural Environments on a CPU

    Localization in challenging natural environments such as forests or woodlands is an important capability for many applications, from guiding a robot along a forest trail to monitoring vegetation growth with handheld sensors. In this work we explore laser-based localization in both urban and natural environments, which is suitable for online applications. We propose a deep learning approach capable of learning meaningful descriptors directly from 3D point clouds by comparing triplets (anchor, positive and negative examples). The approach learns a feature space representation for a set of segmented point clouds that are matched between the current and previous observations. Our learning method is tailored towards loop closure detection, resulting in a small model which can be deployed using only a CPU. The proposed learning method would allow the full pipeline to run on robots with limited computational payload such as drones, quadrupeds or UGVs. Comment: Accepted for publication at RA-L/ICRA 2019. More info: https://ori.ox.ac.uk/esm-localizatio
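
    A minimal sketch of the triplet objective described above follows, assuming PyTorch and a toy point-wise MLP encoder with max pooling; the actual network, descriptor size and training data belong to the paper, so every name and parameter below is illustrative only.

```python
# Sketch: learn segment descriptors with a triplet objective so that an anchor
# segment lies closer to its matching (positive) segment than to a negative one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentEncoder(nn.Module):
    """Toy point-wise MLP with max pooling; stands in for the paper's small model."""
    def __init__(self, dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, dim))

    def forward(self, pts):                  # pts: (B, N, 3) segmented point clouds
        feats = self.mlp(pts)                # (B, N, dim) per-point features
        desc = feats.max(dim=1).values       # order-invariant max pooling
        return F.normalize(desc, dim=1)      # unit-norm descriptor

encoder = SegmentEncoder()
loss_fn = nn.TripletMarginLoss(margin=0.5)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Dummy batch standing in for matched / unmatched segments from two observations.
anchor, positive, negative = (torch.randn(8, 256, 3) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
opt.zero_grad()
loss.backward()
opt.step()
print("triplet loss:", loss.item())
```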

    AutoNavi3AT software interface for autonomous navigation on urban roads using omnidirectional vision and a mobile robot

    The design of efficient autonomous navigation systems for mobile robots or autonomous vehicles is fundamental to performing the programmed tasks. Two kinds of sensors are mainly used in urban road following: LIDAR and cameras. LIDAR sensors are highly accurate but expensive, and extra work is needed for humans to interpret point-cloud scenes; visual content, in contrast, is more readily understood by people, which makes it well suited to developing human-robot interfaces. In this work, a computer vision-based urban road following software tool called AutoNavi3AT for mobile robots and autonomous vehicles is presented. The urban road following scheme proposed in AutoNavi3AT uses vanishing point estimation and tracking on panoramic images to control the mobile robot's heading on the urban road. To do that, Gabor filters, region growing, and particle filters are used. In addition, laser range data are employed for local obstacle avoidance. Quantitative results were obtained using two kinds of tests: one using datasets acquired on the Universidad del Valle campus, and field tests using a Pioneer 3AT mobile robot. As a result, average improvements in vanishing point estimation of 68.26 % and 61.46 % were achieved, which is useful for mobile robots and autonomous vehicles moving on urban roads.
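
    To illustrate the Gabor-filter step mentioned above, the sketch below applies a small bank of oriented Gabor filters (OpenCV, illustrative kernel parameters, hypothetical panorama.jpg input) and keeps the per-pixel dominant texture orientation that vanishing-point voting schemes typically rely on; it is not the AutoNavi3AT implementation, which also uses region growing and particle-filter tracking.

```python
# Sketch: bank of oriented Gabor filters -> dominant texture orientation per pixel.
# Kernel sizes and filter parameters are illustrative guesses.
import cv2
import numpy as np

def dominant_gabor_orientation(gray, n_orient=8, ksize=21):
    gray = gray.astype(np.float32) / 255.0
    thetas = np.arange(n_orient) * np.pi / n_orient      # evenly spaced orientations
    responses = []
    for theta in thetas:
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0.0, ktype=cv2.CV_32F)
        responses.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kern)))
    responses = np.stack(responses)                      # (n_orient, H, W)
    best = responses.argmax(axis=0)                      # index of strongest filter
    return thetas[best], responses.max(axis=0)           # orientation and confidence

pano = cv2.imread("panorama.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical panoramic input
orient, conf = dominant_gabor_orientation(pano)
voters = conf > np.percentile(conf, 90)                  # strongest texture pixels
print("pixels casting vanishing-point votes:", voters.sum())
```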