4 research outputs found

    Autonavi3at software interface for autonomous navigation on urban roads using omnidirectional vision and a mobile robot

    The design of efficient autonomous navigation systems for mobile robots and autonomous vehicles is fundamental to performing their programmed tasks. Two kinds of sensors are commonly used for urban road following: LIDAR and cameras. LIDAR sensors are highly accurate but expensive, and extra work is needed before humans can interpret point-cloud scenes; visual content, by contrast, is more readily understood by human beings, which makes it better suited to human-robot interfaces. In this work, a computer vision-based urban road following software tool for mobile robots and autonomous vehicles, called AutoNavi3AT, is presented. The urban road following scheme proposed in AutoNavi3AT estimates and tracks the vanishing point in panoramic images to control the mobile robot's heading on the urban road, using Gabor filters, region growing, and particle filters. In addition, laser range data are employed for local obstacle avoidance. Quantitative results were obtained from two kinds of tests: one using datasets acquired at the Universidad del Valle campus, and field tests using a Pioneer 3AT mobile robot. As a result, average improvements in vanishing point estimation of 68.26% and 61.46% were achieved, which is useful for mobile robots and autonomous vehicles moving on urban roads.
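
    The abstract describes tracking the vanishing point in panoramic images with a particle filter and using it to steer the robot. The following is a minimal sketch of that idea, not the AutoNavi3AT implementation: it tracks only the vanishing-point column with a simple particle filter and converts its offset from the image centre into a heading correction. The observation model `vp_likelihood` is a hypothetical stand-in for the Gabor-filter and region-growing evidence used in the paper, and all parameter values are illustrative.

```python
import numpy as np

def vp_likelihood(columns, observed_vp_x, sigma=30.0):
    """Gaussian likelihood of each particle given a raw vanishing-point detection."""
    return np.exp(-0.5 * ((columns - observed_vp_x) / sigma) ** 2)

def track_vanishing_point(observations, image_width, n_particles=500, motion_std=10.0):
    """Track the vanishing-point x-coordinate over a sequence of noisy detections."""
    rng = np.random.default_rng(0)
    particles = rng.uniform(0, image_width, n_particles)  # initial spread over all columns
    estimates = []
    for obs in observations:
        particles += rng.normal(0.0, motion_std, n_particles)  # predict: random walk
        particles = np.clip(particles, 0, image_width - 1)
        weights = vp_likelihood(particles, obs)
        weights /= weights.sum()
        estimates.append(float(np.sum(particles * weights)))  # weighted-mean estimate
        idx = rng.choice(n_particles, n_particles, p=weights)  # resample by weight
        particles = particles[idx]
    return estimates

def heading_command(vp_x, image_width, gain=0.002):
    """Proportional heading correction (rad/s) from the vanishing-point offset."""
    return gain * (image_width / 2.0 - vp_x)

if __name__ == "__main__":
    noisy_vp = [400, 395, 410, 420, 415]  # fake detections in an 800-px-wide panorama
    for est in track_vanishing_point(noisy_vp, image_width=800):
        print(f"vp ~ {est:.1f}px, yaw cmd ~ {heading_command(est, 800):.4f} rad/s")
```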

    Locating Anchor Drilling Holes Based on Binocular Vision in Coal Mine Roadways

    The implementation of roof bolt support in a coal mine roadway can strengthen the stability of the surrounding rock strata and thereby reduce the potential for accidents. To improve the automation of support operations, this paper introduces a binocular vision positioning method for drilling holes that relies on adaptive parameter adjustment. A predictive model is established to relate the radius of the target circular hole in the image to the shooting distance, and the shooting distance range is then defined from the structural model of the anchor drilling robot and the related sensing data. Exploiting the geometric constraints between adjacent anchor holes, anchor holes are precisely identified with a Hough transform whose parameters are adjusted adaptively. On this basis, anchor hole contours are matched using line slopes and geometric constraints, and the spatial coordinates of the anchor hole center in the camera coordinate system are determined from the binocular vision positioning principle. Experimental results show that the method attains a positioning accuracy of 95.2%, with an absolute error of about 1.52 mm. Compared with manual operation, this technique distinctly improves drilling accuracy and support efficiency.
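
    The core steps described above are circle detection with an adaptively constrained radius range and recovery of the hole centre in camera coordinates from the stereo pair. Below is a minimal sketch under stated assumptions, not the paper's exact pipeline: it narrows OpenCV's Hough-circle radius search around a predicted radius and triangulates depth from left/right disparity for a rectified stereo rig. The camera parameters (`f_px`, `baseline_m`, `cx`, `cy`) and the radius prediction are hypothetical inputs.

```python
import cv2
import numpy as np

def detect_hole(gray, predicted_radius_px, tol=0.2):
    """Hough circle detection with the radius range adapted to a predicted radius.

    `gray` must be an 8-bit single-channel image; returns (x, y, r) in pixels or None.
    """
    r_min = int(predicted_radius_px * (1.0 - tol))
    r_max = int(predicted_radius_px * (1.0 + tol))
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                               param1=120, param2=40,
                               minRadius=r_min, maxRadius=r_max)
    if circles is None:
        return None
    x, y, r = circles[0][0]
    return float(x), float(y), float(r)

def triangulate_center(xl, xr, y, f_px, baseline_m, cx, cy):
    """Camera-frame coordinates (X, Y, Z) of the hole centre from stereo disparity."""
    disparity = xl - xr                     # matched centre columns in left/right images
    if disparity <= 0:
        raise ValueError("non-positive disparity")
    Z = f_px * baseline_m / disparity       # depth along the optical axis
    X = (xl - cx) * Z / f_px
    Y = (y - cy) * Z / f_px
    return X, Y, Z
```

    In use, `detect_hole` would be run on both rectified images and the matched centres passed to `triangulate_center`; constraining the radius window is what keeps the Hough search fast and robust to clutter on the roadway surface.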

    Vision-Based Real-Time Traversable Region Detection for Mobile Robot in the Outdoors

    Environment perception is essential for autonomous mobile robots in human-robot coexisting outdoor environments. One of the important tasks for such intelligent robots is to autonomously detect the traversable region in an unstructured 3D real world. The main drawback of most existing methods is that of high computational complexity. Hence, this paper proposes a binocular vision-based, real-time solution for detecting traversable region in the outdoors. In the proposed method, an appearance model based on multivariate Gaussian is quickly constructed from a sample region in the left image adaptively determined by the vanishing point and dominant borders. Then, a fast, self-supervised segmentation scheme is proposed to classify the traversable and non-traversable regions. The proposed method is evaluated on public datasets as well as a real mobile robot. Implementation on the mobile robot has shown its ability in the real-time navigation applications
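
    The self-supervised idea summarised above fits a multivariate Gaussian to pixels from a sample road region and labels the rest of the image by similarity to that model. The sketch below illustrates this under assumptions: the sample region is a fixed strip in front of the robot and the classification uses a squared-Mahalanobis-distance threshold, whereas the paper derives the sample region from the vanishing point and dominant borders. The threshold value is illustrative.

```python
import numpy as np

def fit_appearance_model(pixels):
    """Fit mean and inverse covariance of an RGB appearance model (pixels: N x 3 floats)."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)  # regularise for invertibility
    return mean, np.linalg.inv(cov)

def traversable_mask(image, mean, cov_inv, threshold=9.0):
    """Per-pixel squared Mahalanobis test against the road appearance model."""
    diff = image.reshape(-1, 3).astype(np.float64) - mean
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)
    return (d2 < threshold).reshape(image.shape[:2])

if __name__ == "__main__":
    img = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)  # stand-in image
    h, w, _ = img.shape
    sample = img[int(0.8 * h):, int(0.3 * w):int(0.7 * w)]          # strip in front of robot
    mean, cov_inv = fit_appearance_model(sample.reshape(-1, 3).astype(np.float64))
    mask = traversable_mask(img, mean, cov_inv)
    print("traversable fraction:", mask.mean())
```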