
    Indoor assistance for visually impaired people using a RGB-D camera

    In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera-tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps, and a residual class that covers isolated objects and uneven parts of the floor. Third, to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz.
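
    The classification stage described above lends itself to a simple geometric illustration. The following Python sketch is not the authors' implementation: the thresholds, the gravity-aligned z-axis, and the omission of the door class (which needs appearance cues, not just geometry) are all assumptions. It labels an extracted plane by its normal direction and its height above the floor:

        import numpy as np

        def classify_plane(normal, height_above_floor,
                           vertical_tol=0.15, step_max_height=0.25):
            # Assumes gravity along +z and planes already extracted
            # (e.g. by RANSAC); all thresholds are illustrative.
            n = np.asarray(normal, dtype=float)
            n = n / np.linalg.norm(n)
            if abs(n[2]) > 1.0 - vertical_tol:      # roughly horizontal surface
                if abs(height_above_floor) < 0.05:
                    return "floor"
                if 0.0 < height_above_floor <= step_max_height:
                    return "step"
                return "obstacle"
            if abs(n[2]) < vertical_tol:            # roughly vertical surface
                return "wall"
            return "obstacle"                       # slanted: residual class

        print(classify_plane([0.02, 0.01, 0.99], 0.18))  # -> step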

    Fast Staircase Detection and Estimation using 3D Point Clouds with Multi-detection Merging for Heterogeneous Robots

    Robotic systems need advanced mobility capabilities to operate in complex, three-dimensional environments designed for human use, e.g., multi-level buildings. Incorporating some level of autonomy enables robots to operate robustly, reliably, and efficiently in such complex environments, e.g., automatically "returning home" if communication between operator and robot is lost during deployment. This work presents a novel method that enables mobile robots to operate robustly in multi-level environments by autonomously locating and climbing a range of different staircases. We present results wherein a wheeled robot works together with a quadrupedal system to quickly detect different staircases and reliably climb them. The performance of this novel staircase-detection algorithm, which runs on both heterogeneous platforms, is compared to the current state-of-the-art detection algorithm. We show that our approach significantly increases the accuracy and speed at which detections occur.
    Comment: 7 pages, 8 figures, 2 tables
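
    The abstract does not spell out how multi-detection merging works, but the general idea can be sketched: repeated staircase detections, possibly from different robots, are clustered by proximity and fused by averaging, which also suppresses spurious single-frame detections. Everything below (the fields, the radius, the vote threshold) is an assumption for illustration only:

        import numpy as np

        def merge_detections(detections, radius=1.0, min_hits=3):
            """detections: dicts with 'center' (NumPy xyz) and 'rise'/'run'."""
            clusters = []
            for det in detections:
                for c in clusters:
                    # join a cluster whose seed detection is nearby
                    if np.linalg.norm(det["center"] - c[0]["center"]) < radius:
                        c.append(det)
                        break
                else:
                    clusters.append([det])
            merged = []
            for c in clusters:
                if len(c) >= min_hits:              # keep only stable clusters
                    merged.append({
                        "center": np.mean([m["center"] for m in c], axis=0),
                        "rise":   np.mean([m["rise"]   for m in c]),
                        "run":    np.mean([m["run"]    for m in c]),
                    })
            return merged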

    Detection and modeling of staircases with an RGB-D sensor for personal assistance

    The ability to move effectively through the environment comes naturally to most people, but it is not easy under certain circumstances, such as for people with visual impairments or when moving through particularly complex or unfamiliar environments. Our long-term goal is to create a portable augmented-assistance system to help those facing such circumstances, aided by cameras integrated into the assistant. In this work we have focused on the detection module, leaving the remaining modules, such as the interface between detection and user, for future work. A human guidance system must keep its user away from hazards, but it should also be able to recognize certain features of the environment in order to interact with them. In this work we address the detection of one of the most common structures a person has to negotiate in daily life: stairs. Finding staircases is doubly beneficial, since it not only helps avoid possible falls but also indicates to the user the possibility of reaching another floor of the building. To achieve this we use an RGB-D sensor, mounted on the subject's chest, which simultaneously captures synchronized color and depth information of the scene. The algorithm takes advantage of the depth data to find the floor and thus orient the scene as it appears to the user. A segmentation and classification stage then labels the scene into segments corresponding to "floor", "walls", "horizontal planes" and a residual class whose members are all considered "obstacles". Next, the staircase-detection algorithm determines whether the horizontal planes are steps forming a staircase and orders them hierarchically. If a staircase has been found, the modeling algorithm provides all the information useful to the user: how it is positioned relative to them, how many steps are visible, and their approximate dimensions. In short, this work presents a new algorithm to aid human navigation in indoor environments, whose main contribution is a staircase detection and modeling algorithm that extracts the information most relevant to the subject. Experiments were carried out on video recordings in different environments, obtaining good results in both accuracy and response time. We also compared our results with those reported in other publications, showing not only performance that matches the state of the art but also a number of improvements. In particular, our algorithm is the first capable of obtaining the dimensions of staircases even with obstacles partially occluding the view, such as people walking up or down. This work resulted in a publication accepted at the Second Workshop on Assistive Computer Vision and Robotics at ECCV, presented on 12 September 2014 in Zurich, Switzerland.
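
    The step-grouping idea, deciding whether the detected horizontal planes form a staircase, can be illustrated with a minimal sketch. The riser-height range and the minimum step count below are assumptions, not values from this work:

        def group_steps(plane_heights, riser_min=0.10, riser_max=0.25):
            """Group plane heights (metres) into a candidate staircase."""
            heights = sorted(plane_heights)
            steps = [heights[0]]
            for h in heights[1:]:
                gap = h - steps[-1]
                if riser_min <= gap <= riser_max:   # plausible riser height
                    steps.append(h)
            return steps if len(steps) >= 3 else []  # need several steps

        # Four evenly spaced planes form a staircase; the table at 1.40 m does not.
        print(group_steps([0.0, 0.17, 0.34, 0.52, 1.40]))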

    Stairs detection with odometry-aided traversal from a wearable RGB-D camera

    Stairs are among the most common structures in human-made environments, but also among the most dangerous for those with vision problems. In this work we propose a complete method to detect, locate and parametrise stairs with a wearable RGB-D camera. Our algorithm uses the depth data to determine whether the horizontal planes in the scene are valid steps of a staircase, judging by their dimensions and relative positions. As a result we obtain a scaled model of the staircase with its spatial location and orientation with respect to the subject. Visual odometry is also estimated to continuously recover the current position and orientation of the user while moving. This enhances the system, giving it the ability to return to previously detected features and providing location awareness of the user during the climb. Simultaneously, the detection of the staircase during traversal is used to correct the drift of the visual odometry. A comparison of the stair-detection results with other state-of-the-art algorithms was performed on a public dataset. Additional experiments were also carried out, recording our own natural scenes with a chest-mounted RGB-D camera in indoor scenarios. The algorithm is robust enough to work in real time, even under partial occlusions of the stairs.
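
    The dimension test mentioned above, judging whether a horizontal plane is a valid step, can be sketched as follows. The tread-size ranges are typical building values assumed here, not taken from the paper:

        import numpy as np

        def is_valid_step(points):
            """points: (N, 3) array of a horizontal plane patch (metres)."""
            xy = points[:, :2] - points[:, :2].mean(axis=0)
            # Align the patch with its principal axes to measure its extent:
            # the longer axis is the stair width, the shorter the tread depth.
            _, _, vt = np.linalg.svd(xy, full_matrices=False)
            proj = xy @ vt.T
            length, depth = proj.max(axis=0) - proj.min(axis=0)
            return 0.25 <= depth <= 0.45 and length >= 0.50  # assumed sizes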

    A software tool for the semi-automatic segmentation of architectural 3D models with semantic annotation and Web fruition

    The thorough documentation of Cultural Heritage artifacts is a fundamental concern for management and preservation. In this context, the semantic segmentation and annotation of 3D models of historic buildings is an important modern topic. This work describes a software tool, currently under development, for interactive and semi-automatic segmentation, characterization, and annotation of 3D models produced by photogrammetric surveys. The system includes some generic and well-known segmentation approaches, such as region growing and Locally Convex Connected Patches segmentation, but it also contains original code for the semantic segmentation of specific parts of buildings, in particular straight stairs and circular-section columns. Furthermore, a method for automatic wall-surface characterization is devoted to rusticated-ashlar detection, with a view to masonry-unit segmentation. The software is modular, allowing easy expansion. It also has tools for encoding data into formats ready for model fruition via Web technologies. These results were partly obtained in collaboration with Corvallis SPA (Padua, Italy, http://www.corvallis.it).
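
    Of the generic approaches mentioned, region growing is easy to sketch. The following stand-in is an illustrative NumPy version, not the tool's code; real implementations use spatial indices (k-d trees) and curvature-ordered seeds. Segments grow over neighbouring points whose normals agree:

        import numpy as np

        def region_growing(points, normals, radius=0.05, angle_deg=10.0):
            """points, normals: (N, 3) arrays; normals assumed unit length."""
            cos_t = np.cos(np.radians(angle_deg))
            labels = -np.ones(len(points), dtype=int)
            current = 0
            for seed in range(len(points)):
                if labels[seed] != -1:
                    continue
                stack = [seed]
                labels[seed] = current
                while stack:
                    i = stack.pop()
                    # Brute-force neighbour search; O(N^2) overall, fine
                    # for a sketch but not for large clouds.
                    d = np.linalg.norm(points - points[i], axis=1)
                    for j in np.where((d < radius) & (labels == -1))[0]:
                        if abs(normals[i] @ normals[j]) > cos_t:
                            labels[j] = current
                            stack.append(j)
                current += 1
            return labels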

    Extraction of main levels of a building from a large point cloud

    Horizontal levels are reference entities, the basis of man-made environments. Their extraction is the first step for various applications, including Building Information Modelling (BIM). BIM is an emerging methodology, widely used for new constructions and increasingly applied to existing buildings (scan-to-BIM). The as-built BIM process is still mainly manual or semi-automatic and is therefore highly time-consuming. The automation of as-built BIM is a challenging topic in the research community. This study is part of ongoing research into the scan-to-BIM process, concerning the extraction of the principal structure of a building. More specifically, we present a strategy to automatically detect the levels of a building from a large point cloud obtained with a terrestrial laser scanner survey. The identification of the horizontal planes is the first indispensable step towards an as-built BIM model. Our algorithm, developed in C++, is based on plane extraction by means of the RANSAC algorithm, followed by minimization of the sum of squared point-to-plane distances. Moreover, this paper takes an in-depth look at the influence of data resolution on the accuracy of plane extraction, and at the accuracy necessary for the construction of a BIM model. A laser scanner survey of a three-storey building, comprising 36 scan stations, produced a point cloud of about 550 million points. The plane parameters estimated at different data resolutions are analysed in terms of their distance from those obtained at full point-cloud resolution.
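
    The refinement step described above, minimising the sum of squared point-to-plane distances over the RANSAC inliers, has a closed-form solution that a short sketch can show (illustrative Python, not the authors' C++ implementation): the optimal plane passes through the centroid, and its normal is the singular vector of the centred inliers with the smallest singular value.

        import numpy as np

        def refine_plane(inliers):
            """inliers: (N, 3) array. Returns (unit normal n, d) with n.x + d = 0."""
            centroid = inliers.mean(axis=0)
            _, _, vt = np.linalg.svd(inliers - centroid, full_matrices=False)
            normal = vt[-1]                    # direction of least variance
            d = -normal @ centroid             # plane passes through the centroid
            return normal, d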

    StairNetV3: Depth-aware Stair Modeling using Deep Learning

    Vision-based stair perception can help autonomous mobile robots deal with the challenge of climbing stairs, especially in unfamiliar environments. To address the problem that current monocular vision methods struggle to model stairs accurately without depth information, this paper proposes a depth-aware stair-modeling method for monocular vision. Specifically, we treat the extraction of stair geometric features and the prediction of depth images as joint tasks in a convolutional neural network (CNN); with the designed information-propagation architecture, depth information provides effective supervision for stair geometric-feature learning. In addition, to complete the stair modeling, we take the convex lines, concave lines, tread surfaces and riser surfaces as stair geometric features and apply Gaussian kernels to enable the network to predict contextual information around the stair lines. Combined with the depth information obtained by depth sensors, we propose a stair point-cloud reconstruction method that quickly obtains the point clouds belonging to the stair step surfaces. Experiments on our dataset show that our method significantly improves on the previous best monocular vision method, with an intersection-over-union (IoU) increase of 3.4%, and that the lightweight version has a fast detection speed and can meet the requirements of most real-time applications. Our dataset is available at https://data.mendeley.com/datasets/6kffmjt7g2/1.
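
    How Gaussian kernels let the network predict context around the stair lines can be illustrated with a small heatmap generator. This is a hedged sketch: the kernel width and the exact target encoding used by StairNetV3 are assumptions. Each ground-truth line segment is rendered as a Gaussian of its distance field, so nearby pixels carry a soft supervisory signal instead of a one-pixel target:

        import numpy as np

        def line_heatmap(shape, p0, p1, sigma=2.0):
            """Gaussian heatmap of a segment p0->p1 (pixel coords, p0 != p1)."""
            h, w = shape
            ys, xs = np.mgrid[0:h, 0:w]
            p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
            seg = p1 - p0
            # Projection parameter of each pixel onto the segment, clamped
            # so distances are measured to the segment, not the full line.
            t = ((xs - p0[0]) * seg[0] + (ys - p0[1]) * seg[1]) / (seg @ seg)
            t = np.clip(t, 0.0, 1.0)
            dx = xs - (p0[0] + t * seg[0])
            dy = ys - (p0[1] + t * seg[1])
            return np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))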