
    Detection and modelling of stairs with an RGB-D sensor for personal assistance

    The ability to move effectively through the environment comes naturally to most people, but it is not easy under some circumstances, such as for people with visual impairments or when moving through particularly complex or unfamiliar environments. Our long-term goal is to create a wearable augmented-assistance system to help those facing such circumstances. For this we can rely on cameras integrated into the assistant. In this work we have focused on the detection module, leaving the remaining modules, such as the interface between the detection and the user, for future work. A guidance system must keep its user away from hazards, but it should also be able to recognize certain features of the environment in order to interact with them. In this work we address the detection of one of the most common structures a person may have to use in daily life: stairs. Finding stairs is doubly beneficial, since it not only helps avoid possible falls but also indicates to the user the possibility of reaching another floor of the building. To achieve this we use an RGB-D sensor, mounted on the subject's chest, which captures simultaneous and synchronized color and depth information of the scene. The algorithm takes advantage of the depth data to find the floor and thus orient the scene as it appears to the user. A segmentation and classification process then labels the scene segments as "floor", "walls", "horizontal planes" and a residual class whose members are all considered "obstacles". Next, the stair detection algorithm determines whether the horizontal planes are steps forming a staircase and orders them hierarchically. If a staircase has been found, the modelling algorithm provides all the information useful to the user: how it is positioned with respect to them, how many steps are visible and what their approximate dimensions are. In short, this work presents a new algorithm to assist human navigation in indoor environments, whose main contribution is a stair detection and modelling algorithm that determines all the information most relevant to the subject. Experiments have been carried out with video recordings in different environments, achieving good results in both accuracy and response time. We have also compared our results with those reported in other publications, showing that we not only match the efficiency of the state of the art but also provide a number of improvements. In particular, our algorithm is the first capable of obtaining the dimensions of the stairs even with obstacles partially occluding the view, such as people going up or down. As a result of this work, a paper was accepted at the Second Workshop on Assistive Computer Vision and Robotics at ECCV, presented on 12 September 2014 in Zurich, Switzerland
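
    A minimal Python sketch of the kind of step-grouping and measurement logic this abstract describes: given horizontal plane segments already expressed in a floor-aligned frame, group them into an ordered staircase and report the step count and approximate dimensions. All names and thresholds below are illustrative assumptions, not the authors' implementation.

        # Hypothetical step-grouping sketch: inputs are horizontal plane segments
        # already transformed into a floor-aligned frame (height above the floor).
        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class PlaneSegment:
            height: float   # elevation of the plane above the detected floor (m)
            depth: float    # extent along the walking direction, i.e. tread depth (m)
            width: float    # lateral extent (m)

        def model_staircase(planes: List[PlaneSegment],
                            min_rise: float = 0.10, max_rise: float = 0.25,
                            min_tread: float = 0.20) -> Optional[dict]:
            """Order horizontal planes by height and accept them as consecutive
            steps when the rise between neighbours looks like a human-sized step."""
            candidates = sorted((p for p in planes if p.height > min_rise),
                                key=lambda p: p.height)
            steps, prev_h = [], 0.0
            for p in candidates:
                rise = p.height - prev_h
                if min_rise <= rise <= max_rise and p.depth >= min_tread:
                    steps.append(p)
                    prev_h = p.height
            if len(steps) < 2:          # a single raised plane is not a staircase
                return None
            return {
                "num_steps": len(steps),
                "avg_rise": steps[-1].height / len(steps),
                "avg_tread": sum(p.depth for p in steps) / len(steps),
                "avg_width": sum(p.width for p in steps) / len(steps),
            }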

    Deep Detection of People and their Mobility Aids for a Hospital Robot

    Robots operating in populated environments encounter many different types of people, some of whom might have a heightened need for cautious interaction because of physical impairments or advanced age. Robots therefore need to recognize such demands to provide appropriate assistance, guidance or other forms of support. In this paper, we propose a depth-based perception pipeline that estimates the position and velocity of people in the environment and categorizes them according to the mobility aids they use: pedestrian, person in a wheelchair, person in a wheelchair with a person pushing them, person with crutches and person using a walker. We present a fast region proposal method that feeds a Region-based Convolutional Network (Fast R-CNN). With this, we speed up the object detection process by a factor of seven compared to a dense sliding window approach. We furthermore propose a probabilistic position, velocity and class estimator to smooth the CNN's detections and account for occlusions and misclassifications. In addition, we introduce a new hospital dataset with over 17,000 annotated RGB-D images. Extensive experiments confirm that our pipeline successfully keeps track of people and their mobility aids, even in challenging situations with multiple people from different categories and frequent occlusions. Videos of our experiments and the dataset are available at http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids. Comment: 7 pages, ECMR 2017, dataset and videos: http://www2.informatik.uni-freiburg.de/~kollmitz/MobilityAids
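
    An illustrative sketch, not the authors' code, of the kind of probabilistic smoothing the abstract mentions: a constant-velocity Kalman filter for position and velocity combined with a Bayes update over the five mobility-aid classes to stabilize per-frame CNN detections. The noise parameters and class names are assumptions.

        import numpy as np

        CLASSES = ["pedestrian", "wheelchair", "push_wheelchair", "crutches", "walker"]

        class TrackedPerson:
            def __init__(self, xy, dt=0.1):
                self.x = np.array([xy[0], xy[1], 0.0, 0.0])   # [px, py, vx, vy]
                self.P = np.eye(4)
                self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
                self.H = np.eye(2, 4)
                self.Q = 0.01 * np.eye(4)                      # process noise (assumed)
                self.R = 0.05 * np.eye(2)                      # measurement noise (assumed)
                self.class_belief = np.full(len(CLASSES), 1.0 / len(CLASSES))

            def predict(self):
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q

            def update(self, z_xy, class_scores):
                # Kalman correction with the detected position.
                y = np.asarray(z_xy) - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                # Bayesian fusion of the (positive) CNN class scores over time.
                self.class_belief *= np.asarray(class_scores)
                self.class_belief /= self.class_belief.sum()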

    Indoor assistance for visually impaired people using a RGB-D camera

    In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps and a residual class that covers isolated objects and bumpy parts of the floor. Third, in order to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz
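
    A rough illustration, with assumed thresholds rather than the paper's actual rules, of how segmented planar patches in a gravity-aligned frame might be labelled as wall, door, step or a residual class from their normal orientation and size.

        import numpy as np

        def classify_patch(normal, height_above_floor, width, height):
            """normal: 3-vector of the patch normal; sizes in metres (assumed units)."""
            n = np.asarray(normal, dtype=float)
            n /= np.linalg.norm(n)
            vertical = abs(n[2]) < 0.2      # normal roughly horizontal -> vertical surface
            horizontal = abs(n[2]) > 0.9    # normal roughly upward -> horizontal surface
            if horizontal and 0.05 < height_above_floor < 0.30:
                return "step"
            if vertical and width > 1.5 and height > 1.8:
                return "wall"
            if vertical and 0.7 < width < 1.2 and height > 1.8:
                return "door"
            return "other"                  # residual class: isolated objects, bumps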

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. This topic is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates qualified accuracy over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework
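
    A sketch of how a single per-pixel semantic map could be turned into unified assistive cues (traversable area, stairs, water hazards, nearby people), in the spirit of the unification described above. The class indices and thresholds are hypothetical, not those of the paper's network.

        import numpy as np

        CLS = {"road": 0, "sidewalk": 1, "stairs": 2, "water": 3,
               "person": 4, "vehicle": 5, "obstacle": 6}

        def terrain_awareness(seg: np.ndarray) -> dict:
            """seg: HxW array of predicted class indices for one frame."""
            total = seg.size
            traversable = np.isin(seg, [CLS["road"], CLS["sidewalk"]])
            return {
                "traversable_ratio": float(traversable.mean()),
                "stairs_ahead": (seg == CLS["stairs"]).sum() / total > 0.02,
                "water_hazard": (seg == CLS["water"]).sum() / total > 0.01,
                "pedestrian_close": (seg == CLS["person"]).sum() / total > 0.05,
            }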

    Stairs detection with odometry-aided traversal from a wearable RGB-D camera

    Stairs are one of the most common structures present in human-made scenarios, but also one of the most dangerous for those with vision problems. In this work we propose a complete method to detect, locate and parametrise stairs with a wearable RGB-D camera. Our algorithm uses the depth data to determine whether the horizontal planes in the scene are valid steps of a staircase, judging their dimensions and relative positions. As a result we obtain a scaled model of the staircase with its spatial location and orientation with respect to the subject. The visual odometry is also estimated to continuously recover the current position and orientation of the user while moving. This enhances the system, giving it the ability to return to previously detected features and providing location awareness of the user during the climb. Simultaneously, the detection of the staircase during the traversal is used to correct the drift of the visual odometry. A comparison of the stair detection results with other state-of-the-art algorithms was performed using a public dataset. Additional experiments have also been carried out, recording our own natural scenes with a chest-mounted RGB-D camera in indoor scenarios. The algorithm is robust enough to work in real time and even under partial occlusions of the stairs
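
    An illustrative sketch, under assumed conventions rather than the authors' formulation, of how a re-detected staircase could be used as a landmark to correct visual-odometry drift: the pose stored when the staircase was first modelled re-anchors the current odometry estimate.

        import numpy as np

        def correct_drift(T_odom_now: np.ndarray,
                          T_stairs_in_map: np.ndarray,
                          T_stairs_in_odom: np.ndarray) -> np.ndarray:
            """All arguments are 4x4 homogeneous transforms.
            T_stairs_in_map:  staircase pose stored when it was first detected.
            T_stairs_in_odom: staircase pose as currently observed in the odometry frame.
            Returns the drift-corrected camera pose in the map frame."""
            T_map_from_odom = T_stairs_in_map @ np.linalg.inv(T_stairs_in_odom)
            return T_map_from_odom @ T_odom_now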

    NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models

    Trained with an unprecedented scale of data, large language models (LLMs) like ChatGPT and GPT-4 exhibit the emergence of significant reasoning abilities from model scaling. Such a trend underscores the potential of training LLMs with unlimited language data, advancing the development of a universal embodied agent. In this work, we introduce NavGPT, a purely LLM-based instruction-following navigation agent, to reveal the reasoning capability of GPT models in complex embodied scenes by performing zero-shot sequential action prediction for vision-and-language navigation (VLN). At each step, NavGPT takes the textual descriptions of visual observations, navigation history, and future explorable directions as inputs to reason about the agent's current status, and makes the decision to approach the target. Through comprehensive experiments, we demonstrate that NavGPT can explicitly perform high-level planning for navigation, including decomposing instructions into sub-goals, integrating commonsense knowledge relevant to navigation task resolution, identifying landmarks from observed scenes, tracking navigation progress, and adapting to exceptions with plan adjustment. Furthermore, we show that LLMs are capable of generating high-quality navigational instructions from observations and actions along a path, as well as drawing accurate top-down metric trajectories given the agent's navigation history. Although the performance of NavGPT on zero-shot R2R tasks still falls short of trained models, we suggest adapting multi-modality inputs for LLMs to use as visual navigation agents and applying the explicit reasoning of LLMs to benefit learning-based models
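
    A sketch of the prompt-then-act loop described above. The call_llm function is a placeholder for any chat-style LLM endpoint, and the prompt wording and the expected "Action: <index>" reply format are assumptions for illustration, not NavGPT's actual prompts.

        from typing import List

        def call_llm(prompt: str) -> str:          # hypothetical LLM client
            raise NotImplementedError

        def navigation_step(instruction: str, history: List[str],
                            observations: List[str]) -> int:
            """Build a textual prompt from the instruction, action history and
            candidate directions, and parse the chosen direction index."""
            prompt = (
                "You are a navigation agent following this instruction:\n"
                f"{instruction}\n\n"
                "History of actions so far:\n" + "\n".join(history) + "\n\n"
                "Currently explorable directions (textual descriptions):\n" +
                "\n".join(f"{i}: {obs}" for i, obs in enumerate(observations)) +
                "\n\nThink step by step, then answer with 'Action: <index>'."
            )
            reply = call_llm(prompt)
            for line in reply.splitlines():
                if line.strip().lower().startswith("action:"):
                    try:
                        return int(line.split(":", 1)[1].strip())
                    except ValueError:
                        break
            return 0  # fall back to the first direction if parsing fails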

    Map data representation for indoor navigation - a design framework towards a construction of indoor map

    A map is a basic component of everyday navigation that helps people find information about locations, landmarks and routes. With GPS and online map services such as Google Maps, navigating outdoors is easy. Inside buildings, however, navigation is not as easy, due to the inherent characteristics and limitations of GPS, which has led to the creation of indoor navigation systems. Even though indoor navigation systems have been developed for a long time, there are still limitations in accuracy, reliability and indoor spatial information. Navigating indoors without indoor spatial information is a challenge for users. Regarding indoor spatial information, a research question has been posed on finding an appropriate framework for map data representation of indoor public spaces and buildings in order to support indoor navigation for people, robots and autonomous systems. This paper proposes a list of factors and components used in the design framework for map data representation of indoor public spaces and buildings. The framework is presented as a multiple-layered model, with each layer designed for a different purpose, along with object and information classifications
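
    A hypothetical data-structure sketch of such a multiple-layered indoor map; the layer names and fields below are assumptions made for illustration, not the paper's framework.

        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class GeometryLayer:            # walls, rooms, floor outlines
            rooms: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)

        @dataclass
        class SemanticLayer:            # object and information classification
            labels: Dict[str, str] = field(default_factory=dict)   # object id -> class

        @dataclass
        class ConnectivityLayer:        # doors, corridors, stairs, lifts as a graph
            edges: List[Tuple[str, str]] = field(default_factory=list)

        @dataclass
        class IndoorMap:                # one map object bundling the layers
            geometry: GeometryLayer = field(default_factory=GeometryLayer)
            semantics: SemanticLayer = field(default_factory=SemanticLayer)
            connectivity: ConnectivityLayer = field(default_factory=ConnectivityLayer)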