8 research outputs found

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation.

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. The task is challenging because a wide variety of scenes must be detected to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have greatly improved the mobility of impaired people. However, running all detectors jointly increases latency and burdens the computational resources. In this paper, we propose pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.

    PASS: Panoramic Annular Semantic Segmentation

    Pixel-wise semantic segmentation is capable of unifying most driving-scene perception tasks and has enabled striking progress in the context of navigation assistance, where sensing the entire surroundings is vital. However, current mainstream semantic segmenters are predominantly benchmarked against datasets featuring a narrow Field of View (FoV), and a large share of vision-based intelligent vehicles use only a forward-facing camera. In this paper, we propose a Panoramic Annular Semantic Segmentation (PASS) framework to perceive the whole surroundings based on a compact panoramic annular lens system and an online panorama unfolding process. To facilitate the training of PASS models, we leverage conventional FoV imaging datasets, bypassing the effort entailed in creating fully dense panoramic annotations. To consistently exploit the rich contextual cues in the unfolded panorama, we adapt our real-time ERF-PSPNet to predict semantically meaningful feature maps in different segments and fuse them to fulfil panoramic scene parsing. The innovation lies in the network adaptation that enables smooth and seamless segmentation, combined with an extended set of heterogeneous data augmentations to attain robustness in panoramic imagery. A comprehensive variety of experiments demonstrates the effectiveness for real-world surrounding perception in a single PASS, while the adaptation proposal is exceptionally positive for state-of-the-art efficient networks.
    Funding: Ministerio de Economía y Competitividad; Comunidad de Madrid
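The online panorama unfolding step described above, which remaps the annular lens image into a rectangular panorama before segmentation, amounts to a polar-to-Cartesian resampling. The sketch below is a minimal illustration under that reading, not the paper's implementation; the centre and radius parameters are assumed to come from a prior lens calibration, and nearest-neighbour sampling is chosen only for brevity:

```python
import numpy as np

def unfold_annulus(img, center, r_inner, r_outer, out_w=360, out_h=None):
    """Unfold an annular image region into a rectangular panorama.

    Each output column corresponds to one azimuth angle; each row to a
    radius between r_inner and r_outer (nearest-neighbour sampling).
    center, r_inner, r_outer are assumed to come from lens calibration.
    """
    if out_h is None:
        out_h = r_outer - r_inner
    cy, cx = center
    theta = np.linspace(0, 2 * np.pi, out_w, endpoint=False)
    radius = np.linspace(r_inner, r_outer, out_h)
    rr, tt = np.meshgrid(radius, theta, indexing="ij")  # (out_h, out_w) grids
    ys = np.clip(np.round(cy + rr * np.sin(tt)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + rr * np.cos(tt)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]
```

A production unwarper would interpolate (e.g. bilinearly) rather than round to the nearest pixel, but the geometry is the same.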

    Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video

    The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures that has been implemented with the aim of slowing the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance to other persons using a combination of RGB and depth cameras. We run a real-time semantic segmentation algorithm on the RGB camera to detect where persons are and use the depth camera to assess the distance to them; we then provide audio feedback through bone-conducting headphones if a person is closer than 1.5 m. Our system warns the user only if persons are nearby and does not react to non-person objects such as walls, trees or doors; thus, it is not intrusive, and it can be used in combination with other assistive devices. We have tested our prototype system on one blind and four blindfolded persons, and found that the system is precise, easy to use, and imposes a low cognitive load.
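The warning logic described above (segment persons in RGB, read their depth, alert under 1.5 m) can be sketched as follows. This is a simplified illustration, not the authors' code: the `PERSON` class id, the `min_pixels` reliability threshold and the function name are assumptions introduced here:

```python
import numpy as np

PERSON = 1          # class id for "person" in the segmentation mask (assumed)
WARN_DIST_M = 1.5   # warning threshold from the paper

def person_too_close(seg_mask, depth_m, min_pixels=50):
    """Return (warn, nearest): warn is True when enough 'person' pixels
    lie closer than WARN_DIST_M; nearest is the smallest person depth.

    seg_mask: (H, W) int array of class ids from the RGB segmenter.
    depth_m:  (H, W) float array of metric depths from the depth camera;
              zeros are treated as invalid measurements.
    """
    person_depths = depth_m[(seg_mask == PERSON) & (depth_m > 0)]
    if person_depths.size < min_pixels:
        return False, None          # too few pixels for a reliable detection
    nearest = float(person_depths.min())
    return nearest < WARN_DIST_M, nearest
```

The pixel-count gate is one simple way to suppress spurious single-pixel detections before triggering the audio warning.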

    Stair detection and modelling with an RGB-D sensor for personal assistance

    The ability to move effectively through the environment comes naturally to most people, but it is not easy in some circumstances, such as for people with visual impairments or when moving through especially complex or unfamiliar environments. Our long-term goal is to build a wearable augmented-assistance system to help those who face such circumstances, aided by cameras integrated into the assistant. In this work we focus on the detection module, leaving the remaining modules, such as the interface between the detection and the user, for future work. A human guidance system must keep its user away from hazards, but it should also be able to recognise certain features of the environment in order to interact with them. Here we address the detection of one of the most common structures a person has to use in daily life: stairs. Finding stairs is doubly beneficial, since it not only prevents possible falls but also tells the user that another floor of the building can be reached. To achieve this we use an RGB-D sensor, worn on the subject's chest, which captures simultaneous, synchronised colour and depth information of the scene. The algorithm exploits the depth data to find the floor and thus orient the scene as it appears to the user. A segmentation and classification stage then labels the scene segments as "floor", "walls", "horizontal planes" and a residual class whose members are all treated as "obstacles".
    Next, the stair detection algorithm determines whether the horizontal planes are steps forming a staircase and orders them hierarchically. When a staircase is found, the modelling algorithm provides all the information useful to the user: how the staircase is positioned relative to them, how many steps are visible, and their approximate dimensions. In short, this work presents a new algorithm for assisting human navigation in indoor environments whose main contribution is a stair detection and modelling algorithm that extracts the information most relevant to the subject. Experiments on video recordings in different environments show good results in both accuracy and response time. We also compare our results with those reported in other publications, showing that we not only match the state of the art in efficiency but also contribute several improvements. In particular, our algorithm is the first able to obtain the dimensions of the stairs even when obstacles, such as people going up or down, partially occlude the view. This work led to a publication accepted at the Second Workshop on Assistive Computer Vision and Robotics at ECCV, presented on 12 September 2014 in Zurich, Switzerland.
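The step-ordering idea, deciding whether the detected horizontal planes chain into a staircase, might be sketched as grouping planes whose height differences fall within a plausible riser range. The riser bounds, the function name and the simple greedy chaining below are illustrative assumptions, not the thesis implementation:

```python
def order_steps(plane_heights, min_rise=0.10, max_rise=0.25):
    """Chain detected horizontal planes into an ordered staircase.

    plane_heights: heights (metres) of horizontal plane segments above
    the detected floor. Consecutive planes whose height difference lies
    in [min_rise, max_rise] (assumed plausible riser range) are chained.
    Returns (ordered step heights, mean riser height); ([], 0.0) if
    fewer than two steps chain together.
    """
    steps = []
    for h in sorted(plane_heights):
        if not steps:
            steps.append(h)                 # lowest candidate starts the chain
        elif min_rise <= h - steps[-1] <= max_rise:
            steps.append(h)                 # plausible next step
    if len(steps) < 2:
        return [], 0.0
    risers = [b - a for a, b in zip(steps, steps[1:])]
    return steps, sum(risers) / len(risers)
```

A full modeller would also estimate tread depth and width from the plane extents and the staircase pose relative to the user, which this height-only sketch omits.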

    Compact Environment Modelling from Unconstrained Camera Platforms

    Mobile robotic systems need to perceive their surroundings in order to act independently. In this work a perception framework is developed which interprets the data of a binocular camera and transforms it into a compact, expressive model of the environment. This model enables a mobile system to move in a targeted way and interact with its surroundings. It is shown how the developed methods also provide a solid basis for technical assistive aids for visually impaired people.

    Road terrain detection for Advanced Driver Assistance Systems

    Kühnl T. Road terrain detection for Advanced Driver Assistance Systems. Bielefeld: Bielefeld University; 2013.

    Climbing and Walking Robots

    With the advancement of technology, new exciting approaches enable us to render mobile robotic systems more versatile, robust and cost-efficient. Some researchers combine climbing and walking techniques with a modular approach, a reconfigurable approach, or a swarm approach to realize novel prototypes as flexible mobile robotic platforms featuring all necessary locomotion capabilities. The purpose of this book is to provide an overview of the latest wide-ranging achievements in climbing and walking robotic technology to researchers, scientists, and engineers throughout the world. Different aspects including control simulation, locomotion realization, methodology, and system integration are presented from both the scientific and the technical point of view. This book consists of two main parts, one dealing with walking robots, the second with climbing robots. The content is also grouped by theoretical research and applied realization. Every chapter offers a considerable amount of interesting and useful information.