    Unifying terrain awareness for the visually impaired through real-time semantic segmentation

    Navigational assistance aims to help visually impaired people navigate their environment safely and independently. The task is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies using monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results at relatively low processing time and have improved the mobility of visually impaired people to a large extent. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose leveraging pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy on par with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
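
    The paper's network is a custom efficient architecture, but the unified-perception idea can be illustrated with any off-the-shelf pixel-wise segmentation model. The sketch below uses torchvision's DeepLabV3 purely as a stand-in; the model choice, the TRAVERSABLE_IDS mapping and the input filename are assumptions, since the real class set (traversable area, sidewalk, stairs, water hazard, ...) is task-specific.

```python
import torch
import torchvision.transforms as T
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Stand-in for the paper's efficient architecture: any pixel-wise
# segmentation network supports the same downstream logic.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = T.Compose([
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def segment(frame: Image.Image) -> torch.Tensor:
    """Return a per-pixel class-index map for one RGB frame."""
    x = preprocess(frame).unsqueeze(0)      # [1, 3, H, W]
    with torch.no_grad():
        logits = model(x)["out"]            # [1, C, H, W]
    return logits.argmax(dim=1).squeeze(0)  # [H, W] label map

# Assumed label set: in the real system these would be terrain classes.
TRAVERSABLE_IDS = torch.tensor([0])
labels = segment(Image.open("street.jpg").convert("RGB"))
walkable_mask = torch.isin(labels, TRAVERSABLE_IDS)
```

    Because a single forward pass yields all classes at once, terrain, obstacle and pedestrian cues come from the same label map, which is exactly the latency argument the abstract makes against running separate detectors.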

    Indoor/outdoor navigation system based on possibilistic traversable area segmentation for visually impaired people

    Autonomous collision avoidance for visually impaired people requires specific processing to define the traversable area accurately, and real-time processing of the image sequence is essential. Low-cost systems imply poor-quality cameras, whose images exhibit great variability in traversable area appearance in both indoor and outdoor environments. Given the ambiguity in object and traversable area appearance induced by reflections, illumination variations, occlusions, etc., accurate segmentation of the traversable area under such conditions remains a challenge. Moreover, indoor and outdoor environments add further variability to traversable areas. In this paper, we present a real-time approach for fast traversable area segmentation from an image sequence recorded by a low-cost monocular camera for a navigation system. To account for all kinds of variability in the image, we apply possibility theory to model information ambiguity. An efficient way of updating the traversable area model under each environment condition is to take traversable area samples from the processed image itself and build possibility maps from them; fusing these maps then yields a sound model of the traversable area. The performance of the proposed system was evaluated on public databases covering indoor and outdoor environments. Experimental results show that the method is competitive, achieving higher segmentation rates.
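
    A minimal sketch of the possibilistic idea, under assumptions the abstract does not fix: grayscale input, the bottom-center strip of the frame taken as the traversable seed sample (a common monocular-road heuristic), a possibility distribution built by max-normalizing the seed histogram, and conjunctive (pointwise-minimum) fusion of the resulting maps. The authors' actual sampling and fusion rules may differ.

```python
import numpy as np

def possibility_map(gray: np.ndarray, seed: np.ndarray) -> np.ndarray:
    """Per-pixel possibility of belonging to the traversable class,
    from the intensity histogram of assumed-traversable seed pixels."""
    hist, _ = np.histogram(seed, bins=256, range=(0, 256))
    pi = hist / max(hist.max(), 1)  # normalize so max(pi) == 1
    return pi[gray]                 # look up each pixel's possibility

def segment_traversable(gray: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """gray: uint8 image. Returns a boolean traversable-area mask."""
    h, w = gray.shape
    # Two seed samples drawn from the same processed frame.
    seeds = [gray[int(0.8 * h):, int(0.3 * w):int(0.7 * w)],
             gray[int(0.9 * h):, :]]
    maps = [possibility_map(gray, s) for s in seeds]
    fused = np.minimum.reduce(maps)  # conjunctive fusion of the maps
    return fused > threshold
```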

    A comparative study in real-time scene sonification for visually impaired people

    In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, conveying scene information to visually impaired people efficiently is still challenging. In this paper, we propose three auditory interaction methods, namely depth image sonification, obstacle sonification and path sonification, which convey raw depth images, obstacle information and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of the pathway, is easier to learn and adapt to, and is better suited for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants, and visually impaired individuals need time to find the method that suits them best. Our findings highlight the features of the three scene detection algorithms and the differences between the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions also apply to other fields, such as sound feedback in virtual reality applications.
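
    As an illustration of what "path sonification" can mean concretely, the snippet below pans a fixed-pitch tone between the left and right channels according to the pathway direction and writes it to a WAV file using only the standard library and NumPy. The 440 Hz tone, the linear panning law and the angle range are illustrative choices, not the paper's actual encoding.

```python
import wave
import numpy as np

SR = 44100  # sample rate in Hz

def sonify_direction(angle_deg: float, duration: float = 0.3) -> np.ndarray:
    """Map a pathway direction (-90 = hard left, +90 = hard right)
    to a stereo-panned tone."""
    t = np.linspace(0, duration, int(SR * duration), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)
    pan = (angle_deg + 90.0) / 180.0  # 0 = full left, 1 = full right
    return np.stack([tone * (1 - pan), tone * pan], axis=1)

def write_wav(path: str, stereo: np.ndarray) -> None:
    pcm = (np.clip(stereo, -1, 1) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(2)
        f.setsampwidth(2)   # 16-bit samples
        f.setframerate(SR)
        f.writeframes(pcm.tobytes())

write_wav("cue.wav", sonify_direction(30.0))  # pathway 30 degrees right
```

    A depth image sonification, by contrast, would map many pixels to sound at once, which matches the abstract's finding that low-level encodings carry more of the scene but are harder to learn.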

    A survey on computer vision technology in Camera Based ETA devices

    Electronic Travel Aid (ETA) systems are expected to make it easier for visually impaired persons to perform everyday tasks such as finding objects and avoiding obstacles. Among ETA devices, camera-based ETA devices are the newest and have high potential for helping Visually Impaired Persons (VIP). With recent advances in computer science, and especially computer vision, camera-based ETA devices employ computer vision algorithms and techniques such as object recognition and stereo vision to help VIP perform tasks such as reading banknotes, recognizing people and avoiding obstacles. This paper analyses and appraises a body of literature in this area with a focus on stereo vision. After discussing the methods and techniques used across the surveyed works, it concludes that stereo vision is the most suitable technique for helping VIP in everyday navigation.
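
    The technique the survey favors reduces to a textbook relation: given a rectified stereo pair, depth follows from disparity as Z = f * B / d. A minimal OpenCV sketch is below; the focal length, baseline, filenames and 2 m warning threshold are placeholder values, and StereoBM is just one of several available matchers.

```python
import cv2
import numpy as np

FOCAL_PX = 700.0    # focal length in pixels (placeholder calibration)
BASELINE_M = 0.12   # camera baseline in meters (placeholder)

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching on a rectified pair; StereoBM returns disparity in
# fixed-point format, scaled by 16.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_PX * BASELINE_M / disparity[valid]

# Crude obstacle cue: anything in the central field of view closer
# than 2 m (the threshold is an assumption, not from the survey).
h, w = depth_m.shape
center = depth_m[h // 3:2 * h // 3, w // 3:2 * w // 3]
if np.any((center > 0) & (center < 2.0)):
    print("obstacle ahead")
```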

    Helping the Blind to Get through COVID-19: Social Distancing Assistant Using Real-Time Semantic Segmentation on RGB-D Video

    The current COVID-19 pandemic is having a major impact on our daily lives. Social distancing is one of the measures implemented to slow the spread of the disease, but it is difficult for blind people to comply with. In this paper, we present a system that helps blind people maintain physical distance to other persons using a combination of RGB and depth cameras. We run a real-time semantic segmentation algorithm on the RGB stream to detect where persons are, use the depth camera to estimate the distance to them, and provide audio feedback through bone-conduction headphones if a person is closer than 1.5 m. Our system warns the user only when persons are nearby and does not react to non-person objects such as walls, trees or doors; it is therefore not intrusive and can be used in combination with other assistive devices. We tested our prototype on one blind and four blindfolded persons and found that the system is precise, easy to use, and imposes a low cognitive load.
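
    The decision logic the abstract describes is simple enough to sketch directly: intersect the person pixels from the segmentation map with the near pixels from the aligned depth frame. The class index, the frame source and the print stand-in for the bone-conduction audio cue are assumptions; only the 1.5 m threshold comes from the paper.

```python
import numpy as np

PERSON_ID = 15           # assumed 'person' class index (VOC-style labels)
DISTANCE_LIMIT_M = 1.5   # threshold used in the paper

def person_too_close(label_map: np.ndarray, depth_m: np.ndarray) -> bool:
    """True if any pixel labelled as a person lies within the limit.
    label_map and depth_m must be pixel-aligned and the same shape."""
    person = label_map == PERSON_ID
    near = (depth_m > 0) & (depth_m < DISTANCE_LIMIT_M)
    return bool(np.any(person & near))

def run(frames):
    """frames: iterable of (label_map, depth_m) pairs per video frame."""
    for label_map, depth_m in frames:
        if person_too_close(label_map, depth_m):
            print("beep")  # stands in for the bone-conduction audio cue
```

    Because only pixels of the person class are tested, walls, trees and doors never trigger the cue, which is the non-intrusiveness property the abstract emphasizes.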