
    Indoor assistance for visually impaired people using a RGB-D camera

    In this paper a navigational aid for visually impaired people is presented. The system uses an RGB-D camera to perceive the environment and implements self-localization, obstacle detection and obstacle classification. The novelty of this work is threefold. First, self-localization is performed by means of a novel camera tracking approach that uses both depth and color information. Second, to provide the user with semantic information, obstacles are classified as walls, doors, steps and a residual class that covers isolated objects and bumpy parts of the floor. Third, in order to guarantee real-time performance, the system is accelerated by offloading parallel operations to the GPU. Experiments demonstrate that the whole system runs at 9 Hz.
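
    The abstract above gives no implementation details; as a rough illustration of how depth-derived obstacle classes such as walls and steps can be told apart, the sketch below labels points by surface-normal orientation and height above the floor. The point cloud, normals, floor height and all thresholds are hypothetical stand-ins, not the authors' pipeline.

# A minimal sketch (not the authors' pipeline): labelling depth points as
# wall-like, step-like, floor, or residual obstacles from their estimated
# surface normals and height above the floor. The point cloud, normals and
# floor plane are assumed to come from an earlier stage (e.g. an RGB-D frame).
import numpy as np

def classify_points(points, normals, floor_height, up=np.array([0.0, 1.0, 0.0])):
    """points, normals: (N, 3) arrays in metres; floor_height: y of the floor plane."""
    labels = np.full(len(points), "residual", dtype=object)
    vert = np.abs(normals @ up)              # 1.0 -> horizontal surface, 0.0 -> vertical
    height = points[:, 1] - floor_height     # height of each point above the floor

    labels[vert < 0.2] = "wall"                              # near-vertical surfaces
    step = (vert > 0.8) & (height > 0.05) & (height < 0.35)  # raised horizontal patches
    labels[step] = "step"
    labels[(vert > 0.8) & (np.abs(height) <= 0.05)] = "floor"
    return labels

# Example with synthetic data: one wall point, one step point, one floor point.
pts = np.array([[1.0, 1.2, 2.0], [0.3, 0.15, 1.0], [0.0, 0.0, 1.5]])
nrm = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])
print(classify_points(pts, nrm, floor_height=0.0))  # ['wall' 'step' 'floor']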

    COVID-19 and Visual Disability: Can’t Look and Now Don’t Touch

    This article provides a scientific explanation for the pandemic-related challenges that blind and visually impaired (BVI) people experience. These challenges include spatial cognition, nonvisual information access, and environmental perception. It also offers promising technical solutions to these challenges.

    Real-Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor

    Any mobility aid for visually impaired people should be able to accurately detect and warn about nearby obstacles. In this paper, we present a method for a support system that detects obstacles in indoor environments based on the Kinect sensor and 3D image processing. Color-depth data of the scene in front of the user is collected using the Kinect with the support of OpenNI, the standard framework for 3D sensing, and processed with the PCL library to extract accurate 3D information about the obstacles. Experiments have been performed on a dataset covering multiple indoor scenarios and different lighting conditions. Results show that our system is able to accurately detect four types of obstacle: walls, doors, stairs, and a residual class that covers loose obstacles on the floor. Walls and loose obstacles on the floor are detected in practically all cases, whereas doors are detected in 90.69% of 43 positive image samples. For step detection, upstairs are correctly detected in 97.33% of 75 positive images, while the downstairs detection rate is lower at 89.47% of 38 positive images. Our method further allows the computation of the distance between the user and the obstacles.
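
    As a hedged illustration of the final step mentioned above (computing the distance between the user and an obstacle), the sketch below back-projects a Kinect-style depth image with a simple pinhole model rather than the paper's OpenNI/PCL pipeline; the intrinsics and corridor dimensions are made-up example values.

# A minimal sketch, not the paper's PCL/OpenNI pipeline: back-project a
# Kinect-style depth image with a pinhole model and report the distance to
# the nearest point inside a forward "walking corridor". Intrinsics (fx, fy,
# cx, cy) and the corridor size are illustrative values.
import numpy as np

def nearest_obstacle_distance(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5,
                              corridor_halfwidth=0.4, max_height=1.8):
    """depth_m: (H, W) depth image in metres (0 = no reading)."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx              # lateral offset from the optical axis
    y = (v - cy) * z / fy              # vertical offset (camera frame)
    valid = (z > 0) & (np.abs(x) < corridor_halfwidth) & (np.abs(y) < max_height)
    return float(z[valid].min()) if valid.any() else None

# Example: a flat scene 3 m away with a small object at 1.2 m near the centre.
depth = np.full((480, 640), 3.0)
depth[220:260, 300:340] = 1.2
print(nearest_obstacle_distance(depth))  # 1.2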

    A Systematic Review of Urban Navigation Systems for Visually Impaired People

    Blind and visually impaired people (BVIP) face a range of practical difficulties when undertaking outdoor journeys as pedestrians. Over the past decade, a variety of assistive devices have been researched and developed to help BVIP navigate more safely and independently. In addition, research in overlapping domains is addressing the problem of automatic environment interpretation using computer vision and machine learning, particularly deep learning, approaches. Our aim in this article is to present a comprehensive review of research directly in, or relevant to, assistive outdoor navigation for BVIP. We break down the navigation area into a series of navigation phases and tasks. We then use this structure for our systematic review of research, analysing articles, methods, datasets and current limitations by task. We also provide an overview of commercial and non-commercial navigation applications targeted at BVIP. Our review contributes to the body of knowledge by providing a comprehensive, structured analysis of work in the domain, including the state of the art, and guidance on future directions. It will support both researchers and other stakeholders in the domain in establishing an informed view of research progress.

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation.

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. This topic is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up within several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy competitive with state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
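
    The paper's own efficient segmentation architecture is not reproduced here; the sketch below only shows the shape of a unified pixel-wise inference step, with an off-the-shelf torchvision DeepLabV3 standing in for the real network and a hypothetical TRAVERSABLE_IDS set standing in for the navigation-related classes.

# A minimal sketch of unified, pixel-wise terrain inference. The paper uses
# its own efficient segmentation network trained on navigation-related
# classes; here an off-the-shelf torchvision DeepLabV3 stands in, and
# TRAVERSABLE_IDS is a hypothetical set of class indices that would map to
# "safe to walk on" in such a training set.
import torch
import torchvision

model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
normalize = torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                             std=[0.229, 0.224, 0.225])
TRAVERSABLE_IDS = {0}  # placeholder; depends entirely on the training classes

@torch.no_grad()
def traversable_mask(rgb):                               # rgb: (3, H, W) tensor in [0, 1]
    logits = model(normalize(rgb).unsqueeze(0))["out"]   # (1, C, H, W) class scores
    labels = logits.argmax(dim=1)[0]                     # per-pixel class index
    mask = torch.zeros_like(labels, dtype=torch.bool)
    for cid in TRAVERSABLE_IDS:
        mask |= labels == cid
    return mask                                          # True where the pixel is traversable

frame = torch.rand(3, 240, 320)                  # stand-in for a camera frame
print(traversable_mask(frame).float().mean())    # fraction of traversable pixels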

    Long range LiDAR characterisation for obstacle detection for use by the visually impaired and blind

    Obstacle detection and avoidance is a huge area of interest for autonomous vehicles and, as such, has become an important research topic. Detecting and identifying obstacles enables navigation through an ever-changing environment. This work looks at the technology used in self-driving vehicles and examines whether the same technology could be used to aid navigation for visually impaired and blind (VIB) people. For autonomous vehicles, obstacle detection relies on different sensor modalities to provide information on the vehicle's surroundings. A combination of the same sensors placed on a white cane could be used to perform free-space assessment over the whole height of the user and provide additional environmental information not available from the cane alone. This brings its own challenges and advantages: speeds are much slower when dealing with pedestrians, and scanning can be achieved by the movement of the cane, but the weight and size must be significantly reduced. The full system will be integrated into a smart cane and will consist of four main sensors as well as range sensors. The aim of this work is to report on the characterisation of a long range LiDAR (up to 10 m) that will be integrated into a smart white cane developed as part of the INSPEX H2020 project.
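
    As a rough sketch of how a single range reading from a cane-mounted LiDAR could be turned into a warning, the code below combines the range with an assumed cane pitch angle from an IMU; the mounting height, body envelope and warning distance are illustrative values, not measured INSPEX parameters.

# A minimal sketch, assuming a single-beam LiDAR on the cane and a known cane
# pitch angle from an IMU (both hypothetical parameters): convert a range
# reading into the height of the detected point and the horizontal distance
# ahead, and warn if the point lies within the user's body envelope.
import math

def check_return(range_m, pitch_deg, mount_height=0.9,
                 user_height=1.8, max_warning_distance=10.0):
    """pitch_deg: beam angle above the horizontal (negative = pointing down)."""
    pitch = math.radians(pitch_deg)
    forward = range_m * math.cos(pitch)                 # horizontal distance ahead
    height = mount_height + range_m * math.sin(pitch)   # height of the hit point
    in_envelope = 0.0 <= height <= user_height
    warn = in_envelope and forward <= max_warning_distance
    return {"forward_m": round(forward, 2), "height_m": round(height, 2), "warn": warn}

print(check_return(4.0, 10.0))    # head-height return ~3.9 m ahead -> warn
print(check_return(9.0, 25.0))    # return well above head height -> no warning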

    An Orientation & Mobility Aid for People with Visual Impairments

    Orientation & Mobility (O&M) comprises a set of techniques for people with visual impairments that help them find their way in everyday life. Nevertheless, they require extensive and very time-consuming one-on-one instruction with O&M teachers to integrate these techniques into their daily routines. While some of these techniques use assistive technologies, such as the long white cane, points-of-interest databases or a compass-based orientation system, an inconspicuous communication gap exists between available aids and navigation systems. In recent years, mobile computing systems, in particular smartphones, have become ubiquitous. This opens up the possibility for modern computer vision techniques to support human vision with everyday problems caused by non-accessible design. However, particular care must be taken not to interfere with specific personal competencies and trained behaviours, or, in the worst case, to contradict O&M techniques. In this dissertation we identify a spatial and systemic gap between orientation aids and navigation systems for people with visual impairments. The spatial gap exists mainly because assistive orientation aids, such as the long white cane, can only help to perceive the environment within a limited range, while navigation information is kept very coarse. In addition, the gap also arises at the system level between these two components: the long white cane does not know the route, while a navigation system does not consider nearby obstacles or O&M techniques. We therefore propose several approaches to close this gap and improve the connection and communication between orientation aids and navigation information, considering the problem from both directions. To provide useful, relevant information, we first identify the most important requirements for assistive systems and derive several key concepts that we follow in our algorithms and prototypes. Existing assistive systems for orientation are mainly based on global navigation satellite systems. We improve on these by creating a guideline-based routing algorithm that can be adapted to, and takes into account, individual needs. Generated routes are imperceptibly longer but much safer, according to objective criteria developed in collaboration with O&M teachers. We also improve the availability of the relevant georeferenced databases required for such needs-based routing. To this end, we create a machine learning approach for detecting zebra crossings in aerial imagery, which also works across national borders, improving on the state of the art. To maximise the benefit of computer-vision-based mobility assistance, we create approaches modelled on O&M techniques in order to increase spatial awareness of the immediate surroundings. First, we consider the available free space and also inform the user about possible obstacles. Furthermore, we create a novel approach to detect and precisely localise the available guidelines, and we generate virtual guidelines that bridge interruptions and provide information about the next guideline early on. Finally, we improve the accessibility of pedestrian crossings, in particular zebra crossings and pedestrian traffic lights, with a deep learning approach. To analyse whether the approaches and algorithms we created provide real added value for people with visual impairments, we conduct a small Wizard-of-Oz experiment on our needs-based routing, with a very encouraging result. We also carry out a more extensive study with several components and a focus on pedestrian crossings. Although our statistical evaluations show only a marginal improvement, affected by technical problems with the first prototype and too short a familiarisation period for the participants, we receive very promising comments from almost all study participants. This shows that we have taken an important first step towards closing the identified gap and have thereby improved Orientation & Mobility for people with visual impairments.
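
    As a hedged illustration of the needs-based routing idea described above (slightly longer but safer routes), the sketch below weights the edges of a small pedestrian graph by illustrative criteria such as the presence of a guideline or an uncontrolled crossing; the attributes, weights and the graph itself are invented for the example and are not the dissertation's algorithm.

# A minimal sketch of needs-based routing in the spirit described above (not
# the dissertation's actual algorithm): edges of a pedestrian graph carry
# attributes such as the presence of a tactile guideline or a zebra crossing,
# and the cost function penalises segments that lack them.
import networkx as nx

def needs_based_cost(u, v, data):
    cost = data["length_m"]
    if not data.get("has_guideline", False):
        cost *= 1.5                 # prefer segments that can be followed tactilely
    if data.get("crossing") == "uncontrolled":
        cost += 50.0                # strongly discourage uncontrolled crossings
    return cost

G = nx.Graph()
G.add_edge("A", "B", length_m=120, has_guideline=True)
G.add_edge("B", "C", length_m=80, has_guideline=True, crossing="zebra")
G.add_edge("A", "C", length_m=150, has_guideline=False, crossing="uncontrolled")

# Slightly longer but safer route A -> B -> C instead of the direct A -> C edge.
print(nx.shortest_path(G, "A", "C", weight=needs_based_cost))  # ['A', 'B', 'C']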

    Sensory navigation device for blind people

    This paper presents a new Electronic Travel Aid (ETA), the 'Acoustic Prototype', which is especially suited to facilitating the navigation of visually impaired users. The device consists of a set of 3-Dimensional Complementary Metal Oxide Semiconductor (3-D CMOS) image sensors, based on three-dimensional integration and CMOS processing techniques, implemented into a pair of glasses, stereo headphones, and a Field-Programmable Gate Array (FPGA) used as the processing unit. The device is intended to be used as a complementary aid for navigation through both known and unknown open environments. The FPGA and the 3D-CMOS image sensor electronics control object detection. Distance measurement is achieved using chip-integrated technology based on the Multiple Short Time Integration method. The processed information about object distance is presented to the user as acoustic sounds through stereophonic headphones, which the user interprets as an acoustic image of the surrounding environment. The Acoustic Prototype transforms the surfaces of objects in the real environment into acoustic sounds, in a manner similar to a bat's acoustic orientation. With good hearing ability and a few weeks of training, users are able to perceive not only the presence of an object but also its form (that is, whether the object is round, whether it has corners, whether it is a car or a box, a cardboard, iron or cement object, a tree, a person, static or moving). The information is delivered to the user continuously, within a few nanoseconds, until the device is shut down, helping the end user to perceive the information in real time. The first author acknowledges that this research was funded through the FP6 European project CASBLiP, number 027063, and Project number 2062 of the Programa de Apoyo a la Investigacion y Desarrollo 2011 from the Universitat Politecnica de Valencia.
    Dunai, L.; Peris Fajarnes, G.; Lluna Gil, E.; Defez Garcia, B. (2013). Sensory navigation device for blind people. Journal of Navigation, 66(3), 346-362. doi:10.1017/S0373463312000574
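
    The prototype's actual chip- and FPGA-based acoustic model is not described in enough detail to reproduce; the following sketch only illustrates the general sonification idea, mapping an object's distance to loudness and pitch and its azimuth to the left/right balance of a stereo buffer, with all constants chosen purely for illustration.

# A minimal sketch of sonifying a detected object (not the prototype's actual
# acoustic model): closer objects produce louder, higher pitched tones, and
# the left/right balance encodes the object's azimuth. Sample rate, pitch
# mapping and panning law are illustrative choices.
import numpy as np

def object_to_stereo_tone(distance_m, azimuth_deg, sample_rate=44100, duration_s=0.1):
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    freq = 2000.0 / max(distance_m, 0.5)         # nearer -> higher pitch
    amp = min(1.0, 1.0 / max(distance_m, 0.5))   # nearer -> louder
    tone = amp * np.sin(2.0 * np.pi * freq * t)
    pan = (azimuth_deg + 90.0) / 180.0           # 0 = far left, 1 = far right
    pan = min(max(pan, 0.0), 1.0)
    left, right = tone * (1.0 - pan), tone * pan
    return np.stack([left, right], axis=1)       # (samples, 2) stereo buffer

buf = object_to_stereo_tone(distance_m=2.0, azimuth_deg=-30.0)
print(buf.shape, round(abs(buf[:, 0]).max(), 2), round(abs(buf[:, 1]).max(), 2))
# (4410, 2) with the left channel louder because the object is to the left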

    Intelligent computational techniques and virtual environment for understanding cerebral visual impairment patients

    Cerebral Visual Impairment (CVI) is a medical area concerned with the effect of brain damage on the visual field (VF). People with CVI are unable to construct a complete 3-dimensional view in their brain of what they see through their eyes. They therefore have difficulties with mobility and behave in ways that others find hard to understand because of the visual impairment. One branch of Artificial Intelligence (AI) is the simulation of behaviour by building computational models that help to explain how people solve problems or why they behave in a certain way. This project describes a novel intelligent system that simulates the navigation problems faced by people with CVI. This will help relatives, friends, and ophthalmologists of CVI patients understand more about their difficulties in navigating their everyday environment. The navigation simulation system is implemented using the Unity3D game engine. Virtual scenes of different living environments are also created using the Unity modelling software. The vision of the avatar in the virtual environment is implemented using a camera provided by the 3D game engine. Given a visual field chart of a CVI patient, the system automatically creates a filter (mask) that mimics the visual defect and places it in front of the visual field of the avatar. The filters are created by extracting, classifying and converting the symbols of the defective areas in the visual field chart into numerical values, which are then converted into textures that mask the vision. Each numeric value represents a level of transparency or opacity according to the severity of the visual defect in that region; the filters represent the vision masks. Unity3D's physics facilities are used to represent the VF defects as a structure of rays, where the length of each ray depends on the VF defect's numeric value: greater values (a higher percentage of opacity) are represented by shorter rays, while smaller values (a higher percentage of transparency) are represented by longer rays. Together, the ray lengths represent the vision map (how far the patient can see). Algorithms for navigation based on the generated rays have been developed to enable the avatar to move around in given virtual environments. The avatar depends on the generated vision map and exhibits different behaviours to simulate the navigation problems of real patients; its navigation behaviour differs from patient to patient according to their different defects. An experiment navigating virtual environments (scenes) using the HTC Oculus Vive headset was conducted with different scenarios, designed to use different VF defects within different scenes. The experiment simulates the patient's navigation in virtual environments with static objects (rooms) and in virtual environments with moving objects. The behaviours of the experiment participants (avoid/bump) match those of the avatar in the same scenario. This project has created a system that enables a CVI patient's parents and relatives to better understand what the patient encounters. It also helps specialists and educators take into account the difficulties that patients experience, so that appropriate educational programmes can be designed and developed to help each individual patient.
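
    As a minimal sketch of the mapping described above, the code below converts a toy visual field chart of defect severities into an opacity mask and into per-region ray lengths, with more severe defects giving shorter rays; the percentage scale and the 10 m maximum ray length are assumptions, not the project's actual values.

# A minimal sketch of the visual-field-chart mapping (illustrative scale, not
# the project's actual implementation): a grid of defect severities in percent
# is turned into an opacity mask for the filter texture and into per-region
# ray lengths, where more severe defects give shorter rays.
import numpy as np

def vf_chart_to_mask_and_rays(severity_pct, max_ray_m=10.0):
    """severity_pct: 2-D array, 0 = intact vision, 100 = total loss in that region."""
    severity = np.clip(np.asarray(severity_pct, dtype=float), 0.0, 100.0)
    opacity = severity / 100.0                 # texture alpha for the vision mask
    ray_length = max_ray_m * (1.0 - opacity)   # how far the avatar "sees" per region
    return opacity, ray_length

chart = [[0, 20], [80, 100]]                   # toy 2x2 visual field chart
opacity, rays = vf_chart_to_mask_and_rays(chart)
print(opacity)   # [[0.  0.2] [0.8 1. ]]
print(rays)      # [[10.  8.] [2.  0.]]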