
    Indoor navigation for the visually impaired : enhancements through utilisation of the Internet of Things and deep learning

    Wayfinding and navigation are essential aspects of independent living that rely heavily on the sense of vision. Walking through a complex building requires knowing one's exact location, finding a suitable path to the desired destination, avoiding obstacles, and monitoring orientation and movement along the route. People who cannot access sight-dependent information, such as that provided by signage, maps and environmental cues, can find it challenging to achieve these tasks independently. They can rely on assistance from others, or maintain their independence by using assistive technologies and the resources provided by smart environments. Several solutions have adapted technological innovations to address indoor navigation over the last few years. However, no complete solution yet meets the navigation requirements of visually impaired (VI) people, and no single technology can resolve all the navigation difficulties they face. A hybrid solution that uses Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence of VI people and enhance their journeys in indoor settings through the proposed framework, using a smartphone. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components are Orth-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Orth-PATH, which generates a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user.
In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work carries out a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav is the first pathfinding algorithm of its kind to generate a path that reflects the needs of VI people: the approach designs a path alongside walls while avoiding obstacles, and this research benchmarks it against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
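The abstract describes a collision-free path that runs alongside walls over a grid-based indoor space. The thesis's actual Orth-PATH algorithm is not given here; the following is only a minimal illustrative sketch of the general idea, using A* over an occupancy grid with a hypothetical cost penalty for cells that are not adjacent to a wall or obstacle (the function name and `wall_penalty` parameter are assumptions, not from the thesis):

```python
import heapq

def wall_biased_astar(grid, start, goal, wall_penalty=2.0):
    """A* over an occupancy grid (0 = free, 1 = obstacle).
    Free cells not adjacent to a wall or obstacle incur an extra
    cost, biasing the returned path to run alongside walls."""
    rows, cols = len(grid), len(grid[0])

    def neighbors(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                yield nr, nc

    def near_wall(r, c):
        # True if the cell touches an obstacle or the grid boundary
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols) or grid[nr][nc] == 1:
                return True
        return False

    def h(r, c):  # Manhattan-distance heuristic
        return abs(r - goal[0]) + abs(c - goal[1])

    open_set = [(h(*start), 0.0, start, [start])]
    best = {start: 0.0}
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nxt in neighbors(*node):
            step = 1.0 if near_wall(*nxt) else 1.0 + wall_penalty
            ng = g + step
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                heapq.heappush(open_set, (ng + h(*nxt), ng, nxt, path + [nxt]))
    return None  # no collision-free path exists
```

Raising `wall_penalty` makes the planner hug walls more aggressively, which loosely mirrors the VI-friendly behaviour the abstract attributes to Orth-PATH.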

    On supporting university communities in indoor wayfinding: An inclusive design approach

    Mobility can be defined as the ability of people to move through, live in and interact with a space. In this context, indoor mobility, in terms of indoor localization and wayfinding, is a relevant topic due to the challenges it presents in comparison with outdoor mobility, since GPS can hardly be exploited indoors. Knowing how to move in an indoor environment can be crucial for people with disabilities, and in particular for blind users, but it can also provide several advantages to any person who is moving in an unfamiliar place. Following this line of thought, we employed an inclusive-by-design approach to implement and deploy a system that comprises an Internet of Things infrastructure and an accessible mobile application providing wayfinding functions, targeting the university community. As a real-world case study, we considered the University of Bologna, designing a system able to be deployed in buildings with different configurations and settings, including historical buildings. The final system was evaluated in three different scenarios, considering three different target audiences (18 users in total): i. students with disabilities (i.e., visual and mobility impairments); ii. campus students; and iii. visitors and tourists. Results reveal that all the participants enjoyed the provided functions and that the indoor localization strategy was accurate enough to provide a good wayfinding experience.

    Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments

    Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision and Bluetooth Low Energy (BLE), to estimate the position of a user in indoor areas. Computer-vision-based systems use techniques including picture matching, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems use BLE beacons attached in the indoor areas as radio-frequency signal sources to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision-based systems and a BLE-based system. The first computer-vision-based system, called CamNav, uses a trained deep learning model to recognize locations; the second, called QRNav, utilizes visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted while using the three navigation systems. Results: The results from the navigation experiment, together with the feedback from the blindfolded users, show that QRNav and CamNav are more efficient than the BLE-based system in terms of accuracy and usability. The error of the BLE-based application was more than 30% higher than that of the computer-vision-based systems, CamNav and QRNav. Conclusions: The developed navigation systems were able to provide reliable assistance to the participants during real-time experiments. Some participants required minimal external assistance while moving through junctions in the corridor areas. Computer-vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments. (2019, The Author(s).)
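The BLE approach described above localizes a user from the signal strength of nearby beacons. As a minimal illustrative sketch (not the paper's actual system; the function name and data shapes are assumptions), a coarse room-level estimate can be obtained by averaging recent RSSI samples per beacon and reporting the location of the strongest one:

```python
def strongest_beacon_location(rssi_readings, beacon_positions):
    """Coarse BLE localization: return the known location of the
    beacon with the strongest (least negative) average RSSI.
    rssi_readings maps beacon id -> list of recent RSSI samples (dBm);
    beacon_positions maps beacon id -> a location label or coordinate.
    Averaging a short window of samples damps the heavy noise
    typical of BLE RSSI measurements."""
    averaged = {
        beacon: sum(samples) / len(samples)
        for beacon, samples in rssi_readings.items()
    }
    nearest = max(averaged, key=averaged.get)
    return beacon_positions[nearest]
```

This room-granularity strategy is far coarser than the vision-based alternatives the paper compares against, which is consistent with the reported accuracy gap.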

    Airport Accessibility and Navigation Assistance for People with Visual Impairments

    People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. To learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study in which users navigated routes relevant to their travel experience. We found that, despite the challenging environment, participants were able to complete their itinerary independently, making few or no navigation errors and completing routes in reasonable time. This study presents the first systematic evaluation establishing BLE technology as a strong approach to increasing the independence of visually impaired people in airports.

    Mixed Reality Browsers and Pedestrian Navigation in Augmented Cities

    In this paper, we use a declarative format for positional audio, with synchronization between audio chunks using SMIL. This format has been specifically designed for the type of audio used in AR applications, and the audio engine associated with it runs on mobile platforms (iOS, Android). Our MRB browser, called IXE, uses a format based on volunteered geographic information (OpenStreetMap), and OSM documents for IXE can be fully authored inside OSM editors like JOSM. This is in contrast with other AR browsers, such as Layar, Junaio and Wikitude, which use a Point of Interest (POI)-based format with no notion of ways. This introduces a fundamental difference, and in some sense a duality relation, between IXE and the other AR browsers. In IXE, Augmented Virtuality (AV) navigation along a route (composed of ways) is central, and AR interaction with objects is delegated to associated 3D activities; in AR browsers, navigation along a route is delegated to associated map activities, and AR interaction with objects is central. IXE supports multiple tracking technologies and therefore allows both indoor navigation in buildings and outdoor navigation at the level of sidewalks. A first Android version of the IXE browser will be released at the end of 2013. Being based on volunteered geographic information, it will allow building accessible pedestrian networks in augmented cities.

    A Study on a Smartphone-Based Mobility Support System Architecture for Visually Impaired People

    Degree type: doctorate by coursework. Dissertation committee: Prof. Ken Sakamura (chair), Prof. Noboru Koshizuka, Prof. Jun Rekimoto, Prof. Akihiro Nakao and Prof. Toru Ishikawa, all of the University of Tokyo. University of Tokyo (東京大学).

    Distributed and adaptive location identification system for mobile devices

    Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterparts. It would be of great convenience to mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack such capability, on top of potentially facing several critical technical challenges: increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user privacy. To this end, this paper presents a novel mechanism with the potential to overcome most (if not all) of the abovementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a mobile blind device can potentially utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free: it uses the received signal strength to periodically and collaboratively measure and update the radio-frequency characteristics of the operating environment and thereby estimate the distances to the reference nodes. Trilateration is then used by the blind device to identify its own location, similar to that used in the GPS-based system. Simulation and empirical testing ascertained that the proposed approach can potentially be the core of localization in future indoor and GPS-obstructed environments.
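The abstract's pipeline is: estimate distances to reference nodes from received signal strength, then trilaterate. A minimal sketch of that pipeline, assuming the common log-distance path-loss model and three 2-D anchors (the paper's own model is adaptive and collaborative; the `tx_power` and `path_loss_exp` constants here are illustrative assumptions, not its measured parameters):

```python
def rssi_to_distance(rssi, tx_power=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimated distance in metres.
    tx_power is the expected RSSI (dBm) at 1 m from the transmitter."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Position from three 2-D anchor points and estimated distances.
    Subtracting the circle equations pairwise linearizes the problem
    into a 2x2 linear system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = distances
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # zero when anchors are collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With noisy RSSI the distance estimates are imperfect, which is why the paper emphasizes periodically re-measuring the environment's radio-frequency characteristics rather than fixing the model parameters once.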

    Assistive Navigation Using Deep Reinforcement Learning Guiding Robot With UWB/Voice Beacons and Semantic Feedbacks for Blind and Visually Impaired People

    Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. A deep reinforcement learning (DRL)-based assistive guiding robot with ultra-wideband (UWB) beacons that can navigate through routes with designated waypoints was designed in this study. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art DRL and can effectively avoid obstacles. When used with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a handle device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were likewise fitted with an audio interface to convey environmental information. The on-handle and on-beacon verbal feedback provides points of interest and turn-by-turn information to BVI users. BVI users were recruited in this study to conduct navigation tasks in different scenarios, and a route was designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation might be affected by dynamic obstacles, and a vision-based trail may suffer from occlusions by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians, in which systems based on existing SLAM algorithms have failed.
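A DRL agent for waypoint navigation of the kind described above needs a reward signal that encourages progress toward the next waypoint while penalizing collisions. The paper does not give its reward function; the following is a generic, hypothetical reward-shaping sketch (all names and constants are assumptions) of the sort commonly used for such tasks:

```python
def waypoint_reward(prev_dist, curr_dist, collided, reached,
                    progress_scale=1.0, collision_penalty=-10.0,
                    goal_bonus=10.0, step_cost=-0.01):
    """Illustrative per-step reward for DRL waypoint navigation:
    - reward the reduction in distance to the next waypoint,
    - heavily penalize collisions with obstacles or pedestrians,
    - grant a bonus on reaching the waypoint,
    - apply a small per-step cost to encourage short routes."""
    if collided:
        return collision_penalty
    if reached:
        return goal_bonus
    return progress_scale * (prev_dist - curr_dist) + step_cost
```

In a UWB-beacon setup, `prev_dist` and `curr_dist` could come from ranging to the beacon marking the next waypoint, which is one reason such a formulation does not depend on SLAM-based state estimation.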