1,359 research outputs found
A Navigation and Augmented Reality System for Visually Impaired People
In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, thus reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for the indoor and outdoor localization and navigation of visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (colored tapes, painted lines, or tactile pavings) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment, through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to contents associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to navigate easily in indoor and outdoor scenarios simply by loading a previously recorded virtual path, providing automatic guidance along the route through haptic, speech, and sound feedback.
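The abstract does not detail how guidance along a recorded virtual path is computed. As an illustration of the underlying idea (not ARIANNA+'s actual code), one can project the device's position estimate onto the recorded polyline and derive a coarse haptic cue; the waypoints, threshold, and cue names below are invented for the sketch:

```python
import math

def guidance(path, position):
    """Given a recorded virtual path (list of 2-D waypoints, metres) and the
    device's current position estimate, return the distance to the nearest
    path segment and a coarse cue ('on path', 'veer left', 'veer right')."""
    best = (float("inf"), 0.0)
    for (ax, ay), (bx, by) in zip(path, path[1:]):
        abx, aby = bx - ax, by - ay
        # clamp the projection parameter so we stay on the segment
        t = max(0.0, min(1.0, ((position[0] - ax) * abx + (position[1] - ay) * aby)
                                / (abx * abx + aby * aby)))
        px, py = ax + t * abx, ay + t * aby
        dist = math.hypot(position[0] - px, position[1] - py)
        # signed cross product tells which side of the segment we are on
        side = abx * (position[1] - ay) - aby * (position[0] - ax)
        if dist < best[0]:
            best = (dist, side)
    dist, side = best
    if dist < 0.3:                      # within 30 cm: stay the course
        return dist, "on path"
    return dist, "veer right" if side > 0 else "veer left"

print(guidance([(0, 0), (5, 0)], (2.0, 1.0)))  # about 1 m left of the path
```

In a real system the position would come from the phone's world tracking and the cue would drive vibration or speech rather than a return value.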
MULTI-SENSOR LOCALIZATION AND TRACKING IN DISASTER MANAGEMENT AND INDOOR WAYFINDING FOR VISUALLY IMPAIRED USERS
This dissertation proposes a series of multi-sensor localization and tracking algorithms developed for two important application domains: disaster management and indoor wayfinding for blind and visually impaired (BVI) users. For disaster management, we developed two localization algorithms, one for Radio Frequency Identification (RFID) and one for Bluetooth Low Energy (BLE) technology, which enable the disaster management system to track patients in real time. Both algorithms rely only on smartphones and wearable devices and work in the absence of any pre-deployed infrastructure. Regarding indoor wayfinding for BVI users, we have explored several indoor positioning techniques, including BLE-based, inertial, visual, and hybrid approaches, to offer accurate and reliable location and orientation in complex navigation spaces. This dissertation makes significant contributions to the design and implementation of various localization and tracking algorithms under the differing requirements of these applications.
A multimodal smartphone interface for active perception by visually impaired
The widespread availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, embedded in a hand-held device. In particular, it presents a proof of concept for a multimodal interface that controls the orientation of a smartphone's camera, while the device is held by a person, using a combination of vocal messages, 3D sounds, and vibrations. The solution, to be evaluated experimentally by users, will enable further research in the area of active vision with a human in the loop, with potential application to mobile assistive devices for the indoor navigation of visually impaired people.
Indoor navigation for the visually impaired : enhancements through utilisation of the Internet of Things and deep learning
Wayfinding and navigation are essential aspects of independent living that rely heavily on the sense of vision. Walking through a complex building requires knowing one's exact location to find a suitable path to the desired destination, avoiding obstacles and monitoring orientation and movement along the route. People who do not have access to sight-dependent information, such as that provided by signage, maps, and environmental cues, can encounter challenges in achieving these tasks independently. They can rely on assistance from others, or maintain their independence by using assistive technologies and the resources provided by smart environments. Several solutions have adapted technological innovations to indoor navigation over the last few years. However, there remains a significant lack of a complete solution that meets the navigation requirements of visually impaired (VI) people: no single technology can address all the navigation difficulties they face. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence and enhance the journeys of VI people in indoor settings with the proposed smartphone-based framework. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components include Ortho-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Ortho-PATH, which generates a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user.
In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work carries out a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav is the first pathfinding algorithm of its kind to provide a path reflecting the needs of VI people. The approach designs a path alongside walls, avoiding obstacles, and this research benchmarks it against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
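The abstract describes its wall-following planner only at a high level. As a toy illustration of the general idea of a collision-free, wall-preferring planner on a grid (not the thesis's actual algorithm), one can run Dijkstra with lower costs for cells adjacent to walls or obstacles, so the cheapest route hugs them:

```python
import heapq

def vi_friendly_path(grid, start, goal):
    """Dijkstra over a grid (0 = free, 1 = obstacle) where free cells next
    to an obstacle or the boundary cost less, biasing the route to follow
    walls. Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])

    def cost(r, c):
        near_wall = any(
            not (0 <= r + dr < rows and 0 <= c + dc < cols) or grid[r + dr][c + dc]
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)])
        return 1 if near_wall else 3  # prefer wall-adjacent cells

    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        d, (r, c), path = heapq.heappop(frontier)
        if (r, c) == goal:
            return path
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                heapq.heappush(frontier,
                               (d + cost(nr, nc), (nr, nc), path + [(nr, nc)]))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
path = vi_friendly_path(grid, (0, 0), (2, 3))
print(path)
```

A production planner would also encode door positions, path width, and turn penalties; the point here is only the cost shaping.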
An indoor navigation architecture using variable data sources for blind and visually impaired persons
Contrary to outdoor positioning and navigation systems, there is no counterpart global solution for indoor environments. Usually, the deployment of an indoor positioning system must be adapted case by case, according to the infrastructure and the objective of the localization. A particularly delicate case concerns persons who are blind or visually impaired. A robust and easy-to-use indoor navigation solution would be extremely useful, but it would also be particularly difficult to develop, given the special requirements of such a system, which would have to be more accurate and user-friendly than a general solution. This paper presents a contribution to this subject by proposing a hybrid indoor positioning system that adapts to the surrounding indoor structure and deals with different types of signals to increase accuracy. This would permit lowering deployment costs, since deployment could be done gradually, beginning with the likely existing Wi-Fi infrastructure for fair accuracy, up to high accuracy using visual tags and NFC tags where necessary and possible.
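The hybrid system is only outlined above; its Wi-Fi stage would plausibly be a fingerprinting scheme, a common first tier for such architectures. A minimal sketch of weighted k-nearest-neighbour fingerprinting, with a radio map and access-point names invented purely for illustration:

```python
import math

def wknn_position(fingerprints, observed, k=3):
    """Weighted k-nearest-neighbour Wi-Fi fingerprinting: fingerprints maps
    known (x, y) positions to {AP: RSSI} dicts; the estimate averages the k
    closest reference points, weighted by inverse signal-space distance."""
    def signal_dist(a, b):
        aps = set(a) | set(b)
        # an AP missing from a scan is treated as a very weak -100 dBm reading
        return math.sqrt(sum((a.get(ap, -100) - b.get(ap, -100)) ** 2 for ap in aps))

    ranked = sorted(fingerprints.items(),
                    key=lambda kv: signal_dist(kv[1], observed))[:k]
    weights = [1.0 / (signal_dist(fp, observed) + 1e-6) for _, fp in ranked]
    total = sum(weights)
    x = sum(w * px for w, ((px, py), _) in zip(weights, ranked)) / total
    y = sum(w * py for w, ((px, py), _) in zip(weights, ranked)) / total
    return x, y

radio_map = {
    (0.0, 0.0): {"ap1": -40, "ap2": -70},
    (5.0, 0.0): {"ap1": -70, "ap2": -40},
    (2.5, 5.0): {"ap1": -60, "ap2": -60},
}
print(wknn_position(radio_map, {"ap1": -45, "ap2": -68}, k=2))
```

Higher-accuracy tiers (visual tags, NFC) would then override this estimate near tagged locations, matching the paper's gradual-deployment argument.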
Outdoor Localization Using BLE RSSI and Accessible Pedestrian Signals for the Visually Impaired at Intersections
One of the major challenges for blind and visually impaired (BVI) people is traveling safely on foot across intersections. Many countries now generate audible signals at crossings to help with this problem. However, these accessible pedestrian signals can confuse visually impaired people, who may not know which signal to follow when traversing multiple crossings in a complex road layout. To solve this problem, we propose an assistive system called CAS (Crossing Assistance System), which extends the use of the BLE (Bluetooth Low Energy) RSSI (Received Signal Strength Indicator) signal for outdoor and indoor location tracking and overcomes the intrinsic limitation of outdoor noise, enabling us to locate the user effectively. We installed the system at a real-world intersection and collected data to demonstrate the feasibility of outdoor RSSI tracking in a series of two studies. In the first study, our goal was to show the feasibility of using outdoor RSSI for the localization of four zones. We used a k-nearest neighbors (kNN) method and showed that it led to 99.8% accuracy. In the second study, we extended our work to a more complex setup with nine zones, evaluating both the kNN and an additional method, a Support Vector Machine (SVM), with various RSSI features for classification. We found that the SVM performed best using the average, standard deviation, median, and interquartile range (IQR) of the RSSI over a 5 s window. The best method can localize people with 97.7% accuracy. We conclude the paper by discussing how our system can impact navigation for BVI users in outdoor and indoor setups, and the implications of these findings for the design of both wearable and traffic assistive technology for blind pedestrian navigation.
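The window-statistics-plus-classifier pipeline the studies describe can be sketched with the first study's kNN classifier and the features the second study found most useful (mean, standard deviation, median, IQR). A stdlib-only sketch on synthetic RSSI data; the zone levels, sampling rate, and window length are invented for illustration, not the paper's data:

```python
import random
import statistics

def rssi_features(window):
    """Window statistics used as classifier features: mean, standard
    deviation, median, and interquartile range of the RSSI samples (dBm)."""
    q = statistics.quantiles(window, n=4)   # quartile cut points
    return (statistics.mean(window), statistics.pstdev(window),
            statistics.median(window), q[2] - q[0])

def knn_predict(train, x, k=3):
    """Plain k-nearest-neighbours majority vote in feature space."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(train, key=lambda fy: dist(fy[0], x))[:k]
    labels = [y for _, y in nearest]
    return max(set(labels), key=labels.count)

# Synthetic data: two zones whose beacons yield different mean RSSI levels.
random.seed(0)
train = []
for zone, level in [(0, -60.0), (1, -80.0)]:
    for _ in range(50):
        window = [random.gauss(level, 4.0) for _ in range(25)]  # ~5 s at 5 Hz
        train.append((rssi_features(window), zone))

probe = [random.gauss(-61.0, 4.0) for _ in range(25)]
print(knn_predict(train, rssi_features(probe)))  # zone 0 expected
```

The paper's best-performing variant replaces the kNN vote with an SVM over the same four features; the feature extraction step is unchanged.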
Scalable and Vision Free User Interface Approaches for Indoor Navigation Systems for the Visually Impaired
This thesis introduces scalable and vision-free user interface approaches for indoor navigation systems for the visually impaired. Using an Android smartphone that runs the indoor navigation system (the Percept application with accessibility features), a blind user obtains navigation instructions to the chosen destination, generated automatically by our navigation generation algorithm, by touching specific landmarks tagged with Near Field Communication tags. The thesis also introduces an Orientation & Mobility Survey Tool that can help O&M instructors survey a building and deploy such an indoor navigation system. The system was deployed and tested in a large building at the University of Massachusetts at Amherst.