A Wearable Indoor Navigation System for Blind and Visually Impaired Individuals
Indoor positioning and navigation for blind and visually impaired individuals has become an active field of research. A reliable positioning and navigation system would ease the daily challenges faced by people with visual disabilities, help them live more independently, and improve their employment opportunities.
In this work, a coarse-to-fine multi-resolution model is proposed for indoor navigation in hallway environments based on the use of a wearable computer called the eButton. This self-constructed device contains multiple sensors which are used for indoor positioning and localization in three layers of resolution: a global positioning system (GPS) layer for building identification; a Wi-Fi/barometer layer for rough position localization; and a digital camera/motion sensor layer for precise localization. In this multi-resolution model, a new theoretical framework is developed which uses the change of atmospheric pressure to determine the floor number in a multistory building. The digital camera and motion sensors within the eButton acquire both pictorial and motion data as a person with normal vision walks along a hallway to establish a database. Precise indoor positioning and localization information is then provided to the visually impaired individual based on a Kalman filter fusion algorithm and an automatic matching algorithm between the acquired images and those in the pre-established database. Motion estimates computed from the motion-sensor data are used to refine the localization result. Experiments were conducted to evaluate the performance of the algorithms. Our results show that the new device and algorithms can precisely determine the floor level and indoor location along hallways in multistory buildings, providing a powerful and unobtrusive navigational tool for blind and visually impaired individuals.
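The barometric layer described above maps a change in atmospheric pressure to a floor number. As a rough illustration (not the paper's exact formulation), pressure can be converted to relative altitude with the international barometric formula and divided by an assumed floor height; the reference pressure, floor height, and function names below are illustrative assumptions.

```python
def altitude_m(pressure_hpa, ref_pressure_hpa=1013.25):
    # International barometric formula: height (m) above the point where
    # the pressure equals ref_pressure_hpa.
    return 44330.0 * (1.0 - (pressure_hpa / ref_pressure_hpa) ** (1.0 / 5.255))

def floor_from_pressure(p_current_hpa, p_ground_hpa, floor_height_m=3.0):
    # Floor number relative to a ground-floor reference reading.
    # floor_height_m is an assumed storey height, not a measured value.
    dh = altitude_m(p_current_hpa, p_ground_hpa)
    return round(dh / floor_height_m)
```

In practice the ground-floor reference must be refreshed regularly, since weather-driven pressure drift over an hour can exceed the pressure difference between adjacent floors.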
MULTI-SENSOR LOCALIZATION AND TRACKING IN DISASTER MANAGEMENT AND INDOOR WAYFINDING FOR VISUALLY IMPAIRED USERS
This dissertation proposes a series of multi-sensor localization and tracking algorithms developed for two important application domains: disaster management and indoor wayfinding for blind and visually impaired (BVI) users. For disaster management, we developed two different localization algorithms, one each for Radio Frequency Identification (RFID) and Bluetooth Low Energy (BLE) technology, which enable the disaster management system to track patients in real time. Both algorithms operate without any pre-deployed infrastructure, relying only on smartphones and wearable devices. Regarding indoor wayfinding for BVI users, we have explored several types of indoor positioning techniques, including BLE-based, inertial, visual and hybrid approaches, to offer accurate and reliable location and orientation in complex navigation spaces. In this dissertation, significant contributions have been made in the design and implementation of various localization and tracking algorithms under the differing requirements of these applications.
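BLE-based tracking of the kind described typically starts from signal-strength ranging. A common building block (shown here as a generic sketch, not the dissertation's specific algorithm) is the log-distance path-loss model; the transmit power at 1 m and the path-loss exponent below are assumed, uncalibrated defaults.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Estimate beacon distance in metres from a BLE RSSI reading.

    Log-distance path-loss model: rssi = tx_power - 10 * n * log10(d),
    where tx_power is the RSSI measured at 1 m and n is the path-loss
    exponent (~2 in free space, larger indoors). Defaults are assumptions.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
```

Distances estimated this way are noisy indoors, which is why such systems usually smooth RSSI over time before ranging.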
An indoor navigation architecture using variable data sources for blind and visually impaired persons
Contrary to outdoor positioning and navigation systems, there is no counterpart global solution for indoor environments. Usually, the deployment of an indoor positioning system must be adapted case by case, according to the infrastructure and the objective of the localization. A particularly delicate case concerns persons who are blind or visually impaired. A robust and easy-to-use indoor navigation solution would be extremely useful, but it would also be particularly difficult to develop, given the special requirements of such a system, which would have to be more accurate and user-friendly than a general solution. This paper presents a contribution to this subject by proposing a hybrid indoor positioning system that adapts to the surrounding indoor structure and deals with different types of signals to increase accuracy. This approach would lower deployment costs, since deployment could proceed gradually, beginning with the likely existing Wi-Fi infrastructure to obtain fair accuracy, and moving up to high accuracy using visual tags and NFC tags when necessary and possible.
An Indoor and Outdoor Navigation System for Visually Impaired People
In this paper, we present a system that allows visually impaired people to autonomously navigate in an unknown indoor or outdoor environment. The system, explicitly designed for low-vision people, can easily be generalized to other users. We assume that special landmarks are placed to help the users localize pre-defined paths. Our novel approach exploits both the inertial sensors and the camera integrated into the smartphone as sensors. The navigation system can also provide the users with direction estimates from the tracking system. The success of our approach is proved through experimental tests performed both in controlled indoor environments and in real outdoor installations. A comparison with deep learning methods is also presented.
SLAM for Visually Impaired People: A Survey
In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.
Indoor navigation for the visually impaired: enhancements through utilisation of the Internet of Things and deep learning
Wayfinding and navigation are essential aspects of independent living that heavily rely on the sense of vision. Walking through a complex building requires knowing one's exact location to find a suitable path to the desired destination, avoiding obstacles and monitoring orientation and movement along the route. People who do not have access to sight-dependent information, such as that provided by signage, maps and environmental cues, can encounter challenges in achieving these tasks independently. They can rely on assistance from others, or maintain their independence by using assistive technologies and the resources provided by smart environments. Several solutions have applied technological innovations to indoor navigation over the last few years. However, there remains a significant lack of a complete solution that meets the navigation requirements of visually impaired (VI) people, and no single technology can resolve all the navigation difficulties they face. A hybrid solution using Internet of Things (IoT) devices and deep learning techniques to discern the patterns of an indoor environment may help VI people gain the confidence to travel independently. This thesis aims to improve the independence and enhance the journeys of VI people in indoor settings through the proposed framework, using a smartphone. The thesis proposes a novel framework, Indoor-Nav, to provide a VI-friendly path that avoids obstacles and to predict the user's position. Its components include Orth-PATH, Blue Dot for VI People (BVIP), and a deep learning-based indoor positioning model. The work establishes a novel collision-free pathfinding algorithm, Orth-PATH, which generates a VI-friendly path by sensing a grid-based indoor space. Further, to ensure correct movement, BVIP uses beacons and a smartphone to monitor the movements and relative position of the moving user.
In dark areas without external devices, the research tests the feasibility of using sensory information from a smartphone with a pre-trained regression-based deep learning model to predict the user's absolute position. The work accomplishes a diverse range of simulations and experiments to confirm the performance and effectiveness of the proposed framework and its components. The results show that Indoor-Nav is the first pathfinding framework to generate paths that reflect the needs of VI people. The approach designs a path alongside walls, avoiding obstacles, and this research benchmarks the approach against other popular pathfinding algorithms. Further, this research develops a smartphone-based application to test the trajectories of a moving user in an indoor environment.
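Orth-PATH itself is the thesis's contribution and its details are not reproduced here; as context, a minimal breadth-first search over the same kind of grid-based indoor space (0 = free cell, 1 = obstacle) illustrates the baseline collision-free pathfinding that such an algorithm is typically benchmarked against. The grid encoding and function name are illustrative assumptions.

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest collision-free path on a 4-connected occupancy grid.

    grid: 2D list of ints, 0 = free cell, 1 = obstacle.
    start, goal: (row, col) tuples. Returns a list of cells, or None.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent map
    q = deque([start])
    while q:
        cell = q.popleft()
        if cell == goal:          # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                q.append((nr, nc))
    return None                   # goal unreachable
```

A VI-friendly planner differs from this baseline chiefly in its cost model, e.g. preferring cells alongside walls rather than the geometrically shortest route.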
Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments
Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision and Bluetooth Low Energy (BLE), to estimate the position of a user in indoor areas. Computer-vision-based systems use several techniques, including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems use BLE beacons placed in the indoor areas as radio-frequency signal sources to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision-based systems and a BLE-based system. The first computer-vision-based system, called CamNav, uses a trained deep learning model to recognize locations; the second, called QRNav, uses visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted with the three navigation systems. Results: The results of the navigation experiment and the feedback from the blindfolded users show that QRNav and CamNav are more efficient than the BLE-based system in terms of accuracy and usability. The error of the BLE-based application was more than 30% higher than that of the computer-vision-based systems, CamNav and QRNav. Conclusions: The developed navigation systems were able to provide reliable assistance to the participants during real-time experiments. Some participants required minimal external assistance while moving through junctions in the corridor areas. Computer-vision technology demonstrated its superiority over BLE technology for assistive systems for people with visual impairments. © 2019 The Author(s).
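The BLE system in this comparison localizes the user from signals of beacons at known positions. One simple, generic way to do that (a sketch under assumed beacon coordinates, not the paper's implementation) is a weighted centroid of beacon positions, with weights taken from the linearized RSSI:

```python
def weighted_centroid(beacons):
    """Estimate a (x, y) position from BLE beacon readings.

    beacons: list of (x, y, rssi_dbm) tuples; the coordinates are assumed
    known from the deployment map. A stronger (less negative) RSSI gives a
    larger weight, pulling the estimate toward nearby beacons.
    """
    weights = [10 ** (rssi / 10.0) for _, _, rssi in beacons]  # dBm -> mW
    total = sum(weights)
    x = sum(w * bx for w, (bx, _, _) in zip(weights, beacons)) / total
    y = sum(w * by for w, (_, by, _) in zip(weights, beacons)) / total
    return x, y
```

Crude position estimates of this kind are consistent with the accuracy gap the paper reports between the BLE-based system and the two computer-vision-based systems.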
Viia-hand: A Reach-and-Grasp Restoration System Integrating Voice Interaction, Computer Vision and Auditory Feedback for Blind Amputees
Visual feedback plays a crucial role when amputees perform grasping in the context of prosthesis control. For blind and visually impaired (BVI) amputees, however, the loss of both visual and grasping abilities makes the seemingly easy reach-and-grasp task a formidable challenge. In this paper, we propose a novel multi-sensory prosthesis system that helps BVI amputees with sensing, navigation and grasp operations. It combines modules for voice interaction, environmental perception, grasp guidance, collaborative control, and auditory/tactile feedback. In particular, the voice interaction module receives user instructions and invokes the other functional modules accordingly. The environmental perception and grasp guidance module obtains environmental information through computer vision and relays this information to the user through auditory feedback (voice prompts and spatial sound sources) and tactile feedback (vibration stimulation). The prosthesis collaborative control module obtains context information from the grasp guidance process and, in conjunction with the user's control intention, performs collaborative control of the grasp gestures and wrist angles of the prosthesis in order to achieve a stable grasp of various objects. This paper details a prototype design (named viia-hand) and presents its preliminary experimental verification on healthy subjects completing specific reach-and-grasp tasks. Our results showed that, with the help of our new design, the subjects were able to achieve a precise reach and a reliable grasp of the target objects in a relatively cluttered environment. Additionally, the system is extremely user-friendly: users can quickly adapt to it with minimal training.