AmIE: An Ambient Intelligent Environment for Assisted Living
In the modern world of technology, Internet-of-Things (IoT) systems strive to provide extensive, interconnected, and automated solutions for almost every aspect of life. This paper proposes an IoT context-aware system that presents an Ambient Intelligence (AmI) environment, such as an apartment, house, or building, to assist blind, visually impaired, and elderly people. The proposed system aims to provide an easy-to-use, voice-controlled system to locate, navigate, and assist users indoors. Its main purpose is to provide indoor positioning, assisted navigation, outdoor weather information, room temperature, people availability, phone calls, and emergency evacuation when needed. The system enhances the user's awareness of the surrounding environment by feeding them relevant information through a wearable device. In addition, the system is voice-controlled in both English and Arabic, and information is delivered as audio messages in both languages. The system design, implementation, and evaluation consider the constraints of common types of premises in Kuwait as well as challenges such as the training needed by users. This paper presents cost-effective implementation options through the adoption of a Raspberry Pi microcomputer, Bluetooth Low Energy devices, and an Android smartwatch. Comment: 6 pages, 8 figures, 1 table
A multimodal smartphone interface for active perception by visually impaired
The widespread availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, embedded in a hand-held device. In particular, it presents a proof of concept for a multimodal interface to control the orientation of a smartphone's camera, while the device is held by a person, using a combination of vocal messages, 3D sounds, and vibrations. The solution, which is to be evaluated experimentally by users, will enable further research in the area of active vision with a human in the loop, with potential application to mobile assistive devices for indoor navigation by visually impaired people.
Distributed and adaptive location identification system for mobile devices
Indoor location identification and navigation need to be as simple, seamless, and ubiquitous as their outdoor GPS-based counterpart. It would be of great convenience to mobile users to be able to continue navigating seamlessly as they move from a GPS-clear outdoor environment into an indoor environment or a GPS-obstructed outdoor environment such as a tunnel or forest. Existing infrastructure-based indoor localization systems lack such capability, and may also face several critical technical challenges: increased installation cost, centralization, lack of reliability, poor localization accuracy, poor adaptation to the dynamics of the surrounding environment, latency, system-level and computational complexity, repetitive labor-intensive parameter tuning, and user privacy concerns. To this end, this paper presents a novel mechanism with the potential to overcome most, if not all, of the aforementioned challenges. The proposed mechanism is simple, distributed, adaptive, collaborative, and cost-effective. Based on the proposed algorithm, a blind mobile device can potentially utilize, as GPS-like reference nodes, either in-range location-aware compatible mobile devices or preinstalled low-cost infrastructure-less location-aware beacon nodes. The proposed approach is model-based and calibration-free; it uses the received signal strength to periodically and collaboratively measure and update the radio-frequency characteristics of the operating environment and estimate the distances to the reference nodes. Trilateration is then used by the blind device to identify its own location, similar to the GPS-based system. Simulation and empirical testing ascertained that the proposed approach can potentially form the core of localization in future indoor and GPS-obstructed environments.
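The distance-estimation and trilateration steps described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: distances are inverted from received signal strength via a log-distance path-loss model (the constants rss_at_1m and path_loss_exp are environment-dependent assumptions), and the position is found by linearizing the three circle equations.

```python
def rss_to_distance(rss_dbm, rss_at_1m=-50.0, path_loss_exp=2.5):
    """Invert a log-distance path-loss model: RSS (dBm) -> distance (m).
    rss_at_1m and path_loss_exp are illustrative, environment-dependent values."""
    return 10 ** ((rss_at_1m - rss_dbm) / (10 * path_loss_exp))

def trilaterate(p0, p1, p2, d0, d1, d2):
    """Estimate (x, y) from three reference positions and estimated distances.
    Subtracting the first circle equation from the other two yields a
    2x2 linear system, solved here by Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

With more than three reference nodes the same linearization gives an overdetermined system that would be solved by least squares, which is how noisy RSS-derived distances are usually averaged out in practice.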
Android Assistant EyeMate for Blind and Blind Tracker
At present, many blind-assistive systems have been implemented, but there is no comprehensive system that both navigates a blind person and tracks his or her movement so that he or she can be rescued if lost. In this paper, we present a blind-assistive and tracking embedded system. In this system, the blind person is navigated through a spectacle interfaced with an Android application. The blind person is guided by Bengali/English voice commands generated by the application according to the obstacle's position. Using a voice command, a blind person can establish a voice call to a predefined number without touching the phone, just by pressing the headset button. The blind-assistive application obtains the latitude and longitude using GPS and then sends them to a server. The movement of the blind person is tracked through another Android application that points out the current position on a Google map. In our experiment, we took distance measurements from several surfaces, such as concrete and tiled floors, with an error rate of 5%. Comment: arXiv admin note: text overlap with arXiv:1611.09480 by other authors
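The position-reporting step described above, a GPS fix sent to a tracking server, might look like the following minimal sketch. The server URL, the device identifier, and the JSON schema are illustrative assumptions, not the paper's actual protocol.

```python
import json
from urllib import request

def position_payload(device_id, lat, lon):
    """JSON body for one GPS fix (the schema is an illustrative assumption)."""
    return json.dumps({"id": device_id, "lat": lat, "lon": lon})

def post_position(server_url, device_id, lat, lon):
    """POST the fix to the tracking server; returns the HTTP status code."""
    req = request.Request(
        server_url,
        data=position_payload(device_id, lat, lon).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status
```

The tracker application would then poll the same server for the latest fix and plot it on the map.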
Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments
Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision and Bluetooth Low Energy (BLE), to estimate the position of a user in indoor areas. Computer-vision based systems use techniques including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems utilize BLE beacons attached in indoor areas as sources of radio-frequency signals to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision based systems and a BLE-based system. The first, called CamNav, uses a trained deep-learning model to recognize locations; the second, called QRNav, utilizes visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted while using the three navigation systems. Results: The results of the navigation experiment and the feedback from the blindfolded users show that QRNav and CamNav are more efficient than the BLE-based system in terms of accuracy and usability. The error observed in the BLE-based application exceeded 30%, compared to the computer-vision based systems CamNav and QRNav. Conclusions: The developed navigation systems were able to provide reliable assistance to the participants during real-time experiments. Some participants required minimal external assistance while moving through junctions in the corridor areas. Computer-vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments. 2019 The Author(s).
BlindShopping: Navigation System
The QR Trail is an Android application designed to enable visually challenged persons to participate in more of the everyday activities that sighted people do. Moreover, the application can also be used by sighted people to navigate around places when they are lost. The main purpose of the project is to provide a navigation system that lets visually challenged persons move around autonomously in supermarkets or hypermarkets and do their shopping. The application guides the visually impaired person through voice commands from the smartphone: the user scans QR codes on the floor, each of which contains the details of the current location and the instructions to move from one point of the shopping mall to another. The application is developed with the Eclipse development tool, written in Java, and uses the ZXing library. The rapid application development methodology is applied in the development process, which consists of four stages: system design, prototype cycle, system testing, and implementation.
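The QR-code guidance step described above, turning a decoded floor marker into a spoken instruction, could be sketched as follows. The payload format LOC=...;NEXT=...;INSTR=... is a hypothetical encoding chosen for illustration; the paper does not specify the actual contents of its QR codes.

```python
def parse_qr_payload(payload):
    """Parse a floor-QR payload of the (assumed) form
    'LOC=entrance;NEXT=dairy;INSTR=walk forward 10 m, then turn left'
    into a field dict."""
    return dict(part.split("=", 1) for part in payload.split(";"))

def guidance_message(fields):
    """Compose the text that would be handed to the phone's text-to-speech
    engine after a successful scan."""
    return (f"You are at {fields['LOC']}. "
            f"To reach {fields['NEXT']}: {fields['INSTR']}.")
```

On Android, the decoded string itself would come from a ZXing scan result, and the composed message would be spoken rather than displayed.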