Portable form filling assistant for the visually impaired
The filling of printed forms has always been an issue for the visually impaired. Though optical character recognition technology has helped many blind people to 'read' the world, there is not a single device that allows them to fill out a paper-based form without a human assistant. Filling out forms is, however, an essential part of their daily lives, for example for access to social security or benefits. This paper describes a solution that allows a blind person to complete paper-based forms, pervasively and independently, using only off-the-shelf equipment: a smartphone, a clipboard with a sliding ruler, and a ballpoint pen. A dynamic color fiduciary (point of reference) marker is designed so that it can be moved by the user to any part of the form, such that all regions can be "visited". This dynamic color fiduciary marker is robust to camera focus and partial occlusion, allowing flexibility in handling the smartphone with its embedded camera. Feedback is given to the blind user via both voice and tone to facilitate efficient guidance in filling out the form. Experimental results have shown that this prototype can help visually impaired people to fill out a form independently.
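As a rough illustration of the marker-tracking idea, the sketch below locates a solid-color marker in a camera frame with OpenCV. The HSV color bounds and the printed feedback are hypothetical stand-ins; the paper's actual dynamic color fiduciary marker design is not reproduced here.

```python
# Illustrative sketch only: assumes a simple solid-colour marker and
# hypothetical HSV bounds, not the paper's fiduciary marker design.
import cv2
import numpy as np

# Hypothetical HSV range for a bright orange marker (an assumption).
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

def locate_marker(frame):
    """Return the (x, y) centroid of the largest marker-coloured blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos = locate_marker(frame)
    if pos is not None:
        # In the real system this position would drive voice/tone guidance.
        print(f"marker at {pos}")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```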
Deep Thermal Imaging: Proximate Material Type Recognition in the Wild through Deep Learning of Spatial Surface Temperature Patterns
We introduce Deep Thermal Imaging, a new approach for close-range automatic recognition of materials that helps people and ubiquitous technologies better understand their proximal environment. Our approach uses a low-cost mobile thermal camera integrated into a smartphone to capture thermal textures. A deep neural network classifies these textures into material types. This approach works effectively without the need for ambient light sources or direct contact with materials. Furthermore, the use of a deep learning network removes the need to handcraft the set of features for different materials. We evaluated the performance of the system by training it to recognise 32 material types in both indoor and outdoor environments. Our approach produced recognition accuracies above 98% on 14,860 images of 15 indoor materials and above 89% on 26,584 images of 17 outdoor materials. We conclude by discussing its potential for real-time use in HCI applications and future directions.
Comment: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems
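The classification stage could look something like the following PyTorch sketch, which adapts a ResNet-18 to single-channel thermal patches and 32 material classes. The architecture, patch size, and optimiser settings are assumptions; the abstract does not specify the network used.

```python
# A minimal sketch of the classification stage, assuming single-channel
# thermal texture patches and 32 material classes. The paper's actual
# network architecture and training details are not reproduced here.
import torch
import torch.nn as nn
from torchvision import models

NUM_MATERIALS = 32  # indoor + outdoor material types from the study

model = models.resnet18(weights=None)
# Adapt the stem to 1-channel thermal input and the head to 32 classes.
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, NUM_MATERIALS)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(patches, labels):
    """One gradient step on a batch of (N, 1, H, W) thermal patches."""
    optimizer.zero_grad()
    loss = criterion(model(patches), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test on random data; the 64x64 patch size is an assumption.
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, NUM_MATERIALS, (8,))
print(train_step(x, y))
```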
ENHANCING USERS' EXPERIENCE WITH SMART MOBILE TECHNOLOGY
The aim of this thesis is to investigate mobile guides for use with smartphones. Mobile guides have been successfully used to provide information, personalisation and navigation for the user. The researcher also wanted to ascertain how and in what ways mobile guides can enhance users' experience.
This research involved designing and developing web-based applications to run on smartphones. Four studies were conducted, two of which involved testing of a particular application. The applications tested were a museum mobile guide application and a university mobile guide mapping application. Initial testing examined the prototype work for the 'Chronology of His Majesty Sultan Haji Hassanal Bolkiah' application. The results were used to assess the potential of using similar mobile guides in Brunei Darussalam's museums. The second study involved testing of the 'Kent LiveMap' application for use at the University of Kent. Students at the university tested this mapping application, which uses crowdsourcing of information to provide live data. The results were promising and indicate that users' experience was enhanced when using the application.
Overall results from testing and using the two applications that were developed as part of this thesis show that mobile guides have the potential to be implemented in Brunei Darussalam's museums and on campus at the University of Kent. However, modifications to both applications are required to fulfil their potential and take them beyond the prototype stage, in order to be fully functioning and commercially viable.
Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments
Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision and Bluetooth low energy (BLE), to estimate the position of a user in indoor areas. Computer-vision based systems use techniques including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems utilize BLE beacons installed in indoor areas as sources of radio-frequency signals to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision based systems and a BLE-based system. The first computer-vision based system, called CamNav, uses a trained deep learning model to recognize locations; the second, called QRNav, utilizes visual markers (QR codes) to determine locations. A field test with 10 blindfolded users was conducted while using the three navigation systems. Results: The results obtained from the navigation experiment and the feedback from the blindfolded users show that QRNav and CamNav are more efficient than the BLE-based system in terms of accuracy and usability. The error of the BLE-based application was more than 30% higher than that of the computer-vision based systems, CamNav and QRNav. Conclusions: The developed navigation systems are able to provide reliable assistance to the participants during real-time experiments. Some of the participants needed minimal external assistance while moving through the junctions in the corridor areas. Computer-vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments.
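To illustrate the QR-marker approach, here is a minimal sketch using OpenCV's QRCodeDetector. The marker payloads and the location table are hypothetical, standing in for a surveyed deployment; QRNav's actual pipeline is not reproduced here.

```python
# A rough sketch of QR-code based localization under stated assumptions:
# QR codes placed around the building encode location identifiers, and a
# hypothetical lookup table maps each identifier to a readable position.
import cv2

# Hypothetical marker-to-location map; a real deployment would be surveyed.
LOCATIONS = {
    "qr-001": "corridor A, near room 101",
    "qr-002": "junction B, turn left for the elevator",
}

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    payload, points, _ = detector.detectAndDecode(frame)
    if payload:
        # In the real system this would be spoken aloud to the user.
        print(LOCATIONS.get(payload, f"unknown marker: {payload}"))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```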
Non-contact measures to monitor hand movement of people with rheumatoid arthritis using a monocular RGB camera
Hand movements play an essential role in a person's ability to interact with the environment. In hand biomechanics, the range of joint motion is a crucial metric to quantify changes due to degenerative pathologies, such as rheumatoid arthritis (RA). RA is a chronic condition where the immune system mistakenly attacks the joints, particularly those in the hands. Optoelectronic motion capture systems are gold-standard tools to quantify changes but are challenging to adopt outside laboratory settings. Deep learning executed on standard video data can capture RA participants in their natural environments, potentially supporting objectivity in remote consultation.
The three main research aims in this thesis were 1) to assess the extent to which current deep learning architectures, which have been validated for quantifying motion of other body segments, can be applied to hand kinematics using monocular RGB cameras, 2) to localise where in videos the hand motions of interest are to be found, 3) to assess the validity of 1) and 2) to determine disease status in RA.
First, hand kinematics for twelve healthy participants, captured with OpenPose, were benchmarked against those captured using an optoelectronic system, showing acceptable instrument errors below 10°. Then, a gesture classifier was tested to segment video recordings of twenty-two healthy participants, achieving an accuracy of 93.5%. Finally, OpenPose and the classifier were applied to videos of RA participants performing hand exercises to determine disease status. The inferred disease activity agreed with the in-person ground truth in nine out of ten instances, outperforming virtual consultations, which agreed only six times out of ten.
These results suggest that this approach is more effective than the disease-activity estimates made by human experts during video consultations. This work sets the foundation for a tool that RA participants can use to monitor their disease activity from home.
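As a worked illustration of deriving a range-of-motion metric from 2D keypoints, the sketch below computes a planar joint angle from three hand landmarks, in the spirit of OpenPose's 21-point hand model. The keypoint indices and coordinates are hypothetical; the thesis pipeline is not reproduced here.

```python
# A minimal sketch: the angle at a joint is the angle between the two bone
# segments meeting there, computed from 2D keypoints. Projection to 2D is a
# simplifying assumption.
import numpy as np

def joint_angle(p_prev, p_joint, p_next):
    """Angle (degrees) at p_joint formed by segments to p_prev and p_next."""
    v1 = np.asarray(p_prev, dtype=float) - np.asarray(p_joint, dtype=float)
    v2 = np.asarray(p_next, dtype=float) - np.asarray(p_joint, dtype=float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Example: an index-finger PIP joint, using hypothetical pixel coordinates
# for the MCP, PIP, and DIP keypoints.
print(joint_angle((120, 300), (130, 250), (160, 220)))
```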
Indoor Navigation System for the Visually Impaired with User-centric Graph Representation and Vision Detection Assistance
Independent navigation through unfamiliar indoor spaces is beset with barriers for the visually impaired, undermining their independence, self-respect and self-reliance. In this thesis I introduce a new indoor navigation system for the blind and visually impaired that is affordable for both the user and the building owners.
The technical challenges of outdoor vehicle navigation have been solved using location information provided by Global Positioning Systems (GPS) and maps built on Geographical Information Systems (GIS). However, GPS and GIS information is not available for indoor environments, making indoor navigation a challenging technical problem. Moreover, an indoor navigation system needs to be developed with the blind user in mind, i.e., special care needs to be given to a vision-free user interface.
In this project, I design and implement an indoor navigation application for the blind and visually impaired that uses RFID technology and computer vision for localization, together with a navigation map generated automatically from environmental landmarks by simulating a user's behavior. The focus of the indoor navigation system is no longer only on the indoor environment itself, but on the way blind users can experience it. This project tries this new idea in solving indoor navigation problems for blind and visually impaired users.
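A minimal sketch of routing over a landmark graph is shown below. The landmark names and adjacency are hypothetical, and plain breadth-first search stands in for whatever search the system actually uses over its user-centric graph.

```python
# Illustrative sketch only: nodes stand in for RFID/vision landmarks and the
# adjacency is hypothetical. BFS finds a shortest path in an unweighted graph.
from collections import deque

GRAPH = {
    "entrance": ["hallway"],
    "hallway": ["entrance", "elevator", "room_101"],
    "elevator": ["hallway"],
    "room_101": ["hallway"],
}

def route(start, goal):
    """Return the landmark sequence from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("entrance", "room_101"))  # ['entrance', 'hallway', 'room_101']
```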
Proceedings of the 1st joint workshop on Smart Connected and Wearable Things 2016
These are the Proceedings of the 1st joint workshop on Smart Connected and Wearable Things (SCWT'2016, co-located with IUI 2016). The SCWT workshop integrates the SmartObjects and IoWT workshops. It focusses on advanced interactions with smart objects in the context of the Internet of Things (IoT), and on the increasing popularity of wearables as advanced means to facilitate such interactions.