Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential of service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of existing uses of service robots by disabled and elderly people and of advances in technology that will make new uses possible, and it provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.
Investigating context-aware clues to assist navigation for visually impaired people
It is estimated that 7.4 million people in Europe are visually impaired [1]. Limitations of traditional mobility aids (i.e. white canes and guide dogs), coupled with a proliferation of context-aware technologies (e.g. Electronic Travel Aids, Global Positioning Systems and Geographical Information Systems), have stimulated research and development into navigational systems for the visually impaired. However, current research appears very technology focused, which has led to an insufficient appreciation of Human Computer Interaction, in particular task/requirements analysis and notions of contextual interactions. The study reported here involved a small-scale investigation into how visually impaired people interact with their environmental context during micro-navigation (through the immediate environment) and/or macro-navigation (through the distant environment) on foot. The purpose was to demonstrate the heterogeneous nature of visually impaired people in interaction with their environmental context. Results from a previous study involving sighted participants were used for comparison. Results revealed that when describing a route, visually impaired people vary in their use of different types of navigation clues - both as a group, when compared with sighted participants, and as individuals. Usability implications and areas for further work are identified and discussed.
Sonification of guidance data during road crossing for people with visual impairments or blindness
In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their position relative to the user. Instead, this contribution addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages.
Experimental evaluation shows that no single guiding mode is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. The experiments also show that decoding the sonified instructions requires higher effort than decoding the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than two thirds of test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: firstly, with speech messages it is harder to hear the sound of the environment, and secondly, sonified messages convey information about the "quantity" of the expected movement.
Towards human technology symbiosis in the haptic mode
Search and rescue operations are often undertaken in dark and noisy environments in which rescue teams must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. Here we discuss the design of a haptic guide robot, inspired by careful study of the communication between a blind person and a guide dog. In the case of this partnership, the development of a symbiotic relationship between person and dog, based on mutual trust and confidence, is a prerequisite for successful task performance. We argue that a human-technology symbiosis is equally necessary and possible in the case of the robot guide. But this is dependent on the robot becoming 'transparent technology' in Andy Clark's sense. We report on initial haptic mode experiments in which a person uses a simple mobile mechanical device (a metal disk fixed with a rigid handle) to explore the immediate environment. These experiments demonstrate the extreme sensitivity and trainability of haptic communication and the speed with which users develop and refine their haptic proficiencies in using the device, permitting reliable and accurate discrimination between objects of different weights. We argue that such trials show the transformation of the mobile device into a transparent information appliance and the beginnings of the development of a symbiotic relationship between device and human user. We discuss how these initial explorations may shed light on the more general question of how a human mind, on being exposed to an unknown environment, may enter into collaboration with an external information source in order to learn about, and navigate, that environment.
ENHANCING USERS' EXPERIENCE WITH SMART MOBILE TECHNOLOGY
The aim of this thesis is to investigate mobile guides for use with smartphones. Mobile guides have been successfully used to provide information, personalisation and navigation for the user. The researcher also wanted to ascertain how and in what ways mobile guides can enhance users' experience.
This research involved designing and developing web based applications to run on smartphones. Four studies were conducted, two of which involved testing a particular application. The applications tested were a museum mobile guide application and a university mobile guide mapping application. Initial testing examined the prototype work for the 'Chronology of His Majesty Sultan Haji Hassanal Bolkiah' application. The results were used to assess the potential of using similar mobile guides in Brunei Darussalam's museums. The second study involved testing of the 'Kent LiveMap' application for use at the University of Kent. Students at the university tested this mapping application, which uses crowdsourcing of information to provide live data. The results were promising and indicate that users' experience was enhanced when using the application.
Overall results from testing and using the two applications that were developed as part of this thesis show that mobile guides have the potential to be implemented in Brunei Darussalam's museums and on campus at the University of Kent. However, modifications to both applications are required to fulfil their potential and take them beyond the prototype stage in order to be fully functioning and commercially viable.
Towards a multidisciplinary user-centric design framework for context-aware applications
The primary aim of this article is to review and merge theories of context within linguistics, computer science, and psychology, to propose a multidisciplinary model of context that would facilitate application developers in developing richer descriptions or scenarios of how a context-aware device may be used in various dynamic mobile settings. More specifically, the aims are to: (1) investigate different viewpoints of context within linguistics, computer science, and psychology, to develop condensed summary models for each discipline; (2) investigate the impact of contrasting viewpoints on the usability of context-aware applications; (3) investigate the extent to which single-discipline models can be merged and the benefits and insightfulness of a merged model for designing mobile computers; and (4) investigate the extent to which a proposed multidisciplinary model can be applied to specific applications of context-aware computing.
A multimodal smartphone interface for active perception by visually impaired
The widespread availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, embedded in a hand-held device. In particular, it presents a proof of concept for a multimodal interface to control the orientation of a smartphone's camera, while the device is held by a person, using a combination of vocal messages, 3D sounds and vibrations. The solution, which is to be evaluated experimentally by users, will enable further research in the area of active vision with a human in the loop, with potential application to mobile assistive devices for indoor navigation of visually impaired people.
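One way a multimodal interface like the one described might arbitrate between vibration and speech is by the size of the orientation error. The sketch below is purely illustrative - the function name, thresholds, and fusion rule are assumptions, not details from the paper: small errors stay silent, moderate errors produce vibration whose amplitude grows with the error, and large errors fall back to a spoken instruction.

```python
def orientation_feedback(pitch_err, yaw_err,
                         vib_threshold=5.0, speech_threshold=30.0):
    """Hypothetical modality-selection rule for camera-orientation guidance.

    Returns (modality, intensity), where modality is "none", "vibrate",
    or "speak", and intensity is a normalized value in [0, 1].
    All thresholds (in degrees) are illustrative.
    """
    err = max(abs(pitch_err), abs(yaw_err))
    if err < vib_threshold:
        return ("none", 0.0)          # close enough: no feedback
    if err < speech_threshold:
        # Vibration amplitude grows linearly with the error.
        amplitude = (err - vib_threshold) / (speech_threshold - vib_threshold)
        return ("vibrate", round(amplitude, 2))
    return ("speak", 1.0)             # large error: explicit vocal message
```

The design choice this illustrates is graceful escalation: continuous low-attention channels (vibration) handle fine corrections, while the more intrusive speech channel is reserved for large deviations.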
Toward a multidisciplinary model of context to support context-aware computing
Capturing, defining, and modeling the essence of context are challenging, compelling, and prominent issues for interdisciplinary research and discussion. The roots of their emergence lie in the inconsistencies and ambivalent definitions across and within different research specializations (e.g., philosophy, psychology, pragmatics, linguistics, computer science, and artificial intelligence). Within the area of computer science, the advent of mobile context-aware computing has stimulated broad and contrasting interpretations due to the shift from traditional static desktop computing to heterogeneous mobile environments. This transition poses many challenging, complex, and largely unanswered research issues relating to contextual interactions and usability. To address these issues, many researchers strongly encourage a multidisciplinary approach. The primary aim of this article is to review and unify theories of context within linguistics, computer science, and psychology. Summary models within each discipline are used to propose an outline and detailed multidisciplinary model of context involving (a) the differentiation of focal and contextual aspects of the user and application's world, (b) the separation of meaningful and incidental dimensions, and (c) important user and application processes. The models provide an important foundation in which complex mobile scenarios can be conceptualized and key human and social issues can be identified. The models were then applied to different applications of context-aware computing involving user communities and mobile tourist guides. The authors' future work involves developing a user-centered multidisciplinary design framework (based on their proposed models). This will be used to design a large-scale user study investigating the usability issues of a context-aware mobile computing navigation aid for visually impaired people.
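The two distinctions the abstract names - focal versus contextual aspects, and meaningful versus incidental dimensions - can be made concrete with a small data-structure sketch. The class and element names below are illustrative, not taken from the article; the point is that a context-aware application should sense the slice of the world that is meaningful to the task yet outside the user's focus.

```python
from dataclasses import dataclass, field

@dataclass
class ContextElement:
    name: str
    focal: bool       # part of the user's current focus, or background?
    meaningful: bool  # relevant to the task, or merely incidental?

@dataclass
class ContextModel:
    elements: list = field(default_factory=list)

    def meaningful_context(self):
        """Background elements that still matter to the task - the slice
        a context-aware application should sense and adapt to."""
        return [e.name for e in self.elements
                if e.meaningful and not e.focal]
```

For a pedestrian navigation scenario, the destination would be focal and meaningful, ambient traffic noise contextual but meaningful (it signals a road to cross), and a shop window display contextual and incidental.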