Multiple multimodal mobile devices: Lessons learned from engineering lifelog solutions
For lifelogging, or the recording of one's life history through digital means, to be successful, a range of separate multimodal mobile devices must be employed. These include smartphones such as the N95, the Microsoft SenseCam (a wearable passive photo-capture device), and wearable biometric devices. Each collects a facet of the bigger picture through, for example, personal digital photos, mobile messages, and document-access history, but unfortunately they operate independently, unaware of each other. This creates significant challenges for the practical application of these devices, for the use and integration of their data, and for their operation by a user. In this chapter we discuss the software engineering challenges, and their implications, that face anyone integrating data from multiple ubiquitous mobile devices, drawing on our experience over the past several years developing integrated personal lifelogs. The chapter serves as an engineering guide both to those considering work in the domain of lifelogging and, more generally, to those working with multiple multimodal devices and the integration of their data.
Mobile Learning Revolution: Implications for Language Pedagogy
Mobile technologies including cell phones and tablets are a pervasive feature of everyday life with potential impact on teaching and learning. "Mobile pedagogy" may seem like a contradiction in terms, since mobile learning often takes place physically beyond the teacher's reach, outside the walls of the classroom. While pedagogy implies careful planning, mobility exposes learners to the unexpected. A thoughtful pedagogical response to this reality involves new conceptualizations of what is to be learned and new activity designs. This approach recognizes that learners may act in more self-determined ways beyond the classroom walls, where online interactions and mobile encounters influence their target language communication needs and interests. The chapter sets out a range of opportunities for out-of-class mobile language learning that give learners an active role and promote communication. It then considers the implications of these developments for language content and curricula and the evolving roles and competences of teachers.
Web-based multimodal graphs for visually impaired people
This paper describes the development and evaluation of Web-based multimodal graphs designed for visually impaired and blind people. The information in the graphs is conveyed to visually impaired people through haptic and audio channels. The motivation of this work is to address problems faced by visually impaired people in accessing graphical information on the Internet, particularly the common types of graphs for data visualization. In our work, line graphs, bar charts and pie charts are made accessible through a force feedback device, the Logitech WingMan Force Feedback Mouse. Pre-recorded sound files are used to present graph contents to users. In order to test the usability of the developed Web graphs, an evaluation was conducted with bar charts as the experimental platform. The results showed that the participants could successfully use the haptic and audio features to extract information from the Web graphs.
Multimodal and ubiquitous computing systems: supporting independent-living older users
We document the rationale and design of a multimodal interface to a pervasive/ubiquitous computing system that supports independent living by older people in their own homes. The Millennium Home system involves fitting a resident's home with sensors; these sensors can be used to trigger sequences of interaction with the resident to warn them about dangerous events, or to check whether they need external help. We draw lessons from the design process and conclude the paper with implications for the design of multimodal interfaces to ubiquitous systems developed for the elderly and in healthcare, as well as for more general ubiquitous computing applications.
Multi-Moji: Combining Thermal, Vibrotactile and Visual Stimuli to Expand the Affective Range of Feedback
This paper explores the combination of multiple concurrent modalities for conveying emotional information in HCI: temperature, vibration and abstract visual displays. Each modality has been studied individually, but can only convey a limited range of emotions within two-dimensional valence-arousal space. This paper is the first to systematically combine multiple modalities to expand the available affective range. Three studies were conducted: Study 1 measured the emotionality of vibrotactile feedback by itself; Study 2 measured the perceived emotional content of three bimodal combinations: vibrotactile + thermal, vibrotactile + visual, and visual + thermal. Study 3 then combined all three modalities. Results show that combining modalities increases the available range of emotional states, particularly in the problematic top-right and bottom-left quadrants of the dimensional model. We also provide a novel lookup resource for designers to identify stimuli to convey a range of emotions.
Multimodal virtual reality versus printed medium in visualization for blind people
In this paper, we describe a study comparing the strengths of a multimodal Virtual Reality (VR) interface against traditional tactile diagrams in conveying information to visually impaired and blind people. The multimodal VR interface consists of a force feedback device (SensAble PHANTOM), synthesized speech and non-speech audio. The potential advantages of VR technology are well known; however, its real usability in comparison with the conventional paper-based medium is seldom investigated. We have addressed this issue in our evaluation. The experimental results show benefits from using the multimodal approach, in terms of users obtaining more accurate information about the graphs.
- …