
    Can multimodal interaction support older adults in using mobile devices? The ECOMODE study.

    Several studies have investigated the potential of multimodal interfaces for improving accessibility for older people. This paper presents a study that evaluated the user experience of sixty people who worked with a tablet PC running the ECOMODE technology. This technology is based on an event-driven compressive vision algorithm that enables a new generation of low-power cameras able to process vocal and video inputs in real time. Users interact with the applications on the tablet PC through mid-air hand gestures and vocal commands. Although the ECOMODE technology suffers from some technical limitations, older people appreciated the proposed multimodal interaction mode. The results indicate that the ECOMODE technology was considered particularly promising for daily tasks involving communication, such as placing calls, sending and listening to audio messages, and taking and sharing pictures. It also seems effective for navigating archives such as picture, audio, or music collections.
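
    As a rough illustration of the event-driven interaction style described above, the sketch below shows how gesture and voice events might be merged into application commands. All names, the Event/MultimodalDispatcher classes and the 0.5 s fusion window are illustrative assumptions for this sketch, not the ECOMODE implementation.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Event:
    modality: str                     # "gesture" or "voice"
    label: str                        # recognised token, e.g. "swipe_left"
    timestamp: float = field(default_factory=time.monotonic)

class MultimodalDispatcher:
    """Merges events from independent gesture and voice recognisers."""
    FUSION_WINDOW = 0.5               # seconds; assumed, not from the paper

    def __init__(self, handlers):
        self.handlers = handlers      # command label -> callable
        self.pending = []

    def on_event(self, event):
        # Keep only events that are still inside the fusion window.
        self.pending = [e for e in self.pending
                        if event.timestamp - e.timestamp <= self.FUSION_WINDOW]
        self.pending.append(event)
        self._dispatch()

    def _dispatch(self):
        # Fire the first pending event whose label maps to a command,
        # regardless of which modality produced it.
        for e in self.pending:
            handler = self.handlers.get(e.label)
            if handler is not None:
                handler()
                self.pending.clear()
                return

# Example: a mid-air swipe or the spoken word "next" both advance the photo.
dispatcher = MultimodalDispatcher({
    "swipe_left": lambda: print("next photo"),
    "next": lambda: print("next photo"),
})
dispatcher.on_event(Event("gesture", "swipe_left"))
dispatcher.on_event(Event("voice", "next"))
```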

    Trade-offs in the design of multimodal interaction for older adults

    This paper presents key aspects that designers and Human–Computer Interaction practitioners might encounter when designing multimodal interaction for older adults, focusing on the trade-offs that might arise during the design process. The paper gathers literature on multimodal interaction and assistive technology and describes a set of design challenges specific to older users. Building on these challenges, four trade-offs in the design of multimodal technology for this target group are presented and discussed. To highlight the relevance of the trade-offs in the design process, two of the four are illustrated with user studies investigating mid-air and speech-based interaction with a tablet device. The first study explores the design trade-offs related to redundant multimodal commands in older, middle-aged and younger adults, whereas the second investigates the design choices related to the definition of a set of mid-air one-hand gestures and voice input commands for older adults. Further reflections highlight the design trade-offs that such considerations bring to the process, providing an overview of the design choices involved and their potential consequences.
    Schiavo, Gianluca; Mich, Ornella; Ferron, Michela; Mana, Nadia
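
    To make the redundancy trade-off concrete, here is a minimal sketch of how a designer might distinguish redundant commands, where either modality alone suffices, from complementary ones, where modalities must be combined, for instance to guard destructive actions. The command names and policies are hypothetical examples, not taken from the studies.

```python
REDUNDANT = "redundant"          # any single modality triggers the action
COMPLEMENTARY = "complementary"  # all listed modalities must agree

COMMANDS = {
    # action        policy          accepted input per modality
    "next_photo": (REDUNDANT,     {"gesture": "swipe_left", "voice": "next"}),
    "delete":     (COMPLEMENTARY, {"gesture": "swipe_down", "voice": "delete"}),
}

def resolve(action, received):
    """received: dict of modality -> label observed in the fusion window."""
    policy, expected = COMMANDS[action]
    matches = [m for m, label in expected.items() if received.get(m) == label]
    if policy == REDUNDANT:
        return len(matches) >= 1           # one modality is enough
    return len(matches) == len(expected)   # every modality must confirm

print(resolve("next_photo", {"voice": "next"}))                         # True
print(resolve("delete", {"voice": "delete"}))                           # False
print(resolve("delete", {"gesture": "swipe_down", "voice": "delete"}))  # True
```

    The design choice shown here is exactly the trade-off the paper discusses: redundancy lowers effort and accommodates ability differences, while complementary input adds a safety margin at the cost of extra user actions.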

    Wizard of Oz Studies with Older Adults: A Methodological Note

    Wizard of Oz (WoZ) is a prototyping technique in which users interact with what they believe is a fully functioning technology while, in reality, the system is operated by a researcher, usually concealed from the participants. The WoZ technique allows the exploration of user requirements and design concepts at an early stage in the design process, and it can provide information about the interaction of different groups of users, including older adults. In this paper, we provide a brief overview of the WoZ method in HCI and, based on the related literature and our own experience, present the methodological value and the potential drawbacks of the WoZ approach in User-Centered Design when involving older people. We discuss organizational and ethical aspects of conducting WoZ studies with older participants and highlight the positive impact and possible pitfalls of this approach in sharing a vision of future technology and communicating design ideas.
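
    For illustration, a minimal sketch of the technical side of a WoZ setup: the participant-facing "system" simply relays whatever the concealed wizard types. The socket transport, port, and message format are assumptions made for this sketch, not a description of any particular study apparatus.

```python
import socket

def participant_view(host="127.0.0.1", port=5005):
    """Runs on the participant's device; looks autonomous but is wizard-driven."""
    with socket.create_server((host, port)) as server:
        conn, _ = server.accept()
        with conn:
            for line in conn.makefile():
                # Present the wizard's reply as if the system produced it.
                print(f"[system] {line.strip()}")

def wizard_console(host="127.0.0.1", port=5005):
    """Runs on the researcher's machine, concealed from the participant."""
    with socket.create_connection((host, port)) as conn:
        while True:
            reply = input("wizard> ")
            conn.sendall((reply + "\n").encode())
```

    Run participant_view in one process and wizard_console in another; from the participant's side, the wizard's latency and phrasing become part of the simulated system's behaviour, which is one of the methodological points the paper raises.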

    Mobile multimodal interaction for older and younger users

    Since they can integrate a wide range of interactive modalities, multimodal interfaces are considered to improve accessibility for a variety of users, including older adults. However, only a few works have actually explored how older adults approach multimodal interaction outside specific contexts, and these have done so mainly in comparison with much younger users. This study explores how older (65+ years old), middle-aged (55–65 years old) and younger adults (25–35 years old) use mobile multimodal interaction in an everyday activity (taking photos with a tablet) through mid-air gestures and voice commands, and investigates the differences and similarities between the age groups considered. Preliminary findings from a video analysis show that all groups easily combine the proposed modalities when interacting with a tablet device. Furthermore, compared to younger adults, older and middle-aged adults show similarities in the way they perform gesture and voice commands.

    Framing the design space of multimodal mid-air gesture and speech-based interaction with mobile devices for older people

    Multimodal human–computer interaction has been pursued to provide not only more compelling interactive experiences but also more accessible interfaces to mobile devices. With advances in mobile technology and affordable sensors, multimodal research that leverages and combines multiple interaction modalities (such as speech, touch, vision, and gesture) has become increasingly prominent. This article provides a framework for the key aspects of mid-air gesture and speech-based interaction for older adults. It explores the literature on multimodal interaction and on older adults as technology users, and summarises the main findings for this user group. Building on these findings, a number of crucial factors to consider when designing multimodal mobile technology for older adults are described. The aim of this work is to promote the usefulness and potential of multimodal technologies based on mid-air gestures and voice input for making older adults’ interaction with mobile devices more accessible and inclusive.

    Gary: combining speech synthesis and eye tracking to support struggling readers

    Children with reading difficulties face several obstacles in learning to read written material fluently. Multimedia applications integrating text-to-speech (TTS) synthesisers are valuable tools for supporting reading activities. The paper presents GARY, an application that combines TTS synthesis with eye tracking. GARY is meant to be used on a tablet device coupled with an eye tracker. Using information from the reader's eye movements, the system adapts the speech rate of the synthesised voice to the reader's pace of reading. The paper describes the system, how it works, and future steps in designing a tool that supports readers in making the connection between the sounds heard and the letters read.
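
    As a sketch of the general idea, assuming a hypothetical eye-tracker feed and TTS rate control (the abstract does not specify GARY's actual algorithm): compare the word currently being spoken with the word the reader is fixating, and nudge the speech rate so the voice tracks the reader. The gain and rate bounds below are assumed values.

```python
def adapt_rate(spoken_index, gaze_index, rate, gain=0.05,
               min_rate=0.5, max_rate=2.0):
    """Return a new TTS rate multiplier given word positions in the text."""
    lag = gaze_index - spoken_index   # >0: reader is ahead of the voice
    rate += gain * lag                # speed up if ahead, slow down if behind
    return max(min_rate, min(max_rate, rate))

rate = 1.0
rate = adapt_rate(spoken_index=12, gaze_index=16, rate=rate)  # reader ahead
print(round(rate, 2))  # 1.2 -> voice speeds up to catch the reader
```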

    Designing Mobile Multimodal Interaction for Visually Impaired and Older Adults: Challenges and Possible Solutions

    This paper presents two early studies aimed at investigating issues concerning the design of multimodal interaction, based on voice commands and one-hand mid-air gestures, with mobile technology specifically designed for visually impaired and elderly users. These studies were carried out on a new device allowing enhanced speech recognition (including lip movement analysis) and mid-air gesture interaction on the Android operating system (smartphone and tablet PC). We discuss the initial findings and challenges raised by these novel interaction modalities, in particular the issues regarding the design of feedback and feedforward, the problem of false positives, and the correct orientation and distance of the hand and the device during the interaction. Finally, we present a set of feedback and feedforward solutions designed to overcome the main issues highlighted.
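
    One common mitigation for the false positives mentioned above is to gate gesture recognition on both classifier confidence and a short dwell, so that transient hand movements do not fire commands. The sketch below illustrates this general technique; the threshold values and the GestureGate class are assumptions for illustration, not parameters from the studies.

```python
import time

class GestureGate:
    def __init__(self, min_confidence=0.8, dwell_s=0.3):
        self.min_confidence = min_confidence
        self.dwell_s = dwell_s
        self._candidate = None     # (label, first_seen_timestamp)

    def update(self, label, confidence, now=None):
        """Feed per-frame recogniser output; returns a label only when the
        same gesture has been seen confidently for the full dwell period."""
        now = time.monotonic() if now is None else now
        if confidence < self.min_confidence:
            self._candidate = None
            return None
        if self._candidate is None or self._candidate[0] != label:
            self._candidate = (label, now)
            return None
        if now - self._candidate[1] >= self.dwell_s:
            self._candidate = None
            return label           # accepted: stable, confident gesture
        return None

gate = GestureGate()
print(gate.update("swipe_left", 0.9, now=0.0))   # None (dwell just started)
print(gate.update("swipe_left", 0.9, now=0.35))  # "swipe_left" (dwell met)
```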

    Attention-driven read-aloud technology increases reading comprehension in children with reading disabilities

    The paper presents the design of an assistive reading tool that integrates read-aloud technology with eye tracking to regulate the speed of reading and support struggling readers in following the text while listening to it. The paper describes the design rationale of this approach, grounded in the theory of auditory–visual integration, as an automatic, self-adaptive technique based on the reader's gaze that provides an individualised interaction experience. This tool was assessed in a controlled experiment with 20 children (aged 8–10 years) with a diagnosis of dyslexia and a control group of 20 children with typical reading abilities. The results show that children with reading difficulties improved their comprehension scores by 24%, as measured on a standardized instrument for the assessment of reading comprehension, and that children with more inaccurate reading (N = 9) tended to benefit more. The findings are discussed in terms of a better integration of auditory and visual text information, paving the way for improving standard read-aloud technology with gaze-contingent, self-adaptive techniques that personalise the reading experience.