    Services surround you: physical-virtual linkage with contextual bookmarks

    Our daily life is pervaded by digital information and devices, not least the common mobile phone. However, a seamless connection between our physical world (such as a movie trailer on a screen in the main rail station) and its digital counterparts (such as an online ticket service) remains difficult. In this paper, we present contextual bookmarks that enable users to capture information of interest with a mobile camera phone. Depending on the user's context, the snapshot is mapped to a digital service, such as ordering tickets for a nearby movie theater or a link to the upcoming movie's Web page.
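    To make the mapping step concrete, here is a minimal sketch of how a captured snapshot and the user's context might resolve to a digital service; the recognition stub, lookup table and URLs are hypothetical illustrations, not the paper's implementation.

```python
# Sketch of the contextual-bookmark idea: a snapshot is recognised as a known
# content item, and the user's context selects which digital service the
# bookmark resolves to. All names and tables below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Context:
    location: str      # e.g. "main_rail_station"
    time_of_day: str   # e.g. "evening"

# Hypothetical mapping from recognised content to context-dependent services.
SERVICES = {
    "movie_trailer:example_film": {
        "near_cinema": "https://example.org/tickets/example_film",
        "default": "https://example.org/movies/example_film",
    },
}

def recognise_snapshot(image_bytes: bytes) -> str:
    """Stand-in for the visual recognition step (not implemented here)."""
    return "movie_trailer:example_film"

def resolve_bookmark(image_bytes: bytes, ctx: Context) -> str:
    """Map a camera snapshot plus context to the most relevant service link."""
    content_id = recognise_snapshot(image_bytes)
    services = SERVICES.get(content_id, {})
    key = "near_cinema" if ctx.location == "main_rail_station" else "default"
    return services.get(key, services.get("default", ""))

if __name__ == "__main__":
    print(resolve_bookmark(b"<jpeg bytes>", Context("main_rail_station", "evening")))
```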

    Innovating control and emotional expressive modalities of user interfaces for people with locked-in syndrome

    Patients with locked-in syndrome (LIS) have lost the ability to control any body part besides their eyes. Current solutions mainly use eye-tracking cameras to track patients' gaze as system input. However, although interface design greatly impacts user experience, only a few guidelines have been proposed so far to ensure an easy, quick, fluid and non-tiresome computer system for these patients. At the same time, the emergence of dedicated computer software has greatly increased patients' capabilities, but there is still a great need for improvement, as existing systems present low usability and limited capabilities. Most interfaces designed for LIS patients aim at providing internet browsing or communication abilities. State-of-the-art augmentative and alternative communication systems mainly focus on sentence communication without considering the need for emotional expression, which is inextricable from human communication. This thesis explores new system control and expressive modalities for people with LIS. Firstly, existing gaze-based web-browsing interfaces were investigated. Page analysis and high mental workload appeared as recurring issues with common systems. To address this, a novel user interface was designed and evaluated against a commercial system. The results suggested that it is easier to learn and to use, quicker, more satisfying, less frustrating, less tiring and less prone to error, and that it greatly reduces mental workload. Other types of system control for LIS patients were then investigated. It was found that galvanic skin response may be used as system input and that stress-related biofeedback helped lower mental workload during stressful tasks. Improving communication, and in particular emotional communication, was one of the main goals of this research. A system including gaze-controlled emotional voice synthesis and a personal emotional avatar was developed for this purpose. Assessment of the proposed system highlighted an enhanced capability to hold dialogues closer to normal ones and to express and identify emotions. Enabling emotion communication in parallel to sentences was found to help the conversation. Automatic emotion detection appeared to be the next step toward improving emotional communication. Several studies have established that physiological signals relate to emotions; the fact that physiological sensors are non-invasive and can be used with LIS patients made them an ideal candidate for this study. One of the main difficulties of emotion detection is the collection of high-intensity affect-related data. Studies in this field are currently mostly limited to laboratory investigations, using laboratory-induced emotions, and are rarely adapted for real-life applications. A virtual reality emotion elicitation technique based on appraisal theories was proposed here in order to study physiological signals of high-intensity emotions in a real-life-like environment. While this solution successfully elicited positive and negative emotions, it did not elicit the desired emotions for all subjects and was therefore not appropriate for the goals of this research. Collecting emotions in the wild appeared to be the best methodology toward emotion detection for real-life applications. The state of the art in the field was therefore reviewed and assessed using a specifically designed method for evaluating datasets collected for emotion recognition in real-life applications.
The proposed evaluation method provides guidelines for future researchers in the field. Based on the research findings, a mobile application was developed for physiological and emotional data collection in the wild. Grounded in appraisal theory, this application guides users toward valuable emotion labelling and helps them differentiate moods from emotions. A sample dataset collected using this application was compared to one collected in a paper-based preliminary study; the dataset collected using the mobile application was found to be more valuable, with data consistent with the literature. This mobile application was used to create an open-source database of affect-related physiological signals. While the path toward emotion detection usable in real-life applications is still long, we hope that the tools provided to the research community will represent a step toward achieving this goal. Automatic emotion detection could be used not only by LIS patients to communicate but also by total-LIS patients who have lost the ability to move their eyes: giving family and caregivers the ability to visualise, and therefore understand, the patients' emotional state could greatly improve their quality of life. This research provided LIS patients and the scientific community with tools to improve augmentative and alternative communication technologies, with better interfaces, emotion expression capabilities and real-life emotion detection. Emotion recognition methods for real-life applications could enhance not only health care but also robotics, domotics and many other fields. A complete, fully gaze-controlled system incorporating all the solutions developed for LIS patients was made available open source. This is expected to enhance their daily lives by improving their communication and by facilitating the development of novel assistive system capabilities.
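    As one concrete illustration of the finding that galvanic skin response (GSR) can serve as system input, the following sketch treats a sustained rise of the GSR signal above a rolling baseline as a binary selection event; the window size and threshold are assumptions for illustration, not the parameters used in the thesis.

```python
# Minimal sketch of GSR as a binary switch input: a sample clearly above the
# rolling baseline is reported as an activation. Thresholds are illustrative.
from collections import deque

class GSRSwitch:
    def __init__(self, window: int = 50, rise_threshold: float = 0.15):
        self.baseline = deque(maxlen=window)   # recent samples (e.g. microsiemens)
        self.rise_threshold = rise_threshold   # relative rise taken as activation

    def update(self, sample: float) -> bool:
        """Feed one GSR sample; return True when an activation is detected."""
        if len(self.baseline) < self.baseline.maxlen:
            self.baseline.append(sample)       # still building the baseline
            return False
        mean = sum(self.baseline) / len(self.baseline)
        activated = sample > mean * (1.0 + self.rise_threshold)
        self.baseline.append(sample)
        return activated

if __name__ == "__main__":
    switch = GSRSwitch(window=5)
    signal = [2.0, 2.1, 2.0, 2.05, 2.1, 2.1, 2.6, 2.0]
    print([switch.update(s) for s in signal])  # activation on the 2.6 spike
```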

    Video interaction using pen-based technology

    Dissertation submitted for the degree of Doctor in Informatics. Video can be considered one of the most complete and complex media, and its manipulation is still a difficult and tedious task. This research applies pen-based technology to video manipulation, with the goal of improving this interaction. Despite people's familiarity with pen-based devices, how they can be used in video interaction to improve it, making it more natural and at the same time fostering the user's creativity, is an open question. Two types of interaction with video were considered in this work: video annotation and video editing. Each interaction type allows the study of one mode of using pen-based technology: indirectly, through digital ink, or directly, through pen gestures or pressure. This research contributes two approaches for pen-based video interaction: pen-based video annotations and video as ink. The first uses pen-based annotations combined with motion tracking algorithms in order to augment video content with sketches or handwritten notes. It aims to study how pen-based technology can be used to annotate a moving object and how to maintain the association between a pen-based annotation and the annotated moving object. The second concept replaces digital ink with video content, studying how pen gestures and pressure can be used in video editing and what kinds of changes are needed in the interface in order to provide a more familiar and creative interaction in this usage context. This work was partially funded by the UTAustin-Portugal Digital Media Program (Ph.D. grant SFRH/BD/42662/2007 - FCT/MCTES); by the HP Technology for Teaching Grant Initiative 2006; by the project "TKB - A Transmedia Knowledge Base for contemporary dance" (PTDC/EAT/AVP/098220/2008, funded by FCT/MCTES); and by CITI/DI/FCT/UNL (PEst-OE/EEI/UI0527/2011).
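    The sketch below illustrates the association between a pen annotation and a tracked moving object described above: the ink is stored relative to the object's position in the frame where it was drawn and re-anchored to the tracker's output in later frames. The tracker itself is stubbed out and all names are illustrative, not the dissertation's implementation.

```python
# Minimal sketch: keep a pen-drawn annotation attached to a tracked object by
# storing the stroke as offsets from the object's anchor position.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Annotation:
    stroke: List[Point]   # ink points, relative to the object's anchor position
    anchor: Point         # object position in the frame where the ink was drawn

def attach(stroke_abs: List[Point], object_pos: Point) -> Annotation:
    """Store ink relative to the annotated object's current position."""
    ox, oy = object_pos
    return Annotation([(x - ox, y - oy) for x, y in stroke_abs], object_pos)

def render(annotation: Annotation, tracked_pos: Point) -> List[Point]:
    """Place the stored ink at the object's tracked position in the current frame."""
    tx, ty = tracked_pos
    return [(x + tx, y + ty) for x, y in annotation.stroke]

if __name__ == "__main__":
    ann = attach([(105.0, 52.0), (110.0, 55.0)], object_pos=(100.0, 50.0))
    print(render(ann, tracked_pos=(130.0, 60.0)))  # the ink follows the object
```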

    Semi-aural Interfaces: Investigating Voice-controlled Aural Flows

    To support mobile, eyes-free web browsing, users can listen to 'playlists' of web content, called aural flows. Interacting with aural flows, however, requires users to select interface buttons, tethering visual attention to the mobile device even when it is unsafe (e.g. while walking). This research extends the interaction with aural flows through simulated voice commands as a way to reduce visual interaction. This paper presents the findings of a study with 20 participants who browsed aural flows either through a visual interface only or by augmenting it with voice commands. Results suggest that using voice commands reduced the time spent looking at the device by half but yielded system usability and cognitive effort ratings similar to those obtained with buttons. Overall, the low cognitive effort engendered by aural flows, regardless of the interaction modality, allowed participants to do more non-instructed activities (e.g. looking at the surrounding environment) than instructed activities (e.g. focusing on the user interface).
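    A minimal sketch of how voice commands could map onto the same playback actions that the visual buttons of an aural-flow player expose; the command set and responses are assumptions for illustration, not the vocabulary used in the study.

```python
# Sketch: dispatch recognised voice commands to aural-flow playback actions,
# with a simple repair prompt for unrecognised utterances. Commands are hypothetical.
COMMANDS = {
    "play": "resume the current aural flow",
    "pause": "pause playback",
    "next": "skip to the next item in the flow",
    "previous": "return to the previous item",
    "slower": "reduce the text-to-speech rate",
}

def handle_utterance(utterance: str) -> str:
    """Map a recognised utterance to a playback action; fall back to a repair prompt."""
    action = COMMANDS.get(utterance.strip().lower())
    if action is None:
        return "Sorry, I did not understand. Say 'play', 'pause', 'next', 'previous' or 'slower'."
    return action

if __name__ == "__main__":
    for said in ["Next", "pause", "louder"]:
        print(said, "->", handle_utterance(said))
```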

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism to facilitate visually impaired users' information access. This research investigates sighted and visually impaired users' multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It investigates whether task type, working memory load, or prevalence of errors in a given modality impact a user's choice. Theories in human memory and attention are used to explain the users' speech and touch input coordination. Among the abundant findings from this research, the following are the most important in guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken. Users prefer touch input for navigation operations, but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality, instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first versus second increases the use of that modality in users' task performance. In addition to discovering the multimodal interaction patterns above, this research contributes to the field of human-computer interaction design by: (1) presenting a design of an eyes-free multimodal information browser, and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. The overall contribution of this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can effectively be used for eyes-free tasks.
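    The sketch below illustrates the idea of an interaction grammar in which the same operation can be triggered by either speech or touch, with modality choices recorded for later analysis; the operations, gestures and spoken tokens are hypothetical, not the grammar used in this research.

```python
# Sketch of a two-modality interaction grammar: each operation lists the touch
# gestures and spoken tokens that trigger it, and usage is counted per modality.
from collections import Counter
from typing import Optional

GRAMMAR = {
    # operation: {modality: accepted inputs}
    "next_item": {"touch": {"swipe_left"}, "speech": {"next"}},
    "prev_item": {"touch": {"swipe_right"}, "speech": {"previous", "back"}},
    "read_item": {"touch": {"double_tap"}, "speech": {"read", "read this"}},
    "search":    {"touch": set(), "speech": {"search"}},  # speech-only operation
}

usage = Counter()

def interpret(modality: str, token: str) -> Optional[str]:
    """Map a touch gesture or spoken token to an operation, if the grammar allows it."""
    for operation, inputs in GRAMMAR.items():
        if token in inputs.get(modality, set()):
            usage[(operation, modality)] += 1
            return operation
    return None

if __name__ == "__main__":
    print(interpret("touch", "swipe_left"))   # next_item
    print(interpret("speech", "read this"))   # read_item
    print(usage)                              # modality-choice tallies for analysis
```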

    Gestural product interaction: development and evaluation of an emotional vocabulary

    This research explores emotional response to gesture in order to inform future product interaction design. After describing the emergence and likely role of full-body interfaces with devices and systems, the importance of emotional reaction to the necessary movements and gestures is outlined. A gestural vocabulary for the control of a web page is then presented, along with a semantic differential questionnaire for its evaluation. An experiment is described where users undertook a series of web navigation tasks using the gestural vocabulary, then recorded their reaction to the experience. A number of insights were drawn on the context, precision, distinction, repetition and scale of gestures when used to control or activate a product. These insights will be of help in interaction design, and provide a basis for further development of the gestural vocabulary.
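    As an illustration of how a semantic differential questionnaire can summarise emotional response to individual gestures, the sketch below averages ratings on bipolar scales per gesture; the scales, gesture names and data are illustrative assumptions and do not reproduce the study's instrument.

```python
# Sketch: aggregate semantic differential ratings (1-7 bipolar scales) per gesture.
from statistics import mean

SCALES = ["unpleasant-pleasant", "tiring-effortless", "awkward-natural"]

# Hypothetical ratings: gesture -> scale -> one rating per participant.
responses = {
    "swipe_to_scroll": {"unpleasant-pleasant": [6, 5, 6], "tiring-effortless": [4, 5, 4], "awkward-natural": [6, 6, 5]},
    "circle_to_zoom":  {"unpleasant-pleasant": [4, 3, 5], "tiring-effortless": [3, 3, 4], "awkward-natural": [4, 5, 3]},
}

def profile(gesture: str) -> dict:
    """Mean rating on each bipolar scale for one gesture."""
    return {scale: round(mean(responses[gesture][scale]), 2) for scale in SCALES}

if __name__ == "__main__":
    for g in responses:
        print(g, profile(g))
```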