
    A Dual Modal Presentation of Network Relationships in Texts

    Based on Baddeley’s working memory model, this research proposed a method to convert textual information containing network relationships into a “graphics + voice” representation and hypothesized that this dual-modal presentation would result in superior comprehension performance and higher satisfaction than a purely textual display. A simple t-test experiment was used to test the hypothesis. The independent variable was the presentation mode: textual display vs. visual-auditory presentation; the dependent variables were user performance and satisfaction. Thirty subjects participated in the experiment. The results indicate that both user performance and satisfaction improved significantly with the “graphics + voice” presentation.
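    As a hedged illustration of the analysis the abstract describes, the Python sketch below runs an independent-samples t-test comparing two presentation modes. The scores are randomly generated placeholders, not the study’s data, and the 15/15 group split is an assumption made for the example.

        # Hypothetical sketch of the between-subjects t-test described above.
        # The scores are random placeholders, NOT the study's data.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        # Comprehension scores for two assumed groups of 15 subjects each
        # (30 total), one per presentation mode.
        text_only = rng.normal(loc=70, scale=10, size=15)   # textual display
        dual_modal = rng.normal(loc=80, scale=10, size=15)  # "graphics + voice"

        # Welch's t-test, which does not assume equal group variances.
        t_stat, p_value = stats.ttest_ind(dual_modal, text_only, equal_var=False)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")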

    Designing multimodal interaction for the visually impaired

    Although multimodal computer input is believed to have advantages over unimodal input, little has been done to understand how to design a multimodal input mechanism that facilitates visually impaired users’ information access. This research investigates sighted and visually impaired users’ multimodal interaction choices when given an interaction grammar that supports speech and touch input modalities. It examines whether task type, working memory load, or the prevalence of errors in a given modality affects a user’s choice. Theories of human memory and attention are used to explain users’ coordination of speech and touch input. Among the abundant findings from this research, the following are the most important for guiding system design: (1) Multimodal input is likely to be used when it is available. (2) Users select input modalities based on the type of task undertaken: they prefer touch input for navigation operations but speech input for non-navigation operations. (3) When errors occur, users prefer to stay in the failing modality instead of switching to another modality for error correction. (4) Despite the common multimodal usage patterns, there is still a high degree of individual difference in modality choices. Additional findings include: (1) Modality switching becomes more prevalent when lower working memory and attentional resources are required for the performance of other concurrent tasks. (2) Higher error rates increase modality switching, but only under duress. (3) Training order affects modality usage: teaching a modality first rather than second increases its use in users’ task performance. Beyond these multimodal interaction patterns, this research contributes to the field of human-computer interaction design by (1) presenting a design for an eyes-free multimodal information browser and (2) presenting a Wizard of Oz method for working with visually impaired users in order to observe their multimodal interaction. Overall, this work is one of the early investigations into how speech and touch might be combined into a non-visual multimodal system that can be used effectively for eyes-free tasks.
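    Finding (2) above suggests a default-routing rule a designer could encode. The sketch below is a hypothetical illustration only, not the browser built in this research; the operation names and handler are invented for the example.

        # Hypothetical sketch of finding (2): default to touch input for
        # navigation operations and speech input for everything else.
        # All names here are illustrative, not from the study.
        from enum import Enum, auto

        class Modality(Enum):
            TOUCH = auto()
            SPEECH = auto()

        # Operations an eyes-free browser might expose (assumed set).
        NAVIGATION_OPS = {"next_item", "previous_item", "scroll", "go_back"}

        def preferred_modality(operation: str) -> Modality:
            """Return a sensible default input modality for an operation."""
            return Modality.TOUCH if operation in NAVIGATION_OPS else Modality.SPEECH

        for op in ("next_item", "search", "go_back", "read_aloud"):
            print(op, "->", preferred_modality(op).name)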
