
    The role of avatars in e-government interfaces

    This paper investigates the use of avatars to communicate live messages in e-government interfaces. A comparative study is presented that evaluates the contribution of multimodal metaphors (including avatars) to the usability of e-government interfaces and to user trust. The communication metaphors evaluated included text, earcons, recorded speech and avatars. The experimental platform involved two interface versions and a sample of 30 users. The results demonstrated that the use of multimodal metaphors in an e-government interface can significantly enhance usability and increase users' trust in the interface. A set of design guidelines for the use of multimodal metaphors in e-government interfaces was also produced.

    Multimodal virtual reality versus printed medium in visualization for blind people

    In this paper, we describe a study comparing the strengths of a multimodal Virtual Reality (VR) interface against traditional tactile diagrams in conveying information to visually impaired and blind people. The multimodal VR interface consists of a force feedback device (SensAble PHANTOM), synthesized speech and non-speech audio. The potential advantages of VR technology are well known; however, its real usability in comparison with the conventional paper-based medium is seldom investigated. We address this issue in our evaluation. The experimental results show benefits of the multimodal approach: users obtained more accurate information about the graphs.

    Oral messages improve visual search

    Input multimodality combining speech and hand gestures has motivated numerous usability studies. In contrast, issues relating to the design and ergonomic evaluation of multimodal output messages combining speech with visual modalities have not yet been addressed extensively. The experimental study presented here addresses one of these issues. Its aim is to assess the actual efficiency and usability of oral system messages that include brief spatial information for helping users locate objects rapidly on crowded displays. Target presentation mode, scene spatial structure and task difficulty were chosen as independent variables. Two conditions were defined: visual target presentation (VP condition) and multimodal target presentation (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, one block per condition). Target presentation mode, scene structure and task difficulty were all found to be significant factors. Multimodal target presentation proved more efficient than visual target presentation. In addition, participants expressed very positive judgments of multimodal target presentations, which a majority preferred to visual presentations. Moreover, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty: (i) messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although 2D arrays currently prevail for displaying pictures; (ii) message usefulness increased with task difficulty. Most of these results are statistically significant.

    Piloting Multimodal Learning Analytics using Mobile Mixed Reality in Health Education

    © 2019 IEEE. Mobile mixed reality has been shown to increase achievement and lower cognitive load within spatial disciplines. However, traditional methods of assessment restrict examiners' ability to holistically assess spatial understanding. Multimodal learning analytics investigates how combinations of data types, such as spatial data and traditional assessment, can be used together to better understand both the learner and the learning environment. This paper explores the pedagogical possibilities of a smartphone-enabled mixed reality multimodal learning analytics case study for health education, focused on learning the anatomy of the heart. The context for this study is the first loop of a design-based research study exploring the acquisition and retention of knowledge by piloting the proposed system with practicing health experts. Outcomes from the pilot study showed engagement with and enthusiasm for the method among the experts, but also revealed problems in the pedagogical method that must be overcome before deployment with learners.

    GazeTouchPass: Multimodal Authentication Using Gaze and Touch on Mobile Devices

    We propose a multimodal scheme, GazeTouchPass, that combines gaze and touch for shoulder-surfing-resistant user authentication on mobile devices. GazeTouchPass allows passwords with multiple switches between input modalities during authentication, which requires attackers to observe the device screen and the user's eyes simultaneously to find the password. We evaluate the security and usability of GazeTouchPass in two user studies. Our findings show that GazeTouchPass is usable and significantly more secure than single-modal authentication against basic and even advanced shoulder-surfing attacks.
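    The scheme described above lends itself to a compact illustration. The following is a minimal Python sketch of how a GazeTouchPass-style password could be modeled as a sequence of tokens spanning two input modalities; the Token type, the modality labels "touch"/"gaze", and the gesture values are illustrative assumptions, not the authors' implementation.

        # Minimal sketch (illustrative, not the authors' code) of a
        # GazeTouchPass-style multimodal password. A password is a sequence
        # of tokens, each entered either by touch (a digit) or by gaze
        # (an eye gesture); its resistance to shoulder surfing grows with
        # the number of switches between the two modalities.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Token:
            modality: str  # "touch" or "gaze" (assumed labels)
            value: str     # e.g. "7" for touch, "left"/"right" for gaze

        def modality_switches(password: list[Token]) -> int:
            # Count transitions between modalities along the sequence.
            return sum(a.modality != b.modality
                       for a, b in zip(password, password[1:]))

        def verify(entered: list[Token], stored: list[Token]) -> bool:
            # Accept only an exact match of modality and value at each position.
            return entered == stored

        # Example: a 4-token password with two modality switches
        # (touch -> gaze, then gaze -> touch), so an attacker must watch
        # both the screen and the user's eyes at the same time.
        pwd = [Token("touch", "3"), Token("gaze", "left"),
               Token("gaze", "right"), Token("touch", "7")]
        assert modality_switches(pwd) == 2
        assert verify(list(pwd), pwd)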

    Digital interaction: where are we going?

    In the framework of the AVI 2018 Conference, the interuniversity center ECONA organized a thematic workshop on "Digital Interaction: where are we going?". Six contributions from ECONA members investigate different perspectives on this theme.