    Testing Two Tools for Multimodal Navigation

    The latest smartphones with GPS, electronic compasses, directional audio, touch screens, and so forth hold the potential for location-based services that are easier to use and that let users focus on their activities and the environment around them. Rather than interpreting maps, users can search for information by pointing in a direction, and database queries can be created from GPS location and compass data. Users can also be guided to locations through point and sweep gestures, spatial sound, and simple graphics. This paper describes two studies testing two applications with multimodal user interfaces for navigation and information retrieval. The applications allow users to search for information and get navigation support using combinations of point and sweep gestures, non-speech audio, graphics, and text. Tests show that users appreciated both applications for their ease of use and for allowing them to interact directly with the surrounding environment.
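
    A minimal sketch (not the paper's implementation) of how a "point to search" query can be built from GPS position and compass heading: points of interest are matched if they lie within a bearing tolerance and a maximum range of the direction the user is pointing. All names and thresholds below are illustrative assumptions.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from (lat1, lon1) to (lat2, lon2), in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres."""
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def point_query(user_lat, user_lon, heading, pois, tolerance=15.0, max_range=500.0):
    """Return the POIs inside the sector the user is pointing at."""
    hits = []
    for name, lat, lon in pois:
        diff = abs((bearing_deg(user_lat, user_lon, lat, lon) - heading + 180) % 360 - 180)
        if diff <= tolerance and distance_m(user_lat, user_lon, lat, lon) <= max_range:
            hits.append(name)
    return hits

pois = [("cafe", 55.8609, -4.2515), ("museum", 55.8602, -4.2500)]
print(point_query(55.8600, -4.2510, 70.0, pois))  # pointing roughly east-north-east -> ['museum']
```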

    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition application, a set of gestures processed by a scripting application, and a navigation and selection application controlled by those gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
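
    A minimal sketch, under assumed names, of the kind of pipeline the abstract describes: a recogniser emits gesture events, a scripting layer maps them to navigation/selection commands, and non-speech audio feedback confirms or rejects each gesture. The gesture names and actions are illustrative, not the paper's set.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class GestureEvent:
    name: str          # e.g. "swipe_left", emitted by the vision recogniser
    confidence: float  # recogniser confidence in [0, 1]

def play_audio(cue: str) -> None:
    print(f"[audio] {cue}")  # placeholder for real non-speech audio feedback

class GestureScripting:
    """Maps recognised gestures to application commands and audio cues."""

    def __init__(self) -> None:
        self.bindings: Dict[str, Callable[[], None]] = {}

    def bind(self, gesture: str, action: Callable[[], None]) -> None:
        self.bindings[gesture] = action

    def handle(self, event: GestureEvent, threshold: float = 0.8) -> None:
        action = self.bindings.get(event.name)
        if action is None or event.confidence < threshold:
            play_audio("reject")                  # earcon: gesture not understood
            return
        action()                                  # drive the navigation/selection app
        play_audio(f"confirm_{event.name}")       # earcon: gesture accepted

# Illustrative bindings for a media-navigation task.
scripting = GestureScripting()
scripting.bind("swipe_left", lambda: print("previous item"))
scripting.bind("swipe_right", lambda: print("next item"))
scripting.bind("push", lambda: print("select item"))

scripting.handle(GestureEvent("swipe_right", 0.93))
scripting.handle(GestureEvent("wave", 0.95))      # unbound gesture -> reject earcon
```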

    Novel Multimodal Feedback Techniques for In-Car Mid-Air Gesture Interaction

    This paper presents an investigation into the effects of different feedback modalities on mid-air gesture interaction for infotainment systems in cars. Car crashes and near-crash events are most commonly caused by driver distraction. Mid-air interaction can reduce driver distraction by reducing the visual demand of infotainment systems. Despite a range of available modalities, feedback in mid-air gesture systems is generally provided through visual displays. We conducted a simulated driving study to investigate how different types of multimodal feedback can support in-air gestures. The effects of different feedback modalities on eye-gaze behaviour and on the driving and gesturing tasks are considered. We found that feedback modality influenced gesturing behaviour; drivers also corrected falsely executed gestures more often in non-visual conditions. Our findings show that non-visual feedback can reduce visual distraction significantly.
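
    A hedged sketch (names assumed, not from the paper) of routing gesture feedback to non-visual channels so the driver's eyes can stay on the road: each accepted or rejected gesture triggers a cue on whichever modalities the study condition enables.

```python
from enum import Enum, auto

class Modality(Enum):
    VISUAL = auto()
    AUDITORY = auto()
    TACTILE = auto()

def give_feedback(event: str, enabled: set) -> None:
    """event is e.g. 'gesture_accepted' or 'gesture_rejected'."""
    if Modality.VISUAL in enabled:
        print(f"[display] icon for {event}")        # on-screen cue (visually demanding)
    if Modality.AUDITORY in enabled:
        print(f"[speaker] earcon for {event}")      # non-speech audio cue
    if Modality.TACTILE in enabled:
        print(f"[wheel]   vibration for {event}")   # haptic cue, e.g. on the steering wheel

# Example: a non-visual condition combining auditory and tactile feedback.
give_feedback("gesture_rejected", {Modality.AUDITORY, Modality.TACTILE})
```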

    Prototype gesture recognition interface for vehicular head-up display system


    Generic dialogue modeling for multi-application dialogue systems

    We present a novel approach to developing interfaces for multi-application dialogue systems. The targeted interfaces allow transparent switching between a large number of applications within one system. The approach, based on the Rapid Dialogue Prototyping Methodology (RDPM) and Vector Space Model techniques from Information Retrieval, is composed of three main steps: (1) producing finalized dialogue models for the applications using the RDPM, (2) designing an application interaction hierarchy, and (3) navigating between the applications based on the user's application of interest.
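
    A minimal sketch, with invented application names, of the Vector Space Model step: the user's utterance and each application's description are turned into term-frequency vectors, and the application with the highest cosine similarity is taken as the application of interest. Real systems would add weighting (e.g. TF-IDF) and normalisation; this only illustrates the idea.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_application(utterance: str, applications: dict) -> str:
    """Pick the application whose description best matches the utterance."""
    query = Counter(utterance.lower().split())
    scores = {name: cosine(query, Counter(desc.lower().split()))
              for name, desc in applications.items()}
    return max(scores, key=scores.get)

applications = {
    "weather": "weather forecast temperature rain city today tomorrow",
    "flights": "flight booking departure arrival airport ticket",
    "restaurants": "restaurant table booking cuisine menu reservation",
}
print(select_application("book me a table at an italian restaurant", applications))  # -> restaurants
```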

    Evaluating Multimodal Driver Displays of Varying Urgency

    Previous studies have evaluated Audio, Visual and Tactile warnings for drivers, highlighting the importance of conveying the appropriate level of urgency through the signals. However, these modalities have never been combined exhaustively with different urgency levels and tested in a driving simulator. This paper describes two experiments investigating all multimodal combinations of such warnings across three levels of designed urgency. The warnings were first evaluated in terms of perceived urgency and perceived annoyance in the context of a driving simulator. The results showed that perceived urgency matched the designed urgency of the warnings. More urgent warnings were also rated as more annoying, but the effect on annoyance was smaller than the effect on urgency. The warnings were then tested for recognition time when presented during a simulated driving task. Warnings of high urgency induced quicker and more accurate responses than warnings of medium and low urgency. In both studies, the number of modalities used in a warning (one, two or three) affected both subjective and objective responses: more modalities led to higher ratings of urgency and annoyance, with the effect on annoyance again smaller than on urgency, and also led to quicker responses. These results provide implications for multimodal warning design and show how modalities and modality combinations can influence participant responses during a simulated driving task.
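
    A small sketch of the design space the abstract describes: every non-empty combination of audio, visual and tactile channels crossed with three designed urgency levels. The labels are assumptions for illustration, not the authors' stimuli.

```python
from itertools import combinations

MODALITIES = ("audio", "visual", "tactile")
URGENCY = ("low", "medium", "high")

conditions = [
    (combo, urgency)
    for r in range(1, len(MODALITIES) + 1)
    for combo in combinations(MODALITIES, r)
    for urgency in URGENCY
]
print(len(conditions))   # 7 modality combinations x 3 urgency levels = 21 warnings
for combo, urgency in conditions[:3]:
    print("+".join(combo), urgency)
```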

    DOLPHIN: the design and initial evaluation of multimodal focus and context

    In this paper we describe a new focus-and-context visualisation technique called multimodal focus and context. This technique uses a hybrid visual and spatialised audio display space to overcome the limited visual displays of mobile devices. We demonstrate the technique by applying it to maps of theme parks. We present the results of an experiment comparing multimodal focus and context to a purely visual display technique. The results showed that neither system was significantly better than the other. We believe that this is due to issues involving the perception of multiple structured audio sources.
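
    A hedged sketch of the multimodal focus-and-context idea: map items inside a visual focus radius are drawn on the small screen, while items outside it are handed to an audio renderer as spatialised sources panned by their bearing from the focus point and attenuated by distance. Names and parameters are illustrative, not DOLPHIN's actual code.

```python
import math

def split_focus_context(items, focus_x, focus_y, focus_radius=50.0):
    """items: list of (label, x, y) in map coordinates."""
    visual, audio = [], []
    for label, x, y in items:
        dx, dy = x - focus_x, y - focus_y
        dist = math.hypot(dx, dy)
        if dist <= focus_radius:
            visual.append((label, x, y))              # draw inside the visual focus
        else:
            pan = math.degrees(math.atan2(dx, dy))    # bearing -> stereo/3D pan angle
            gain = focus_radius / dist                # quieter the further away it is
            audio.append((label, pan, gain))          # hand to the spatialised audio renderer
    return visual, audio

theme_park = [("gate", 10, 5), ("rollercoaster", 200, 40), ("cafe", 30, -20)]
visual, audio = split_focus_context(theme_park, focus_x=0, focus_y=0)
print("visual:", visual)
print("audio :", audio)
```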