
    Crossmodal spatial location: initial experiments

    This paper describes an alternative form of interaction for mobile devices using crossmodal output. The aim of our work is to investigate the equivalence of audio and tactile displays so that the same messages can be presented in either form. Initial experiments show that spatial location can be perceived as equivalent in both the auditory and tactile modalities. Results show that participants map presented 3D audio positions to tactile body positions most effectively on the waist when mobile, and that significantly more errors are made when using the ankle or wrist. This paper compares the results from a static and a mobile experiment on crossmodal spatial location and outlines the most effective ways to use this crossmodal output in a mobile context.
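    As a rough illustration of the kind of mapping the experiments examine (not the paper's own implementation), the sketch below assigns a 3D audio azimuth to the nearest of several vibrotactile actuators spaced around the waist; the actuator count and layout are assumptions.

        # Hypothetical sketch: map an audio source azimuth in degrees
        # (0 = straight ahead, increasing clockwise) to the nearest of
        # n_tactors actuators spaced evenly around the waist.
        def azimuth_to_waist_tactor(azimuth_deg: float, n_tactors: int = 8) -> int:
            spacing = 360.0 / n_tactors              # angular gap between actuators
            return round(azimuth_deg / spacing) % n_tactors

        # Example: a sound 95 degrees to the right maps to actuator 2 on an 8-actuator belt.
        print(azimuth_to_waist_tactor(95.0))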

    Tactons: structured tactile messages for non-visual information display

    Tactile displays are now becoming available in a form that can easily be used in a user interface. This paper describes a new form of tactile output. Tactons, or tactile icons, are structured, abstract messages that can be used to communicate information non-visually. A range of different parameters can be used for Tacton construction, including the frequency, amplitude and duration of a tactile pulse, plus other parameters such as rhythm and location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or in mobile and wearable devices. This paper describes Tactons, the parameters used to construct them and some possible ways to design them. Examples of where Tactons might prove useful in user interfaces are given.
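    To make the parameter set concrete, a minimal sketch of a Tacton as a data structure is given below; the field names, units and example values are illustrative assumptions rather than the paper's own encoding.

        from dataclasses import dataclass
        from typing import List

        # Minimal sketch of a Tacton as a parameterised pulse train.
        @dataclass
        class Tacton:
            frequency_hz: float      # vibration frequency of each pulse
            amplitude: float         # normalised intensity, 0.0-1.0
            rhythm_ms: List[int]     # alternating pulse/gap durations in milliseconds
            body_location: str       # e.g. "wrist", "waist"

        # Example: a two-pulse "message received" Tacton delivered to the wrist.
        incoming_message = Tacton(frequency_hz=250.0, amplitude=0.8,
                                  rhythm_ms=[100, 50, 100], body_location="wrist")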

    Understanding concurrent earcons: applying auditory scene analysis principles to concurrent earcon recognition

    Two investigations into the identification of concurrently presented, structured sounds, called earcons, were carried out. The first experiment investigated how varying the number of concurrently presented earcons affected their identification. The number was found to have a significant effect on the proportion of earcons identified: reducing the number of concurrently presented earcons led to a general increase in the proportion successfully identified. The second experiment investigated how modifying the earcons and their presentation, using techniques influenced by auditory scene analysis, affected earcon identification. Both modifying the earcons so that each was presented with a unique timbre, and altering their presentation so that there was a 300 ms onset-to-onset delay between earcons, significantly increased identification. Guidelines were drawn from this work to assist future interface designers when incorporating concurrently presented earcons.
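    As an illustration of the two presentation changes that aided identification (not the study's actual software), the sketch below staggers earcon onsets by 300 ms and assigns each a distinct timbre; the timbre names are arbitrary.

        # Hypothetical scheduler: give each concurrently presented earcon a
        # unique timbre and a 300 ms onset-to-onset delay.
        def schedule_earcons(earcons, onset_gap_ms=300,
                             timbres=("piano", "organ", "marimba", "strings")):
            schedule = []
            for i, earcon in enumerate(earcons):
                schedule.append({
                    "earcon": earcon,
                    "onset_ms": i * onset_gap_ms,        # staggered onsets
                    "timbre": timbres[i % len(timbres)], # distinct timbre per earcon
                })
            return schedule

        # Example: four concurrent earcons start at 0, 300, 600 and 900 ms.
        for entry in schedule_earcons(["file", "folder", "application", "error"]):
            print(entry)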

    DOLPHIN: the design and initial evaluation of multimodal focus and context

    In this paper we describe a new focus and context visualisation technique called multimodal focus and context. This technique uses a hybrid visual and spatialised audio display space to overcome the limited visual displays of mobile devices. We demonstrate the technique by applying it to maps of theme parks, and present the results of an experiment comparing multimodal focus and context to a purely visual display technique. The results showed that neither system was significantly better than the other; we believe this is due to issues involving the perception of multiple structured audio sources.
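    A hedged sketch of the basic split behind such a display is shown below: items inside the small visual viewport are drawn, while off-screen items are rendered as spatialised audio cues positioned relative to the viewport centre. The data layout and function names are assumptions, not the DOLPHIN implementation.

        # Illustrative focus-and-context split over simple map items.
        def draw_visually(item):
            print(f"draw {item['name']} at ({item['x']}, {item['y']})")

        def play_spatialised_audio(name, dx, dy):
            print(f"audio cue for {name} at offset ({dx}, {dy})")

        def present_map_items(items, viewport):
            vx, vy, vw, vh = viewport                  # viewport origin and size
            cx, cy = vx + vw / 2, vy + vh / 2          # viewport centre
            for item in items:
                if vx <= item["x"] <= vx + vw and vy <= item["y"] <= vy + vh:
                    draw_visually(item)                # focus: normal visual rendering
                else:
                    play_spatialised_audio(item["name"],
                                           item["x"] - cx, item["y"] - cy)  # context cue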

    Making a financial time machine: a multitouch application to enable interactive 3-D visualization of distant savings goals

    Financial planning and decision making for the general public continues to vex and perplex in equal measure. Whilst the tools presented by a typical desktop computer should make the task easier, the recent financial crisis confirms the increasing difficulty that people have in calculating the benefits of deferring consumption for future gains (i.e. saving). We present an interactive concept demonstration for Microsoft Surface™ that tackles two of the key barriers to saving decision making. First, we show an interface that avoids the laborious writing down or inputting of data and instead embodies the cognitive decision of allocating resources in a physical, gesture-based interface, where the scale of the investment or expenditure correlates with the scale of the gesture. Second, we show how a fast-forward animation can demonstrate the impact of small increments in savings on a long-term savings goal in a strategy-game-based, interactive format. The platform uses custom software in XNA™ rather than the WPF™ format more usual for Surface applications, which enables dynamic 3-D graphical icons to be used to maximize the interactive appeal of the interface. Demonstration and test-trial feedback indicates that this platform can be adapted to suit the narrative of individual purchasing decisions and to inform and educate diverse user groups about the long-term consequences of small financial decisions.
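    For a sense of the arithmetic behind the fast-forward view, the sketch below computes the future value of a small regular saving under compound interest; the rate and horizon are illustrative assumptions, not figures from the demonstration.

        # Future value of a regular monthly saving (ordinary annuity).
        def future_value(monthly_saving: float, annual_rate: float, years: int) -> float:
            monthly_rate = annual_rate / 12
            months = years * 12
            return monthly_saving * ((1 + monthly_rate) ** months - 1) / monthly_rate

        # Example: an extra 25 per month at 4% over 20 years grows to roughly 9,170.
        print(round(future_value(25.0, 0.04, 20), 2))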

    Non-visual information display using tactons

    This paper describes a novel form of display using tactile output. Tactons, or tactile icons, are structured tactile messages that can be used to communicate information to users non-visually. A range of different parameters can be used to construct Tactons, e.g. the frequency, amplitude, waveform and duration of a tactile pulse, plus body location. Tactons have the potential to improve interaction in a range of different areas, particularly where the visual display is overloaded, limited in size or not available, such as interfaces for blind people or on mobile and wearable devices.
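    As one illustrative way these parameters could combine into a structured message (an assumption, not the paper's encoding), the sketch below uses rhythm to carry the event type and body location to carry the source.

        # Hypothetical two-dimensional Tacton encoding: rhythm = event type,
        # body location = message source. All values are illustrative.
        RHYTHMS_MS = {"call": [200, 100, 200], "message": [80, 40, 80, 40, 80]}
        LOCATIONS = {"work": "wrist", "personal": "waist"}

        def encode_tacton(event_type: str, source: str) -> dict:
            return {"rhythm_ms": RHYTHMS_MS[event_type],
                    "body_location": LOCATIONS[source]}

        # Example: a personal call is a slow two-pulse rhythm presented at the waist.
        print(encode_tacton("call", "personal"))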

    Beyond representations: towards an action-centric perspective on tangible interaction

    In the light of theoretical as well as concrete technical developments, we discuss a conceptual shift from an information-centric to an action-centric perspective on tangible interactive technology. We explicitly emphasise the qualities of shareable use and the importance of designing tangibles that allow for meaningful manipulation and control of the digital material. This involves a broadened focus: from studying properties of the interface to aiming for qualities of the activity of using a system, a general tendency towards designing for social and shareable use settings, and an increased openness towards multiple and subjective interpretations. An effect of this is that tangibles are designed not as representations of data but as resources for action. We discuss four ways that tangible artefacts work as resources for action: (1) for physical manipulation; (2) for referential, social and contextually oriented action; (3) for perception and sensory experience; (4) for digitally mediated action.

    Prototype gesture recognition interface for vehicular head-up display system

