
    Personalising Vibrotactile Displays through Perceptual Sensitivity Adjustment

    Haptic displays are commonly limited to transmitting a discrete set of tactile motives. In this paper, we explore the transmission of real-valued information through vibrotactile displays. We simulate spatial continuity with three perceptual models commonly used to create phantom sensations: the linear, logarithmic, and power models. We show that these generic models lead to limited decoding precision, and propose a method for model personalisation that adjusts to idiosyncratic and spatial variations in perceptual sensitivity. We evaluate this approach using two haptic display layouts: circular, worn around the wrist and the upper arm, and straight, worn along the forearm. Results of a user study measuring continuous value decoding precision show that users decoded continuous values with relatively high accuracy (4.4% mean error), that circular layouts performed particularly well, and that personalisation through sensitivity adjustment increased decoding precision.
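    The three phantom-sensation models named above are well established in the haptics literature; a minimal sketch, assuming a normalised 0–1 position between two adjacent actuators and an assumed −40 dB floor for the logarithmic variant (both the function names and these values are illustrative assumptions, not the paper's implementation):

    ```python
    import math

    def phantom_amplitudes(p, a=1.0, model="linear", floor_db=40.0):
        """Split target intensity `a` across two adjacent actuators so that
        a phantom sensation is felt at normalised position p in [0, 1].
        Illustrative formulations, not the paper's actual models."""
        if model == "linear":
            # Actuator amplitudes interpolate linearly with position.
            return (1.0 - p) * a, p * a
        if model == "power":
            # Energy model: squared amplitudes sum to a**2.
            return math.sqrt(1.0 - p) * a, math.sqrt(p) * a
        if model == "log":
            # Interpolate amplitude in decibels down to an assumed floor.
            return (a * 10 ** (-floor_db * p / 20.0),
                    a * 10 ** (-floor_db * (1.0 - p) / 20.0))
        raise ValueError(f"unknown model: {model}")
    ```

    At p = 0.5 the power model drives both actuators at a/√2, keeping total vibration energy constant across positions, which is the usual motivation for preferring it over plain linear interpolation.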

    Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges

    Today's mobile phones are far from the mere communication devices they were ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then reasoning and acting upon the predicted context. This article provides an overview of the current state of the art in mobile sensing and context prediction, paving the way for full-fledged anticipatory mobile computing. We present a survey of phenomena that mobile phones can infer and predict, and offer a description of machine learning techniques used for such predictions. We then discuss proactive decision making and decision delivery via the user-device feedback loop. Finally, we discuss the challenges and opportunities of anticipatory mobile computing.

    Using visualization for visualization : an ecological interface design approach to inputting data

    Visualization is experiencing growing use by a diverse community, with continuing improvements in the availability and usability of systems. In spite of these developments, the problem of how to get the data in at the outset has received scant attention: the established approach of pre-defined readers and programming aids has changed little in the last two decades. This paper proposes a novel way of inputting data for scientific visualization that employs rapid interaction and visual feedback in order to understand how the data is stored. The approach draws on ideas from the discipline of ecological interface design to extract and control important parameters describing the data, at the same time harnessing our innate human ability to recognize patterns. Crucially, the emphasis is on file format discovery rather than file format description, so the method can still work when nothing is known initially of how the file was originally written, as is often the case with legacy binary data. © 2013 Elsevier Ltd
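    The discovery idea can be illustrated with a short sketch: reinterpret the raw bytes under user-adjustable candidate parameters and display the result, so that a correct guess reveals visible structure while a wrong one looks like noise. The parameter names and the NumPy-based decoding below are assumptions for illustration, not the paper's system:

    ```python
    import numpy as np

    def interpret(raw: bytes, dtype="<f4", offset=0, width=64):
        """Decode raw bytes under a candidate format (element type, byte
        order, header offset, row width) and reshape into a 2-D array
        suitable for display as an image."""
        data = np.frombuffer(raw, dtype=np.dtype(dtype), offset=offset)
        rows = data.size // width          # drop any trailing partial row
        return data[: rows * width].reshape(rows, width)
    ```

    In an interactive loop, the user would vary `dtype`, `offset`, and `width` while watching the rendered array; recognizable patterns emerging in the image are the visual feedback that the guessed format is right.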

    Increasing the motion of users in photo-realistic virtual environments by utilising auditory rendering of the environment and ego-motion

    A recurring problem with image-based rendering technology for virtual environments has been that subjects in general show very little movement of head and body. Our hypothesis is that the movement rate could be enhanced by introducing the auditory modality. In the study described in this paper, 126 subjects participated in a between-subjects experiment involving six different experimental conditions, including both uni- and bi-modal stimuli (auditory and visual). The aim of the study was to investigate the influence of auditory rendering in stimulating and enhancing subjects' motion in virtual reality. The auditory stimuli consisted of several combinations of auditory feedback, including static sound sources as well as self-induced sounds. Results show that motion in virtual reality is significantly enhanced when moving sound sources and the sound of ego-motion are rendered in the environment.

    Multisensory learning in adaptive interactive systems

    The main purpose of my work is to investigate multisensory perceptual learning and sensory integration in the design and development of adaptive user interfaces for educational purposes. To this aim, building on recent findings from neuroscience and cognitive science on multisensory perceptual learning and sensory integration, I developed a theoretical computational model for designing multimodal learning technologies that takes these results into account. The main theoretical foundations of my research are multisensory perceptual learning theories and the research on sensory processing and integration, embodied cognition theories, computational models of non-verbal and emotion communication in full-body movement, and human-computer interaction models. Finally, the computational model was applied in two case studies, based on two EU ICT-H2020 projects, "weDRAW" and "TELMI", on which I worked during my PhD.

    Sonic Interaction Design to enhance presence and motion in virtual environments


    Emerging technologies for learning (volume 1)

    A collection of five articles on emerging technologies and trends.