
    Multisensory texture exploration at the tip of the pen

    A tool for the multisensory stylus-based exploration of virtual textures was used to investigate how different feedback modalities (static or dynamically deformed images, vibration, sound) affect exploratory gestures. To this end, we ran an experiment in which participants had to steer a path with the stylus through a curved corridor on the surface of a graphic tablet/display, and we measured steering time, dispersion of trajectories, and applied force. Despite the variety of subjective impressions elicited by the different feedback conditions, we found that only non-visual feedback induced significant variations in trajectories and an increase in movement time. In a follow-up experiment using a paper-and-wood physical realization of the same texture, we recorded a variety of gestural behaviors markedly different from those found with the virtual texture. With the physical setup, movement time was shorter and texture-dependent lateral accelerations could be observed. This work highlights the limits of multisensory pseudo-haptic techniques in the exploration of surface textures.
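
    The three dependent measures lend themselves to a simple computation from logged stylus samples. The following is a minimal Python sketch, assuming a hypothetical log of (time, x, y, force) samples and a corridor centerline resampled to the same length; the names and log format are illustrative assumptions, not the authors' actual pipeline.

        import numpy as np

        def steering_metrics(samples, centerline):
            """samples: (N, 4) array of (t, x, y, force) stylus readings;
            centerline: (N, 2) corridor centerline resampled to N points."""
            t = samples[:, 0]
            xy = samples[:, 1:3]
            force = samples[:, 3]
            steering_time = t[-1] - t[0]                   # total movement time
            lateral_error = np.linalg.norm(xy - centerline, axis=1)
            dispersion = lateral_error.std()               # spread of trajectories
            return steering_time, dispersion, force.mean() # mean applied force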

    Path Following in Non-Visual Conditions

    Path-following tasks have been investigated mostly under visual conditions, that is, when subjects are able to see both the path and the tool, or limb, used for navigation. Moreover, only basic path shapes are usually adopted. In the present experiment, participants had to rely exclusively on audio and vibrotactile feedback to follow a path on a flat surface. Two different, asymmetric path shapes were tested. Participants navigated by moving their index finger over a surface that senses position and force. Results show that the different non-visual feedback modes did not affect the task's accuracy, yet they affected its speed, with vibrotactile feedback causing slower gestures than audio feedback. Conversely, audio and audio-tactile feedback yielded similar results. Vibrotactile feedback caused participants to exert more force on the surface. Finally, the shape of the path affected accuracy, and participants tended to prefer audio over vibrotactile and audio-tactile feedback.
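
    The abstract leaves the feedback mapping unspecified; one plausible scheme, sketched below in Python, drives audio pitch and vibration amplitude from the finger's lateral deviation from the path. All constants and function names are assumptions made for illustration.

        def feedback_levels(deviation_mm, max_dev_mm=20.0):
            """Map unsigned lateral deviation (mm) to normalized feedback levels."""
            err = min(abs(deviation_mm) / max_dev_mm, 1.0)  # clamp error to [0, 1]
            audio_pitch_hz = 440.0 * (1.0 + err)            # pitch rises off the path
            vibro_amplitude = err                           # vibration grows with error
            return audio_pitch_hz, vibro_amplitude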

    To “Sketch-a-Scratch”

    A surface can be harsh and raspy, or smooth and silky, and everything in between. We are used to sensing these features with our fingertips as well as with our eyes and ears: the exploration of a surface is a multisensory experience. Tools, too, are often employed in the interaction with surfaces, since they augment our manipulation capabilities. “Sketch-a-Scratch” is a tool for the multisensory exploration and sketching of surface textures. The user’s actions drive a physical sound model of real materials’ response to interactions such as scraping, rubbing or rolling. Moreover, different input signals can be converted into 2D visual surface profiles, thus making it possible to experience them visually, aurally and haptically.
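
    As a rough illustration of the signal-to-profile conversion (not the authors' implementation), a 1D input signal can be normalized and stacked into a 2D grayscale height map, which can then be rendered visually or fed to the sound model:

        import numpy as np

        def signal_to_profile(signal, rows=256):
            """Turn a 1D input signal into a (rows, N) grayscale surface profile."""
            s = np.asarray(signal, dtype=float)
            s = (s - s.min()) / (s.max() - s.min() + 1e-12)  # normalize to [0, 1]
            return np.tile(s, (rows, 1))                     # repeat as image rows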

    Visual cues in musical synchronisation

    Although music performance is generally thought of as an auditory activity in the Western tradition, the presence of continuous visual information in live music contributes to the cohesiveness of music ensembles, which presents an interesting psychological phenomenon in which audio and visual cues are presumably integrated. In order to investigate how auditory and visual sensory information are combined in the basic process of synchronising movements with music, this thesis focuses on both musicians and nonmusicians as they respond to two sources of visual information common to ensembles: the conductor, and the ancillary movements (movements that do not directly create sound, e.g. body sway or head nods) of co-performers. These visual cues were hypothesized to improve the timing of intentional synchronous action (matching a musical pulse), as well as to increase the synchrony of emergent ancillary movements between participant and stimulus. The visual cues were tested in controlled renderings of ensemble music arrangements, and were derived from real, biological motion. All three experiments employed the same basic synchronisation task: participants drummed along to the pulse of tempo-changing music while observing various visual cues. For each experiment, participants’ drum timing and upper-body movements were recorded as they completed the synchronisation task. The analyses used to quantify drum timing and ancillary movements came from two theoretical approaches to movement timing and entrainment: information processing and dynamical systems. Overall, this thesis shows that basic musical timing is a common ability that is facilitated by visual cues in certain contexts, and that emergent ancillary movements and intentional synchronous movements in combination may best explain musical timing and synchronisation.
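
    A standard quantity in such synchronisation analyses is the asynchrony between each drum onset and the nearest beat of the music. The Python sketch below computes its mean and variability; the variable names and nearest-beat matching are assumptions for illustration, not the thesis's actual code.

        import numpy as np

        def asynchrony_stats(drum_onsets, beat_times):
            """Both arguments are 1D arrays of event times in seconds."""
            beats = np.asarray(beat_times, dtype=float)
            onsets = np.asarray(drum_onsets, dtype=float)
            nearest = beats[np.abs(beats[None, :] - onsets[:, None]).argmin(axis=1)]
            asynchronies = onsets - nearest             # negative = drum leads the beat
            return asynchronies.mean(), asynchronies.std()  # constant error, variability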

    Effects of Embodied Learning and Digital Platform on the Retention of Physics Content: Centripetal Force

    Embodiment theory proposes that knowledge is grounded in sensorimotor systems, and that learning can be facilitated to the extent that lessons can be mapped to these systems. This study with 109 college-age participants addresses two overarching questions: (a) how are immediate and delayed learning gains affected by the degree to which a lesson is embodied, and (b) how do the affordances of three different educational platforms affect immediate and delayed learning? Six 50-minute lessons on centripetal force were created. The first factor was the degree of embodiment, with two levels: (1) low and (2) high. The second factor was platform, with three levels: (1) a large-scale “mixed reality” immersive environment containing both digital and hands-on components called SMALLab, (2) an interactive whiteboard system, and (3) a mouse-driven desktop computer. Pre-tests, post-tests, and 1-week follow-up (retention or delayed learning gains) tests were administered, resulting in a 2 × 3 × 3 design. Two knowledge subtests were analyzed, one that relied on more declarative knowledge and one that relied on more generative knowledge, e.g., hand-drawing vectors. Regardless of condition, participants made significant immediate learning gains from pre-test to post-test. There were no significant main effects or interactions of platform or embodiment on immediate learning. However, from post-test to follow-up the level of embodiment interacted significantly with time, such that participants in the high embodiment conditions performed better on the subtest devoted to generative knowledge questions. We posit that better retention of certain types of knowledge can be seen over time when more embodiment is present during the encoding phase. This sort of retention may not appear on more traditional factual/declarative tests. Educational technology designers should consider using more sensorimotor feedback and gestural congruency when designing lessons, and opportunities for instructor professional development should be provided as well. View the article as published at http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01819/ful
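
    As a hedged illustration of how the pre/post/follow-up structure translates into gain scores, the toy Python sketch below tabulates immediate gains (pre-test to post-test) and retention (post-test to follow-up) per embodiment × platform cell; the data and column names are invented and are not the study's data.

        import pandas as pd

        # Toy data: one row per participant (values invented for illustration).
        df = pd.DataFrame({
            "embodiment": ["low", "high", "low", "high"],
            "platform":   ["desktop", "SMALLab", "whiteboard", "desktop"],
            "pre":        [4.0, 3.5, 5.0, 4.5],
            "post":       [7.0, 7.5, 8.0, 7.0],
            "followup":   [6.0, 7.2, 7.5, 6.9],
        })
        df["immediate_gain"] = df["post"] - df["pre"]   # pre-test to post-test
        df["retention"] = df["followup"] - df["post"]   # post-test to follow-up
        print(df.groupby(["embodiment", "platform"])[["immediate_gain", "retention"]].mean())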

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of, even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom: the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions, but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures, a result that was confirmed in another study that used the augmented touches for a screen-lock application.
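
    Purely as an illustrative sketch (not the study's implementation), the two reliably producible shapes and three orientations could be binned from a sensed contact ellipse as follows; all thresholds and class names are made-up assumptions.

        def classify_touch(major_mm, minor_mm, angle_deg):
            """Bin a sensed contact ellipse into a shape and an orientation class."""
            shape = "oblong" if major_mm / max(minor_mm, 1e-6) > 1.5 else "round"
            a = angle_deg % 180.0                 # ellipse orientation is symmetric
            if a < 30.0 or a >= 150.0:
                orientation = "upright"
            elif a < 90.0:
                orientation = "tilted-right"
            else:
                orientation = "tilted-left"
            return shape, orientation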