
    Eye-Hand Coordination during Dynamic Visuomotor Rotations

    Background: For many technology-driven visuomotor tasks such as tele-surgery, human operators face situations in which the frames of reference for vision and action are misaligned and must be compensated in order to perform the tasks with the necessary precision. The cognitive mechanisms for the selection of appropriate frames of reference are still not fully understood. This study investigated the effect of changing visual and kinesthetic frames of reference during wrist pointing, simulating activities typical of tele-operations. Methods: Using a robotic manipulandum, subjects performed center-out pointing movements to visual targets presented on a computer screen by coordinating wrist flexion/extension with abduction/adduction. We compared movements in which the frames of reference were aligned (unperturbed condition) with movements performed under different combinations of dynamic visual/kinesthetic perturbations. The visual frame of reference was centered on the computer screen, while the kinesthetic frame was centered on the wrist joint. Both frames changed their orientation dynamically (angular velocity = 36°/s) with respect to the head-centered frame of reference (the eyes). Perturbations were either unimodal (visual or kinesthetic) or bimodal (visual + kinesthetic). As expected, pointing performance was best in the unperturbed condition. The spatial pointing error worsened dramatically during both unimodal and most bimodal conditions. However, in the bimodal condition in which both disturbances were in phase, adaptation was very fast and kinematic performance indicators approached the values of the unperturbed condition. Conclusions: This result suggests that subjects learned to exploit an “affordance” made available by the invariant phase relation between the visual and kinesthetic frames. It seems that after detecting this invariance, subjects used the kinesthetic input as an informative signal rather than a disturbance, compensating for the visual rotation without going through the lengthy process of building an internal adaptation model. Practical implications are discussed as regards the design of advanced, high-performance man-machine interfaces.
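    To make the perturbation geometry concrete, here is a minimal sketch (not the authors' code; all names are illustrative) of how the two time-varying rotations could be parameterized. In the actual experiment the kinesthetic rotation was delivered mechanically by the manipulandum; the sketch folds both rotations into a single hand-to-cursor mapping purely for illustration, with the in-phase bimodal condition corresponding to a zero phase offset.

```python
import numpy as np

OMEGA = np.deg2rad(36.0)  # angular velocity of the perturbation (36 deg/s)

def rotation(theta: float) -> np.ndarray:
    """2-D rotation matrix for an angle theta in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def perturbed_cursor(hand_xy, t, visual=False, kinesthetic=False, phase=0.0):
    """Map a wrist position to a cursor position at time t under the
    selected perturbation. 'phase' offsets the kinesthetic rotation
    relative to the visual one; phase == 0.0 models the in-phase
    bimodal condition, in which adaptation was fastest."""
    xy = np.asarray(hand_xy, dtype=float)
    if kinesthetic:                       # wrist-centered frame rotates
        xy = rotation(OMEGA * t + phase) @ xy
    if visual:                            # screen-centered frame rotates
        xy = rotation(OMEGA * t) @ xy
    return xy
```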

    3-D Interfaces for Spatial Construction

    It is becoming increasingly easy to bring the body directly to digital form via stereoscopic immersive displays and tracked input devices. Is this space a viable one in which to construct 3d objects? Interfaces built upon two-dimensional displays and 2d input devices are the current standard for spatial construction, yet 3d interfaces, where the dimensionality of the interactive space matches that of the design space, have something unique to offer. This work increases the richness of 3d interfaces by bringing several new tools into the picture: the hand is used directly to trace surfaces; tangible tongs grab, stretch, and rotate shapes; a handle becomes a lightsaber and a tool for dropping simple objects; and a raygun, analogous to the mouse, is used to select distant things. With these tools, a richer 3d interface is constructed in which a variety of objects are created by novice users with relative ease. What we see is a space, not exactly like the traditional 2d computer, but rather one in which a distinct and different set of operations is easy and natural. Design studies, complemented by user studies, explore the larger space of three-dimensional input possibilities. The target applications are spatial arrangement, freeform shape construction, and molecular design. New possibilities for spatial construction develop alongside particular nuances of input devices and the interactions they support. Task-specific tangible controllers provide a cultural affordance which links input devices to deep histories of tool use, enhancing intuition and affective connection within an interface. On a more practical, but still emotional level, these input devices frame kinesthetic space, resulting in high-bandwidth interactions where large amounts of data can be comfortably and quickly communicated. A crucial issue with this interface approach is the tension between specific and generic input devices. Generic devices are the tradition in computing -- versatile, remappable, frequently bereft of culture or relevance to the task at hand. Specific interfaces are an emerging trend -- customized and culturally rich, but to date these systems have been tightly linked to a single application, limiting their widespread use. The theoretical heart of this thesis, and its chief contribution to interface research at large, is an approach to customization. Instead of matching an application domain's data, each new input device supports a functional class. The spatial construction task is split into four types of manipulation: grabbing, pointing, holding, and rubbing. Each of these action classes spans the space of spatial construction, allowing a single tool to be used in many settings without losing the unique strengths of its specific form. Outside of 3d interfaces and spatial construction, this approach strikes a balance between generic and specific suitable for many interface scenarios. In practice, these specific function groups are given versatility via a quick remapping technique which allows one physical tool to perform many digital tasks. For example, the handle can be quickly remapped from a lightsaber that cuts shapes to tools that place simple Platonic solids, erase portions of objects, and draw double helices in space. The contributions of this work lie both in a theoretical model of spatial interaction, and in input devices (combined with new interactions) which illustrate the efficacy of this philosophy. This research brings the new results of Tangible User Interfaces to the field of Virtual Reality. We find a space, in and around the hand, where full-fledged haptics are not necessary for users to physically connect with digital form.
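    The remapping idea can be made concrete with a small sketch (hypothetical; the class and function names below are not from the thesis): one physical tool, the handle, stays in the user's grip while its digital behavior is swapped among the functions named in the abstract.

```python
# Hypothetical sketch of quick remapping: same physical tool, many digital tasks.

def cut_shape(pos):         print(f"lightsaber cut at {pos}")
def place_solid(pos):       print(f"Platonic solid placed at {pos}")
def erase_region(pos):      print(f"erased region at {pos}")
def draw_double_helix(pos): print(f"double helix drawn at {pos}")

class Handle:
    """A 'holding'-class tool whose digital function is remappable."""
    def __init__(self):
        self.modes = {"cut": cut_shape, "solid": place_solid,
                      "erase": erase_region, "helix": draw_double_helix}
        self.active = "cut"

    def remap(self, mode):
        self.active = mode              # quick remapping, no new hardware

    def trigger(self, pos):
        self.modes[self.active](pos)    # act with the current digital function

handle = Handle()
handle.trigger((0.1, 0.2, 0.3))         # cuts, as a lightsaber
handle.remap("helix")
handle.trigger((0.1, 0.2, 0.3))         # now draws a double helix
```

    The design point the sketch illustrates: the mapping table, not the device, carries task specificity, so one culturally rich tool can span a whole functional class.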

    Generalized Movement Representation in Haptic Perception

    The extraction of spatial information by touch often involves exploratory movements, with tactile and kinesthetic signals combined to construct a spatial haptic percept. However, the body has many sensory surfaces that can move independently, giving rise to the source binding problem: when there are multiple tactile signals originating from sensory surfaces with multiple movements, are the tactile and kinesthetic signals bound to one another? We studied haptic signal combination by applying the tactile signal to a stationary fingertip while another body part (the other hand or a foot) or a visual target moved, using a task that can only be done if the tactile and kinesthetic signals are combined. We found that both direction and speed of movement transfer across limbs, but only direction transfers between visual target motion and the tactile signal. In control experiments, we excluded the role of explicit reasoning or knowledge of motion kinematics in this transfer. These results demonstrate the existence of two motion representations in the haptic system—one of direction and another of speed or amplitude—that are both source-free, or unbound from their sensory surface of origin. These representations may well underlie our flexibility in haptic perception and sensorimotor control.

    Neuronal bases of structural coherence in contemporary dance observation

    The neuronal processes underlying dance observation have been the focus of an increasing number of brain imaging studies over the past decade. However, the existing literature has mainly dealt with the effects of motor and visual expertise, whereas the neural and cognitive mechanisms underlying the interpretation of dance choreographies have remained unexplored. Hence, much attention has been given to the Action Observation Network (AON), whereas the role of other potentially relevant neuro-cognitive mechanisms, such as mentalizing (theory of mind) or language (narrative comprehension), in dance understanding is yet to be elucidated. We report the results of an fMRI study in which the structural coherence of short contemporary dance choreographies was manipulated parametrically using the same taped movement material. Our participants were all trained dancers. The whole-brain analysis indicates that the interpretation of structurally coherent dance phrases involves a subpart (Superior Parietal) of the AON as well as mentalizing regions in the dorsomedial Prefrontal Cortex. An ROI analysis based on a similar study using linguistic materials (Pallier et al., 2011) suggests that structural processing in language and dance might share certain neural mechanisms.

    An Occupational Therapy Guide for Teaching Handwriting Skills to Adults

    Handwriting is a skill utilized widely by adults; however, there is a lack of guidelines, information, or literature on the subject as it relates to adults. The purpose of this project was to develop guidelines for occupational therapists to use when providing handwriting interventions with adults. A literature review was conducted using PubMed, CINAHL, SCOPUS, DynaMed, and professional journals to further understand the topic of handwriting with adults and its relation to occupational therapy. Currently, there is limited research and information regarding handwriting with adults, and no programs or guidelines were found to assist occupational therapists in developing treatment interventions to remediate adult patients' handwriting. The guidelines developed for occupational therapists consist of a review of the anatomy and musculature involved with handwriting, grasp patterns, ergonomic factors relating to handwriting, visual control, proprioception and kinesthesia, spatial analysis, bilateral integration, and age-appropriate activities/intervention ideas for use with occupational therapists' adult clients. The development of these guidelines was grounded in constructivist learning theory to enhance the meaning of the treatment for the client. These guidelines are intended to provide occupational therapists with a basic foundation of knowledge and treatment strategies to maximize their clients' remediation of handwriting dysfunction. The authors of this scholarly project recommend that more research be completed on handwriting practices with adults. It is also recommended that an assessment be developed that specifically addresses adult handwriting skills.

    Body Context and Posture Affect Mental Imagery of Hands

    Different visual stimuli have been shown to recruit different mental imagery strategies. However, the role of specific visual stimulus properties related to body context and posture in mental imagery is still under debate. Aiming to dissociate the behavioural correlates of the mental processing of visual stimuli characterized by different body contexts, in the present study we investigated whether the mental rotation of stimuli showing hands either attached to a body (hands-on-body) or not (hands-only) would be based on different mechanisms. We further examined the effects of postural changes on the mental rotation of both stimuli. Thirty healthy volunteers verbally judged the laterality of rotated hands-only and hands-on-body stimuli presented from the dorsum or palm view, while positioning their hands on their knees (front postural condition) or behind their back (back postural condition). Mental rotation of hands-only, but not of hands-on-body, stimuli was modulated by the stimulus view and orientation. Additionally, only the hands-only stimuli were mentally rotated at different speeds according to the postural conditions. This indicates that different stimulus-related mechanisms are recruited in mental rotation by changing the bodily context in which a particular body part is presented. The present data suggest that, with respect to hands-only, mental rotation of hands-on-body is less dependent on biomechanical constraints and proprioceptive input. We interpret our results as evidence for the preferential recruitment of visual-based rather than kinesthetic-based mechanisms during the mental transformation of hands-on-body and hands-only stimuli, respectively.

    Unimodal and crossmodal processing of visual and kinesthetic stimuli in working memory

    The processing of (object) information in working memory has been intensively investigated in the visual modality (e.g., D’Esposito, 2007; Ranganath, 2006). In comparison, research on kinesthetic/haptic or crossmodal processing in working memory is still sparse. During recognition and comparison of object information across modalities, representations built from one sensory modality have to be matched with representations obtained from other senses. The present thesis addresses the questions of how object information is represented in unimodal and crossmodal working memory, which processes enable unimodal and crossmodal comparisons, and which neuronal correlates are associated with these processes. In particular, unimodal and crossmodal processing of visually and kinesthetically perceived object features was systematically investigated in the distinct working memory phases of encoding, maintenance, and recognition. Here, the kinesthetic modality refers to the sensory perception of movement direction and spatial position, e.g. of one’s own hand, and is part of the haptic sense. Overall, the results of the present thesis suggest that modality-specific representations and modality-specific processes play a role during unimodal and crossmodal processing of object features in working memory.

    Tangible user interfaces : past, present and future directions

    In the last two decades, Tangible User Interfaces (TUIs) have emerged as a new interface type that interlinks the digital and physical worlds. Drawing upon users' knowledge and skills of interaction with the real non-digital world, TUIs show a potential to enhance the way in which people interact with and leverage digital information. However, TUI research is still in its infancy and extensive research is required in order to fully understand the implications of tangible user interfaces, to develop technologies that further bridge the digital and the physical, and to guide TUI design with empirical knowledge. This paper examines the existing body of work on Tangible User Interfaces. We start by sketching the history of tangible user interfaces, examining the intellectual origins of this field. We then present TUIs in a broader context, survey application domains, and review frameworks and taxonomies. We also discuss conceptual foundations of TUIs, including perspectives from the cognitive sciences, psychology, and philosophy. Methods and technologies for designing, building, and evaluating TUIs are also addressed. Finally, we discuss the strengths and limitations of TUIs and chart directions for future research.

    Haptic Touch and Hand Ability


    Amplitude and direction errors in kinesthetic pointing

    We investigated the accuracy with which, in the absence of vision, one can return to a 2D target location that had previously been identified by a guided movement. A robotic arm guided the participant's hand to a target (locating motion) and away from it (homing motion). Then, the participant pointed freely toward the remembered target position. Two experiments manipulated separately the kinematics of the locating and homing motions. Some robot motions followed a straight path with the bell-shaped velocity profile that is typical of natural movements. Other motions followed curved paths, or had strong acceleration and deceleration peaks. Current motor theories of perception suggest that pointing should be more accurate when the homing and locating motions mimic natural movements. This expectation was not borne out by the results, because amplitude and direction errors were almost independent of the kinematics of the locating and homing phases. In both experiments, participants tended to overshoot the target positions along the lateral directions. In addition, pointing movements towards oblique targets were attracted by the closest diagonal (oblique effect). This error pattern was robust not only with respect to the manner in which participants located the target position (perceptual equivalence), but also with respect to the manner in which they executed the pointing movements (motor equivalence). Given the similarity of these results to those of previous studies on visual pointing, it is argued that the observed error pattern is largely determined by the idiosyncratic properties of the mechanisms whereby space is represented internally.
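    The "bell-shaped velocity profile" of natural point-to-point movements is commonly modeled as a minimum-jerk trajectory (Flash & Hogan, 1985); the abstract does not name the model used, so the following short sketch is only an illustration of such a profile under that assumption.

```python
import numpy as np

def min_jerk_speed(t, T, amplitude):
    """Speed along a straight minimum-jerk reach of the given amplitude
    and duration T; zero at both endpoints, peaking at mid-movement."""
    s = np.asarray(t) / T                       # normalized time in [0, 1]
    return amplitude / T * (30 * s**2 - 60 * s**3 + 30 * s**4)

t = np.linspace(0.0, 1.0, 101)
v = min_jerk_speed(t, T=1.0, amplitude=0.15)    # e.g. a 15 cm reach in 1 s
# Peak speed is 1.875 * amplitude / T, reached at t = T / 2.
```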