What can and what cannot be adjusted in the movement patterns of cerebellar patients?
This commentary reviews the case of a patient who could alter the coordination of her prehensile movements when removal of visual feedback reduced her kinetic tremor, but could not coordinate her hand aperture with her hand transport within a single movement. This suggests a dissociation between different subtypes of cerebellar context-response linkage, rather than a single, general association function
A 2.5-D representation of the human hand
Primary somatosensory maps in the brain represent the body as a discontinuous, fragmented set of 2-D skin regions. We nevertheless experience our body as a coherent 3-D volumetric object. The links between these different aspects of body representation, however, remain poorly understood. Perceiving the body’s location in external space requires that immediate afferent signals from the periphery be combined with stored representations of body size and shape. At least for the back of the hand, this body representation is massively distorted, in a highly stereotyped manner. Here we test whether a common pattern of distortions applies to the entire hand as a 3-D object, or whether each 2-D skin surface has its own characteristic pattern of distortion. Participants judged the location in external space of landmark points on the dorsal and palmar surfaces of the hand. By analyzing the internal configuration of judgments, we produced implicit maps of each skin surface. Qualitatively similar distortions were observed in both cases. The distortions were correlated across participants, suggesting that the two surfaces are bound into a common underlying representation. The magnitude of distortion, however, was substantially smaller on the palmar surface, suggesting that this binding is incomplete. The implicit representation of the human hand may be a hybrid, intermediate between a 2-D representation of individual skin surfaces and a 3-D representation of the hand as a volumetric object
Transcranial magnetic stimulation over sensorimotor cortex disrupts anticipatory reflex gain modulation for skilled action
Skilled interactions with new environments require flexible changes to the transformation from somatosensory signals to motor outputs. Transcortical reflex gains are known to be modulated according to task and environmental dynamics, but the mechanism of this modulation remains unclear. We examined reflex organization in the sensorimotor cortex. Subjects performed point-to-point arm movements into predictable force fields. When a small perturbation was applied just before the arm encountered the force field, reflex responses in the shoulder muscles changed according to the upcoming force field direction, indicating anticipatory reflex gain modulation. However, when a transcranial magnetic stimulation (TMS) pulse was applied before the reflex response to such perturbations, so that the silent period caused by TMS overlapped the reflex processing period, this modulation was abolished, while the reflex itself remained. Loss of reflex gain modulation could not be explained by reduced reflex amplitudes or by peripheral effects of TMS on the muscles themselves. Instead, we suggest that TMS disrupted interneuronal networks in the sensorimotor cortex, which contribute to reflex gain modulation rather than reflex generation. We suggest that these networks normally provide the adaptability of rapid sensorimotor reflex responses by regulating reflex gains according to the current dynamical environment
More than skin deep: body representation beyond primary somatosensory cortex
The neural circuits underlying initial sensory processing of somatic information are relatively well understood. In contrast, the processes that go beyond primary somatosensation to create more abstract representations related to the body are less clear. In this review, we focus on two classes of higher-order processing beyond somatosensation. Somatoperception refers to the process of perceiving the body itself, and particularly of ensuring somatic perceptual constancy. We review three key elements of somatoperception: (a) remapping information from the body surface into an egocentric reference frame, (b) exteroceptive perception of objects in the external world through their contact with the body, and (c) interoceptive percepts about the nature and state of the body itself. Somatorepresentation, in contrast, refers to the essentially cognitive process of constructing semantic knowledge and attitudes about the body, including: (d) lexical-semantic knowledge about bodies generally and one’s own body specifically, (e) configural knowledge about the structure of bodies, (f) emotions and attitudes directed towards one’s own body, and (g) the link between physical body and psychological self. We review a wide range of neuropsychological, neuroimaging and neurophysiological data to explore the dissociation between these different aspects of higher somatosensory function
What is it like to have a body?
Few questions in psychology are as fundamental or as elusive as the sense of one’s own body. Despite widespread recognition of the link between body and self, psychology has only recently developed methods for the scientific study of bodily awareness. Experimental manipulations of embodiment in healthy volunteers have allowed important advances in knowledge. Synchronous multisensory inputs from different modalities play a fundamental role in producing ‘body ownership’, the feeling that my body is ‘mine’. Indeed, appropriate multisensory stimulation can induce ownership over external objects, virtual avatars, and even other people’s bodies. We argue that bodily experience is not monolithic, but has measurable internal structure and components that can be identified psychometrically and psychophysically, suggesting the apparent phenomenal unity of self-consciousness may be illusory. We further review evidence that the sense of one’s own body is highly plastic, with representations of body structure and size particularly prone to multisensory influences
Effects of motor preparation and spatial attention on corticospinal excitability in a delayed-response paradigm
The preparation of motor responses during the delay period of an instructed delay task is associated with sustained neural firing in the primate premotor cortex. It remains unclear how and when such preparation-related premotor activity influences the motor output system. In this study, we tested modulation of corticospinal excitability using single-pulse transcranial magnetic stimulation (TMS) during a delayed-response task. At the beginning of the delay interval, participants were provided with no information, with spatial attentional information concerning the location but not the identity of an upcoming imperative stimulus, or with information regarding the upcoming response. Behavioral data indicate that participants used all information available to them. Only when information concerning the upcoming response was provided did corticospinal excitability show differential modulation for the effector muscle compared to other task-unrelated muscles. We conclude that modulation of corticospinal excitability reflects specific response preparation, rather than non-specific event preparation
Optimal integration of auditory and vibrotactile information for judgments of temporal order
Recent research that assessed spatial judgments about multisensory stimuli suggests that humans integrate multisensory inputs in a statistically optimal manner by weighting each input by its normalized reciprocal variance. Is integration similarly optimal when humans judge the temporal properties of bimodal stimuli? Twenty-four participants performed temporal order judgments (TOJs) about 2 spatially separated stimuli. Stimuli were auditory, vibrotactile, or both. The temporal profiles of vibrotactile stimuli were manipulated to produce 3 levels of precision for TOJs. In bimodal conditions, the asynchrony between the 2 unimodal stimuli that comprised a bimodal stimulus was manipulated to determine the weight given to touch. Bimodal performance on 2 measures (judgment uncertainty and tactile weight) was predicted with unimodal data. A model relying exclusively on audition was rejected on the basis of both measures. A second model that selected the best input on each trial did not predict the reduced judgment uncertainty observed in bimodal trials. Only the optimal maximum-likelihood-estimation model predicted both judgment uncertainties and weights, extending the model's validity to TOJs. Alternatives for modeling the process of event sequencing based on integrated multisensory inputs are discussed
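The maximum-likelihood integration rule referenced in this abstract weights each unimodal estimate by its normalized reciprocal variance and predicts a bimodal uncertainty lower than either unimodal one. The sketch below illustrates that standard computation only; the function name and the example standard deviations are illustrative assumptions, not values from the study.

```python
# Minimal sketch of maximum-likelihood (reciprocal-variance) cue combination,
# the kind of model used to predict bimodal TOJ performance from unimodal data.
# Numerical values are illustrative placeholders, not study data.

def mle_combination(sigma_a: float, sigma_t: float):
    """Return (auditory weight, tactile weight, predicted bimodal SD)."""
    r_a, r_t = 1.0 / sigma_a**2, 1.0 / sigma_t**2   # reliabilities = reciprocal variances
    w_a, w_t = r_a / (r_a + r_t), r_t / (r_a + r_t) # normalized weights
    sigma_bimodal = (1.0 / (r_a + r_t)) ** 0.5      # predicted judgment uncertainty
    return w_a, w_t, sigma_bimodal

# Example: audition more precise than touch in this hypothetical condition (SDs in ms)
w_a, w_t, s_b = mle_combination(sigma_a=30.0, sigma_t=60.0)
print(f"auditory weight = {w_a:.2f}, tactile weight = {w_t:.2f}, "
      f"predicted bimodal SD = {s_b:.1f} ms")
```

With these placeholder values the predicted bimodal SD (about 26.8 ms) falls below the better unimodal SD (30 ms), which is the reduction in judgment uncertainty that distinguishes optimal integration from simply selecting the best single input on each trial.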
Sense of agency primes manual motor responses
Perceiving the body influences how we perceive and respond to stimuli in the world. We investigated the respective effects of different components of bodily representation, the senses of ownership and agency, on responses to simple visual stimuli. Participants viewed a video image of their hand on a computer monitor presented either in real time or with a systematic delay. Blocks began with an induction period in which the index finger was (i) brushed, (ii) passively moved, or (iii) actively moved by the participant. Subjective reports showed that the sense of ownership over the seen hand emerged with synchronous video, regardless of the type of induction, whereas the sense of agency over the hand emerged only following synchronous video with active movement. Following induction, participants responded as quickly as possible to the onset of visual stimuli near the hand by pressing a button with their other hand. Reaction time was significantly speeded when participants had a sense of agency over their seen hand. This effect was eliminated when participants responded vocally, suggesting that it reflects priming of manual responses, rather than enhanced stimulus detection. These results suggest that vision of one's own hand and, specifically, the sense of agency over that hand primes manual motor responses
On-line control of grasping actions: object-specific motor facilitation requires sustained visual input
Dorsal stream visual processing is generally considered to underlie visually driven action, but when subjects grasp an object from memory, as visual information is not available, ventral stream characteristics emerge. In this study we use paired-pulse transcranial magnetic stimulation (TMS) to investigate the importance of the current visual input during visuomotor grasp. Previously, the amplitude of paired-pulse motor evoked potentials (MEPs) in hand muscles before movement onset has been shown to predict the subsequent pattern of muscle activity during grasp. Specific facilitation of paired-pulse MEPs may reflect premotor–motor (PMC–M1) cortex connectivity. Here we investigate the paired-pulse MEPs evoked under memory-cued and visually driven conditions before grasping one of two possible target objects (a handle or a disc). All trials began with a delay period of 1200 ms. Then, a TMS pulse served as the cue to reach, grasp and hold the target object for 0.5 s. Total trial length was 5 s. Both objects were continually visible in both conditions, but the way in which the target object was designated differed between conditions. In the memory-cued condition, the target object was illuminated for the first 200 ms of the trial only. In the visually driven condition, the target object was illuminated throughout the 5 s trial. Thus, the conditions differed in whether or not the object to be grasped was designated at the time of movement initiation. We found that the pattern of paired-pulse MEP facilitation matched the pattern of object-specific muscle activity only for the visually driven condition. The results suggest that PMC–M1 connectivity contributes to action selection only when immediate sensory information specifies which action to make
Visually induced analgesia: seeing the body reduces pain
Given previous reports of strong interactions between vision and somatic senses, we investigated whether vision of the body modulates pain perception. Participants looked into a mirror aligned with their body midline at either the reflection of their own left hand (creating the illusion that they were looking directly at their own right hand) or the reflection of a neutral object. We induced pain using an infrared laser and recorded nociceptive laser-evoked potentials (LEPs). We also collected subjective ratings of pain intensity and unpleasantness. Vision of the body produced clear analgesic effects on both subjective ratings of pain and the N2/P2 complex of LEPs. Similar results were found during direct vision of the hand, without the mirror. Furthermore, these effects were specific to vision of one’s own hand and were absent when viewing another person’s hand. These results demonstrate a novel analgesic effect of non-informative vision of the body