    Grasping the non-conscious: Preserved grip scaling to unseen objects for immediate but not delayed grasping following a unilateral lesion to primary visual cortex

    Patients with damage to primary visual cortex can sometimes direct actions towards ‘unseen’ targets located in areas of the visual field that are deemed ‘blind’ on the basis of static perimetry tests. Here, we show that a patient with a complete right homonymous hemianopia after a V1 lesion remains sensitive to the width of objects presented in her blind field, but only when reaching out to grasp them in ‘real-time’. A subsequent fMRI experiment revealed spared extra-geniculostriate pathways, which may mediate her preserved abilities. Taken together, the results support the view that visually guided movements can be mediated by pathways that do not support visual consciousness.

    The Role of Haptic Expectations in Reaching to Grasp: From Pantomime to Natural Grasps and Back Again

    © 2020 Whitwell, Katz, Goodale and Enns. When we reach to pick up an object, our actions are effortlessly informed by the object’s spatial information, the position of our limbs, stored knowledge of the object’s material properties, and what we want to do with the object. A substantial body of evidence suggests that grasps are under the control of “automatic, unconscious” sensorimotor modules housed in the “dorsal stream” of the posterior parietal cortex. Visual online feedback has a strong effect on the hand’s in-flight grasp aperture. Previous work of ours exploited this effect to show that grasps are refractory to cued expectations for visual feedback. Nonetheless, when we reach out to pretend to grasp an object (pantomime grasp), our actions are performed with greater cognitive effort and they engage structures outside of the dorsal stream, including the ventral stream. Here we ask whether our previous finding would extend to cued expectations for haptic feedback. Our method involved a mirror apparatus that allowed participants to see a “virtual” target cylinder as a reflection in the mirror at the start of all trials. On “haptic feedback” trials, participants reached behind the mirror to grasp a size-matched cylinder, spatially coincident with the virtual one. On “no-haptic feedback” trials, participants reached behind the mirror and grasped into “thin air” because no cylinder was present. To manipulate haptic expectation, we organized the haptic conditions into blocked, alternating, and randomized schedules with and without verbal cues about the availability of haptic feedback. Replicating earlier work, we found the strongest haptic effects with the blocked schedules and the weakest effects in the randomized uncued schedule. Crucially, the haptic effects in the cued randomized schedule were intermediate. An analysis of the influence of the upcoming and immediately preceding haptic feedback condition in the cued and uncued random schedules showed that cuing the upcoming haptic condition shifted the haptic influence on grip aperture from the immediately preceding trial to the upcoming trial. These findings indicate that, unlike cues to the availability of visual feedback, participants take advantage of cues to the availability of haptic feedback, flexibly engaging pantomime and natural modes of grasping to optimize the movement.
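    The trial-history analysis described above can be illustrated as a simple regression in which each trial’s grip aperture is modeled as a function of the haptic condition on the preceding and the upcoming trial. The sketch below is illustrative only; the file name, column names, and the use of ordinary least squares are assumptions, not the authors’ actual analysis.

```python
# Minimal sketch (assumed data layout): does grip aperture track the haptic
# condition of the previous trial or of the upcoming, cued trial?
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("grasp_trials.csv")  # hypothetical file: one row per trial

# 1 = haptic feedback available, 0 = grasping "thin air"
df["prev_haptic"] = df.groupby("participant")["haptic"].shift(1)
df["next_haptic"] = df.groupby("participant")["haptic"].shift(-1)

# Relative weights of the preceding vs. upcoming condition on grip aperture.
model = smf.ols("peak_grip_aperture ~ prev_haptic + next_haptic",
                data=df.dropna()).fit()
print(model.params)
```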

    Rapid decrement in the effects of the Ponzo display dissociates action and perception.

    It has been demonstrated that pictorial illusions have a smaller influence on grasping than they do on perceptual judgments. To date, however, this work has not considered how the influence of an illusion diminishes as it is measured repeatedly. Here we studied this decrement in the context of a Ponzo illusion to further characterize the dissociation between vision for perception and vision for action. Participants first manually estimated the lengths of single targets in a Ponzo display with their thumb and index finger, then actually grasped these targets in another series of trials, and then manually estimated the target lengths again in a final set of trials. The results showed that although the perceptual estimates and grasp apertures were equally sensitive to real differences in target length on the initial trials, only the perceptual estimates remained biased by the illusion over repeated measurements. In contrast, the illusion's effect on the grasps decreased rapidly, vanishing entirely after only a few trials. Interestingly, a closer examination of the grasp data revealed that this initial effect was driven largely by undersizing the grip aperture for the display configuration in which the target was positioned between the diverging background lines (i.e., when the targets appeared to be shorter than they really were). This asymmetry between grasping apparently shorter and longer targets suggests that the sensorimotor system may initially treat the edges of the configuration as obstacles to be avoided. This finding highlights the sensorimotor system's ability to rapidly update motor programs through error feedback, manifesting as an immunity to the effects of illusion displays after only a few trials.
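    As a rough illustration of how such a decrement can be quantified, the illusion effect on each measurement can be taken as the difference in response between the two display configurations at matched physical target lengths, tracked across successive trials. The numbers below are fabricated placeholders; only the computation is the point.

```python
# Minimal sketch with made-up per-trial responses (mm) for physically identical
# targets placed in the "apparently shorter" vs. "apparently longer" part of a
# Ponzo display. The illusion effect is the per-trial difference between them.
import numpy as np

shorter_config = np.array([58.0, 60.0, 62.0, 63.0, 64.0, 64.0])
longer_config = np.array([66.0, 64.0, 63.0, 64.0, 64.0, 65.0])

illusion_effect = longer_config - shorter_config
print(illusion_effect)  # a rapid decline toward zero is the pattern reported for grasping
```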

    A time-resolved proteomic and prognostic map of COVID-19

    COVID-19 is highly variable in its clinical presentation, ranging from asymptomatic infection to severe organ damage and death. We characterized the time-dependent progression of the disease in 139 COVID-19 inpatients by measuring 86 accredited diagnostic parameters, such as blood cell counts and enzyme activities, as well as untargeted plasma proteomes at 687 sampling points. We report an initial spike in a systemic inflammatory response, which is gradually alleviated and followed by a protein signature indicative of tissue repair, metabolic reconstitution, and immunomodulation. We identify prognostic marker signatures for devising risk-adapted treatment strategies and use machine learning to classify therapeutic needs. We show that the machine learning models based on the proteome are transferable to an independent cohort. Our study presents a map linking routinely used clinical diagnostic parameters to plasma proteomes and their dynamics in an infectious disease.
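    A minimal sketch of the kind of analysis summarized above, in which a classifier is trained on plasma-proteome features in one cohort and then evaluated on an independent cohort, might look like the following. The file names, label column, and choice of logistic regression are assumptions for illustration, not the authors’ pipeline.

```python
# Sketch: train a severity classifier on proteome features (cohort A) and test
# transferability on an independent cohort (cohort B). File and column names
# are hypothetical.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

train = pd.read_csv("cohort_A_proteome.csv")  # discovery cohort (assumed file)
test = pd.read_csv("cohort_B_proteome.csv")   # independent cohort (assumed file)

features = [c for c in train.columns if c != "severe"]

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(train[features], train["severe"])

# Transferability check: evaluate on a cohort never seen during training.
auc = roc_auc_score(test["severe"], model.predict_proba(test[features])[:, 1])
print(f"Independent-cohort ROC AUC: {auc:.2f}")
```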

    Affective blindsight in the absence of input from face processing regions in occipital-temporal cortex

    © 2017 Elsevier Ltd. Previous research suggests that the implicit recognition of emotional expressions may be carried out by pathways that bypass primary visual cortex (V1) and project to the amygdala. Some of the strongest evidence supporting this claim comes from case studies of “affective blindsight” in which patients with V1 damage can correctly guess whether an unseen face was depicting a fearful or happy expression. In the current study, we report a new case of affective blindsight in patient MC, who is cortically blind following extensive bilateral lesions to V1, as well as to face and object processing regions in her ventral visual stream. Despite her large lesions, MC has preserved motion perception, which is related to sparing of the motion-sensitive region MT+ in both hemispheres. To examine affective blindsight in MC, we asked her to perform gender and emotion discrimination tasks in which she had to guess, using a two-alternative forced-choice procedure, whether the face presented was male or female, happy or fearful, or happy or angry. In addition, we also tested MC in a four-alternative forced-choice target localization task. Results indicated that MC was not able to determine the gender of the faces (53% accuracy) or localize targets in the forced-choice task. However, she was able to determine, at above-chance levels, whether the face presented was depicting a happy or fearful (67%, p = .006) or a happy or angry (64%, p = .025) expression. Interestingly, although MC was better than chance at discriminating between emotions in faces when asked to make rapid judgments, her performance fell to chance when she was asked to provide subjective confidence ratings about her performance. These data lend further support to the idea that there is a non-conscious visual pathway that bypasses V1 and is capable of processing affective signals from facial expressions without input from higher-order face and object processing regions in the ventral visual stream.
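    The above-chance claims rest on forced-choice accuracy tested against the 50% guessing rate of a two-alternative task. A minimal sketch of such a test is below; the trial count is a hypothetical placeholder, since the abstract reports only percentages and p-values.

```python
# Sketch of a one-sided binomial test of 2AFC accuracy against chance (50%).
# The number of trials is assumed for illustration only.
from scipy.stats import binomtest

n_trials = 120                        # hypothetical trial count
n_correct = round(0.67 * n_trials)    # e.g., 67% correct

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, p = {result.pvalue:.3f}")
```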

    Real-time vision, tactile cues, and visual form agnosia: removing haptic feedback from a “natural” grasping task induces pantomime-like grasps

    Investigators study the kinematics of grasping movements (prehension) under a variety of conditions to probe visuomotor function in normal and brain-damaged individuals. When patient DF, who suffers from visual form agnosia, performs natural grasps, her in-flight hand aperture is scaled to the widths of targets ('grip scaling') that she cannot discriminate amongst. In contrast, when DF's pantomime grasps are based on a memory of a previewed object, her grip scaling is very poor. Her failure on this task has been interpreted as additional support for the dissociation between the use of object vision for action and object vision for perception. Curiously, however, when DF directs her pantomimed grasps towards a displaced imagined copy of a visible object, such that her fingers make contact with the surface of the table, her grip scaling does not appear to be particularly poor. In the first of two experiments, we revisit this previous work and show that her grip scaling in this real-time pantomime grasping task does not differ from controls, suggesting that terminal tactile feedback from a proxy of the target can maintain DF's grip scaling. In a second experiment with healthy participants, we tested a recent variant of a grasping task in which no tactile feedback is available (i.e. no haptic feedback) by comparing the kinematics of target-directed grasps with and without haptic feedback to those of real-time pantomime grasps without haptic feedback. Compared to natural grasps, removing haptic feedback increased reaction time, slowed the velocity of the reach, reduced grip aperture, sharpened the slopes relating grip aperture to target width, and reduced the final grip aperture. All of these effects were also observed in the pantomime grasping task. Taken together, these results provide compelling support for the view that removing haptic feedback induces a switch from real-time visual control to one that depends more on visual perception and cognitive supervision.
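    The "slopes relating grip aperture to target width" referred to above are ordinary regression slopes: they describe how many millimetres the hand opens for each additional millimetre of target width. A minimal sketch of the computation follows; all numbers are fabricated placeholders.

```python
# Sketch: quantify grip scaling by regressing peak grip aperture on target
# width. Values are made up purely to show the calculation.
import numpy as np

target_width = np.array([20, 30, 40, 50, 20, 30, 40, 50], dtype=float)   # mm, assumed
peak_aperture = np.array([55, 63, 70, 79, 57, 64, 72, 80], dtype=float)  # mm, assumed

# Ordinary least-squares fit; a steeper slope is what the abstract reports
# when haptic feedback is removed.
slope, intercept = np.polyfit(target_width, peak_aperture, 1)
print(f"grip-scaling slope = {slope:.2f} mm aperture per mm width")
```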

    Erratum: Grip Constancy but Not Perceptual Size Constancy Survives Lesions of Early Visual Cortex (Current Biology (2020) 30(18) (3680–3686.e5), (S0960982220310186), (10.1016/j.cub.2020.07.026))

    © 2020 (Current Biology 30, 3680–3686.e1–e5; September 21, 2020) In the original Figure 3, the bottom two images were mislabeled as “7.1 cm 6.2 cm 5.5 cm 5 cm.” They should read as follows: 7.6 cm, 6.3 cm, 5.0 cm, and 3.8 cm, respectively. This has been corrected online. The authors apologize for any confusion the error may have caused.

    Unusual hand postures but not familiar tools show motor equivalence with precision grasping.

    A central question in sensorimotor control is whether or not actions performed with the hands and corresponding actions performed with tools share a common underlying motor plan, even though different muscles and effectors are engaged. There is certainly evidence that tools used to extend the reach of the limb can be incorporated into the body schema after training. But even so, it is not clear whether or not actions such as grasping with tools and grasping with the fingers share the same programming network, i.e. show 'motor equivalence'. Here we first show that feedback-appropriate motor programming for grasps with atypical hand postures readily transfers to stereotypical precision grasps. In stark contrast, however, we find no evidence for an analogous transfer of the programming for grasps using tools to the same stereotypical precision grasps. These findings have important implications for our understanding of the body schema. Although the extension of the limb that is afforded by tool use may be incorporated into the body schema, the programming of a grasping movement made with tools appears to resist such incorporation. It could be the case that the proprioceptive signals from the limb can be easily updated to reflect the end of a tool held in the hand, but the motor programs and sensory signals associated with grasping with the thumb and finger cannot be easily adapted to control the opening and closing of a tool. Instead, new but well-practiced motor programs are put in place for tool use that do not exhibit motor equivalence with manual grasping.