
    Common Cortical Loci Are Activated during Visuospatial Interpolation and Orientation Discrimination Judgements

    There is a wealth of literature on the role of short-range interactions between low-level orientation-tuned filters in the perception of discontinuous contours. However, little is known about how spatial information is integrated across more distant regions of the visual field in the absence of explicit local orientation cues, a process referred to here as visuospatial interpolation (VSI). To examine the neural correlates of VSI, high-field functional magnetic resonance imaging was used to study brain activity while observers either judged the alignment of three Gabor patches by a process of interpolation or discriminated the local orientation of the individual patches. Relative to a fixation baseline, the two tasks activated a largely overlapping network of regions within the occipito-temporal, occipito-parietal and frontal cortices. Activated clusters specific to the orientation task (orientation > interpolation) included the caudal intraparietal sulcus, an area whose role in orientation encoding per se has been hotly disputed. Surprisingly, there were few task-specific activations associated with visuospatial interpolation (VSI > orientation), suggesting that largely common cortical loci were activated by the two experimental tasks. These data are consistent with previous studies suggesting that higher-level grouping processes, putatively involved in VSI, are automatically engaged when the spatial properties of a stimulus (e.g. size, orientation or relative position) are used to make a judgement.

    Perception, action and the cortical visual streams

    Over a decade ago, Milner and Goodale suggested that perception and action are subserved by two distinct cortical visual streams. The ventral stream, projecting from striate cortex to inferotemporal cortex, is involved in the perceptual identification of objects. The dorsal stream, projecting from striate cortex to posterior parietal cortex, is involved in visually guided actions. A series of experiments was carried out and is presented within this thesis to investigate how various aspects of visuomotor behaviour fit into such a model. A range of techniques was employed, including: (1) behavioural studies with patients with optic ataxia (dorsal stream damage) and visual form agnosia (ventral stream damage); (2) transcranial magnetic stimulation (TMS) in healthy subjects; (3) functional magnetic resonance imaging (fMRI) in healthy subjects. The following conclusions were made: (1) obstacle avoidance behaviour is impaired in patients with optic ataxia due to damage to the dorsal stream; (2) obstacle avoidance is intact in patients with visual form agnosia because damage is restricted to the ventral stream; (3) obstacle avoidance is mediated by the dorsal stream when an immediate response is required, whereas under delayed conditions the ventral stream comes into play; (4) visual form agnosic patients can use looming information to catch moving objects and are capable of responding to online perturbations due to an intact dorsal stream; (5) V5/MT+ is involved in motion processing for perception and action and does not belong exclusively to the dorsal or ventral stream; (6) the dorsal stream is only sensitive to orientation changes if the stimuli are graspable. While some modifications of the original distinction are necessary, the experiments presented within this thesis suggest that this model has, for the most part, withstood the test of time and provides a useful framework for understanding various aspects of perception and action.

    How the brain controls hand actions: TMS, fMRI and behavioural studies

    This thesis focused on testing the predictions made in Milner and Goodale’s model and reports findings from experiments investigating how inputs from both the dorsal and the ventral streams are required when we perform hand actions with objects (Chapter 2) and tools (Chapters 3 and 4), using paradigms such as real and pantomimed grasping and techniques such as transcranial magnetic stimulation, motion tracking of hand movements and cutting-edge fMRI multivoxel pattern analysis. The primary aim was to gain new insight into the role of the dorsal and the ventral visual streams in real grasping and pantomiming, and to understand which specific aspects of objects, and of the movements associated with them, are represented within the two streams. The first experiment (Chapter 2) used TMS to examine the causal role of the anterior intraparietal and lateral occipital regions in real and pantomimed grasping of objects. The results showed that both real object grasping and pantomimed actions without the object in hand require the left dorsal stream, but that information from the ventral stream is additionally required for pantomiming. The experiments in Chapters 3 and 4 investigated how tools and tool-related actions are represented within the dorsal and the ventral streams (Chapter 3) and whether different action end-goals affect early grasping kinematics (Chapter 4). Using MVPA, we showed that both dorsal and ventral stream regions represent information about functional and structural manipulation knowledge of tools. Moreover, we showed that both streams represent tool identity, which is in line with our behavioural finding that tool identity affects grasping kinematics. The current work provides a detailed understanding of how the dorsal and the ventral streams interact in tool processing and proposes a more sophisticated view of the distributed representations across the two streams. These findings open up a number of research avenues, help us understand how actions are disrupted in brain-damaged patients, and advance the development of neural prosthetics.

    The cognitive neuroscience of prehension: recent developments

    Prehension, the capacity to reach and grasp, is the key behavior that allows humans to change their environment. It continues to serve as a remarkable experimental test case for probing the cognitive architecture of goal-oriented action. This review focuses on recent experimental evidence that enhances or modifies how we might conceptualize the neural substrates of prehension. Emphasis is placed on studies that consider how precision grasps are selected and transformed into motor commands. Then, the mechanisms that extract action-relevant information from vision and touch are considered. These include consideration of how parallel perceptual networks within parietal cortex, along with the ventral stream, are connected and share information to achieve common motor goals. On-line control of grasping action is discussed within a state-estimation framework. The review ends with a consideration of how prehension fits within larger action repertoires that solve more complex goals, and of the possible cortical architectures needed to organize these actions.

    State-dependent TMS reveals representation of affective body movements in the anterior intraparietal cortex

    In humans, recognition of others’ actions involves a cortical network that comprises, among other cortical regions, the posterior superior temporal sulcus (pSTS), where biological motion is coded, and the anterior intraparietal sulcus (aIPS), where movement information is elaborated in terms of meaningful goal-directed actions. This action observation system (AOS) is thought to encode neutral voluntary actions, and possibly some aspects of the affective motor repertoire, but the role of the AOS areas in processing affective kinematic information has never been examined. Here we investigated whether the action observation system plays a role in representing dynamic emotional bodily expressions. In the first experiment, we assessed behavioural adaptation effects of observed affective movements. Participants watched series of happy or fearful whole-body point-light displays (PLDs) as adapters and were then asked to perform an explicit categorization of the emotion expressed in test PLDs. Participants were slower at categorizing either of the two emotions when it was congruent with the emotion in the adapter sequence. We interpreted this effect as adaptation to the emotional content of PLDs. In the second experiment, we combined this paradigm with TMS applied over the right aIPS, the pSTS, or the right half of the occipital pole (corresponding to Brodmann’s area 17 and serving as a control site) to examine the neural locus of the adaptation effect. TMS over the aIPS (but not over the other sites) reversed the behavioural cost of adaptation, specifically for fearful contents. This demonstrates that the aIPS contains an explicit representation of affective body movements.

    Cortical Mechanisms for Transsaccadic Perception of Visual Object Features

    The cortical correlates of transsaccadic perception (i.e., the ability to perceive, maintain, and update information across rapid eye movements, or saccades; Irwin, 1991) have been little investigated. Previously, Dunkley et al. (2016) found evidence of transsaccadic updating of object orientation in specific intraparietal (i.e., supramarginal gyrus, SMG) and extrastriate occipital (putative V4) regions. Based on these findings, I hypothesized that transsaccadic perception may rely on a single cortical mechanism. In this dissertation, I first investigated whether activation in the previously identified regions would generalize to another modality (i.e., motor/grasping) for the same feature (orientation) change, using a functional magnetic resonance imaging (fMRI) event-related paradigm in which participants grasped a three-dimensional rotatable object in either fixation or saccade conditions. The findings from this experiment further support the role of SMG in transsaccadic updating of object orientation, and provide a novel view of traditional reach/grasp-related regions in their ability to update grasp-related signals across saccades. In the second experiment, I investigated whether parietal cortex (e.g., SMG) plays a general role in the transsaccadic perception of other low-level object features, such as spatial frequency. The results point to the engagement of a different, posteromedial extrastriate (i.e., cuneus) region for transsaccadic perception of spatial frequency changes. This indirect assessment of transsaccadic interactions for different object features suggests that feature-sensitive mechanisms may exist. In the third experiment, I tested the cortical correlates directly for two object features: orientation and shape. In this experiment, only posteromedial extrastriate cortex was associated with transsaccadic feature updating in the feature discrimination task, as it showed both saccade and feature modulations. Overall, the results of these three neuroimaging studies suggest that transsaccadic perception may be brought about not by a single, general mechanism but by multiple, feature-dependent cortical mechanisms. Specifically, the saccade system communicates with inferior parietal cortex for transsaccadic judgements of orientation in an identified object, whereas a medial occipital system is engaged for feature judgements related to object identity.

    Decoding motor intentions from human brain activity

    “You read my mind.” Although this simple everyday expression implies ‘knowledge or understanding’ of another’s thinking, true ‘mind-reading’ capabilities seem confined to the domains of Hollywood and science fiction. In the field of sensorimotor neuroscience, however, significant progress in this area has come from mapping characteristic changes in brain activity that occur prior to an action being initiated. For instance, invasive neural recordings in non-human primates have significantly increased our understanding of how highly cognitive and abstract processes like intentions and decisions are represented in the brain by showing that it is possible to decode or ‘predict’ upcoming sensorimotor behaviors (e.g., movements of the arm/eyes) based on preceding changes in the neuronal output of parieto-frontal cortex, a network of areas critical for motor planning. In the human brain, however, a successful counterpart for this predictive ability and a similarly detailed understanding of intention-related signals in parieto-frontal cortex have remained largely unattainable due to the limitations of non-invasive brain mapping techniques like functional magnetic resonance imaging (fMRI). Knowing how and where in the human brain intentions or plans for action are coded is not only important for understanding the neuroanatomical organization and cortical mechanisms that govern goal-directed behaviours like reaching, grasping and looking (movements critical to our interactions with the world) but also for understanding homologies between human and non-human primate brain areas, allowing the transfer of neural findings between species. In the current thesis, I employed multi-voxel pattern analysis (MVPA), a new fMRI technique that has made it possible to examine the coding of neural information at a more fine-grained level than previously available. I used fMRI MVPA to examine how and where movement intentions are coded in human parieto-frontal cortex and specifically asked the question: What types of predictive information about a subject's upcoming movement can be decoded from preceding changes in neural activity? Project 1 first used fMRI MVPA to determine, largely as a proof of concept, whether or not specific object-directed hand actions (grasps and reaches) could be predicted from intention-related brain activity patterns. Next, Project 2 examined whether effector-specific (arm vs. eye) movement plans, along with their intended directions (left vs. right), could also be decoded prior to movement. Lastly, Project 3 examined exactly where in the human brain higher-level movement goals were represented independently of how those goals were to be implemented. To this aim, Project 3 had subjects either grasp or reach toward an object (two different motor goals) using either their hand or a novel tool (with kinematics opposite to those of the hand). In this way, the goal of the action (grasping vs. reaching) could be maintained across actions, but the way in which those actions were kinematically achieved changed in accordance with the effector (hand or tool). All three projects employed a similar event-related delayed-movement fMRI paradigm that separated planning-related from execution-related neural responses in time, allowing us to isolate the preparatory patterns of brain activity that form prior to movement. Project 1 found that the plan-related activity patterns in several parieto-frontal brain regions were predictive of different upcoming hand movements (grasps vs. reaches).
Moreover, we found that several parieto-frontal brain regions, similar to what had previously been demonstrated only in non-human primates, could actually be characterized according to the types of movements they can decode. Project 2 found a variety of functional subdivisions: some parieto-frontal areas discriminated movement plans for the different reach directions, some for the different eye movement directions, and a few areas accurately predicted upcoming directional movements for both the hand and the eye. This latter finding demonstrates, similar to what has been shown previously in non-human primates, that some brain areas code for the end motor goal (i.e., target location) independent of the effector used. Project 3 identified regions that decoded upcoming hand actions only, upcoming tool actions only, and, rather interestingly, areas that predicted actions with both effectors (hand and tool). Notably, some of these latter areas were found to represent the higher-level goals of the movement (grasping vs. reaching) rather than the specific lower-level kinematics (hand vs. tool) necessary to implement those goals. Taken together, these findings offer substantial new insights into the types of intention-related signals contained in human brain activity patterns and specify a hierarchical neural architecture spanning parieto-frontal cortex that guides the construction of complex object-directed behaviors.
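
    A minimal sketch of the kind of cross-validated MVPA decoding described above, in which plan-period voxel patterns are used to predict the upcoming action (grasp vs. reach). The data, region size, trial counts and the linear-SVM choice here are illustrative assumptions (simulated with scikit-learn), not the thesis's actual analysis pipeline.

        # Illustrative sketch only: decoding a planned action (grasp vs. reach)
        # from simulated planning-phase voxel patterns for one hypothetical ROI.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score, StratifiedKFold

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 40, 200                  # hypothetical counts
        labels = np.repeat([0, 1], n_trials // 2)     # 0 = planned grasp, 1 = planned reach

        # Simulated single-trial plan-period patterns; a weak class-dependent
        # signal is added so the classifier has something to detect.
        patterns = rng.standard_normal((n_trials, n_voxels))
        patterns[labels == 1, :20] += 0.5

        # Linear SVM with within-pipeline standardization, scored by stratified
        # cross-validation; accuracy is compared against the 50% chance level.
        clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = cross_val_score(clf, patterns, labels, cv=cv)
        print(f"Decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")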

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.

    Frontal eye field, where art thou? Anatomy, function, and non-invasive manipulation of frontal regions involved in eye movements and associated cognitive operations

    The planning, control and execution of eye movements in 3D space rely on a distributed system of cortical and subcortical brain regions. Within this network, the Eye Fields have been described in animals as cortical regions in which electrical stimulation is able to trigger eye movements and influence their latency or accuracy. This review focuses on the Frontal Eye Field (FEF), a “hub” region located in humans in the vicinity of the precentral sulcus and the dorsal-most portion of the superior frontal sulcus. The straightforward localization of the FEF through electrical stimulation in animals is difficult to translate to the healthy human brain, particularly with non-invasive neuroimaging techniques. Hence, in the first part of this review, we describe attempts made to characterize the anatomical localization of this area in the human brain. The outcomes of functional Magnetic Resonance Imaging (fMRI), magnetoencephalography (MEG) and, particularly, non-invasive mapping methods such as Transcranial Magnetic Stimulation (TMS) are described, and the variability of FEF localization across individuals and mapping techniques is discussed. In the second part of this review, we address the role of the FEF. We explore its involvement both in the physiology of fixation, saccade, pursuit, and vergence movements and in associated cognitive processes such as attentional orienting, visual awareness and perceptual modulation. Finally, in the third part, we review recent evidence suggesting the high degree of malleability and plasticity of these regions and associated networks in response to non-invasive stimulation. The exploratory, diagnostic, and therapeutic interest of such interventions for the modulation and improvement of perception in 3D space is discussed.