
    Divisions Within the Posterior Parietal Cortex Help Touch Meet Vision

    The parietal cortex is divided into two major functional regions: the anterior parietal cortex, which includes primary somatosensory cortex, and the posterior parietal cortex (PPC), which includes the rest of the parietal lobe. The PPC contains multiple representations of space. In Dijkerman and de Haan's (see record 2007-13802-022) model, higher spatial representations are kept separate from PPC functions. This model should be developed further so that the functions of the somatosensory system are integrated with specific functions within the PPC and with higher spatial representations. Through this further specification of the model, one can make better predictions regarding functional interactions between the somatosensory and visual systems.

    Central role of somatosensory processes in sexual arousal as identified by neuroimaging techniques

    Research on the neural correlates of sexual arousal is a growing field in affective neuroscience. A new approach studying the correlation between the hemodynamic cerebral response and the autonomic genital response has enabled distinct brain areas to be identified according to their role in inducing penile erection, on the one hand, and in representing penile sensation, on the other.

    Neural correlates of hand-tool interaction

    Background: The recent advent of non-invasive functional magnetic resonance imaging (fMRI) has helped us understand how visual information is processed in the visual system, and the functional organising principles of high-order visual areas beyond striate cortex. In particular, evidence has been reported for a constellation of high-order visual areas that are highly specialised for the visual processing of different object domains such as faces, bodies, and tools. A number of accounts of the underlying principle of functional specialisation in high-order visual cortex propose that visual properties and object domain drive the category selectivity of these areas. However, recent evidence has challenged such accounts, showing that non-visual object properties and connectivity constraints between specialised brain networks can, in part, account for the visual system's functional organisation. Methodology: Here I will use fMRI to examine how areas along the visual ventral stream and dorsal action stream process visually presented hands and tools. These categories are visually dissimilar but share similar functions. By using different statistical analyses, such as univariate group and single-subject region of interest (ROI) analyses, multivariate multivoxel pattern analyses, and functional connectivity analyses, I will investigate the topics of category selectivity and the principles underlying the organisation of high-order visual areas in left occipitotemporal and left parietal cortex. Principal Findings: In the first part of this thesis I report novel evidence that, similar to socially relevant faces and bodies, left occipitotemporal and left parietal cortex house areas that are selective for the visual processing of human hands. In the second part of this thesis, I show that the visual representations of hands and tools in these areas overlap anatomically to a large extent and that the response patterns to the two categories are highly similar. As hands and tools differ in visual appearance and object domain yet share action-related properties, the results demonstrate that these category-selective responses in the visual system reflect responses to non-visual action-related object properties common to hands and tools rather than to purely visual properties or object domain. This proposition is further supported by evidence of selective functional connectivity patterns between hand/tool occipitotemporal and parietal areas. Conclusions/Significance: Overall these results indicate that high-order visual cortex is functionally organised to process both visual properties and non-visual object dimensions (e.g., action-related properties). I propose that this correspondence between hand and tool representations in ventral 'visual' and parietal 'action' areas is constrained by the necessity to connect visual object information to functionally-specific downstream networks (e.g., the frontoparietal action network) to facilitate hand-tool action-related processing.
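    The multivoxel pattern analyses mentioned above boil down to comparing distributed response patterns across conditions within a region of interest. A minimal sketch of that logic in Python, assuming hypothetical beta-estimate vectors for hands and tools (the names, sizes, and values are illustrative, not the thesis's data):

        import numpy as np
        from scipy.stats import pearsonr

        def pattern_similarity(pattern_a, pattern_b):
            # Correlate two condition patterns (one beta estimate per ROI voxel).
            r, _ = pearsonr(pattern_a, pattern_b)
            return r

        # Hypothetical data: beta estimates for 200 voxels in a left
        # occipitotemporal ROI, one vector per condition.
        rng = np.random.default_rng(0)
        hands = rng.normal(size=200)
        tools = 0.7 * hands + 0.3 * rng.normal(size=200)  # correlated by construction
        print(f"hand-tool pattern correlation: {pattern_similarity(hands, tools):.2f}")

    A high correlation between hand and tool patterns in the same voxels is the kind of evidence the abstract cites for shared, action-related coding.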

    The Neural Development of Visuohaptic Object Processing

    Thesis (Ph.D.) - Indiana University, Cognitive Science, 2015. Object recognition is ubiquitous and essential for interacting with, as well as learning about, the surrounding multisensory environment. The inputs from multiple sensory modalities converge quickly and efficiently to guide this interaction. Vision and haptics are two modalities in particular that offer redundant and complementary information regarding the geometrical (i.e., shape) properties of objects for recognition and perception. While the systems supporting visuohaptic object recognition in the brain, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS), are well-studied in adults, there is currently a paucity of research surrounding the neural development of visuohaptic processing in children. Little is known about how and when vision converges with haptics for object recognition. In this dissertation, I investigate the development of neural mechanisms involved in multisensory processing. Using functional magnetic resonance imaging (fMRI) and generalized psychophysiological interaction (gPPI) methods of functional connectivity analysis in children (4 to 5.5 years, 7 to 8.5 years) and adults, I examine the developmental changes of the brain regions underlying the convergence of visual and haptic object perception, the neural substrates supporting crossmodal processing, and the interactions and functional connections between visuohaptic systems and other neural regions. Results suggest that the complexity of sensory inputs impacts the development of neural substrates. The more complicated forms of multisensory and crossmodal object processing show protracted developmental trajectories as compared to the processing of simple, unimodal shapes. Additionally, the functional connections between visuohaptic areas weaken over time, which may facilitate the fine-tuning of other perceptual systems later in development. Overall, the findings indicate that multisensory object recognition cannot be described as a unitary process. Rather, it comprises several distinct sub-processes that follow different developmental timelines throughout childhood and into adulthood.
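    The gPPI analyses referred to above rest on a simple regression idea: task-dependent connectivity is indexed by the interaction of a seed region's timeseries with a task regressor. A toy sketch of that idea with simulated data (real gPPI implementations, e.g. in SPM or FSL, additionally deconvolve to the neural level before forming the interaction term):

        import numpy as np

        n_trs = 200
        rng = np.random.default_rng(1)
        seed = rng.normal(size=n_trs)                        # e.g., an LOC seed timeseries
        task = ((np.arange(n_trs) // 20) % 2).astype(float)  # boxcar: blocks on/off
        ppi = (seed - seed.mean()) * (task - task.mean())    # mean-centred interaction

        # Regress a target region's timeseries on [task, seed, ppi, intercept];
        # the ppi beta indexes how coupling with the seed changes with the task.
        target = 0.5 * seed + 0.8 * ppi + rng.normal(size=n_trs)
        X = np.column_stack([task, seed, ppi, np.ones(n_trs)])
        betas, *_ = np.linalg.lstsq(X, target, rcond=None)
        print(f"PPI beta (task-modulated coupling): {betas[2]:.2f}")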

    Separate channels for processing form, texture, and color: Evidence from fMRI adaptation and visual object agnosia

    Previous neuroimaging research suggests that although object shape is analyzed in the lateral occipital cortex, surface properties of objects, such as color and texture, are dealt with in more medial areas, close to the collateral sulcus (CoS). The present study sought to determine whether there is a single medial region concerned with surface properties in general or whether instead there are multiple foci independently extracting different surface properties. We used stimuli varying in their shape, texture, or color, and tested healthy participants and 2 object-agnosic patients, in both a discrimination task and a functional MR adaptation paradigm. We found a double dissociation between medial and lateral occipitotemporal cortices in processing surface (texture or color) versus geometric (shape) properties, respectively. In Experiment 2, we found that the medial occipitotemporal cortex houses separate foci for color (within anterior CoS and lingual gyrus) and texture (caudally within posterior CoS). In addition, we found that areas selective for shape, texture, and color individually were quite distinct from those that respond to all of these features together (shape and texture and color). These latter areas appear to correspond to those associated with the perception of complex stimuli such as faces and places.
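    The fMR-adaptation logic used here reduces to one line of arithmetic: a region coding a feature responds less when that feature repeats than when it changes. A minimal sketch with made-up condition means (the percent-signal-change values below are illustrative only, not the study's data):

        import numpy as np

        # Hypothetical mean percent signal change in a collateral-sulcus ROI.
        novel = np.array([0.92, 0.88, 0.95])     # colour changes across repetitions
        repeated = np.array([0.61, 0.58, 0.66])  # colour repeats across repetitions

        adaptation_index = (novel.mean() - repeated.mean()) / novel.mean()
        print(f"adaptation index: {adaptation_index:.2f}")  # > 0: ROI is sensitive to colour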

    Contribution of the posterior parietal cortex in reaching, grasping, and using objects and tools

    Neuropsychological and neuroimaging data suggest a differential contribution of posterior parietal regions during the different components of a transitive gesture. Reaching requires the integration of object location and body position coordinates, and reaching tasks elicit bilateral activation in different foci along the intraparietal sulcus. Grasping requires a visuomotor match between the object's shape and the hand's posture. Lesion studies and neuroimaging confirm the importance of the anterior part of the intraparietal sulcus for human grasping. Reaching and grasping reveal bilateral activation that is generally more prominent on the side contralateral to the hand used or the hemifield stimulated. Purposeful behavior with objects and tools can be assessed in a variety of ways, including actual use, pantomimed use, and pure imagery of manipulation. All tasks have been shown to elicit robust activation over the left parietal cortex in neuroimaging, but lesion studies have not always confirmed these findings. Compared to pantomimed or imagined gestures, actual object and tool use typically produces activation over the left primary somatosensory region. Neuroimaging studies on pantomiming or imagery of tool use in healthy volunteers revealed neural responses in possibly separate foci in the left supramarginal gyrus. In sum, the parietal contribution to reaching and grasping of objects seems to depend on a bilateral network of intraparietal foci that appear organized along gradients of sensory and effector preferences. Dorsal and medial parietal cortex appears to contribute to the online monitoring/adjusting of the ongoing prehensile action, whereas the functional use of objects and tools seems to involve the inferior lateral parietal cortex. This functional use reveals a clearly left-lateralized activation pattern that may be tuned to the integration of acquired knowledge in the planning and guidance of the transitive movement.

    Neural Coding of Real and Implied Motion in the Human Brain

    Perceiving and processing visual motion is crucial for all animals, including humans. Regions of the human brain that are responsive to real motion have been extensively studied with different neuroimaging methods. However, the neural codes related to real motion have been primarily addressed using highly reductionist and mostly artificial motion stimuli, typically so-called random dot kinematograms. Studies using more natural forms of motion, which the brain evolved and developed to deal with, are comparably rare. Moreover, real, physical motion is not the only type of stimulus that induces motion perception in humans. Implied motion stimuli also induce motion perception although the stimuli carry no physical motion information. Implied motion stimuli are, for example, still images containing a snapshot of an object in motion. Various contextual cues mediate the percept of motion, including the relation of the object to its background and, in particular, the object's composition and axial position in the image, which mediate both the impression of implied motion and its direction. This means that at the neural level, object processing must be used to generate the implied motion percept. The work described in this thesis investigated the neural coding of real and implied motion in the human brain. The investigation used functional brain imaging of human adults; data were collected with a 3-Tesla MRI scanner while the participants viewed a variety of distinct visual stimuli. The visual stimuli contained directional real and implied motion and were created specifically for this study. For the real motion stimuli, the aim was to engage a maximal number of directionally selective units, in order to maximize the overlap with the subset of units potentially involved in coding implied motion. Hence, real motion stimuli were created such that the static component frames had natural image statistics (known to activate neurons more effectively) by using Fourier-scrambled natural images, and motion was presented at a wide range of velocities. Similarly, implied motion stimuli were derived from photographs of natural scenes. They were created by placing objects such as airplanes, birds, cars, or snapshots of walking humans on a set of contextual background images such as skylines or streets. For both real and implied motion, stimuli for four directions were created: forwards, backwards, leftwards, and rightwards.
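    Fourier scrambling, as used for the real-motion stimuli above, keeps an image's amplitude spectrum (its natural image statistics) while destroying recognizable content by randomizing the phase spectrum. A minimal sketch of that technique, assuming a grayscale image array (the random stand-in image is illustrative):

        import numpy as np

        def phase_scramble(image, rng):
            # Keep the amplitude spectrum, randomize the phase spectrum, invert.
            amplitude = np.abs(np.fft.fft2(image))
            # The phase of the FFT of real-valued noise is conjugate-symmetric,
            # so the inverse transform stays (numerically) real.
            random_phase = np.angle(np.fft.fft2(rng.normal(size=image.shape)))
            return np.real(np.fft.ifft2(amplitude * np.exp(1j * random_phase)))

        rng = np.random.default_rng(2)
        photo = rng.normal(size=(256, 256))  # stand-in for a grayscale photograph
        frame = phase_scramble(photo, rng)   # one static component frame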

    Location representations of objects in cluttered scenes in the human brain

    When we perceive a visual scene, we usually see an arrangement of multiple cluttered and partly overlapping objects, like a park with trees and people in it. Spatial attention helps us to prioritize relevant portions of such scenes to efficiently interact with our environments. In previous experiments on object recognition, objects were often presented in isolation, and these studies found that the location of objects is encoded early in time (before ∼150 ms) and in early visual cortex or in the dorsal stream. However, in real life objects rarely appear in isolation but are instead embedded in cluttered scenes. Encoding the location of an object in clutter might require fundamentally different neural computations. This dissertation therefore addressed the question of how location representations of objects on cluttered backgrounds are encoded in the human brain. To answer this question, we investigated where in cortical space and when in neural processing time location representations emerge when objects are presented on cluttered backgrounds, and what role spatial attention plays in the encoding of object location. We addressed these questions in two studies, both including fMRI and EEG experiments. The results of the first study showed that location representations of objects on cluttered backgrounds emerge along the ventral visual stream, peaking in region LOC with a temporal delay that was linked to recurrent processing. The second study showed that spatial attention modulated those location representations in mid- and high-level regions along the ventral stream and late in time (after ∼150 ms), independently of whether backgrounds were cluttered or not. These findings show that location representations emerge during late stages of processing, both in cortical space and in neural processing time, when objects are presented on cluttered backgrounds, and that they are enhanced by spatial attention. Our results provide a new perspective on visual information processing in the ventral visual stream and on the temporal dynamics of location processing. Finally, we discuss how shared neural substrates of location and category representations in the brain might improve object recognition for real-world vision.
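    The EEG results above follow the standard time-resolved decoding recipe: train and test a classifier on the condition labels separately at every time point, then ask when accuracy rises above chance (here, when location information emerges, reportedly after ∼150 ms). A minimal sketch with simulated data (the shapes and the classifier choice are assumptions, not the studies' pipeline):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(3)
        n_trials, n_channels, n_times = 120, 64, 100
        eeg = rng.normal(size=(n_trials, n_channels, n_times))
        location = rng.integers(0, 4, size=n_trials)  # four possible object locations

        accuracy = np.empty(n_times)
        for t in range(n_times):
            accuracy[t] = cross_val_score(
                LinearDiscriminantAnalysis(), eeg[:, :, t], location, cv=5
            ).mean()
        # Chance is 0.25; a sustained rise above it marks when location
        # representations emerge in the signal.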

    Change blindness: eradication of gestalt strategies

    Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149-164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
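    The spatial manipulation in the control condition is simple radial geometry: each rectangle moves along the line (spoke) joining it to fixation, by 1 degree of visual angle inward or outward. A minimal sketch, assuming positions expressed in degrees of visual angle relative to fixation (the random sign per item is an assumption):

        import numpy as np

        def shift_along_spoke(position, delta_deg, rng):
            # Move a 2-D position radially toward or away from fixation.
            eccentricity = np.linalg.norm(position)
            sign = rng.choice([-1.0, 1.0])
            return position * (eccentricity + sign * delta_deg) / eccentricity

        rng = np.random.default_rng(4)
        rect = np.array([3.0, 4.0])               # a rectangle 5 deg from fixation
        print(shift_along_spoke(rect, 1.0, rng))  # now 4 or 6 deg out, same direction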

    The neuroscience of vision-based grasping: a functional review for computational modeling and bio-inspired robotics

    The topic of vision-based grasping is being widely studied, using various techniques and with different goals, in humans and in other primates. The fundamental related findings are reviewed in this paper, with the aim of providing researchers from different fields, including intelligent robotics and neural computation, a comprehensive but accessible view of the subject. A detailed description of the principal sensorimotor processes and the brain areas involved in them is provided from a functional perspective, in order to make this survey especially useful for computational modeling and bio-inspired robotic applications.