
    Non-holistic coding of objects in lateral occipital complex with and without attention

    A fundamental issue in visual cognition is whether high-level visual areas code objects in a part-based or a view-based (holistic) format. By examining the viewpoint invariance of object recognition, previous behavioral and neuroimaging studies have yielded ambiguous results, supporting both types of representational formats. A critical factor distinguishing the two formats could be the availability of attentional resources, as a number of studies have found greater viewpoint invariance for attended compared to unattended objects. It has therefore been suggested that attention is necessary to enable part-based representations, whereas holistic representations are automatically activated irrespective of attention. In this functional magnetic resonance imaging study we used a multivariate approach to probe the format of object representations in human lateral occipital complex (LOC) and its dependence on attention. We presented human participants with intact and half-split versions of objects that were either attended or unattended. Cross-classifying between intact and split objects, we found that the object-related information coded in activation patterns of intact objects is fully preserved in the patterns of split objects and vice versa. Importantly, the generalization between intact and split objects did not depend on attention. Our findings demonstrate that LOC codes objects in a non-holistic format, both in the presence and absence of attention.
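    The cross-classification logic described in this abstract (train a decoder on one stimulus format, test it on the other, and vice versa) can be sketched as follows. This is a minimal illustration on synthetic data; the array shapes, the classifier, and all variable names are assumptions, not the authors' actual analysis pipeline.

```python
# Minimal cross-classification sketch on synthetic "voxel pattern" data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels, n_objects = 80, 200, 4

# Placeholder LOC activation patterns: each trial shows one of four objects,
# either intact or half-split; both formats share the same object signal here.
labels = rng.integers(0, n_objects, size=n_trials)
object_signal = rng.normal(size=(n_objects, n_voxels))
intact = object_signal[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
split = object_signal[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

def cross_decode(train_x, train_y, test_x, test_y):
    """Train an object classifier on one format and test it on the other."""
    clf = LogisticRegression(max_iter=1000).fit(train_x, train_y)
    return clf.score(test_x, test_y)

# Average the two cross-classification directions (intact->split, split->intact).
acc = 0.5 * (cross_decode(intact, labels, split, labels)
             + cross_decode(split, labels, intact, labels))
print(f"mean intact<->split cross-decoding accuracy: {acc:.2f}")
```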

    The neural coding of properties shared by faces, bodies and objects

    Previous studies have identified relatively separated regions of the brain that respond strongly when participants view images of either faces, bodies or objects. The aim of this thesis was to investigate how and where in the brain shared properties of faces, bodies and objects are processed. We selected three properties that are shared by faces and bodies: shared categories (sex and weight), shared identity and shared orientation (i.e. facing direction). We also investigated one property shared by faces and objects, the tendency to process a face or object as a whole rather than by its parts, which is known as holistic processing. We hypothesized that these shared properties might be encoded separately for faces, bodies and objects in the previously defined domain-specific regions, or alternatively that they might be encoded in an overlapping or shared code in those or other regions. In all of the studies in this thesis, we used fMRI to record the brain activity of participants viewing images of faces and bodies or objects that showed differences in the shared properties of interest. We then investigated the neural responses these stimuli elicited in a variety of specifically localized brain regions responsive to faces, bodies or objects, as well as across the whole brain. Our results showed evidence for a mix of overlapping coding, shared coding and domain-specific coding, depending on the particular property and the level of abstraction of its neural coding. We found we could decode face and body categories, identities and orientations from both face- and body-responsive regions, showing that these properties are encoded in overlapping brain regions. We also found that non-domain-specific brain regions are involved in holistic face processing. We identified shared coding of orientation and weight in the occipital cortex and shared coding of identity in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, demonstrating that a variety of brain regions combine face and body information into a common code. In contrast to these findings, we found evidence that high-level visual transformations may be predominantly processed in domain-specific regions, as we could most consistently decode body categories across image size and body identity across viewpoint from body-responsive regions. In conclusion, this thesis furthers our understanding of the neural coding of face, body and object properties and gives new insights into the functional organisation of occipitotemporal cortex.

    Perceptual Learning Of Object Shape

    Recognition of objects is accomplished through the use of cues that depend on internal representations of familiar shapes. We used a paradigm of perceptual learning during visual search to explore what features human observers use to identify objects. Human subjects were trained to search for a target object embedded in an array of distractors, until their performance improved from chance levels to over 80% of trials in an object-specific manner. We determined the role of specific object components in the recognition of the object as a whole by measuring the transfer of learning from the trained object to other objects sharing components with it. Transfer to untrained objects was observed, and it depended on their geometric relationship to the trained object. Novel objects that shared a component with the trained object were identified at much higher levels than those that did not, and this could be used as an indicator of which features of the object were important for recognition. Training on an object transferred to the less complex components of the object when these components were embedded in an array of distractors of similar complexity. There was transfer to the different components of the object, regardless of how well they distinguished the object from distractors. These results suggest that objects are not represented in a holistic manner during learning, but that their individual components are encoded. Transfer between objects was not complete, and occurred for more than one component, suggesting that a joint involvement of multiple components was necessary for full performance. The sequence of this learning indicated a possible underlying mechanism. Subjects improved first in a single quadrant of the visual field, and the improvement then spread sequentially to the other quadrants. This location specificity of the improvement suggests that, with training, encoding of information about object shape occurs in early, retinotopically mapped cortical areas. fMRI work suggests that the learning of novel objects in this manner involves a reciprocal switch between two cortical networks, one involving the normally object-sensitive regions of LOC, and one involving the temporal and parietal cortices.
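    The transfer measure implied here, how much of the trained-object improvement carries over to untrained objects that do or do not share a component, can be expressed as a simple ratio. A minimal sketch with made-up accuracy values; the baseline and the exact index are assumptions, not the authors' reported measure.

```python
# Toy transfer-of-learning index on hypothetical search accuracies.
baseline = 0.25        # assumed chance-level accuracy before training
acc_trained = 0.85     # trained object after training
acc_shared = 0.65      # untrained object sharing a component with the trained one
acc_unshared = 0.35    # untrained object sharing no component

def transfer_index(acc_untrained, acc_trained, baseline):
    """Fraction of the trained-object improvement that carries over."""
    return (acc_untrained - baseline) / (acc_trained - baseline)

print("transfer, shared component:  ", round(transfer_index(acc_shared, acc_trained, baseline), 2))
print("transfer, unshared component:", round(transfer_index(acc_unshared, acc_trained, baseline), 2))
```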

    Transcranial alternating current stimulation (tACS) at 40 Hz enhances face and object perception

    Neurophysiological evidence suggests that face and object recognition relies on the coordinated activity of neural populations (i.e., neural oscillations) in the gamma-band range (> 30 Hz) over the occipito-temporal cortex. To test the causal effect of gamma-band oscillations on face and object perception, we applied transcranial alternating current stimulation (tACS) in healthy volunteers (N = 60). In this single-blind, sham-controlled study, we examined whether the administration of offline tACS at gamma frequency (40 Hz) over the right occipital cortex enhances performance in the perception and memory of face and object stimuli. We hypothesized that gamma tACS would enhance the perception of both categories of visual stimuli. In line with our hypothesis, results show that 40 Hz tACS enhanced both face and object perception. This effect is process-specific (i.e., it does not affect memory), frequency-specific (i.e., stimulation at 5 Hz did not cause any behavioural change), and site-specific (i.e., stimulation of the sensory-motor cortex did not affect performance). Our findings show that high-frequency tACS modulates human visual perception, in line with neurophysiological studies showing that the perception of visual stimuli (i.e., faces and objects) is mediated by oscillations in the gamma-band range. Furthermore, this study adds insight into the design of effective neuromodulation protocols that might have implications for interventions in clinical settings.
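    As a rough illustration of the stimulation conditions contrasted in this abstract, the sketch below generates a 40 Hz gamma-band waveform and a 5 Hz control waveform. The intensity, duration, and sampling rate are assumptions; the abstract does not report the study's stimulation parameters.

```python
# Toy tACS waveforms: 40 Hz gamma condition versus 5 Hz control condition.
import numpy as np

fs = 1000           # samples per second (assumed)
duration = 2.0      # seconds shown (assumed; offline protocols run far longer)
amplitude_ma = 1.0  # peak current in mA (assumed)

t = np.arange(0, duration, 1 / fs)
gamma_tacs = amplitude_ma * np.sin(2 * np.pi * 40 * t)    # 40 Hz condition
control_tacs = amplitude_ma * np.sin(2 * np.pi * 5 * t)   # 5 Hz control condition

print("cycles delivered in the window: 40 Hz ->", int(40 * duration),
      ", 5 Hz ->", int(5 * duration))
```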

    Sensory Competition in the Face Processing Areas of the Human Brain

    The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have examined sensory competition effects among non-face stimuli, relatively little is known about such interactions in the human brain for multiple face stimuli. In the present study we analyzed the neuronal basis of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LOC), but not in the occipital face area (OFA), suggesting their different roles in sensory competition. When we increased the spatial distance among pairs of faces, the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation reduced the competition effects somewhat in the right LOC while it increased them in the left LOC. This suggests a left-hemifield dominance of sensory competition. Our results support the sensory competition theory in the processing of multiple faces and suggest that sensory competition occurs in several cortical areas in both cerebral hemispheres.
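    One simple way to quantify the kind of competition effect described here is to ask whether each added face contributes progressively less to the region's response as faces replace noise images in the display. The sketch below uses made-up response values, and the index itself is an assumption, not the measure used in the study.

```python
# Toy competition index: shrinking per-face response increments indicate
# mutual suppression among simultaneously presented faces.
import numpy as np

n_faces = np.array([1, 2, 3, 4])               # faces in the composite display
roi_response = np.array([1.0, 1.6, 1.9, 2.0])  # hypothetical FFA percent signal change

increments = np.diff(roi_response)             # response gained per added face
for n, inc in zip(n_faces[1:], increments):
    print(f"adding face {n}: response increment = {inc:.2f}")

competition_index = increments[0] - increments[-1]
print("competition index (first minus last increment):", round(competition_index, 2))
```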

    Brain wave measures of attention to human faces and non-face forms and objects in print media advertisements

    The sense of vision and the phenomenon of visual attention constitute some of the prime processes with which human beings communicate with their non-mediated and mediated environment. The objective of this study was to explore how the human brain, as part of a complex visual system, deploys its attentional resources to human face and non-face forms and objects, when tested as primary visual elements in basic print-media advertisement layouts. On the theoretical basis of a two-component attention framework that distinguishes between image-based bottom-up attention (50 ms) and task-dependent top-down attention (250 ms), it was hypothesized that faces would evoke significantly higher bottom-up attention than non-face forms, whereas non-face forms and objects would evoke significantly higher top-down attention when compared to faces. Using a repeated measures design with twenty participants, and brain wave measures, or electroencephalographic (EEG) activity, as the dependent variable, the study examined differences in attention evoked by four categories of stimuli - faces, products, product-in-use and abstract drawings - across three cortical regions of the brain: the occipital, temporal and parietal lobes. A Wilcoxon signed-ranks test showed that faces did not evoke significant bottom-up attention, whereas abstract drawings and product-in-use images evoked significant attention in both the bottom-up and top-down attention frameworks. These results suggest that processing of simple and familiar stimuli like faces might be more implicit and holistic when they are juxtaposed with more novel and complex stimuli, such as abstract drawings and products-in-use, that call forth greater attentional and cognitive resources. Implications of these results for further studies of advertising effects are discussed.
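    The statistical comparison named in this abstract, a Wilcoxon signed-ranks test on paired condition measures, can be sketched as below. The participant count matches the abstract, but the amplitude values, time-window labels and electrode pooling are assumptions.

```python
# Toy Wilcoxon signed-ranks comparison of EEG amplitudes across conditions.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_participants = 20  # repeated-measures sample, as in the abstract

# Hypothetical mean occipital amplitudes (microvolts) per participant,
# in an early (~50 ms, bottom-up) and a late (~250 ms, top-down) window.
faces = {"early": rng.normal(2.0, 0.5, n_participants),
         "late": rng.normal(2.1, 0.5, n_participants)}
drawings = {"early": rng.normal(2.3, 0.5, n_participants),
            "late": rng.normal(2.6, 0.5, n_participants)}

for window in ("early", "late"):
    stat, p = wilcoxon(faces[window], drawings[window])
    print(f"faces vs abstract drawings, {window} window: W = {stat:.1f}, p = {p:.3f}")
```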

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
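    The spatial manipulation described here, shifting each rectangle along an imaginary spoke from fixation, is a small piece of geometry. The sketch below assumes eight equally spaced spokes and a particular baseline eccentricity, neither of which is specified in the abstract.

```python
# Shift each item +/- 1 degree along the radial line joining it to fixation.
import numpy as np

rng = np.random.default_rng(2)
angles = np.deg2rad(np.arange(0, 360, 45))   # eight spokes around fixation (assumed)
eccentricity = 5.0                           # baseline eccentricity in degrees (assumed)
positions = eccentricity * np.column_stack([np.cos(angles), np.sin(angles)])

# Move each rectangle 1 degree inward or outward along its own spoke.
shift = rng.choice([-1.0, 1.0], size=len(angles))
radial_unit = positions / np.linalg.norm(positions, axis=1, keepdims=True)
shifted_positions = positions + shift[:, None] * radial_unit

print(np.round(shifted_positions, 2))
```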

    Revealing Connections in Object and Scene Processing Using Consecutive TMS and fMR-Adaptation

    When processing the visual world, our brain must perform many computations that may occur across several regions. It is important to understand communication between regions in order to understand the perceptual processes underlying the processing of our environment. We sought to determine the connectivity of object- and scene-processing regions of the cortex, which is not fully established. To determine these connections, repetitive transcranial magnetic stimulation (rTMS) and functional magnetic resonance adaptation (fMR-A) were paired together. rTMS was applied to object-selective lateral occipital cortex (LO) and scene-selective transverse occipital sulcus (TOS). Immediately after stimulation, participants underwent fMR-A, and pre- and post-TMS responses were compared. TMS disrupted remote regions, revealing connections from LO and TOS to remote object- and scene-selective regions in the occipital cortex. In addition, we report important neural correlates regarding the transfer of object-related information between modalities, from LO to regions outside the ventral network in parietal and frontal areas.
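    The pre- versus post-TMS comparison of fMR-adaptation responses can be sketched as follows: adaptation is indexed as the difference between responses to novel and repeated stimuli, and a paired test compares that index before and after stimulation. The values, the index and the test below are assumptions, not the authors' analysis.

```python
# Toy pre/post-rTMS comparison of an fMR-adaptation index in a remote region.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_participants = 16  # assumed sample size

def adaptation_index(novel, repeated):
    """Release from adaptation: novel minus repeated response."""
    return novel - repeated

pre = adaptation_index(rng.normal(1.2, 0.3, n_participants),
                       rng.normal(0.7, 0.3, n_participants))
post = adaptation_index(rng.normal(1.0, 0.3, n_participants),
                        rng.normal(0.8, 0.3, n_participants))

t, p = ttest_rel(pre, post)
print(f"adaptation pre vs post rTMS: t({n_participants - 1}) = {t:.2f}, p = {p:.3f}")
```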

    Not all visual symmetry is equal: partially distinct neural bases for vertical and horizontal symmetry

    Visual mirror symmetry plays an important role in both human and animal vision; its importance is reflected in the fact that it can be extracted automatically during early stages of visual processing. However, how this extraction is implemented at the cortical level remains an open question. Given the importance of symmetry in visual perception, one possibility is that a single network extracts all types of symmetry irrespective of axis orientation; alternatively, symmetry along different axes might be encoded by different brain regions, implying that there is no single neural mechanism for symmetry processing. Here we used fMRI-guided transcranial magnetic stimulation (TMS) to compare the neural basis of the two main types of symmetry found in the natural world, vertical and horizontal symmetry. TMS was applied over either right Lateral Occipital Cortex (LO), right Occipital Face Area (OFA) or Vertex while participants were asked to detect symmetry in low-level dot configurations. Whereas detection of vertical symmetry was impaired by TMS over both LO and OFA, detection of horizontal symmetry was delayed by stimulation of LO only. Thus, different types of visual symmetry rely on partially distinct cortical networks.
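    The low-level dot configurations used as stimuli here can be generated by mirroring a random half-pattern about the chosen axis. A minimal sketch; the dot counts and coordinate ranges are assumptions.

```python
# Generate a mirror-symmetric 2-D dot pattern about a vertical or horizontal axis.
import numpy as np

def symmetric_dots(n_pairs, axis="vertical", rng=None):
    """Return a dot pattern that is mirror-symmetric about the chosen axis."""
    rng = rng or np.random.default_rng()
    half = rng.uniform(-1.0, 1.0, size=(n_pairs, 2))   # random half-pattern (x, y)
    mirrored = half.copy()
    if axis == "vertical":        # mirror across the vertical axis: flip x
        mirrored[:, 0] *= -1
    elif axis == "horizontal":    # mirror across the horizontal axis: flip y
        mirrored[:, 1] *= -1
    else:
        raise ValueError("axis must be 'vertical' or 'horizontal'")
    return np.vstack([half, mirrored])

print(symmetric_dots(4, axis="vertical", rng=np.random.default_rng(4)))
```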