
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This suggests two things: (i) that Gestalt grouping is not used as a strategy in these tasks, and (ii) that it lends further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
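    As a rough illustration of the position manipulation described above, the following Python sketch shifts each rectangle's centre by ±1 degree along the imaginary spoke joining it to the central fixation point. The coordinates, eccentricity, and array layout are assumptions for illustration only, not the study's actual stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def shift_along_spoke(point, fixation, shift_deg):
    """Move a point radially (along its spoke from fixation) by shift_deg degrees."""
    v = np.asarray(point, float) - np.asarray(fixation, float)
    direction = v / np.linalg.norm(v)            # unit vector along the spoke
    return np.asarray(fixation, float) + v + direction * shift_deg

fixation = (0.0, 0.0)
# hypothetical centres of the eight rectangles, equally spaced at 5 deg eccentricity
angles = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
centres = np.column_stack([5.0 * np.cos(angles), 5.0 * np.sin(angles)])

shifts = rng.choice([-1.0, 1.0], size=len(centres))   # random +/-1 degree per rectangle
shifted = np.array([shift_along_spoke(c, fixation, s) for c, s in zip(centres, shifts)])
print(np.round(shifted, 2))
```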

    Neural Representations of a Real-World Environment

    The ability to represent the spatial structure of the environment is critical for successful navigation. Extensive research using animal models has revealed the existence of specialized neurons that appear to code for spatial information in their firing patterns. However, little is known about which regions of the human brain support representations of large-scale space. To address this gap in the literature, we performed three functional magnetic resonance imaging (fMRI) experiments aimed at characterizing the representations of locations, headings, landmarks, and distances in a large environment for which our subjects had extensive real-world navigation experience: their college campus. We scanned University of Pennsylvania students while they made decisions about places on campus and then tested for spatial representations using multivoxel pattern analysis and fMRI adaptation. In Chapter 2, we tested for representations of the navigator's current location and heading, information necessary for self-localization. In Chapter 3, we tested whether these location and heading representations were consistent across perception and spatial imagery. Finally, in Chapter 4, we tested for representations of landmark identity and the distances between landmarks. Across the three experiments, we observed that specific regions of medial temporal and medial parietal cortex supported long-term memory representations of navigationally relevant spatial information. These results serve to elucidate the functions of these regions and offer a framework for understanding the relationship between spatial representations in the medial temporal lobe and in high-level visual regions. We discuss our findings in the context of the broader spatial cognition literature, including implications for studies of both humans and animal models.

    Reading visually embodied meaning from the brain: Visually grounded computational models decode visual-object mental imagery induced by written text

    Embodiment theory predicts that mental imagery of object words recruits neural circuits involved in object perception. The degree of visual imagery present in routine thought, and how it is encoded in the brain, is largely unknown. We test whether fMRI activity patterns elicited by participants reading objects' names include embodied visual-object representations, and whether we can decode those representations using novel computational image-based semantic models. We first apply the image models in conjunction with text-based semantic models to test predictions about the visual specificity of semantic representations in different brain regions. Representational similarity analysis confirms that fMRI structure within ventral-temporal and lateral-occipital regions correlates most strongly with the image models, whereas text models correlate better with posterior-parietal/lateral-temporal/inferior-frontal regions. We use an unsupervised decoding algorithm that exploits commonalities in representational similarity structure found within both the image-model and brain data sets to classify embodied visual representations with high accuracy (8/10), and then extend it to exploit model combinations to robustly decode different brain regions in parallel. By capturing latent visual-semantic structure, our models provide a route into analyzing neural representations derived from past perceptual experience rather than stimulus-driven brain activity. Our results also verify the benefit of combining multimodal data to model human-like semantic representations.
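    A minimal sketch of the kind of representational similarity analysis described above: compare the pairwise dissimilarity structure of a (hypothetical) image-based semantic model with that of fMRI response patterns from a region of interest. The variable names, dimensions, and random data are illustrative assumptions, not the authors' models or data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Condensed representational dissimilarity matrix (1 - Pearson r between rows)."""
    return pdist(patterns, metric="correlation")

rng = np.random.default_rng(0)
n_concepts = 10
image_model_features = rng.random((n_concepts, 300))   # hypothetical image-based model vectors
roi_patterns = rng.random((n_concepts, 500))           # hypothetical fMRI patterns from one ROI

# Second-order similarity: correlate the two dissimilarity structures
rho, p = spearmanr(rdm(image_model_features), rdm(roi_patterns))
print(f"model-brain RSA: rho = {rho:.2f}, p = {p:.3f}")
```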

    A simple rule to describe interactions between visual categories

    Acknowledgements: We thank Prof Thomas Palmeri for helpful comments on a previous version of the manuscript. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. Peer Review: The peer review history for this article is available at https://publons.com/publon/10.1111/ejn.14890. Data Availability Statement: The programmes used to run the two experiments and the collected data are available on the OSF website (OSF.IO/ASB4E). A substantial proportion of the stimuli were sourced from the copyright-protected Corel database and thus cannot be shared on OSF.

    Building Mental Experiences: From Scenes to Events

    Mental events are central to everyday cognition, be it our continuous perception of the world, recalling autobiographical memories, or imagining the future. Little is known about the fine-grained temporal dynamics of these processes. Given the apparent predominance of scene imagery across cognition, in this thesis I used magnetoencephalography to investigate whether and how activity in the hippocampus and ventromedial prefrontal cortex (vmPFC) supports the mental construction of scenes and the events to which they give rise. In the first experiment, participants gradually imagined scenes and also closely matched non-scene arrays; this allowed me to assess whether any brain regions showed preferential responses to scene imagery. The anterior hippocampus and vmPFC were particularly engaged by the construction of scene imagery, with the vmPFC driving hippocampal activity. In the second experiment, I found that certain objects – those that were space-defining – preferentially engaged the vmPFC and superior temporal gyrus during scene construction, providing insight into how objects affect the creation of scene representations. The third experiment involved boundary extension during scene perception, permitting me to examine how single scenes might be prepared for inclusion into events. I observed changes in evoked responses just 12.5–58 ms after scene onset over fronto-temporal sensors, again with the vmPFC exerting a driving influence on other brain regions, including the hippocampus. In the final experiment, participants watched brief movies of events built from a series of scenes or non-scene patterns. A difference in evoked responses between the two event types emerged during the first frame of the movies, the primary source of which was shown to be the hippocampus. The enduring theme of the results across experiments was scene-specific engagement of the hippocampus and vmPFC, with the latter being the driving influence. Overall, this thesis provides insights into the neural dynamics of how scenes are built, made ready for inclusion into unfolding mental episodes, and then linked to produce our seamless experience of the world.

    Diagnostic colours of emotions

    This thesis investigates the role of colour in the cognitive processing of emotional information. The research is guided by the effect of colour diagnosticity, which has been shown previously to influence recognition performance for several types of objects as well as natural scenes. Experiment 1 examined whether colour information is considered a diagnostic perceptual feature of seven emotional categories: happiness, sadness, anger, fear, disgust, surprise and neutral. Participants (N = 119), who were naïve to the specific purpose and expectations of the experiment, chose colour more than any other perceptual quality (e.g. shape and tactile information) as a feature that describes the seven emotional categories. The specific colour features given for the six basic emotions were consistently different from those given to the non-emotional neutral category. While emotional categories were often described by chromatic colour features (e.g. red, blue, orange), the neutral category was often ascribed achromatic colour features (e.g. white, grey, transparent) as the most symptomatic perceptual qualities for its description. The emotion 'anger' was unique in being the only emotion showing agreement higher than 50% of the total given colour features for one particular colour: red. Confirming that colour is a diagnostic feature of emotions led to the examination of the effect of diagnostic colours of emotion on recognition memory for emotional words and faces: the effect, if any, of appropriate and inappropriate colours (matched with emotion) on the strength of memory for later recognition of faces and words (Experiments 2 & 3). The two experiments used retention intervals of 15 minutes and one week, respectively, and the colour-emotion associations were determined for each individual participant. Results showed that regardless of the subject's consistency level in associating colours with emotions, and compared with the individual inappropriate or random colours, individual appropriate colours of emotions significantly enhance recognition memory for the six basic emotional faces and words. This difference between the individual inappropriate or random colours and the individual appropriate colours of emotions was not found to be significant for non-emotional neutral stimuli. Post hoc findings from both experiments further show that appropriate colours of emotion are associated more consistently than inappropriate colours of emotion. This suggests that appropriate colour-emotion associations are unique both in their strength of association and in the form of their representation. Experiment 4 therefore aimed to investigate whether appropriate colour-emotion associations also trigger an implicit, automatic cognitive system that allows faster naming times for appropriate versus inappropriate colours of emotional word carriers. Results from the combined Emotional-Semantic Stroop task confirm this hypothesis and therefore imply that colour plays a substantial role not only in our conceptual representations of objects but also in our conceptual representations of basic emotions. The collective resemblance of the present findings to those found previously for objects and natural scenes suggests a common cognitive mechanism for the processing of emotional diagnostic colours and the processing of diagnostic colours of objects or natural scenes. Overall, this thesis provides the foundation for many future directions of research in the area of colour and emotion, as well as suggesting a few possible immediate practical implications.

    The influence of semantics on the visual processing of natural scenes

    A long-standing question in cognitive science has been: is visual processing completely encapsulated and separate from semantics, or can visual processing be influenced by semantics? We address this question in two ways: (1) do pictures and words share similar representations, and (2) does semantics modulate visual processing? Using multi-voxel pattern analysis (MVPA) and fMRI decoding, we examined the similarity of neural activity across pictures and words that describe natural scenes. A whole-brain MVPA searchlight revealed multiple brain regions in the occipitotemporal, posterior parietal and frontal cortices that showed transfer from pictures to words and from words to pictures. In addition to sharing similar representations across pictures and words, can words dynamically influence the processing of visual stimuli? Using event-related potentials (ERPs) and good and bad exemplars of natural scenes, we show that top-down expectation, initiated via a category cue (e.g. the word ‘Beach’), dynamically influences the processing of natural scenes. Good and bad exemplars first evoked differential ERPs in the time window 250–350 ms from stimulus onset, with the bad exemplars showing greater negativity over frontal electrode sites when the cue matched the image. Interestingly, this good/bad effect disappeared when the images were mismatched with the cue. Overall, these studies, taken together, provide evidence for the influence of semantics on the visual processing of natural scenes.
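    The picture-to-word transfer analysis described above can be sketched as cross-decoding: a classifier trained on picture-evoked activity patterns is tested on word-evoked patterns, and vice versa. The sketch below uses scikit-learn with simulated data; the variable names, data shapes, and classifier choice are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 200
picture_patterns = rng.standard_normal((n_trials, n_voxels))   # patterns evoked by scene pictures
word_patterns = rng.standard_normal((n_trials, n_voxels))      # patterns evoked by scene words
labels = rng.integers(0, 4, n_trials)                          # e.g. four scene categories

clf = make_pipeline(StandardScaler(), LinearSVC())

# Train on pictures, test on words (and the reverse) to probe shared representations
clf.fit(picture_patterns, labels)
print("pictures -> words accuracy:", clf.score(word_patterns, labels))

clf.fit(word_patterns, labels)
print("words -> pictures accuracy:", clf.score(picture_patterns, labels))
```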

    Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model

    Crocker MW, Knoeferle P, Mayberry M. Situated Sentence Processing: The Coordinated Interplay Account and a Neurobehavioral Model. Brain and Language. 2010;112(3):189-201

    Event Structure In Vision And Language

    Our visual experience is surprisingly rich: we see not only low-level properties such as colors or contours; we also see events, or what is happening. Within linguistics, the examination of how we talk about events suggests that relatively abstract elements exist in the mind that pertain to the relational structure of events, including general thematic roles (e.g., Agent), Causation, Motion, and Transfer. For example, “Alex gave Jesse flowers” and “Jesse gave Alex flowers” both refer to an event of transfer, with the directionality of the transfer having different social consequences. The goal of the present research is to examine the extent to which abstract event information of this sort (event structure) is generated in visual perceptual processing. Do we perceive this information, just as we do with more ‘traditional’ visual properties like color and shape? In the first study (Chapter 2), I used a novel behavioral paradigm to show that event roles – who is acting on whom – are rapidly and automatically extracted from visual scenes, even when participants are engaged in an orthogonal task, such as color or gender identification. In the second study (Chapter 3), I provided functional magnetic resonance imaging (fMRI) evidence for commonality in content between neural representations elicited by static snapshots of actions and by full, dynamic action sequences. These two studies suggest that relatively abstract representations of events are spontaneously extracted from sparse visual information. In the final study (Chapter 4), I return to language, the initial inspiration for my investigations of events in vision. Here I test the hypothesis that the human brain represents verbs in part via their associated event structures. Using a model of verbs based on event-structure semantic features (e.g., Cause, Motion, Transfer), it was possible to successfully predict fMRI responses in language-selective brain regions as people engaged in real-time comprehension of naturalistic speech. Taken together, my research reveals that in both perception and language, the mind rapidly constructs a representation of the world that includes events with relational structure.
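    The verb-prediction analysis mentioned above can be sketched as a feature-based encoding model: regress voxel responses onto event-structure semantic features and evaluate predictions on held-out verbs. The feature set, dimensions, and regression model below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_verbs, n_features, n_voxels = 120, 6, 50

# hypothetical binary event-structure features per verb (e.g. Cause, Motion, Transfer, ...)
verb_features = rng.integers(0, 2, (n_verbs, n_features)).astype(float)
voxel_responses = rng.standard_normal((n_verbs, n_voxels))      # simulated fMRI responses

X_train, X_test, y_train, y_test = train_test_split(
    verb_features, voxel_responses, test_size=0.25, random_state=0)

model = RidgeCV(alphas=np.logspace(-2, 3, 10)).fit(X_train, y_train)
predicted = model.predict(X_test)

# Evaluate by correlating predicted and observed responses for each voxel on held-out verbs
voxel_r = [np.corrcoef(predicted[:, v], y_test[:, v])[0, 1] for v in range(n_voxels)]
print("mean prediction correlation:", float(np.mean(voxel_r)))
```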
