
    Distributed Activity Patterns for Objects and Their Features: Decoding Perceptual and Conceptual Object Processing in Information Networks of the Human Brain

    How are object features and knowledge-fragments represented and bound together in the human brain? Distributed patterns of activity within brain regions can encode distinctions between perceptual and cognitive phenomena with impressive specificity. The research reported here investigated how the information within regions' multi-voxel patterns is combined in object-concept networks. Chapter 2 investigated how memory-driven activity patterns for an object's specific shape, color, and identity become active at different stages of the visual hierarchy. Brain activity patterns were recorded with functional magnetic resonance imaging (fMRI) as participants searched for specific fruits or vegetables within visual noise. During time-points in which participants were searching for an object, but viewing pure noise, the targeted object's identity could be decoded in the left anterior temporal lobe (ATL). In contrast, top-down generated patterns for the object's specific shape and color were decoded in early visual regions. The emergence of object-identity information in the left ATL was predicted by concurrent shape and color information in their respective featural regions. These findings are consistent with theories proposing that feature-fragments in sensory cortices converge to higher-level identity representations in convergence zones. Chapter 3 investigated whether brain regions share fluctuations in multi-voxel information across time. A new analysis method was first developed to measure dynamic changes in distributed pattern information. This method, termed informational connectivity (IC), was then applied to data collected as participants viewed different types of man-made objects. IC identified connectivity between object-processing regions that was not apparent from existing functional connectivity measures, which track fluctuating univariate signals. Collectively, this work suggests that networks of regions support perceptual and conceptual object processing through the convergence and synchrony of distributed pattern information.
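The informational connectivity (IC) analysis described above tracks whether two regions' multi-voxel pattern information waxes and wanes together over time. A minimal sketch of the idea in Python (all names and the per-timepoint information measure are illustrative, not the authors' implementation; a proper analysis would compute condition prototypes from independent, cross-validated data):

```python
import numpy as np

def pattern_info_timecourse(region_data, labels):
    """Per-timepoint pattern information in one region: how much more each
    timepoint's multi-voxel pattern resembles its own condition's mean
    pattern than the other conditions' mean pattern.
    region_data: (timepoints, voxels); labels: (timepoints,) condition codes."""
    info = np.empty(len(labels))
    for t, label in enumerate(labels):
        own = region_data[labels == label].mean(axis=0)
        other = region_data[labels != label].mean(axis=0)
        info[t] = (np.corrcoef(region_data[t], own)[0, 1]
                   - np.corrcoef(region_data[t], other)[0, 1])
    return info

def informational_connectivity(region_a, region_b, labels):
    """IC: correlation between two regions' information time-courses."""
    return np.corrcoef(pattern_info_timecourse(region_a, labels),
                       pattern_info_timecourse(region_b, labels))[0, 1]

# Demo with pure-noise data: no systematic coupling is expected.
rng = np.random.default_rng(0)
labels = np.tile([0, 1], 20)           # 40 timepoints, two conditions
region_a = rng.normal(size=(40, 30))   # (timepoints, voxels)
region_b = rng.normal(size=(40, 30))
print(informational_connectivity(region_a, region_b, labels))
```

The contrast with univariate functional connectivity is that IC correlates discriminability time-courses rather than mean-signal time-courses, so two regions can be "informationally" coupled even when their overall activation levels fluctuate independently.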

    Expertise Moderates Incidentally Learned Associations Between Words and Images

    Individuals with expertise in a domain of knowledge demonstrate superior learning for information in their area of expertise, relative to non-experts. In this study, we investigated whether expertise benefits extend to learning associations between words and images that are encountered incidentally. Sport-knowledge-experts and non-sports-experts encountered previously unknown faces through a basic perceptual task. The faces were incidentally presented as candidates for a position in a sports team (a focus of knowledge for only the sports-experts) or for a job in a business (a focus of knowledge for both the sports-experts and non-sports-experts). Participants later received a series of surprise memory tests that assessed: the ability to recognize each face as old, the amount of information recalled about each face, and the ability to select the correct face from equally familiar alternatives. Relative to non-sports-experts, participants with superior sports expertise better recalled the information associated with each face and better selected the associated faces from similarly familiar alternatives when the faces were hypothetical prospective athletes. Hypothetical job candidates were recalled and selected at similar levels of performance in both groups. The groups were similarly familiar with the images (in a yes/no recognition memory test) whether the faces were prospective athletes or job candidates. These findings suggest a specific effect of expertise on associative memory between words and images, but not for individual items, supporting a dissociation in how expertise modulates the human memory system for word–image pairings.

    The VWFA Is the Home of Orthographic Learning When Houses Are Used as Letters

    Learning to read specializes a portion of the left mid-fusiform cortex for printed word recognition, the putative visual word form area (VWFA). This study examined whether a VWFA specialized for English is sufficiently malleable to support learning a perceptually atypical second writing system. The study utilized an artificial orthography, HouseFont, in which house images represent English phonemes. House images elicit category-biased activation in a spatially distinct brain region, the so-called parahippocampal place area (PPA). Using house images as letters made it possible to test whether the capacity for learning a second writing system involves neural territory that supports reading in the first writing system, or neural territory tuned for the visual features of the new orthography. Twelve human adults completed two weeks of training to establish basic HouseFont reading proficiency and underwent functional neuroimaging pre- and post-training. Analysis of three functionally defined regions of interest (ROIs), the VWFA and the left and right PPA, found significant pre-training versus post-training increases in response to HouseFont words only in the VWFA. Analysis of the relationship between the behavioral and neural data found that activation changes from pre-training to post-training within the VWFA predicted HouseFont reading speed. These results demonstrate that learning a new orthography utilizes neural territory previously specialized by the acquisition of a native writing system. Further, they suggest VWFA engagement is driven by orthographic functionality and not the visual characteristics of graphemes, which informs the broader debate about the nature of category-specialized areas in visual association cortex.
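The ROI analysis described above boils down to a within-subject comparison of each region's pre- versus post-training response to HouseFont words. A hedged sketch of that comparison with simulated data (the numbers are hypothetical; only the design, 12 participants with paired pre/post measures, comes from the abstract):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 12  # participants, matching the study's sample size

# Hypothetical per-participant ROI responses (e.g., beta estimates) to
# HouseFont words. A simulated training gain is added for the post scan.
pre_vwfa = rng.normal(loc=0.2, scale=0.3, size=n)
post_vwfa = pre_vwfa + rng.normal(loc=0.5, scale=0.2, size=n)

# Paired (within-subject) comparison of post- vs pre-training responses.
t, p = stats.ttest_rel(post_vwfa, pre_vwfa)
print(f"VWFA post vs pre: t({n - 1}) = {t:.2f}, p = {p:.4g}")
```

The same paired test would be run for the left and right PPA ROIs; the abstract's claim is that only the VWFA comparison reaches significance.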

    Rapid improvement of cognitive maps in the awake state

    Post-navigation awake quiescence, relative to task engagement, benefits the accuracy of a new “cognitive map”. This effect is hypothesized to reflect awake quiescence, like sleep, being conducive to the consolidation and integration of new spatial memories. Sleep has been shown to improve cognitive map accuracy over time. It remained unknown whether awake quiescence can induce similar time-related improvements in new cognitive maps, or whether it simply counteracts their decay. We examined this question in two experiments. In Experiment 1, using an established cognitive mapping paradigm, we found that map accuracy for a virtual town was significantly better in people whose memory was probed after 10 min of post-navigation awake quiescence or ongoing cognitive engagement than in those whose memory was probed shortly after initial navigation. In Experiment 2, using a newly developed cognitive mapping task that involved a more complex and realistic virtual town, we again found that map accuracy was superior in those whose memory was probed after 10 min of awake quiescence than in those who were tested soon after navigation. These findings indicate that actual improvements in human memories are not restricted to sleep. Thus, contrary to conventional wisdom and theories, the passage of (day)time need not always result in forgetting.

    Stroop effects from newly learned color words: effects of memory consolidation and episodic context

    The Stroop task is an excellent tool to test whether reading a word automatically activates its associated meaning, and it has been widely used in mono- and bilingual contexts. Despite its ubiquity, the task has not yet been employed to test the automaticity of recently established word-concept links in novel-word-learning studies, under strict experimental control of learning and testing conditions. In three experiments, we therefore paired novel words with native-language (German) color words via lexical association and subsequently tested these words in a manual version of the Stroop task. Two crucial findings emerged: when novel-word Stroop trials appeared intermixed among native-word trials, the novel-word Stroop effect was observed immediately after the learning phase; when no native color words were present in a Stroop block, the novel-word Stroop effect only emerged 24 h later. These results suggest that the automatic availability of a novel word's meaning depends on supportive context from the learning episode, sufficient time for memory consolidation, or both. We discuss how these results can be reconciled with the complementary learning systems account of word learning.
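For readers unfamiliar with the measure: the Stroop effect is the reaction-time cost on incongruent trials (the word's meaning conflicts with the response-relevant ink color) relative to congruent trials. A minimal sketch with hypothetical data:

```python
import numpy as np

def stroop_effect(rts, congruent):
    """Mean reaction-time difference (incongruent minus congruent), i.e.,
    the Stroop interference cost. rts: correct-trial RTs in ms;
    congruent: boolean mask, True where word meaning and ink color match."""
    rts = np.asarray(rts, dtype=float)
    congruent = np.asarray(congruent, dtype=bool)
    return rts[~congruent].mean() - rts[congruent].mean()

# Hypothetical trials: three congruent, three incongruent.
rts = [520, 540, 610, 630, 515, 650]
congruent = [True, True, False, False, True, False]
print(stroop_effect(rts, congruent))  # → 105.0
```

A positive value indicates interference, the signature that the word's meaning was retrieved automatically; the studies above ask when newly learned words begin to produce such a cost.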

    Eye-tracking the time-course of novel word learning and lexical competition in adults and children

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method, the visual world paradigm, consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing “click on the biscuit”) were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous-day pairings in children but not adults. Crucially, this competition effect was significantly smaller for novel than for existing competitors (e.g., looks to candy upon hearing “click on the candle”), suggesting that novel items may not compete for recognition like fully fledged lexical items, even after 24 hours. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree.

    Fractionating the anterior temporal lobe: MVPA reveals differential responses to input and conceptual modality

    Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge, but these studies do not permit precise localisation of this function. The current investigation used multiple imaging methods in healthy participants to examine functional dissociations within the ATL. Multi-voxel pattern analysis identified spatially segregated regions: a response to input modality in the anterior superior temporal gyrus (aSTG) and a response to meaning in the more ventral anterior temporal lobe (vATL). This functional dissociation was supported by resting-state connectivity analyses, which found greater coupling of the aSTG with primary auditory cortex and of the vATL with the default mode network. A meta-analytic decoding of these connectivity patterns implicated the aSTG in processes closely tied to auditory processing (such as phonology and language) and the vATL in meaning-based tasks (such as comprehension or social cognition). Thus, we provide converging evidence for the segregation of meaning and input modality in the ATL.
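The multi-voxel pattern analyses used in several of the studies above typically train a classifier on trial-wise voxel patterns and score cross-validated decoding accuracy. A schematic sketch with simulated data (scikit-learn; the signal size, trial counts, and voxel counts are arbitrary, not from any study here):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50

# Simulated trial-wise multi-voxel patterns for two conditions
# (e.g., written vs. spoken words), with a small injected difference.
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1] += 0.5  # condition-specific pattern signal

# Cross-validated decoding accuracy; ~0.5 indicates chance performance.
acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
print(f"decoding accuracy: {acc:.2f}")
```

Running this ROI by ROI is what lets such studies claim that one region (e.g., aSTG) distinguishes input modality while another (e.g., vATL) distinguishes meaning.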