
    Distributed Activity Patterns for Objects and Their Features: Decoding Perceptual and Conceptual Object Processing in Information Networks of the Human Brain

    How are object features and knowledge-fragments represented and bound together in the human brain? Distributed patterns of activity within brain regions can encode distinctions between perceptual and cognitive phenomena with impressive specificity. The research reported here investigated how the information within regions' multi-voxel patterns is combined in object-concept networks. Chapter 2 investigated how memory-driven activity patterns for an object's specific shape, color, and identity become active at different stages of the visual hierarchy. Brain activity patterns were recorded with functional magnetic resonance imaging (fMRI) as participants searched for specific fruits or vegetables within visual noise. During time-points in which participants were searching for an object, but viewing pure noise, the targeted object's identity could be decoded in the left anterior temporal lobe (ATL). In contrast, top-down generated patterns for the object's specific shape and color were decoded in early visual regions. The emergence of object-identity information in the left ATL was predicted by concurrent shape and color information in their respective featural regions. These findings are consistent with theories proposing that feature-fragments in sensory cortices converge to higher-level identity representations in convergence zones. Chapter 3 investigated whether brain regions share fluctuations in multi-voxel information across time. A new analysis method was first developed to measure dynamic changes in distributed pattern information. This method, termed informational connectivity (IC), was then applied to data collected as participants viewed different types of man-made objects. IC identified connectivity between object-processing regions that was not apparent from existing functional connectivity measures, which track fluctuating univariate signals. Collectively, this work suggests that networks of regions support perceptual and conceptual object processing through the convergence and synchrony of distributed pattern information.
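    Decoding an object's identity from a region's multi-voxel activity pattern, as described in this abstract, is commonly implemented with a correlation-based (Haxby-style) pattern classifier: average the training patterns into per-condition prototypes, then label each held-out pattern by the prototype it correlates with most strongly. The following is a minimal numpy-only sketch of that general technique; the function name and the synthetic data are hypothetical and not taken from the study itself.

    ```python
    import numpy as np

    def correlation_decoder(train_patterns, train_labels, test_patterns):
        """Correlation-based (Haxby-style) MVPA classifier.

        Averages the training patterns into one prototype per condition,
        then assigns each test pattern the label of the prototype it
        correlates with most strongly.
        """
        classes = np.unique(train_labels)
        # One mean pattern (prototype) per condition
        prototypes = np.array(
            [train_patterns[train_labels == c].mean(axis=0) for c in classes]
        )
        preds = []
        for x in test_patterns:
            # Pearson correlation between this pattern and each prototype
            r = [np.corrcoef(x, p)[0, 1] for p in prototypes]
            preds.append(classes[int(np.argmax(r))])
        return np.array(preds)
    ```

    With well-separated synthetic "conditions", such a decoder recovers the condition labels of held-out patterns; in real fMRI analyses it would be run within cross-validation folds over scanner runs.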

    Expertise Moderates Incidentally Learned Associations Between Words and Images

    Individuals with expertise in a domain of knowledge demonstrate superior learning for information in their area of expertise, relative to non-experts. In this study, we investigated whether expertise benefits extend to learning associations between words and images that are encountered incidentally. Sports-knowledge experts and non-experts encountered previously unknown faces through a basic perceptual task. The faces were incidentally presented as candidates for a position on a sports team (a focus of knowledge for only the sports experts) or for a job in a business (a focus of knowledge for both groups). Participants later received a series of surprise memory tests assessing their ability to recognize each face as old, the amount of information recalled about each face, and their ability to select the correct face from equally familiar alternatives. Relative to non-experts, participants with superior sports expertise better recalled the information associated with each face, and better selected the associated face from similarly familiar alternatives, for the hypothetical prospective athletes. Hypothetical job candidates were recalled and selected at similar levels of performance in both groups. The groups were also similarly familiar with the images (in a yes/no recognition memory test) whether the faces were prospective athletes or job candidates. These findings suggest a specific effect of expertise on associative memory between words and images, but not on memory for the individual items, supporting a dissociation in how expertise modulates the human memory system for word–image pairings.

    The VWFA Is the Home of Orthographic Learning When Houses Are Used as Letters

    Learning to read specializes a portion of the left mid-fusiform cortex for printed word recognition, the putative visual word form area (VWFA). This study examined whether a VWFA specialized for English is sufficiently malleable to support learning a perceptually atypical second writing system. The study utilized an artificial orthography, HouseFont, in which house images represent English phonemes. House images elicit category-biased activation in a spatially distinct brain region, the so-called parahippocampal place area (PPA). Using house images as letters made it possible to test whether the capacity for learning a second writing system involves neural territory that supports reading in the first writing system, or neural territory tuned for the visual features of the new orthography. Twelve human adults completed two weeks of training to establish basic HouseFont reading proficiency and underwent functional neuroimaging pre- and post-training. Analysis of three functionally defined regions of interest (ROIs), the VWFA and the left and right PPA, found significant pre-training versus post-training increases in response to HouseFont words only in the VWFA. Analysis of the relationship between the behavioral and neural data found that activation changes from pre-training to post-training within the VWFA predicted HouseFont reading speed. These results demonstrate that learning a new orthography utilizes neural territory previously specialized by the acquisition of a native writing system. Further, they suggest that VWFA engagement is driven by orthographic functionality and not the visual characteristics of graphemes, which informs the broader debate about the nature of category-specialized areas in visual association cortex.

    The Link Between Conceptual and Perceptual Information in Memory

    We continually draw on conceptual knowledge and link it with perception as we process and interact with our surroundings. This chapter highlights issues at the intersection of perceptual and conceptual processing in human memory. First, it discusses the role of the brain’s perceptual systems and connected regions during conceptual processing. Next, a case study of real-world (or ‘canonical’) size is used to illustrate questions and issues that arise when seeking to understand phenomena that can require information from both perceptual input and semantic memory to be integrated. The influence of conceptual processing on perception is then described, before outlining some additional related factors: conceptual granularity, episodic memory, and individual differences. The chapter concludes by looking to the future of this research area – a field that requires a unique understanding of issues that lie at the heart of perception, memory, and more. The author was supported by NSF award 1947685 during the writing of this chapter.

    Context Reinstatement Requires a Schema Relevant Virtual Environment to Benefit Object Recall

    How does our environment impact what we will later remember? Early work in real-world environments suggested that having matching encoding/retrieval contexts improves memory. However, some laboratory-based studies have not replicated this advantageous context-dependent memory effect. Using virtual reality methods, we find support for context-dependent memory effects, and examine the influence of memory schema and dynamic environments. Participants (N = 240) remembered more objects when in the same virtual environment (context) as during encoding. This traded off with falsely ‘recognizing’ more similar lures. Experimentally manipulating the virtual objects and environments revealed that a congruent object/environment schema aids recall (but not recognition), though a dynamic background does not. These findings further our understanding of when and how context affects our memory through a naturalistic approach to studying such effects.

    Fast Mapping Rapidly Integrates Information Into Existing Memory Networks


    Informational Connectivity: Identifying synchronized discriminability of multi-voxel patterns across the brain

    The fluctuations in a brain region’s activation levels over a functional magnetic resonance imaging (fMRI) time-course are used in functional connectivity to identify networks with synchronous responses. It is increasingly recognized that multi-voxel activity patterns contain information that cannot be extracted from univariate activation levels. Here we present a novel analysis method that quantifies regions’ synchrony in multi-voxel activity pattern discriminability, rather than univariate activation, across a time-series. We introduce a measure of multi-voxel pattern discriminability at each time-point, which is then used to identify regions that share synchronous time-courses of condition-specific multi-voxel information. This method has the sensitivity and access to distributed information that multi-voxel pattern analysis enjoys, allowing it to be applied to data from conditions not separable by univariate responses. We demonstrate this by analyzing data collected while people viewed four different types of man-made objects (typically not separable by univariate analyses) using both functional connectivity and informational connectivity methods. Informational connectivity reveals networks of object-processing regions that are not detectable using functional connectivity. The informational connectivity results support prior findings and hypotheses about object-processing. This new method allows investigators to ask questions that are not addressable through typical functional connectivity, just as MVPA has added new research avenues to those addressable with the general linear model.
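    The core of the informational connectivity idea described above can be sketched in a few lines: score each region's multi-voxel pattern at every time-point for how discriminable the correct condition is (here, correlation with the correct condition's prototype minus the strongest incorrect correlation), then correlate the resulting discriminability time-courses between regions. The sketch below is a simplified illustration, not the published pipeline: the function names are hypothetical, condition prototypes are assumed to be pre-estimated, and a plain Pearson correlation stands in for any additional steps in the full method.

    ```python
    import numpy as np

    def discriminability_timecourse(patterns, prototypes, labels):
        """Per-time-point multi-voxel discriminability for one region.

        patterns:   (T, V) array, one V-voxel pattern per time-point
        prototypes: (C, V) array, one mean pattern per condition
        labels:     (T,) integer condition index at each time-point

        Score = correlation with the correct condition's prototype
        minus the highest correlation with any incorrect prototype.
        """
        disc = np.empty(len(labels))
        for t, label in enumerate(labels):
            r = np.array([np.corrcoef(patterns[t], p)[0, 1] for p in prototypes])
            disc[t] = r[label] - np.max(np.delete(r, label))
        return disc

    def informational_connectivity(disc_a, disc_b):
        """Synchrony of two regions' discriminability time-courses
        (Pearson correlation, as a simplification)."""
        return np.corrcoef(disc_a, disc_b)[0, 1]
    ```

    Regions whose discriminability time-courses rise and fall together would show high informational connectivity even when their univariate activation levels are uncorrelated, which is the contrast with standard functional connectivity drawn in the abstract.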