
    Recollection-dependent memory for event duration in large-scale spatial navigation

    Time and space represent two key aspects of episodic memories, forming the spatiotemporal context of events in a sequence. Little is known, however, about how temporal information, such as the duration and the order of particular events, is encoded into memory, and whether it matters if the memory representation is based on recollection or familiarity. To investigate this issue, we used a real-world virtual reality navigation paradigm in which periods of navigation were interspersed with pauses of different durations. Crucially, participants were able to reliably distinguish the durations of events that were subjectively reexperienced (i.e., recollected), but not of those that were merely familiar. This effect was not found in temporal order (ordinal) judgments. We also show that the active experience of the passage of time (holding down a key while waiting) moderately enhanced duration memory accuracy. Memory for event duration, therefore, appears to rely on the hippocampally supported ability to recollect or reexperience an event, enabling the reinstatement of both its duration and its spatial context to distinguish it from other events in a sequence. In contrast, ordinal memory appears to rely on familiarity and recollection to a similar extent. © 2017 Brunec et al.

    The impact of multisensory integration deficits on speech perception in children with autism spectrum disorders.

    Speech perception is an inherently multisensory process. When having a face-to-face conversation, a listener not only hears what a speaker is saying, but also sees the articulatory gestures that accompany those sounds. Speech signals in the visual and auditory modalities provide complementary information to the listener (Kavanagh and Mattingly, 1974), and when both are perceived in unison, behavioral gains in speech perception are observed (Sumby and Pollack, 1954). Notably, this benefit is accentuated when speech is perceived in a noisy environment (Sumby and Pollack, 1954). To achieve a behavioral gain from multisensory processing of speech, however, the auditory and visual signals must be perceptually bound into a single, unified percept. The most commonly cited demonstration of perceptual binding in audiovisual speech perception is the McGurk effect (McGurk and MacDonald, 1976), in which a listener hears a speaker utter the syllable “ba” while seeing the speaker utter the syllable “ga.” When these two speech signals are perceptually bound, the listener perceives the speaker as having said “da” or “tha,” syllables that are not contained in either of the unisensory signals, reflecting a perceptual binding, or integration, of the speech signals (Calvert and Thesen, 2004).

    Seeing the Forest and the Trees: Default Local Processing in Individuals with High Autistic Traits Does Not Come at the Expense of Global Attention.

    Atypical sensory perception is one of the most ubiquitous symptoms of autism, including a tendency towards a local-processing bias. We investigated whether local-processing biases were associated with global-processing impairments on a global/local attentional-scope paradigm in conjunction with a composite-face task. Behavioural results were related to individuals' levels of autistic traits, specifically the Attention to Detail subscale of the Autism Quotient, and the Sensory Profile Questionnaire. Individuals showing high levels of Attention to Detail were more susceptible to global attentional-scope manipulations, suggesting that local-processing biases associated with Attention to Detail do not come at the cost of a global-processing deficit, but instead reflect a difference in default global versus local bias. This relationship operated at the attentional/perceptual level, but not at the level of response criterion.

    Deficits in audiovisual speech perception in normal aging emerge at the level of whole-word recognition.

    Over the next two decades, a dramatic shift in the demographics of society will take place, with rapid growth in the population of older adults. One of the most common complaints in healthy aging is a decreased ability to successfully perceive speech, particularly in noisy environments. In such noisy environments, the presence of visual speech cues (i.e., lip movements) provides striking benefits for speech perception and comprehension, but previous research suggests that older adults gain less from such audiovisual integration than their younger peers. To determine at what processing level these behavioral differences arise in healthy-aging populations, we administered a speech-in-noise task to younger and older adults. We compared the perceptual benefits of having speech information available in both the auditory and visual modalities and examined both phoneme and whole-word recognition across varying levels of signal-to-noise ratio (SNR). For whole-word recognition, older adults relative to younger adults showed greater multisensory gains at intermediate SNRs but reduced benefit at low SNRs. By contrast, at the phoneme level both younger and older adults showed approximately equivalent increases in multisensory gain as SNR decreased. Collectively, the results provide important insights into both the similarities and differences in how older and younger adults integrate auditory and visual speech cues in noisy environments, and help explain some of the conflicting findings in previous studies of multisensory speech perception in healthy aging. These novel findings suggest that audiovisual processing is intact at more elementary levels of speech perception in healthy-aging populations and that deficits begin to emerge only at the more complex, word-recognition level of speech signals.

    Cognitive mapping style relates to posterior-anterior hippocampal volume ratio

    As London taxi drivers acquire ‘the knowledge’ and develop a detailed cognitive map of London, their posterior hippocampi (pHPC) gradually increase in volume, reflecting an increasing ratio of posterior to anterior hippocampal (pHPC/aHPC) volume. In the mnemonic domain, greater pHPC/aHPC volume ratios in young adults have been found to relate to better recollection ability, indicating that the balance between pHPC and aHPC volumes might reflect cross-domain individual differences. Here, we examined participants’ self-reported use of cognitive map-based navigational strategies in relation to their pHPC/aHPC volume ratio. We find that greater reported cognitive map use was related to significantly greater posterior, relative to anterior, hippocampal volume in two separate samples of young adults. Further, greater reported cognitive map use correlated with better performance on a self-initiated navigation task. Together, these data help to advance our understanding of the differences between aHPC and pHPC and the greater role of pHPC in spatial mapping.

    Medial temporal lobe activity during complex discrimination of faces, objects, and scenes: Effects of viewpoint

    The medial temporal lobe (MTL), a set of heavily interconnected structures including the hippocampus and underlying entorhinal, perirhinal and parahippocampal cortex, is traditionally believed to be part of a unitary system dedicated to declarative memory. Recent studies, however, demonstrated perceptual impairments in amnesic individuals with MTL damage, with hippocampal lesions causing scene discrimination deficits, and perirhinal lesions causing object and face discrimination deficits. The degree of impairment on these tasks was influenced by the need to process complex conjunctions of features: discriminations requiring the integration of multiple visual features caused deficits, whereas discriminations that could be solved on the basis of a single feature did not. Here, we address these issues with functional neuroimaging in healthy participants as they performed a version of the oddity discrimination task used previously in patients. Three different types of stimuli (faces, scenes, novel objects) were presented from either identical or different viewpoints. Consistent with studies in patients, we observed increased perirhinal activity when participants distinguished between faces and objects presented from different, compared to identical, viewpoints. The posterior hippocampus, by contrast, showed an effect of viewpoint for both faces and scenes. These findings provide convergent evidence that the MTL is involved in processes beyond long-term declarative memory and suggest a critical role for these structures in integrating complex features of faces, objects, and scenes into view-invariant, abstract representations. © 2009 Wiley-Liss, Inc.

    Conjunctive Visual Processing Appears Abnormal in Autism

    Face processing in autism spectrum disorder (ASD) is thought to be atypical, but it is unclear whether differences in visual conjunctive processing are specific to faces. To address this, we adapted a previously established eye-tracking paradigm that modulates the need for conjunctive processing by varying the degree of feature ambiguity in faces and objects. Typically developed (TD) participants showed a canonical pattern of conjunctive processing: high-ambiguity objects were processed more conjunctively than low-ambiguity objects, and faces were processed in an equally conjunctive manner regardless of ambiguity level. In contrast, autistic individuals did not show differences in conjunctive processing based on stimulus category, providing evidence that atypical visual conjunctive processing in ASD is the result of a domain-general lack of perceptual specialization.

    It does not look odd to me: Perceptual impairments and eye movements in amnesic patients with medial temporal lobe damage

    Studies of people with memory impairments have shown that a specific set of brain structures in the medial temporal lobe (MTL) is vital for memory function. However, whether these structures have a role outside of memory remains contentious. Recent studies of amnesic patients with damage to two structures within the MTL, the hippocampus and the perirhinal cortex, indicated that these patients also performed poorly on perceptual tasks. More specifically, they performed worse than controls when discriminating between objects, faces and scenes with overlapping features. To investigate whether these perceptual deficits are reflected in their viewing strategies, we tested a group of amnesic patients with MTL damage that included the hippocampus and perirhinal cortex on a series of oddity discrimination tasks in which they had to select an odd item from a visual array. Participants' eye movements were monitored throughout the experiment. Results revealed that patients were impaired on tasks that required them to discriminate between items that shared many features, and on tasks that required processing items from different viewpoints. An analysis of their eye movements revealed that they exhibited a viewing pattern similar to that of controls: they fixated more on the target item on trials answered correctly, but not on trials answered incorrectly. In addition, their impaired performance was not explained by an abnormal viewing strategy indexing their use of working memory. These results suggest that the perceptual deficits in the MTL patients are not a consequence of abnormal viewing patterns of the objects and scenes, but instead could involve an inability to bind information gathered from several fixations into a cohesive percept. These data also support the view that MTL structures are important not only for long-term memory, but are also involved in perceptual tasks.

    Reducing Perceptual Interference Improves Visual Discrimination in Mild Cognitive Impairment: Implications for a Model of Perirhinal Cortex Function

    Memory loss resulting from damage to the medial temporal lobes (MTL) is traditionally considered to reflect damage to a dedicated, exclusive memory system. Recent work, however, has suggested that damage to one MTL structure, the perirhinal cortex (PRC), compromises complex object representations that are necessary for both memory and perception. These representations are thought to be critical in shielding against the interference caused by a stream of visually similar input. In this study, we administered a complex object discrimination task to two memory-impaired populations thought to have brain damage that includes the PRC [patients diagnosed with amnestic mild cognitive impairment (MCI), and older adults at risk for MCI], as well as age-matched controls. Importantly, we carefully manipulated the level of interference: in the High Interference condition, participants completed a block of consecutive perceptually similar complex object discriminations, whereas in the Low Interference condition, we interspersed perceptually dissimilar objects such that there was less buildup of visual interference. We found that both memory-impaired populations were impaired on the High Interference condition compared with controls, but critically, by reducing the degree of perceptual interference, we were largely able to improve their performance. These findings, when taken together with convergent evidence from animals with selective PRC lesions and amnesic patients with focal damage to the PRC, provide support for a representational-hierarchical model of PRC function and suggest that memory loss following PRC damage may reflect a heightened vulnerability to perceptual interference. Grant sponsor: CIHR; Grant number: MOP-115148 (MDB). Grant sponsor: Emory Alzheimer’s Disease Research Center; Grant number: 2P50AG025688-06.