
    Maintaining binding in working memory: Comparing the effects of intentional goals and incidental affordances

    Much research on memory for binding depends on incidental measures. However, if encoding associations benefits from conscious attention, then incidental measures of binding memory might not yield a sufficient understanding of how binding is accomplished. Memory for letters and spatial locations was compared in three within-participants tasks, one in which binding was not afforded by stimulus presentation, one in which incidental binding was possible, and one in which binding was explicitly to be remembered. Some evidence for incidental binding was observed, but unique benefits of explicit binding instructions included preserved discrimination as set size increased and drastic reduction in false alarms to lures that included a new spatial location and an old letter. This suggests that substantial cognitive benefits, including enhanced memory for features themselves, might occur through intentional binding, and that incidental measures of binding might not reflect these advantages. (C) 2011 Elsevier Inc. All rights reserved

    The Influence of Surface Detail on Object Identification in Alzheimer's Patients and Healthy Participants

    Image format (Laws, Adlington, Gale, Moreno-Martínez, & Sartori, 2007), ceiling effects in controls (Fung et al., 2001; Laws et al., 2005; Moreno-Martínez & Laws, 2007, 2008), and nuisance variables (Funnell & De Mornay Davis, 1996; Funnell & Sheridan, 1992; Stewart, Parkin & Hunkin, 1992) all influence the emergence of category specific deficits in Alzheimer's dementia (AD). Thus, the predominant use of line drawings of familiar, everyday items in category specific research is problematic. Moreover, this does not allow researchers to explore the extent to which format may influence object recognition. As such, the initial concern of this thesis was the development of a new corpus of 147 colour images of graded naming difficulty, the Hatfield Image Test (HIT; Adlington, Laws, & Gale, 2009), and the collection of relevant normative data, including ratings of age of acquisition, colour diagnosticity, familiarity, name agreement, visual complexity, and word frequency. Furthermore, greyscale and line-drawn versions of the HIT corpus were developed (and, again, the associated normative data obtained) to permit research into the influence of image format on the emergence of category specific effects in patients with AD and in healthy controls. 
Using the HIT, several studies were conducted, including: (i) a normative investigation of the effects of category and image format on naming accuracy and latencies in healthy controls; (ii) an exploration of the effects of image format (using the HIT images presented in colour, greyscale, and line-drawn formats) and category on the naming performance of AD patients and age-matched controls performing below ceiling; (iii) a longitudinal investigation comparing AD patient performance to that of age-matched controls on a range of semantic tasks (naming, sorting, word-picture matching), using colour, greyscale, and line-drawn versions of the HIT; (iv) a comparison of naming in AD patients and age-matched controls on the HIT and the (colour, greyscale, and line-drawn) images from the Snodgrass and Vanderwart (1980) corpus; and (v) a meta-analysis to explore category specific naming in AD using the Snodgrass and Vanderwart (1980) versus other corpora. Taken together, the results of these investigations showed, first, that image format interacts with category. For both AD patients and controls, colour is more important for the recognition of living things, with a significant nonliving advantage emerging for the line-drawn images but not the colour images. Controls benefitted more from additional surface information than AD patients, a difference that chapter 6 shows to result from low-level visual cortical impairment in AD. For controls, format was also more important for the recognition of low familiarity, low frequency items. In addition, the findings show that adequate control data affects the emergence of category specific deficits in AD. Specifically, based on within-group comparisons, chapters 6, 7, and 8 revealed a significant living deficit in AD patients. 
However, when patients were compared to controls performing below ceiling, as demonstrated in chapters 7 and 8, this deficit was only significant for the line drawings, showing that the performance observed in AD patients is simply an exaggeration of the norm.

    ROC analysis of the verbal overshadowing effect: testing the effect of verbalisation on memory sensitivity

    This study investigated the role of memory sensitivity versus recognition criterion in the verbal overshadowing effect (VOE). Lineup recognition data were analysed using ROC analysis to separate the effects of verbalisation on memory sensitivity from criterion placement. Participants watched a short crime video, described the perpetrator's facial features, and then attempted a lineup identification. Description instructions were varied between participants: there were standard (free report), forced (report everything), and warning (report only accurate information) conditions. Control participants did not describe the perpetrator. Memory sensitivity was greater in the control condition than in the standard condition, and greater in the warning condition than in the forced and standard conditions. Memory sensitivity did not differ across the forced and standard description conditions, although a more conservative lineup decision standard was employed in the forced condition. These results, along with qualitative analyses of descriptions, support both retrieval-based and criterion-based explanations of the VOE.
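The abstract's key distinction, memory sensitivity versus decision criterion, comes from signal detection theory. A minimal sketch of how the two are separated from raw recognition counts is shown below. This is not the authors' analysis code, and the response counts are invented for illustration; they are chosen so that, as in the study, sensitivity (d') is similar across the standard and forced conditions while the forced condition shows a more conservative criterion (larger c).

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal-detection sensitivity (d') and criterion (c)
    from raw response counts, using a log-linear correction so that
    hit/false-alarm rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)            # memory sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # positive = conservative responding
    return d_prime, criterion

# Hypothetical counts for two description conditions (illustrative only)
d_std, c_std = sdt_measures(hits=30, misses=20, false_alarms=15, correct_rejections=35)
d_frc, c_frc = sdt_measures(hits=24, misses=26, false_alarms=9, correct_rejections=41)
```

With these made-up counts, the two conditions yield nearly identical d' but a visibly higher criterion in the forced condition, which is the pattern the abstract reports: verbalisation can shift where observers place their decision standard without changing how well they discriminate old from new faces.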

    Developing a Sufficient Knowledge Base for Faces: Implicit Recognition Memory for Distinctive versus Typical Female Faces

    Research on adults' face recognition abilities provides evidence for a distinctiveness effect such that distinctive faces are remembered better and more easily than typical faces. Research on this effect in the developmental literature is limited. In the current study, two experiments tested recognition memory for evidence of the distinctiveness effect. Study 1 tested infants (9- and 10-month olds) using a novelty preference paradigm. Infants were tested for immediate and delayed memory. Results indicated memory for only the most distinctive faces. Study 2 tested preschool children (3- and 4-year-olds) using an interactive story. Children were tested with an implicit (i.e. surprise) memory test. Results indicated a memory advantage for distinctive faces by three-year-old girls and four-year-old boys and girls. Contrary to traditional theories of changes in children's processing strategies, experience is also a critical factor in the development of face recognition abilities

    Short article: When are moving images remembered better? Study–test congruence and the dynamic superiority effect

    It has previously been shown that moving images are remembered better than static ones. In two experiments, we investigated the basis for this dynamic superiority effect. Participants studied scenes presented as a single static image, a sequence of still images, or a moving video clip, and 3 days later completed a recognition test in which familiar and novel scenes were presented in all three formats. We found a marked congruency effect: For a given study format, accuracy was highest when test items were shown in the same format. Neither the dynamic superiority effect nor the study–test congruency effect was affected by encoding (Experiment 1) or retrieval (Experiment 2) manipulations, suggesting that these effects are relatively impervious to strategic control. The results demonstrate that the spatio-temporal properties of complex, realistic scenes are preserved in long-term memory.

    Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants

    Objective: The present study investigated the development of audiovisual comprehension skills in prelingually deaf children who received cochlear implants. Design: We analyzed results obtained with the Common Phrases (Robbins et al., 1995) test of sentence comprehension from 80 prelingually deaf children with cochlear implants who were enrolled in a longitudinal study, from pre-implantation to 5 years after implantation. Results: The results revealed that prelingually deaf children with cochlear implants performed better under audiovisual (AV) presentation compared with auditory-alone (A-alone) or visual-alone (V-alone) conditions. AV sentence comprehension skills were found to be strongly correlated with several clinical outcome measures of speech perception, speech intelligibility, and language. Finally, pre-implantation V-alone performance on the Common Phrases test was strongly correlated with 3-year postimplantation performance on clinical outcome measures of speech perception, speech intelligibility, and language skills. Conclusions: The results suggest that lipreading skills and AV speech perception reflect a common source of variance associated with the development of phonological processing skills that is shared among a wide range of speech and language outcome measures

    Verbal Learning and Memory After Cochlear Implantation in Postlingually Deaf Adults: Some New Findings with the CVLT-II

    OBJECTIVES: Despite the importance of verbal learning and memory in speech and language processing, this domain of cognitive functioning has been virtually ignored in clinical studies of hearing loss and cochlear implants in both adults and children. In this article, we report the results of two studies that used a newly developed visually based version of the California Verbal Learning Test-Second Edition (CVLT-II), a well-known normed neuropsychological measure of verbal learning and memory. DESIGN: The first study established the validity and feasibility of a computer-controlled visual version of the CVLT-II, which eliminates the effects of audibility of spoken stimuli, in groups of young normal-hearing and older normal-hearing (ONH) adults. A second study was then carried out using the visual CVLT-II format with a group of older postlingually deaf experienced cochlear implant (ECI) users (N = 25) and a group of ONH controls (N = 25) who were matched to ECI users for age, socioeconomic status, and nonverbal IQ. In addition to the visual CVLT-II, subjects provided data on demographics, hearing history, nonverbal IQ, reading fluency, vocabulary, and short-term memory span for visually presented digits. ECI participants were also tested for speech recognition in quiet. RESULTS: The ECI and ONH groups did not differ on most measures of verbal learning and memory obtained with the visual CVLT-II, but deficits were identified in ECI participants that were related to recency recall, the buildup of proactive interference, and retrieval-induced forgetting. Within the ECI group, nonverbal fluid IQ, reading fluency, and resistance to the buildup of proactive interference from the CVLT-II consistently predicted better speech recognition outcomes. CONCLUSIONS: Results from this study suggest that several underlying foundational neurocognitive abilities are related to core speech perception outcomes after implantation in older adults. 
Implications of these findings for explaining individual differences and variability and predicting speech recognition outcomes after implantation are discussed

    Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. (2014) recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between performance on the two tests increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and to an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing (The Model, TM). Our results show that, as in the behavioral data, the correlation between subordinate-level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces that generalizes to objects that must be individuated. Interestingly, when the task of the network is basic-level categorization, no increase in the correlation between domains is observed. Hence, our model predicts that it is the type of experience that matters and that the source of the correlation is in the fusiform face area, rather than in cortical areas that subserve basic-level categorization. This result is consistent with our previous modeling elucidating why the FFA is recruited for novel domains of expertise (Tong et al., 2008)

    Is there room for the BBC in the mental lexicon? On the recognition of acronyms

    It has been suggested that acronyms like BBC are processed like real words. This claim has been based on improved performance with acronyms in the Reicher-Wheeler task, the letter-string matching task, and the visual feature integration task, and on the N400 component in event-related potential (ERP) studies. Unfortunately, in all these tasks performance on acronyms resembled performance on pseudowords more than performance on words. To further assess the similarity of acronyms and words, we focused on the meaning of the acronyms and used masked priming to examine whether target words can be primed to the same extent with associatively related acronyms as with associatively related words. Such priming was possible at a stimulus onset asynchrony (SOA) of 84 ms. In addition, the priming of the acronyms did not depend on the letter case in which they were presented: The target word books was primed as much by isbn and iSbN as by ISBN