    The Role of Sleep in Learning New Meanings for Familiar Words through Stories

    Adults often learn new meanings for familiar words, and in doing so they must integrate information about the newly acquired meanings with existing knowledge about the prior meanings of the words in their mental lexicon. Numerous studies have confirmed the importance of sleep for learning novel word forms (e.g., “cathedruke”), either with or without associated meanings. By teaching participants new meanings for familiar word forms, the present study is the first to focus exclusively on the role of sleep in learning word meanings. In two experiments, participants were trained on new meanings for familiar words through a naturalistic story-reading paradigm designed to minimize explicit learning strategies. Experiment 1 confirmed the benefit of sleep for recall and recognition of word meanings, with better retention after 12 hours that included overnight sleep than after 12 hours awake. Experiment 2, which was preregistered, further explored this sleep benefit. Recall performance was best in the condition in which participants slept immediately after exposure and were tested soon after waking, compared with three conditions that all included an extended period of wakefulness during which participants would encounter their normal language environment. The results are consistent with the view that, at least under these learning conditions, the benefit of sleep arises from passive protection against linguistic interference during sleep rather than from active consolidation.

    Pupil Dilation Is Sensitive to Semantic Ambiguity and Acoustic Degradation

    Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation, a physiological index of cognitive demand, while listeners heard high-ambiguity sentences containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR); in Experiment 2, sentences were heard with or without a pink-noise masker at -2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the preceding sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2), and also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for both performance and pupil responses. This work demonstrates that the presence of homophones, which is ubiquitous in natural language, increases cognitive demand and reduces the intelligibility of speech heard against a noisy background.

    Why Clowns Taste Funny: The Relationship between Humor and Semantic Ambiguity

    What makes us laugh? One crucial component of many jokes is the disambiguation of words with multiple meanings. In this functional MRI study of normal participants, we explored the neural mechanisms that underlie the experience of getting a joke that depends on the resolution of semantically ambiguous words. Jokes that contained ambiguous words were compared with sentences that contained ambiguous words but were not funny, as well as with matched verbal jokes that did not depend on semantic ambiguity. The results confirm that both the left inferior temporal gyrus and the left inferior frontal gyrus are involved in processing the semantic aspects of language comprehension, whereas a more widespread network that includes both of these regions and the temporoparietal junction bilaterally is involved in processing humorous verbal jokes when compared with matched nonhumorous material. In addition, hearing jokes was associated with increased activity in a network of subcortical regions, including the amygdala, the ventral striatum, and the midbrain, that have been implicated in experiencing positive reward. Moreover, activity in these regions correlated with subjective ratings of the funniness of the presented material. These results allow a more precise account of how the neural and cognitive processes involved in ambiguity resolution contribute to the appreciation of jokes that depend on semantic ambiguity.

    Episodic memory and sleep are involved in the maintenance of context-specific lexical information

    Familiar words come with a wealth of associated knowledge about their variety of usage, accumulated over a lifetime. How do we track and adjust this knowledge as new instances of a word are encountered? A recent study published in Cognition found that, for homonyms (e.g., bank), sleep-associated consolidation facilitates the updating of meaning dominance. Here, we tested the generality of this finding by exposing participants to (Experiment 1; N = 125) nonhomonyms (e.g., bathtub) in sentences that biased their meanings toward a specific interpretation (e.g., bathtub-slip vs. bathtub-relax), and (Experiment 2; N = 128) word-class ambiguous words (e.g., loan) in sentences in which the words were used in their dispreferred word class (e.g., "He will loan me money"). Both experiments showed that such sentential experience influenced later interpretation and usage of the words more after a night's sleep than after a day awake. We interpret these results as evidence for a general role of episodic memory in language comprehension: new episodic memories are formed every time a sentence is comprehended, and these memories contribute to lexical processing the next time the word is encountered, as well as, potentially, to the fine-tuning of long-term lexical knowledge.

    Learning new word meanings from story reading: The benefit of immediate testing

    This study investigated how word meanings can be learned from natural story reading. Three experiments with adult participants compared naturalistic incidental learning with intentional learning of new meanings for familiar words, and examined the role of immediate tests in maintaining memory for new word meanings. In Experiment 1, participants learned new meanings for familiar words through incidental (story reading) and intentional (definition training) conditions. Memory was tested with cued recall of meanings and multiple-choice meaning-to-word matching, both immediately and 24 h later. Both measures showed higher accuracy for intentional learning, which was also more time efficient than incidental learning. However, there was reasonably good learning from both methods, and items learned incidentally through stories appeared less susceptible to forgetting over 24 h. Retrieval practice at the immediate test may have aided learning and improved memory for new word meanings 24 h later, especially in the incidental story-reading condition. Two preregistered experiments therefore examined the role of immediate testing in long-term retention of new meanings for familiar words. There was a strong testing effect for word meanings learned through both intentional and incidental conditions (Experiment 2), and this effect was numerically, though not significantly, larger for items learned incidentally through stories. Cued recall and multiple-choice tests were each individually sufficient to enhance retention compared with having no immediate test (Experiment 3), with a larger learning boost from multiple-choice testing. This research emphasises (i) the resilience of word meanings learned incidentally through stories and (ii) the key role that testing can play in boosting vocabulary learning from story reading.

    Towards a distributed connectionist account of cognates and interlingual homographs: evidence from semantic relatedness tasks

    Background: Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult, if not impossible, to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments. Methods: In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes before completing the English semantic relatedness task. Results: In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms for the bilinguals only, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had opposite effects on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English, but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task. Conclusion: After comparing our results with studies in both the bilingual and monolingual domains, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case. Data, scripts, materials and pre-registrations: Experiment 1: http://www.osf.io/ndb7p; Experiment 2: http://www.osf.io/2at49.
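
    A toy sketch of the distributed-representation intuition behind this conclusion may help; it is not taken from the paper, and the vectors, the blending rule and the weights are all invented for illustration. The idea is that a shared word form activates a blend of the semantic patterns it is associated with in each language: for a cognate the two patterns nearly coincide, so the blend stays close to the intended meaning, whereas for an interlingual homograph the competing pattern pulls the blend away from it.

```python
# Hypothetical illustration only: distributed (vector) semantics for cognates vs.
# interlingual homographs. Nothing here is taken from the paper's model or data.
import numpy as np

rng = np.random.default_rng(0)
dim = 50  # arbitrary dimensionality for the toy semantic space

def random_meaning():
    """A random unit-length semantic pattern."""
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def blended_semantics(english_meaning, dutch_meaning, dutch_weight=0.4):
    """Pattern evoked when both languages' associations are partially active."""
    blend = (1 - dutch_weight) * english_meaning + dutch_weight * dutch_meaning
    return blend / np.linalg.norm(blend)

def match_to_target(blend, target):
    """Cosine similarity with the intended meaning: higher = easier judgement."""
    return float(blend @ target)

english_meaning = random_meaning()
cognate_dutch = english_meaning        # cognate: same meaning in both languages
homograph_dutch = random_meaning()     # interlingual homograph: unrelated meaning

print("cognate   match: %.2f" % match_to_target(
    blended_semantics(english_meaning, cognate_dutch), english_meaning))
print("homograph match: %.2f" % match_to_target(
    blended_semantics(english_meaning, homograph_dutch), english_meaning))
```

    In this toy setup, increasing dutch_weight (e.g. after recent exposure to the Dutch reading) leaves the cognate's match at 1.0 but pulls the homograph's match further down, mirroring the direction, though not the mechanism, of the priming asymmetry described above.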

    Studying Individual Differences in Language Comprehension: The Challenges of Item-Level Variability and Well-Matched Control Conditions

    Translating experimental tasks that were designed to investigate differences between conditions at the group level into valid and reliable instruments for measuring individual differences in cognitive skills is challenging (Hedge et al., 2018; Rouder et al., 2019; Rouder & Haaf, 2019). For psycholinguists, the additional complexities associated with selecting or constructing language stimuli and the need for appropriate, well-matched baseline conditions make this endeavour particularly difficult. In a typical experiment, a process of interest (e.g. ambiguity resolution) is targeted by contrasting performance in an experimental condition with performance in a well-matched control condition. In many cases, careful between-condition matching precludes the same participant from encountering all stimulus items. Unfortunately, solutions that work for group-level research (e.g. constructing counterbalanced experiment versions) are inappropriate for individual-differences designs. As a case study, we report an ambiguity resolution experiment that illustrates the steps researchers can take to address this issue and to assess whether their measurement instrument is both valid and reliable. On the basis of our findings, we caution against the widespread approach of using datasets from group-level studies to also answer important questions about individual differences.
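
    The measurement problem raised here can be made concrete with a small simulation; this is an illustrative sketch, not taken from the paper, and every parameter value below is hypothetical. When the process of interest is scored as the difference between two well-matched conditions, the stable individual differences that dominate each condition largely cancel out of the difference score, while trial-level noise does not, so the difference score can be far less reliable than either condition on its own.

```python
# Hypothetical sketch: why condition differences can have poor test-retest
# reliability even when each condition is measured very reliably.
# All numbers (RT means, SDs, trial counts) are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_participants = 200

# Stable traits: overall response speed varies a lot across people,
# the process of interest (e.g. an ambiguity-resolution cost) much less.
baseline_speed = rng.normal(700, 80, n_participants)   # ms
ambiguity_cost = rng.normal(60, 15, n_participants)    # ms

def observed_mean_rt(true_mean, trial_noise=100, n_trials=40):
    """Per-participant mean RT over noisy trials."""
    trials = rng.normal(true_mean[:, None], trial_noise,
                        (n_participants, n_trials))
    return trials.mean(axis=1)

def one_session():
    control = observed_mean_rt(baseline_speed)
    ambiguous = observed_mean_rt(baseline_speed + ambiguity_cost)
    return control, ambiguous, ambiguous - control

c1, a1, d1 = one_session()   # session 1
c2, a2, d2 = one_session()   # session 2: same true scores, new trial noise

print("test-retest r, control condition:   %.2f" % np.corrcoef(c1, c2)[0, 1])
print("test-retest r, ambiguous condition: %.2f" % np.corrcoef(a1, a2)[0, 1])
print("test-retest r, difference score:    %.2f" % np.corrcoef(d1, d2)[0, 1])
```

    With these invented numbers the two conditions correlate at roughly .95 across sessions, while the difference score, the quantity an individual-differences study actually cares about, correlates at only around .3.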

    Listeners and readers generalise their experience with word meanings across modalities

    Research has shown that adults’ lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g. bark) is influenced by experience, such that recently encountered meanings become more readily available (Rodd et al., 2016, 2013). However, the mechanism underlying this word-meaning priming effect remains unclear, and competing accounts make different predictions about the extent to which information about word meanings gained in one modality (e.g. speech) is transferred to the other modality (e.g. reading) to aid comprehension. In two web-based experiments, ambiguous target words were primed with either written or spoken sentences that biased their interpretation toward a subordinate meaning, or were left unprimed. About 20 minutes after prime exposure, interpretation of these target words was tested by presenting them in either written or spoken form, using word association (Experiment 1, N = 78) and speeded semantic relatedness decisions (Experiment 2, N = 181). Both experiments replicated the auditory unimodal priming effect shown previously (Rodd et al., 2016, 2013) and revealed significant cross-modal priming: primed meanings were retrieved more frequently and more quickly in all primed conditions than in the unprimed baseline. Furthermore, there were no reliable differences in priming levels between unimodal and cross-modal prime-test conditions. These results indicate that recent experience with ambiguous word meanings can bias a reader’s or listener’s later interpretation of these words in a modality-general way. We identify possible loci of this effect within the context of models of long-term priming and ambiguity resolution.