    Lexical Ambiguity

    Settling Into Semantic Space: An Ambiguity-Focused Account of Word-Meaning Access

    Most words are ambiguous: Individual word forms (e.g., run) can map onto multiple different interpretations depending on their sentence context (e.g., the athlete/politician/river runs). Models of word-meaning access must therefore explain how listeners and readers can rapidly settle on a single, contextually appropriate meaning for each word that they encounter. I present a new account of word-meaning access that places semantic disambiguation at its core and integrates evidence from a wide variety of experimental approaches to explain this key aspect of language comprehension. The model has three key characteristics. (a) Lexical-semantic knowledge is viewed as a high-dimensional space; familiar word meanings correspond to stable states within this lexical-semantic space. (b) Multiple linguistic and paralinguistic cues can influence the settling process by which the system resolves on one of these familiar meanings. (c) Learning mechanisms play a vital role in facilitating rapid word-meaning access by shaping and maintaining high-quality lexical-semantic knowledge throughout the life span. In contrast to earlier models of word-meaning access, I highlight individual differences in lexical-semantic knowledge: Each person's lexicon is uniquely structured by specific, idiosyncratic linguistic experiences.
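
    The settling idea can be made concrete with a toy attractor network. The sketch below is a minimal illustration of the general principle only, not the model proposed in the paper: two stored patterns stand in for two meanings of an ambiguous word, and a contextual cue biases the starting state so that the network settles into one of them. All patterns and parameters are invented for the example.

```python
# Toy illustration of "settling" toward a stored meaning: a minimal
# Hopfield-style attractor network (NOT the paper's model) in which two
# stored patterns stand in for two meanings of an ambiguous word, and a
# context cue biases the start state toward one of them.
import numpy as np

rng = np.random.default_rng(0)
n_units = 64

# Two arbitrary "meaning" patterns (e.g. run = 'athlete' vs. run = 'river').
meanings = {name: rng.choice([-1, 1], size=n_units)
            for name in ("run_athlete", "run_river")}

# Hebbian weights that store both patterns as stable states.
W = sum(np.outer(p, p) for p in meanings.values()).astype(float)
np.fill_diagonal(W, 0.0)

def settle(state, steps=50):
    """Asynchronously update units until the state stops changing."""
    state = state.copy()
    for _ in range(steps):
        prev = state.copy()
        for i in rng.permutation(n_units):
            state[i] = 1 if W[i] @ state >= 0 else -1
        if np.array_equal(state, prev):
            break
    return state

# Start from a noisy input weakly biased toward the 'river' meaning,
# as a stand-in for contextual cues.
cue = meanings["run_river"]
start = np.where(rng.random(n_units) < 0.7, cue, -cue)

final = settle(start)
for name, pattern in meanings.items():
    print(name, "overlap:", int(pattern @ final))
```

    In this toy setting, a stronger or earlier contextual bias simply changes which stored state the network settles into, which is the intuition behind treating familiar meanings as stable states in a lexical-semantic space.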

    Resolving Semantic Ambiguities in Sentences: Cognitive Processes and Brain Mechanisms

    fMRI studies of how the brain processes sentences containing semantically ambiguous words have consistently implicated (i) the left inferior frontal gyrus (LIFG) and (ii) posterior regions of the left temporal lobe in processing high-ambiguity sentences. This article reviews recent findings on this topic and relates them to (i) psycholinguistic theories about the underlying cognitive processes and (ii) general neuro-cognitive accounts of the relevant brain regions. We suggest that the LIFG plays a general role in the cognitive control processes that are necessary to select contextually relevant meanings and to reinterpret sentences that were initially misunderstood, but it is currently unclear whether these control processes are best characterised in terms of specific processes, such as conflict resolution and controlled retrieval, that are required for high-ambiguity sentences, or whether the LIFG's function is better characterised in terms of a more general set of ‘unification’ processes. In contrast to the relatively rapid progress that has been made in understanding the function of the LIFG, we suggest that the contribution of the posterior temporal lobe is less well understood, and future work is needed to clarify its role in sentence comprehension.

    Dominance Norms and Data for Spoken Ambiguous Words in British English

    Words with multiple meanings (e.g. bark of the tree/dog) have provided important insights into several key topics within psycholinguistics. Experiments that use ambiguous words require stimuli to be carefully controlled for the relative frequency (dominance) of their different meanings, as this property has pervasive effects on numerous tasks. Dominance scores are often calculated from word association responses: by measuring the proportion of participants who respond to the word ‘bark’ with dog-related (e.g. “woof”) or tree-related (e.g. “branch”) responses, researchers can estimate people’s relative preferences for these meanings. We collated data from a number of recent experiments and pre-tests to construct a dataset of 29,542 valid responses for 243 spoken ambiguous words from participants in the United Kingdom. We provide summary dominance data for the 182 ambiguous words that have a minimum of 100 responses, and a tool for automatically coding new word association responses based on responses in our coded set, which allows additional data to be more easily scored and added to this database. All files can be found at: https://osf.io/uy47w/
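
    At its core, a dominance score is a proportion over coded responses. The sketch below illustrates that calculation on made-up data; it is not the authors' released coding tool, and the response data and meaning labels are hypothetical.

```python
# Minimal sketch of computing meaning dominance from coded word-association
# responses. The data and coding labels here are hypothetical; the authors'
# released dataset and auto-coding tool (https://osf.io/uy47w/) use their
# own formats.
from collections import Counter

# Each entry: (ambiguous cue word, meaning category assigned to the response).
coded_responses = [
    ("bark", "dog"), ("bark", "dog"), ("bark", "tree"),
    ("bark", "dog"), ("bark", "tree"), ("bark", "other"),
]

def dominance(responses, cue):
    """Proportion of valid (non-'other') responses given to each meaning of `cue`."""
    counts = Counter(meaning for word, meaning in responses
                     if word == cue and meaning != "other")
    total = sum(counts.values())
    return {meaning: n / total for meaning, n in counts.items()}

print(dominance(coded_responses, "bark"))
# e.g. {'dog': 0.6, 'tree': 0.4} -> the 'dog' meaning is dominant.
```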

    Pupil Dilation Is Sensitive to Semantic Ambiguity and Acoustic Degradation

    Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation—a physiological index of cognitive demand—while listeners heard high-ambiguity sentences, containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR), and in Experiment 2, sentences were heard with or without a pink noise masker at –2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the previous sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2). Pupils also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for both performance and pupil responses. This work demonstrates that the presence of homophones, a condition that is ubiquitous in natural language, increases cognitive demand and reduces the intelligibility of speech heard in background noise.
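
    The SNR conditions in such studies follow from the standard definition SNR (in dB) = 20 * log10(rms(speech) / rms(noise)): the masker is rescaled so the speech-to-masker ratio hits the target value. The sketch below shows that rescaling on synthetic placeholder signals; it is not the authors' stimulus-preparation code.

```python
# Minimal sketch of mixing speech with a masker at a target signal-to-noise
# ratio (SNR). The signals here are synthetic placeholders, not real speech
# or babble/pink-noise maskers.
import numpy as np

def rms(x):
    return np.sqrt(np.mean(x ** 2))

def mix_at_snr(speech, noise, snr_db):
    """Add `noise` to `speech` after rescaling it so the speech-to-noise
    RMS ratio equals snr_db (in dB)."""
    noise = noise[: len(speech)]
    target_noise_rms = rms(speech) / (10 ** (snr_db / 20))
    return speech + noise * (target_noise_rms / rms(noise))

# Example with placeholder signals (1 s at 16 kHz).
fs = 16000
rng = np.random.default_rng(1)
speech = np.sin(2 * np.pi * 220 * np.arange(fs) / fs)   # stand-in for speech
noise = rng.standard_normal(fs)                          # stand-in for a masker

mixture = mix_at_snr(speech, noise, snr_db=-2)           # e.g. a -2 dB SNR condition
print(round(20 * np.log10(rms(speech) / rms(mixture - speech)), 2))  # ~ -2.0
```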

    Listeners and Readers Generalize Their Experience With Word Meanings Across Modalities

    Research has shown that adults' lexical-semantic representations are surprisingly malleable. For instance, the interpretation of ambiguous words (e.g., bark) is influenced by experience such that recently encountered meanings become more readily available (Rodd et al., 2016, 2013). However, the mechanism underlying this word-meaning priming effect remains unclear, and competing accounts make different predictions about the extent to which information about word meanings that is gained within one modality (e.g., speech) is transferred to the other modality (e.g., reading) to aid comprehension. In two Web-based experiments, ambiguous target words were primed with either written or spoken sentences that biased their interpretation toward a subordinate meaning, or were unprimed. About 20 min after the prime exposure, interpretation of these target words was tested by presenting them in either written or spoken form, using word association (Experiment 1, N = 78) and speeded semantic relatedness decisions (Experiment 2, N = 181). Both experiments replicated the auditory unimodal priming effect shown previously (Rodd et al., 2016, 2013) and revealed significant cross-modal priming: primed meanings were retrieved more frequently and swiftly across all primed conditions compared with the unprimed baseline. Furthermore, there were no reliable differences in priming levels between unimodal and cross-modal prime-test conditions. These results indicate that recent experience with ambiguous word meanings can bias the reader's or listener's later interpretation of these words in a modality-general way. We identify possible loci of this effect within the context of models of long-term priming and ambiguity resolution.

    How to study spoken language understanding: a survey of neuroscientific methods

    The past 20 years have seen a methodological revolution in spoken language research. A diverse range of neuroscientific techniques are now available that allow researchers to observe the brain’s responses to different types of speech stimuli in both healthy and impaired listeners, and also to observe how individuals’ abilities to process speech change as a consequence of disrupting processing in specific brain regions. This special issue provides a tutorial review of the most important of these methods to guide researchers in making informed choices about which methods are best suited to addressing specific questions concerning the neuro-computational foundations of spoken language understanding. This introductory review provides (i) an historical overview of the experimental study of spoken language understanding, (ii) a summary of the key methods currently being used by cognitive neuroscientists in this field, and (iii) thoughts on the likely future developments of these methods.

    Towards a distributed connectionist account of cognates and interlingual homographs: evidence from semantic relatedness tasks

    BACKGROUND: Current models of how bilinguals process cognates (e.g., “wolf”, which has the same meaning in Dutch and English) and interlingual homographs (e.g., “angel”, meaning “insect’s sting” in Dutch) are based primarily on data from lexical decision tasks. A major drawback of such tasks is that it is difficult—if not impossible—to separate processes that occur during decision making (e.g., response competition) from processes that take place in the lexicon (e.g., lateral inhibition). Instead, we conducted two English semantic relatedness judgement experiments. METHODS: In Experiment 1, highly proficient Dutch–English bilinguals (N = 29) and English monolinguals (N = 30) judged the semantic relatedness of word pairs that included a cognate (e.g., “wolf”–“howl”; n = 50), an interlingual homograph (e.g., “angel”–“heaven”; n = 50) or an English control word (e.g., “carrot”–“vegetable”; n = 50). In Experiment 2, another group of highly proficient Dutch–English bilinguals (N = 101) read sentences in Dutch that contained one of those cognates, interlingual homographs or the Dutch translation of one of the English control words (e.g., “wortel” for “carrot”) approximately 15 minutes prior to completing the English semantic relatedness task. RESULTS: In Experiment 1, there was an interlingual homograph inhibition effect of 39 ms only for the bilinguals, but no evidence for a cognate facilitation effect. Experiment 2 replicated these findings and also revealed that cross-lingual long-term priming had an opposite effect on the cognates and interlingual homographs: recent experience with a cognate in Dutch speeded processing of those items 15 minutes later in English but slowed processing of interlingual homographs. However, these priming effects were smaller than previously observed using a lexical decision task. CONCLUSION: After comparing our results to studies in both the bilingual and monolingual domain, we argue that bilinguals appear to process cognates and interlingual homographs as monolinguals process polysemes and homonyms, respectively. In the monolingual domain, processing of such words is best modelled using distributed connectionist frameworks. We conclude that it is necessary to explore the viability of such a model for the bilingual case.

    Recent experience with cognates and interlingual homographs in one language affects subsequent processing in another language

    This experiment shows that recent experience in one language influences subsequent processing of the same word-forms in a different language. Dutch–English bilinguals read Dutch sentences containing Dutch–English cognates and interlingual homographs, which were presented again 16 minutes later in isolation in an English lexical decision task. Priming produced faster responses for the cognates but slower responses for the interlingual homographs. These results show that language switching can influence bilingual speakers at the level of individual words, and require models of bilingual word recognition (e.g. BIA+) to allow access to word meanings to be modulated by recent experience.

    Localising semantic and syntactic processing in spoken and written language comprehension: an Activation Likelihood Estimation meta-analysis

    We conducted an Activation Likelihood Estimation (ALE) meta-analysis to identify brain regions that are recruited by linguistic stimuli requiring relatively demanding semantic or syntactic processing. We included 54 functional MRI studies that explicitly varied the semantic or syntactic processing load, while holding constant demands on earlier stages of processing. We included studies that introduced a syntactic/semantic ambiguity or anomaly, used a priming manipulation that specifically reduced the load on semantic/syntactic processing, or varied the level of syntactic complexity. The results confirmed the critical role of the posterior left Inferior Frontal Gyrus (LIFG) in semantic and syntactic processing. These results challenge models of sentence comprehension that highlight the role of the anterior LIFG in semantic processing. In addition, the results emphasise the role of the posterior (but not anterior) temporal lobe in both semantic and syntactic processing.
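
    The core ALE computation models each reported activation focus as a three-dimensional Gaussian, combines foci within an experiment into a modelled-activation (MA) map (here by a voxel-wise maximum), and combines experiments as ALE = 1 - prod(1 - MA). The sketch below illustrates only that combination step on a toy grid with invented coordinates; it omits the sample-size-dependent kernel widths and the permutation-based inference used in real ALE analyses.

```python
# Toy illustration of the core ALE combination: each reported focus becomes a
# 3-D Gaussian "modelled activation" (MA) blob, blobs are combined within an
# experiment by a voxel-wise maximum, and experiments are combined as
# ALE = 1 - prod(1 - MA). Coordinates and the kernel width are invented.
import numpy as np

# A coarse 4 mm grid covering a small cube of "brain" space.
axis = np.arange(-64, 65, 4)
GRID = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)

def ma_map(foci_mm, sigma=8.0):
    """Modelled-activation map: voxel-wise maximum over one Gaussian blob per focus."""
    ma = np.zeros(GRID.shape[:3])
    for focus in foci_mm:
        d2 = np.sum((GRID - np.asarray(focus)) ** 2, axis=-1)
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

# Hypothetical peak coordinates (x, y, z in mm) from three experiments.
experiments = [
    [(-48, 16, 20)],
    [(-44, 12, 24), (-50, 20, 16)],
    [(-46, 18, 22)],
]

ale = 1 - np.prod([1 - ma_map(foci) for foci in experiments], axis=0)
print("peak ALE score:", round(float(ale.max()), 3))
```

    In a real analysis, the resulting ALE map would then be thresholded against a null distribution obtained by randomly relocating the foci, which this sketch does not attempt.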