6,473 research outputs found

    Bilingual word recognition in a sentence context

    This article provides an overview of bilingualism research on visual word recognition in isolation and in sentence context. Many studies investigating the processing of words out of context have shown that lexical representations from both languages are activated when reading in one language (language-nonselective lexical access). A newly developed line of research asks whether language-nonselective access generalizes to word recognition in sentence contexts, which provide a language cue and/or semantic constraint information for upcoming words. Recent studies suggest that the language of the preceding words is insufficient to restrict lexical access to words of the target language, even when reading in the native language. Eye-tracking studies revealing the time course of word activation further showed that semantic constraint does not restrict language-nonselective access at early reading stages, although there is evidence that it has a relatively late effect. The theoretical implications for theories of bilingual word recognition are discussed in light of the Bilingual Interactive Activation + model (Dijkstra & Van Heuven, 2002).

    Cross-linguistic activation in bilingual sentence processing: the role of word class meaning

    This study investigates how categorial (word class) semantics influences cross-linguistic interactions when reading in L2. Previous homograph studies paid little attention to the possible influence of the different word classes in the stimulus material on cross-linguistic activation. The present study examines the word recognition performance of Dutch-English bilinguals who performed a lexical decision task on word targets appearing in a sentence. To determine the influence of word class meaning, the critical words either showed a word class overlap (e.g. the homograph tree [NOUN], which means "step" in Dutch) or did not (e.g. big [ADJ], which is a noun in Dutch meaning "piglet"). In the condition of word class overlap, a facilitation effect was observed, suggesting that both languages were active. When there was no word class overlap, the facilitation effect disappeared. This result suggests that categorial meaning affects the word recognition process of bilinguals.

    Shades Of Meaning: Capturing Meaningful Context-Based Variations In Neural Patterns

    When cognitive psychologists and psycholinguists consider the variability that arises during the retrieval of conceptual information, this variability is often understood to arise from the dynamic interactions between concepts and contexts. When cognitive neuroscientists and neurolinguists think about this variability, it is typically treated as noise and discarded from the analyses. In this dissertation, we bridge these two traditions by asking: can the variability in neural patterns evoked by word meanings reflect the contextual variation that occurs during conceptual processing? We employ functional magnetic resonance imaging (fMRI) to measure, quantify, and predict brain activity during context-dependent retrieval of word meanings. Across three experiments, we test the ways in which word-evoked neural variability is influenced by the sentence context in which the word appears (Chapter 2), the current set of task demands (Chapter 3), or even undirected thoughts about other concepts (Chapter 4). Our findings indicate not only that the neural patterns evoked by the same stimulus word vary over time, but that we can predict the degree to which these patterns vary using meaningful, theoretically motivated variables. These results demonstrate that cross-context, within-concept variations in neural responses are not exclusively due to statistical noise or measurement error. Rather, the degree of a concept's neural variability varies in a manner that accords with a context-dependent view of semantic representation. In addition, we present preliminary evidence that prefrontally mediated cognitive control processes are involved in the expression of context-appropriate neural patterns. In sum, these studies provide a novel perspective on the flexibility of word meanings and the variable brain activity patterns associated with them.

    Learning semantic sentence representations from visually grounded language without lexical knowledge

    Current approaches to learning semantic representations of sentences often use prior word-level knowledge. The current study aims to leverage visual information in order to capture sentence-level semantics without the need for word embeddings. We use a multimodal sentence encoder trained on a corpus of images with matching text captions to produce visually grounded sentence embeddings. Deep neural networks are trained to map the two modalities to a common embedding space such that for an image the corresponding caption can be retrieved, and vice versa. We show that our model achieves results comparable to the current state of the art on two popular image-caption retrieval benchmark data sets: MSCOCO and Flickr8k. We evaluate the semantic content of the resulting sentence embeddings using data from the Semantic Textual Similarity benchmark task and show that the multimodal embeddings correlate well with human semantic similarity judgements. The system achieves state-of-the-art results on several of these benchmarks, which shows that a system trained solely on multimodal data, without assuming any word representations, is able to capture sentence-level semantics. Importantly, this result shows that we do not need prior knowledge of lexical-level semantics in order to model sentence-level semantics. These findings demonstrate the importance of visual information in semantics.
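    The retrieval setup described in this abstract can be illustrated with a minimal sketch: image and caption vectors live in a shared space, matching pairs should score higher cosine similarity than mismatched ones, and a margin-based contrastive loss pushes them apart during training. The vectors, dimensions, and margin below are toy stand-ins, not the paper's actual encoders or data.

```python
import numpy as np

rng = np.random.default_rng(1)

def l2_normalize(m):
    # Unit-normalize rows so dot products equal cosine similarities.
    return m / np.linalg.norm(m, axis=1, keepdims=True)

n, d = 4, 8
# Toy assumption: each image-caption pair shares a latent "content" vector,
# observed through the two modalities with a little noise.
shared = rng.normal(size=(n, d))
images = l2_normalize(shared + 0.1 * rng.normal(size=(n, d)))
captions = l2_normalize(shared + 0.1 * rng.normal(size=(n, d)))

# Cosine similarity between every image and every caption.
sims = images @ captions.T

# Caption retrieval: for each image, pick the most similar caption.
retrieved = sims.argmax(axis=1)
recall_at_1 = float((retrieved == np.arange(n)).mean())

# Margin-based contrastive loss: each matched similarity (diagonal) should
# exceed every mismatched one in its row by at least the margin.
margin = 0.2
pos = np.diag(sims)
loss = np.maximum(0.0, margin + sims - pos[:, None])
np.fill_diagonal(loss, 0.0)
print(recall_at_1, float(loss.sum()))
```

    In a trained system the encoders would be deep networks optimized to drive this loss toward zero, at which point nearest-neighbour lookup in the shared space performs the caption retrieval the abstract evaluates.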

    A survey of cross-lingual word embedding models

    Cross-lingual representations of words enable us to reason about word meaning in multilingual contexts and are a key facilitator of cross-lingual transfer when developing natural language processing models for low-resource languages. In this survey, we provide a comprehensive typology of cross-lingual word embedding models. We compare their data requirements and objective functions. The recurring theme of the survey is that many of the models presented in the literature optimize for the same objectives, and that seemingly different models are often equivalent, modulo optimization strategies, hyper-parameters, and such. We also discuss the different ways cross-lingual word embeddings are evaluated, as well as future challenges and research horizons.
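    One widely used family of models covered by such surveys learns a linear map from source-language embeddings onto target-language embeddings of translation pairs, often constrained to be orthogonal (the orthogonal Procrustes solution, computed via SVD). The sketch below demonstrates that objective on synthetic data; the dimensions and the planted rotation are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 50, 200
X = rng.normal(size=(n, d))            # "source language" embeddings

# Plant a ground-truth orthogonal map and build aligned "target" embeddings.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ Q

# Orthogonal Procrustes: W = argmin over orthogonal W of ||X W - Y||_F,
# solved in closed form from the SVD of the cross-covariance X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# With noiseless synthetic data the planted map is recovered exactly
# (up to floating-point error).
alignment_error = float(np.linalg.norm(X @ W - Y))
print(alignment_error)
```

    With real bilingual dictionaries the pairs are noisy, so the residual stays nonzero and the quality of the learned map is instead judged on downstream tasks such as bilingual lexicon induction.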

    Development of an Automated Scoring Model Using SentenceTransformers for Discussion Forums in Online Learning Environments

    Due to the limitations of public datasets, research on automatic essay scoring in Indonesian has been constrained, resulting in suboptimal accuracy. In general, the main goal of an essay scoring system is to reduce the time spent on scoring, which is usually done manually with human judgment. This study uses a discussion forum in online learning to generate an assessment between the responses and the lecturer's rubric in automated essay scoring. A SentenceTransformers pre-trained model that can construct high-quality vector embeddings was proposed to identify the semantic similarity between the responses and the lecturer's rubric. The effectiveness of monolingual and multilingual models was compared. This research aims to determine the models' effectiveness and the appropriate model for Automated Essay Scoring (AES) in paired-sentence Natural Language Processing tasks. The distiluse-base-multilingual-cased-v1 model, evaluated with the Pearson correlation method, obtained the highest performance: a correlation of 0.63 and a mean absolute error (MAE) of 0.70. This indicates that the overall prediction result is improved compared to earlier regression-task research.
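    The scoring pipeline this abstract describes can be sketched as follows: embed each response and the rubric, score by cosine similarity, then evaluate predictions against human ratings with Pearson correlation and MAE. In the study the embeddings would come from distiluse-base-multilingual-cased-v1; here random vectors stand in for them, and the "human" scores are synthetic, so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

d, n_students = 16, 6
rubric = rng.normal(size=d)                  # stand-in rubric embedding
responses = rng.normal(size=(n_students, d)) # stand-in response embeddings

# Predicted score per response: similarity to the rubric embedding.
predicted = np.array([cosine(r, rubric) for r in responses])

# Hypothetical human ratings (synthetic here) for evaluation.
human = predicted + 0.1 * rng.normal(size=n_students)

# The abstract's two evaluation metrics: Pearson correlation and MAE.
pearson_r = float(np.corrcoef(predicted, human)[0, 1])
mae = float(np.abs(predicted - human).mean())
print(round(pearson_r, 3), round(mae, 3))
```

    With the real model, each `responses[i]` would be `model.encode(text)` output; the evaluation logic is unchanged.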