
    Lexical representation explains cortical entrainment during speech comprehension

    Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained by the lexical properties of the stimuli, without recourse to hierarchical syntax. Comment: Submitted for publication
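    A rough, illustrative sketch of the kind of analysis this abstract alludes to: turning a per-word quantity (arbitrary example values standing in for something like lexical surprisal) into a time series and inspecting its power spectrum. This is not the authors' model; the function name, word duration, sampling rate, and example values below are all assumptions chosen for illustration.

        # Minimal sketch (not the paper's model): hold one value per word over that
        # word's duration, then compute the power spectrum of the resulting signal.
        import numpy as np

        def word_signal_power_spectrum(word_values, word_duration=0.32, fs=100):
            """word_values: per-word numbers (e.g., lexical surprisal);
            word_duration: assumed presentation time per word, in seconds;
            fs: sampling rate of the reconstructed signal, in Hz."""
            samples_per_word = int(round(word_duration * fs))
            # Hold each word's value constant for the duration of that word.
            signal = np.repeat(np.asarray(word_values, dtype=float), samples_per_word)
            signal -= signal.mean()                      # remove the DC component
            power = np.abs(np.fft.rfft(signal)) ** 2     # power at each frequency
            freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
            return freqs, power

        # Example: if per-word values happen to covary with position in a 4-word
        # sentence frame, this purely word-level signal already shows a peak at the
        # sentence rate (word rate / 4), with no syntactic combination involved.
        values = np.tile([3.0, 1.0, 2.5, 0.5], 10)
        freqs, power = word_signal_power_spectrum(values)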

    Investigating Linguistic Pattern Ordering in Hierarchical Natural Language Generation

    Natural language generation (NLG) is a critical component of spoken dialogue systems and can be divided into two phases: (1) sentence planning, which decides the overall sentence structure, and (2) surface realization, which determines specific word forms and flattens the sentence structure into a string. With the rise of deep learning, most modern NLG models are based on a sequence-to-sequence (seq2seq) model built on an encoder-decoder structure; these NLG models generate sentences from scratch by jointly optimizing sentence planning and surface realization. However, such a simple encoder-decoder architecture usually fails to generate complex and long sentences, because the decoder has difficulty learning all grammar and diction knowledge well. This paper introduces an NLG model with a hierarchical attentional decoder, where the hierarchy focuses on leveraging linguistic knowledge in a specific order. The experiments show that the proposed method significantly outperforms the traditional seq2seq model with a smaller model size, and the design of the hierarchical attentional decoder can be applied to various NLG systems. Furthermore, different generation strategies based on linguistic patterns are investigated and analyzed to guide future NLG research. Comment: accepted by the 7th IEEE Workshop on Spoken Language Technology (SLT 2018). arXiv admin note: text overlap with arXiv:1808.0274
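    For readers unfamiliar with the baseline the paper improves on, here is a minimal, self-contained sketch of an attentional encoder-decoder (seq2seq) NLG model in PyTorch. It is not the paper's hierarchical attentional decoder; the class name, dimensions, and single decoding pass are illustrative assumptions.

        # Minimal attentional seq2seq sketch (illustrative, not the paper's model).
        import torch
        import torch.nn as nn

        class AttnSeq2Seq(nn.Module):
            def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim)
                self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
                self.decoder = nn.GRUCell(emb_dim + hid_dim, hid_dim)
                self.out = nn.Linear(hid_dim, vocab_size)

            def forward(self, src_ids, tgt_ids):
                # Encode the input (e.g., a linearised meaning representation).
                enc_states, h = self.encoder(self.embed(src_ids))       # (B, S, H)
                hidden = h.squeeze(0)                                    # (B, H)
                logits = []
                for t in range(tgt_ids.size(1)):
                    # Dot-product attention over encoder states.
                    scores = torch.bmm(enc_states, hidden.unsqueeze(2))  # (B, S, 1)
                    weights = torch.softmax(scores, dim=1)
                    context = (weights * enc_states).sum(dim=1)          # (B, H)
                    # Teacher forcing: feed the gold previous word plus the context.
                    step_in = torch.cat([self.embed(tgt_ids[:, t]), context], dim=-1)
                    hidden = self.decoder(step_in, hidden)
                    logits.append(self.out(hidden))
                return torch.stack(logits, dim=1)                        # (B, T, V)

        # Usage with toy data:
        model = AttnSeq2Seq(vocab_size=1000)
        src = torch.randint(0, 1000, (2, 7))    # batch of 2 input sequences
        tgt = torch.randint(0, 1000, (2, 12))   # gold output words (teacher forcing)
        logits = model(src, tgt)                # (2, 12, 1000)

        # A hierarchical variant would stack several such decoding passes, each
        # responsible for one class of linguistic pattern, with later passes
        # attending to the outputs of earlier ones.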

    Cross-domain priming from mathematics to relative-clause attachment: a visual-world study in French

    Human language processing must rely on a certain degree of abstraction, as we can produce and understand sentences that we have never produced or heard before. One way to establish syntactic abstraction is by investigating structural priming. Structural priming has been shown to be effective within a cognitive domain, in the present case the linguistic domain. But does priming also work across different domains? In line with previous experiments, we investigated cross-domain structural priming from mathematical expressions to linguistic structures with respect to relative clause attachment in French (e.g., la fille du professeur qui habitait Ă  Paris / the daughter of the teacher who lived in Paris). Testing priming in French is particularly interesting because it extends earlier results established for English to a language where the baseline for relative clause attachment preferences is different from English: in English, relative clauses (RCs) tend to be attached to the local noun phrase (low attachment), while in French there is a preference for high attachment of relative clauses to the first noun phrase (NP). Moreover, in contrast to earlier studies, we applied an online technique (visual-world eye-tracking). Our results confirm cross-domain priming from mathematics to linguistic structures in French. Most interestingly, unlike for less mathematically adept participants, in mathematically skilled participants the effect emerged very early (at the beginning of the relative clause in the speech stream) and was also present later (at the end of the relative clause). In line with previous findings, our experiment suggests that mathematics and language share aspects of syntactic structure at a very high level of abstraction.

    Introduction to the special issue on cross-language algorithms and applications

    With the increasingly global nature of our everyday interactions, the need for multilingual technologies to support efficient and effective information access and communication cannot be overemphasized. Computational modeling of language has been the focus of Natural Language Processing, a subdiscipline of Artificial Intelligence. One of the current challenges for this discipline is to design methodologies and algorithms that are cross-language in order to create multilingual technologies rapidly. The goal of this JAIR special issue on Cross-Language Algorithms and Applications (CLAA) is to present leading research in this area, with emphasis on developing unifying themes that could lead to the development of the science of multi- and cross-lingualism. In this introduction, we provide the reader with the motivation for this special issue and summarize the contributions of the papers that have been included. The selected papers cover a broad range of cross-lingual technologies including machine translation, domain and language adaptation for sentiment analysis, cross-language lexical resources, dependency parsing, information retrieval, and knowledge representation. We anticipate that this special issue will serve as an invaluable resource for researchers interested in topics of cross-lingual natural language processing. Postprint (published version)

    Neural connectivity in syntactic movement processing

    Linguistic theory suggests that non-canonical sentences subvert the dominant agent-verb-theme order in English via displacement of sentence constituents to argument (NP-movement) or non-argument positions (wh-movement). Both processes have been associated with the left inferior frontal gyrus and posterior superior temporal gyrus, but differences in neural activity and connectivity between movement types have not been investigated. In the current study, functional magnetic resonance imaging data were acquired from 21 adult participants during an auditory sentence-picture verification task, using passive and active sentences contrasted to isolate NP-movement, and object- and subject-cleft sentences contrasted to isolate wh-movement. Then, functional magnetic resonance imaging data from regions common to both movement types were entered into a dynamic causal modeling analysis to examine effective connectivity for wh-movement and NP-movement. Results showed greater left inferior frontal gyrus activation for wh- > NP-movement, but no activation for NP- > wh-movement. Both types of movement elicited activity in the opercular part of the left inferior frontal gyrus, left posterior superior temporal gyrus, and left medial superior frontal gyrus. The dynamic causal modeling analyses indicated that neither movement type significantly modulated the connection from the left inferior frontal gyrus to the left posterior superior temporal gyrus, or vice versa, suggesting no connectivity differences between wh- and NP-movement. These findings support the idea that the increased complexity of wh-structures, compared to sentences with NP-movement, requires greater engagement of cognitive resources via increased neural activity in the left inferior frontal gyrus, but both movement types engage similar neural networks. This work was supported by the NIH-NIDCD, Clinical Research Center Grant, P50DC012283 (PI: CT), and the Graduate Research Grant and School of Communication Graduate Ignition Grant from Northwestern University (awarded to EE). Published version