
    The Unification Space implemented as a localist neural net: predictions and error-tolerance in a constraint-based parser

    We introduce a novel computer implementation of the Unification-Space parser (Vosse and Kempen in Cognition 75:105–143, 2000) in the form of a localist neural network whose dynamics are based on interactive activation and inhibition. The wiring of the network is determined by Performance Grammar (Kempen and Harbusch in Verb constructions in German and Dutch. Benjamins, Amsterdam, 2003), a lexicalist formalism with feature unification as the binding operation. While the network processes input word strings incrementally, the evolving shape of parse trees is represented as changing patterns of activation in nodes that code for syntactic properties of words and phrases, and for the grammatical functions they fulfill. The system is capable, at least qualitatively and rudimentarily, of simulating several important dynamic aspects of human syntactic parsing, including garden-path phenomena and reanalysis, effects of complexity (various types of clause embeddings), fault tolerance in case of unification failures and unknown words, and predictive parsing (expectation-based analysis, surprisal effects). English is the target language of the parser described.
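The interactive activation and inhibition dynamics referred to above can be sketched in a few lines. This is an illustrative update rule in the spirit of classic interactive-activation models, not the published Unification-Space implementation; the parameter values and two-node layout are assumptions for demonstration only.

```python
import numpy as np

def ia_step(act, excite, inhibit, decay=0.1, rest=0.0,
            act_min=-0.2, act_max=1.0):
    """One update step of interactive-activation dynamics.

    act     -- current activation of each node
    excite  -- net excitatory input per node (>= 0)
    inhibit -- net inhibitory input per node (>= 0)

    Input effects are scaled by the distance to the activation ceiling
    (excitation) or floor (inhibition); decay pulls nodes back to rest.
    """
    net = excite * (act_max - act) - inhibit * (act - act_min)
    return np.clip(act + net - decay * (act - rest), act_min, act_max)

# Toy run: node 0 receives steady excitation, node 1 steady inhibition.
act = np.zeros(2)
for _ in range(50):
    act = ia_step(act,
                  excite=np.array([0.2, 0.0]),
                  inhibit=np.array([0.0, 0.2]))
# Node 0 settles well above rest; node 1 settles below rest.
```

The saturating input terms are what keep activations bounded without hard resets, which is why such networks can run continuously as words arrive.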

    Prolegomena to a neurocomputational architecture for human grammatical encoding and decoding

    The study develops a neurocomputational architecture for grammatical processing in language production and language comprehension (grammatical encoding and decoding, respectively). It seeks to answer two questions. First, how is online syntactic structure formation of the complexity required by natural-language grammars possible in a fixed, preexisting neural network without the need for online creation of new connections or associations? Second, is it realistic to assume that the seemingly disparate instantiations of syntactic structure formation in grammatical encoding and grammatical decoding can run on the same neural infrastructure? This issue is prompted by accumulating experimental evidence for the hypothesis that the mechanisms for grammatical decoding overlap with those for grammatical encoding to a considerable extent, thus inviting the hypothesis of a single “grammatical coder.” The paper answers both questions by providing the blueprint for a syntactic structure formation mechanism that is entirely based on prewired circuitry (except for referential processing, which relies on the rapid learning capacity of the hippocampal complex), and can subserve decoding as well as encoding tasks. The model builds on the “Unification Space” model of syntactic parsing developed by Vosse & Kempen (2000, 2008, 2009). The design includes a neurocomputational mechanism for the treatment of an important class of grammatical movement phenomena.

    Looking Through the Lens of Individual Differences: Relationships Between Personality, Cognitive Control, Language Processing, and Genes

    The study of individual differences in cognitive abilities and personality traits has the potential to inform our understanding of how the processing mechanisms underlying different behaviors are organized. In the current set of studies, we applied an individual-differences approach to the study of sources of variation in individuals’ personality traits, cognitive control, and linguistic ambiguity resolution abilities. In Chapter 2, we investigated the relationship between motivational personality traits and cognitive control abilities. The results demonstrated that individual differences in the personality traits of approach and avoidance predict performance on verbal and nonverbal versions of the Stroop task. These results are suggestive of a hemisphere-specific organization of approach/avoidance personality traits and verbal/nonverbal cognitive control abilities. Furthermore, these results are consistent with previous findings of hemispheric asymmetry in terms of the distribution of dopaminergic and norepinephrine signaling pathways. In Chapter 3, we investigated the extent to which the same processing mechanisms are used to resolve lexical and syntactic conflict. In addition, we incorporated a behavioral genetics approach to investigate this commonality at the neurotransmitter level. We explored whether genetic variation in catechol-O-methyltransferase (COMT), a gene that regulates the catabolism of dopamine in prefrontal cortex, is related to individuals’ ability to resolve lexical and syntactic conflict. The results of this study demonstrated that individual differences in the ability to resolve lexical conflict are related to variation in syntactic conflict resolution abilities. This finding supports constraint satisfaction theories of language processing. 
We also showed that those individuals with the variant of the COMT gene resulting in less availability of dopamine at the synapse tended to have greater difficulty processing both lexical and syntactic ambiguities. These results provide novel evidence that dopamine plays a role in linguistic ambiguity resolution. In sum, the results from the current set of studies reveal how an individual-differences approach can be used to investigate several different factors involved in the context-dependent regulation of behavior.

    Quantity and Quality: Not a Zero-Sum Game

    Quantification of existing theories is a great challenge but also a great chance for the study of language in the brain. While quantification is necessary for the development of precise theories, it demands new methods and new perspectives. In light of this, four complementary methods were introduced to provide a quantitative and computational account of the extended Argument Dependency Model (eADM) from Bornkessel-Schlesewsky and Schlesewsky. First, a computational model of human language comprehension was introduced on the basis of dependency parsing. This model provided an initial comparison of two potential mechanisms for human language processing: the traditional "subject" strategy, based on grammatical relations, and the "actor" strategy, based on prominence and adopted from the eADM. Initial results showed an advantage for the traditional "subject" model in a restricted context; however, the "actor" model demonstrated behavior in a test run that was more similar to human behavior than that of the "subject" model. Next, a computational-quantitative implementation of the "actor" strategy as weighted feature comparison between memory units was used to compare it to other memory-based models from the literature on the basis of EEG data. The "actor" strategy clearly provided the best model, showing a better global fit as well as a better match in all details. Building on this success in modeling EEG data, the feasibility of estimating free parameters from empirical data was demonstrated. Both the procedure for doing so and the necessary software were introduced and applied at the level of individual participants. Using empirically estimated parameters, the models from the previous EEG experiment were calculated again and yielded similar results, thus reinforcing the previous work. In a final experiment, the feasibility of analyzing EEG data from a naturalistic auditory stimulus was demonstrated, which conventional wisdom says is not possible. 
The analysis suggested a new perspective on the nature of event-related potentials (ERPs), which does not contradict existing theory yet nonetheless goes against previous intuition. Using this new perspective as a basis, a preliminary attempt at a parsimonious neurocomputational theory of cognitive ERP components was developed.
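One way to read the "weighted feature comparison between memory units" is as a weight-normalised overlap score between a set of retrieval cues and a candidate memory unit, loosely in the spirit of cue-based retrieval models. The sketch below is an assumption about that reading; the feature names ("animate", "nominative", "definite") and weights are invented for illustration and do not come from the dissertation.

```python
def match_score(cues, item, weights):
    """Weighted feature match between retrieval cues and a memory item.

    cues, item -- dicts mapping feature names to values
    weights    -- relative importance of each feature

    Returns the weight-normalised proportion of matching features.
    """
    total = sum(weights.values())
    hit = sum(w for f, w in weights.items() if cues.get(f) == item.get(f))
    return hit / total

# Hypothetical prominence cues for an "actor" strategy: animacy and case
# rather than grammatical subjecthood.
cues       = {"animate": True, "nominative": True, "definite": True}
item_actor = {"animate": True, "nominative": True, "definite": False}
weights    = {"animate": 2.0, "nominative": 2.0, "definite": 1.0}
score = match_score(cues, item_actor, weights)  # 4 of 5 weight units match
```

Free parameters such as the weights are exactly what the estimation procedure described above would fit to empirical (e.g. EEG) data per participant.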

    Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and a lexicalist grammar

    We present the design, implementation and simulation results of a psycholinguistic model of human syntactic processing that meets major empirical criteria. The parser operates in conjunction with a lexicalist grammar and is driven by syntactic information associated with heads of phrases. The dynamics of the model are based on competition by lateral inhibition ('competitive inhibition'). Input words activate lexical frames (i.e. elementary trees anchored to input words) in the mental lexicon, and a network of candidate 'unification links' is set up between frame nodes. These links represent tentative attachments that are graded rather than all-or-none. Candidate links that, due to grammatical or 'treehood' constraints, are incompatible, compete for inclusion in the final syntactic tree by sending each other inhibitory signals that reduce the competitor's attachment strength. The outcome of these local and simultaneous competitions is controlled by dynamic parameters, in particular by the Entry Activation and the Activation Decay rate of syntactic nodes, and by the Strength and Strength Build-up rate of Unification links. In case of a successful parse, a single syntactic tree is returned that covers the whole input string and consists of lexical frames connected by winning Unification links. 
Simulations are reported of a significant range of psycholinguistic parsing phenomena in both normal and aphasic speakers of English: (i) various effects of linguistic complexity (single versus double, center versus right-hand self-embeddings of relative clauses; the difference between relative clauses with subject and object extraction; the contrast between a complement clause embedded within a relative clause versus a relative clause embedded within a complement clause); (ii) effects of local and global ambiguity, and of word-class and syntactic ambiguity (including recency and length effects); (iii) certain difficulty-of-reanalysis effects (contrasts between local ambiguities that are easy to resolve versus ones that lead to serious garden-path effects); (iv) effects of agrammatism on parsing performance, in particular the performance of various groups of aphasic patients on several sentence types.
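The competition among candidate unification links can be sketched as a lateral-inhibition loop. This is only the general idea under assumed parameter names and values; it does not reproduce the published model's Entry Activation, Activation Decay, or Strength Build-up equations. Note that the outcome stays graded rather than all-or-none, matching the abstract's description of attachment strengths.

```python
import numpy as np

def compete(support, incompatible, inhibition=2.0, dt=0.1, steps=200):
    """Graded competition between candidate unification links.

    support      -- intrinsic support for each candidate link (e.g. how
                    well the attachment satisfies grammatical constraints)
    incompatible -- boolean matrix, True where two links cannot both be
                    part of a well-formed tree

    Each link grows toward full strength at a rate set by its support,
    while incompatible competitors push it down in proportion to their
    own current strength.
    """
    support = np.asarray(support, dtype=float)
    s = np.full_like(support, 0.5)          # all links start undecided
    for _ in range(steps):
        growth = support * (1.0 - s)
        pressure = inhibition * (incompatible @ s) * s
        s = np.clip(s + dt * (growth - pressure), 0.0, 1.0)
    return s

# Two mutually exclusive attachments; the better-supported one dominates.
incompat = np.array([[False, True], [True, False]])
final = compete([0.6, 0.4], incompat)
```

Because inhibition is proportional to the competitor's current strength, an early lead compounds over time, which is one simple way such models produce garden-path effects when late input favors the initially weaker link.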

    Bi-Directional Evidence Linking Sentence Production and Comprehension: A Cross-Modality Structural Priming Study

    Natural language involves both speaking and listening. Recent models claim that production and comprehension share aspects of processing and are linked within individuals (Pickering and Garrod, 2004, 2013; MacDonald, 2013; Dell and Chang, 2014). Evidence for this claim has come from studies of cross-modality structural priming, mainly examining processing in the direction of comprehension to production. The current study replicated these comprehension-to-production findings and developed a novel cross-modal structural priming paradigm from production to comprehension using a temporally sensitive online measure of comprehension, event-related potentials. For comprehension-to-production priming, participants first listened to active or passive sentences and then described target pictures using either structure. In production-to-comprehension priming, participants first described a picture using either structure and then listened to target passive sentences while EEG was recorded. Comprehension-to-production priming showed the expected passive sentence priming for syntactic choice, but not response time (RT) or average syllable duration. In production-to-comprehension priming, primed, versus unprimed, passive sentences elicited a reduced N400. These effects support the notion that production and comprehension share aspects of processing and are linked within the individual. Moreover, this paradigm can be used to explore priming at different linguistic levels, as well as the influence of extra-linguistic factors on natural language use.

    Widening agreement processing: a matter of time, features and distance

    Published online: 08 Mar 2018
    Existing psycholinguistic models typically describe agreement relations as monolithic phenomena amounting to mechanisms that check mere feature consistency. This eye-tracking study aimed at widening this perspective by investigating the time spent reading subject-verb (number, person) and adverb-verb (tense) violations on an inflected verb during sentence comprehension in Spanish. Results suggest that (i) distinct processing mechanisms underlie the analysis of subject-verb and adverb-verb relations, (ii) the parser is sensitive to the different interpretive properties that characterise the person, number and tense features encoded in the verb (i.e. anchoring to discourse for person and tense interpretation, as opposed to anchoring to cardinality information for number), and (iii) the (local, distal) position of the agreement controller with respect to the verb affects the interpretation of these dependencies. An account is proposed that capitalises on the importance of enriching current sentence processing formalizations with a feature- and relation-based approach.
    S.M. acknowledges funding from the Gipuzkoa Fellowship Program and from grants FFI2016-76432 and SEV-2015-490 (Severo Ochoa Programme for Centres/Units of Excellence in R&D), awarded by the Spanish Ministry of Industry, Economy and Competitiveness (MINECO). L.R. acknowledges support from ERC Advanced Grant n. 340297 “SynCart”.

    The resolution of the clause that is relative? Prosody and plausibility as cues to RC attachment in English: evidence from structural priming and event related potentials

    In spoken language, different types of linguistic information are used by the parser to arrive at a coherent syntactic interpretation of the input. In this thesis I investigated two of these information sources, namely overt prosodic features and plausibility constraints. More specifically, I was interested in how these cues interact in resolving the relative clause attachment ambiguity. Much research has explored each cue on its own, and much is known about the influence that each exerts independent of the other. However, the interaction of prosodic and semantic cues to attachment has to date received little attention. Two experimental paradigms were used in four experiments: the method of structural priming and the online method of event-related potentials. The data from these experiments suggest that the cues interact in a complex way. The results imply that the prominence of the dispreferred cues, the surprisal associated with them, and the type of revision they require play a major role during processing. I propose three processing principles that might account for the observed results.

    Empirical studies on word representations

    One of the most fundamental tasks in natural language processing is representing words with mathematical objects (such as vectors). The word representations, which are most often estimated from data, allow capturing the meaning of words. They enable comparing words according to their semantic similarity, and have been shown to work extremely well when included in complex real-world applications. A large part of our work deals with ways of estimating word representations directly from large quantities of text. Our methods exploit the idea that words which occur in similar contexts have a similar meaning. How we define the context is an important focus of our thesis. The context can consist of a number of words to the left and to the right of the word in question, but, as we show, obtaining context words via syntactic links (such as the link between the verb and its subject) often works better. We furthermore investigate word representations that accurately capture multiple meanings of a single word. We show that translation of a word in context contains information that can be used to disambiguate the meaning of that word.
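The window-based notion of context described above can be illustrated with a minimal count-based sketch. Real systems work from large corpora and typically apply reweighting and dimensionality reduction (or train dense embeddings directly); the toy corpus below is invented, and this is not the thesis's actual method.

```python
from collections import Counter
from math import sqrt

def context_vectors(sentences, window=2):
    """Build count-based word vectors from symmetric context windows."""
    vecs = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            ctx = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = (sqrt(sum(x * x for x in u.values()))
            * sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "stocks fell on the news".split(),
]
vecs = context_vectors(corpus)
# Words appearing in similar contexts ("cat", "dog") get similar vectors,
# so their cosine similarity exceeds that of unrelated pairs.
```

Replacing the linear window with syntactically linked words (e.g. a verb's subject) changes only how `ctx` is collected, which is the design choice the thesis argues often works better.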