Connectionist natural language parsing
The key developments of two decades of connectionist parsing are reviewed. Connectionist parsers are assessed according to their ability to learn to represent syntactic structures from examples automatically, without being presented with symbolic grammar rules. This review also considers the extent to which connectionist parsers offer computational models of human sentence processing and provide plausible accounts of psycholinguistic data. In considering these issues, special attention is paid to the level of realism, the nature of the modularity, and the type of processing to be found in a wide range of parsers.
Generalization and Systematicity in Echo State Networks
Echo state networks (ESNs) are recurrent neural networks that can be trained efficiently because the weights of recurrent connections remain fixed at random values. Investigations of these networks' ability to generalize in sentence-processing tasks have resulted in mixed outcomes. Here, we argue that ESNs do generalize but that they are not systematic, which we define as the ability to generally outperform Markov models on test sentences that violate the training sentences' grammar. Moreover, we show that systematicity in ESNs can easily be obtained by switching from arbitrary to informative representations of words, suggesting that the information provided by such representations facilitates connectionist systematicity.
Syntactic Systematicity Arising from Semantic Predictions in a Hebbian-Competitive Network
A Hebbian-inspired, competitive network is presented which learns to predict the typical semantic features of denoting terms in simple and moderately complex sentences. In addition, the network learns to predict the appearance of syntactically key words, such as prepositions and relative pronouns. Importantly, as a by-product of the network's semantic training, a strong form of syntactic systematicity emerges. Moreover, the network can integrate novel nouns and verbs into its training process. This is achieved by assigning predicted semantic features as a default meaning when a novel word is encountered. All network training is unsupervised with respect to error feedback. The issues addressed here have been the subject of debate by notable psychologists, philosophers, and linguists within the last decade.
Integrative (Synchronization) Mechanisms of (Neuro-)Cognition against the Background of (Neo-)Connectionism, the Theory of Nonlinear Dynamical Systems, Information Theory, and the Self-Organization Paradigm
Building on its main topic, the presentation and examination of a solution to the binding problem by means of temporal integrative (synchronization) mechanisms within the cognitive (neuro-)architectures of (neo-)connectionism, with reference to perceptual and language cognition and, above all, to the problems of compositionality and systematicity that arise there, the present work aims to sketch the construction of a yet-to-be-developed integrative theory of (neuro-)cognition, based on the representational format of a so-called "vectorial form", against the background of (neo-)connectionism, the theory of nonlinear dynamical systems, information theory, and the self-organization paradigm.