Distinct patterns of syntactic agreement errors in recurrent networks and humans
Determining the correct form of a verb in context requires an understanding
of the syntactic structure of the sentence. Recurrent neural networks have been
shown to perform this task with an error rate comparable to humans, despite the
fact that they are not designed with explicit syntactic representations. To
examine the extent to which the syntactic representations of these networks are
similar to those used by humans when processing sentences, we compare the
detailed pattern of errors that RNNs and humans make on this task. Despite
significant similarities (attraction errors, asymmetry between singular and
plural subjects), the error patterns differed in important ways. In particular,
in complex sentences with relative clauses, error rates increased in RNNs but
decreased in humans. Furthermore, RNNs showed a cumulative effect of attractors
but humans did not. We conclude that at least in some respects the syntactic
representations acquired by RNNs are fundamentally different from those used by
humans.
Comment: Proceedings of the 40th Annual Conference of the Cognitive Science Society
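The agreement task the paper evaluates is straightforward to operationalize for any word-level language model. The sketch below is hypothetical (not the authors' code; the tiny untrained LSTM and toy vocabulary are placeholders) and shows the standard scoring method: feed the model a sentence prefix containing an attractor noun whose number differs from the subject's, then check whether it scores the grammatical verb form above the ungrammatical one.

    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        """Stand-in word-level LSTM language model (hypothetical sizes)."""
        def __init__(self, vocab_size, dim=64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            self.rnn = nn.LSTM(dim, dim, batch_first=True)
            self.out = nn.Linear(dim, vocab_size)

        def forward(self, ids):
            h, _ = self.rnn(self.emb(ids))
            return self.out(h)

    vocab = {w: i for i, w in enumerate("the keys to cabinet is are".split())}
    model = TinyLM(len(vocab))  # untrained here; a real study trains on a corpus

    # Plural subject "keys", singular attractor "cabinet" inside a modifier.
    prefix = torch.tensor([[vocab[w] for w in "the keys to the cabinet".split()]])
    logits = model(prefix)[0, -1]               # next-word scores at the verb slot
    grammatical = logits[vocab["are"]].item()   # verb agrees with the subject
    attraction = logits[vocab["is"]].item()     # verb agrees with the attractor
    print("attraction error" if attraction > grammatical else "correct agreement")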
Incremental learning of independent, overlapping, and graded concept descriptions with an instance-based process framework
Supervised learning algorithms make several simplifying assumptions concerning the characteristics of the concept descriptions to be learned. For example, concepts are often assumed to (1) be defined with respect to the same set of relevant attributes, (2) be disjoint in instance space, and (3) have uniform instance distributions. While these assumptions constrain the learning task, they unfortunately limit an algorithm's applicability. We believe that supervised learning algorithms should learn attribute relevancies independently for each concept, allow instances to be members of any subset of concepts, and represent graded concept descriptions. This paper introduces a process framework for instance-based learning algorithms that exploit only specific instance and performance feedback information to guide their concept learning processes. We also introduce Bloom, a specific instantiation of this framework. Bloom is a supervised, incremental, instance-based learning algorithm that learns relative attribute relevancies independently for each concept, allows instances to be members of any subset of concepts, and represents graded concept memberships. We present empirical evidence to support our claims that Bloom can learn independent, overlapping, and graded concept descriptions.
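The abstract names three properties but not Bloom's actual update rules, so the following is only an illustrative sketch of how an instance-based learner can exhibit all three at once: a separate attribute-relevance vector per concept, stored instances labeled with any subset of concepts, and graded membership scores. The class name, similarity function, and fixed unit weights are assumptions of mine, not the published algorithm (which learns the relevancies incrementally from performance feedback).

    import numpy as np

    class GradedIBL:
        """Illustrative instance-based learner: per-concept attribute
        weights, overlapping concept labels, graded memberships.
        NOT the published Bloom algorithm, just its three claimed
        properties in minimal form."""

        def __init__(self, n_attrs, concepts):
            self.concepts = list(concepts)
            # independent relevance weights per concept (fixed here;
            # Bloom learns these from performance feedback)
            self.weights = {c: np.ones(n_attrs) for c in self.concepts}
            self.instances = []  # stored (vector, label-set) pairs

        def learn(self, x, labels):
            # incremental learning: store the specific instance
            self.instances.append((np.asarray(x, dtype=float), set(labels)))

        def membership(self, x):
            # graded membership of x in each concept: similarity-weighted
            # fraction of stored instances labeled with that concept
            x = np.asarray(x, dtype=float)
            scores = {}
            for c in self.concepts:
                w = self.weights[c]
                sims = [(1.0 / (1.0 + np.sum(w * (x - y) ** 2)), c in labs)
                        for y, labs in self.instances]
                total = sum(s for s, _ in sims)
                scores[c] = sum(s for s, in_c in sims if in_c) / total if total else 0.0
            return scores

    clf = GradedIBL(n_attrs=2, concepts=["red", "round"])
    clf.learn([1.0, 0.0], {"red"})
    clf.learn([0.9, 0.8], {"red", "round"})  # one instance, two concepts
    clf.learn([0.0, 1.0], {"round"})
    print(clf.membership([0.8, 0.6]))        # graded score per concept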
Forest matrices around the Laplacian matrix
We study the matrices Q_k of in-forests of a weighted digraph G and their
connections with the Laplacian matrix L of G. The (i,j) entry of Q_k is the
total weight of spanning converging forests (in-forests) with k arcs such that
i belongs to a tree rooted at j. The forest matrices, Q_k, can be calculated
recursively and expressed by polynomials in the Laplacian matrix; they provide
representations for the generalized inverses, the powers, and some eigenvectors
of L. The normalized in-forest matrices are row stochastic; the normalized
matrix of maximum in-forests is the eigenprojection of the Laplacian matrix,
which provides an immediate proof of the Markov chain tree theorem. A source of
these results is the fact that matrices Q_k are the matrix coefficients in the
polynomial expansion of adj(a*I+L). Thereby they are precisely Faddeev's
matrices for -L.
Keywords: Weighted digraph; Laplacian matrix; Spanning forest; Matrix-forest
theorem; Leverrier-Faddeev method; Markov chain tree theorem; Eigenprojection;
Generalized inverse; Singular M-matrix
Comment: 19 pages, presented at the Edinburgh (2001) Conference on Algebraic Graph Theory
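The closing observation makes the recursion mentioned in the abstract concrete: since the Q_k are the matrix coefficients of adj(a*I + L), i.e. Faddeev's matrices for -L, they satisfy the Leverrier-Faddeev recursion Q_0 = I, q_k = tr(L Q_{k-1})/k, Q_k = q_k I - L Q_{k-1}, where q_k is the total weight of all in-forests with k arcs. Below is a minimal NumPy sketch under those formulas; the 3-node example graph and its arc weights are my own, not the paper's.

    import numpy as np

    def forest_matrices(L):
        """Compute Q_0..Q_{n-1} and q_0..q_n for a digraph Laplacian L
        (zero row sums, L[i, j] = -w(i -> j) for i != j) via the
        Leverrier-Faddeev recursion, so that
            adj(a*I + L) = sum_k Q_k a^(n-1-k),
            det(a*I + L) = sum_k q_k a^(n-k)."""
        n = L.shape[0]
        Q, q = [np.eye(n)], [1.0]
        for k in range(1, n + 1):
            P = L @ Q[-1]
            q.append(np.trace(P) / k)                 # q_k = tr(L Q_{k-1}) / k
            if k < n:
                Q.append(q[-1] * np.eye(n) - P)       # Q_k = q_k I - L Q_{k-1}
        return Q, q

    # Toy example: a 3-cycle digraph with arc weights 1, 2, 1.
    W = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 2.0],
                  [1.0, 0.0, 0.0]])
    L = np.diag(W.sum(axis=1)) - W
    Q, q = forest_matrices(L)

    # Row sums of Q_k equal q_k, so each normalized Q_k / q_k is row
    # stochastic; the normalized matrix of maximum in-forests (last
    # nonzero q_k) is the eigenprojection of L at eigenvalue 0.
    for Qk, qk in zip(Q, q):
        if qk:
            assert np.allclose((Qk / qk).sum(axis=1), 1.0)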