The marker hypothesis: a constructivist theory of language acquisition
This thesis presents a theory of the early stages of first language acquisition. Language is
characterised as constituting an instructional environment: diachronic change in language
serves to maintain and enhance sources of structural marking which act as salient cues that
guide the development of linguistic representations in the child's brain. Language learning is
characterised as a constructivist process in which the underlying grammatical representation
and modular structure arise out of developmental processes. In particular, I investigate the
role of closed-class elements in language which obtain salience through their high occurrence
frequency and which serve to both label and segment useful grammatical units. I adopt an
inter-disciplinary approach which encompasses analyses of child language and agrammatic
speech, psycholinguistic data, the formulation of a developmental linguistic theory based on
the Dependency Grammar formalism, and a number of computational investigations of
spoken language corpora. I conclude that language development is highly interactionist and
that in trying to understand the processes involved in learning we must begin with the child
and not with the end-point of adult linguistic competence.
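The mechanism the thesis assigns to closed-class elements, namely that their high occurrence frequency makes them salient and lets them label and segment grammatical units, can be illustrated with a small sketch. The toy corpus, the frequency threshold, and the chunking rule below are illustrative assumptions, not the thesis's actual corpus analyses:

```python
from collections import Counter

# Toy corpus of child-directed utterances (illustrative data only).
corpus = [
    "the dog is in the garden",
    "is the cat on the mat",
    "the ball is under the table",
]

tokens = [tok for utt in corpus for tok in utt.split()]
freq = Counter(tokens)

# Assumption: closed-class items (determiners, auxiliaries, prepositions)
# dominate the high-frequency end of the distribution, so a simple
# frequency threshold picks them out as candidate markers.
markers = {tok for tok, n in freq.items() if n > 1}
print(sorted(markers))  # ['is', 'the']

def segment(utterance):
    """Segment an utterance at the markers: each marker opens a new chunk,
    crudely labelling the grammatical unit it introduces."""
    chunks, current = [], []
    for tok in utterance.split():
        if tok in markers and current:
            chunks.append(current)
            current = []
        current.append(tok)
    if current:
        chunks.append(current)
    return chunks

print(segment("the dog is in the garden"))
# [['the', 'dog'], ['is', 'in'], ['the', 'garden']]
```

Even this crude heuristic carves the utterance into marker-initial chunks that approximate grammatical units, which is the intuition the thesis develops.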
Structured Learning with Inexact Search: Advances in Shift-Reduce CCG Parsing
Statistical shift-reduce parsing involves the interplay of representation learning, structured learning, and inexact search. This dissertation considers approaches that tightly integrate these three elements and explores three novel models for shift-reduce CCG parsing. First, I develop a dependency model, in which the selection of shift-reduce action sequences producing a dependency structure is treated as a hidden variable; the key components of the model are a dependency oracle and a learning algorithm that integrates the dependency oracle, the structured perceptron, and beam search. Second, I present expected F-measure training and show how to derive a globally normalized RNN model, in which beam search is naturally incorporated and used in conjunction with the
objective to learn shift-reduce action sequences optimized for the final evaluation metric. Finally, I describe an LSTM model that is able to construct parser state representations incrementally by following the shift-reduce syntactic derivation process; I show that expected F-measure training, which is agnostic to the underlying neural network, can be applied in this setting to obtain globally normalized greedy and beam-search LSTM shift-reduce parsers.

Funded by The Carnegie Trust for the Universities of Scotland and The Cambridge Trust.
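To make the interplay of structured learning and inexact search concrete, here is a minimal sketch of beam-search shift-reduce decoding with a structured-perceptron "early update" (the update fires at the first point the gold action sequence falls off the beam). The toy action set, the features, and all function names are assumptions for illustration, not the dissertation's CCG parser, dependency oracle, or neural models:

```python
from collections import defaultdict

def features(state, action):
    """Sparse features for scoring an action in a parser state."""
    stack, buf = state
    top = stack[-1] if stack else "<empty>"
    nxt = buf[0] if buf else "<eos>"
    return [f"{action}:top={top}", f"{action}:next={nxt}"]

def legal(state):
    stack, buf = state
    acts = []
    if buf:
        acts.append("SHIFT")
    if len(stack) >= 2:
        acts.append("REDUCE")
    return acts

def step(state, action):
    stack, buf = state
    if action == "SHIFT":
        return (stack + [buf[0]], buf[1:])
    left, right = stack[-2], stack[-1]          # REDUCE: combine top two
    return (stack[:-2] + [f"({left} {right})"], buf)

def beam_decode(words, weights, beam_size=4, gold=None):
    """Return the best action sequence; with `gold`, stop at the first
    point the gold prefix falls off the beam (early update)."""
    beam = [((list(), list(words)), [], 0.0)]   # (state, actions, score)
    for i in range(2 * len(words) - 1):          # total actions needed
        cands = []
        for state, acts, score in beam:
            for a in legal(state):
                s = score + sum(weights[f] for f in features(state, a))
                cands.append((step(state, a), acts + [a], s))
        beam = sorted(cands, key=lambda c: -c[2])[:beam_size]
        if gold and not any(acts == gold[: i + 1] for _, acts, _ in beam):
            return beam[0][1], gold[: i + 1]     # early update point
    return beam[0][1], gold

def perceptron_update(words, gold, weights, lr=1.0):
    pred, gold_prefix = beam_decode(words, weights, gold=gold)
    if pred == gold_prefix:
        return
    # Promote gold features, demote predicted ones, up to the update point.
    for seq, sign in ((gold_prefix, +lr), (pred, -lr)):
        state = (list(), list(words))
        for a in seq:
            for f in features(state, a):
                weights[f] += sign
            state = step(state, a)

weights = defaultdict(float)
gold = ["SHIFT", "SHIFT", "REDUCE", "SHIFT", "REDUCE"]
for _ in range(5):
    perceptron_update(["we", "like", "parsing"], gold, weights)
pred, _ = beam_decode(["we", "like", "parsing"], weights)
print(pred)  # ['SHIFT', 'SHIFT', 'REDUCE', 'SHIFT', 'REDUCE']
```

The dependency-model and expected F-measure components described in the abstract replace this sketch's gold action prefix and 0/1 perceptron loss with a dependency oracle over action sequences and a metric-driven training objective, but the beam-search decoding loop they wrap around has this shape.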