Simulating the referential properties of Dutch, German and English Root Infinitives in MOSAIC
Children learning many languages go through an Optional Infinitive stage in which they produce non-finite verb forms in contexts in which a finite verb form is required (e.g. "That go there" instead of "That goes there"). MOSAIC (Model of Syntax Acquisition in Children) is a computational model of language learning that successfully simulates the developmental patterning of the Optional Infinitive (OI) phenomenon in English, Dutch, German and Spanish (Freudenthal, Pine, Aguado-Orea & Gobet, 2007). In the present study, MOSAIC is applied to the simulation of certain subtle but theoretically important phenomena in the cross-linguistic patterning of the OI phenomenon that are typically assumed to require a more complex formal analysis. MOSAIC is shown to successfully simulate 1) the Modal Reference Effect: the finding that Dutch and German children tend to use Root Infinitives in modal contexts; 2) the Eventivity Constraint: the finding that Dutch and German Root Infinitives refer predominantly to actions rather than static situations; and 3) the absence or reduced size of these effects in English. These results provide strong support for input-driven explanations of the Modal Reference Effect, for MOSAIC's mechanism for producing Root Infinitives, and for the wider claim that it is possible to explain key aspects of children's early multi-word speech in terms of the interaction between a resource-limited distributional learning mechanism and the surface properties of the language to which children are exposed.
A Processing Model for Free Word Order Languages
Like many verb-final languages, German displays considerable word-order
freedom: there is no syntactic constraint on the ordering of the nominal
arguments of a verb, as long as the verb remains in final position. This effect
is referred to as ``scrambling'', and is interpreted in transformational
frameworks as leftward movement of the arguments. Furthermore, arguments from
an embedded clause may move out of their clause; this effect is referred to as
``long-distance scrambling''. While scrambling has recently received
considerable attention in the syntactic literature, the status of long-distance
scrambling has only rarely been addressed. The reason for this is the
problematic status of the data: not only is long-distance scrambling highly
dependent on pragmatic context, it also is strongly subject to degradation due
to processing constraints. As in the case of center-embedding, it is not
immediately clear whether to assume that observed unacceptability of highly
complex sentences is due to grammatical restrictions, or whether we should
assume that the competence grammar does not place any restrictions on
scrambling (and that, therefore, all such sentences are in fact grammatical),
and the unacceptability of some (or most) of the grammatically possible word
orders is due to processing limitations. In this paper, we will argue for the
second view by presenting a processing model for German. Comment: 23 pages, uuencoded compressed ps file. In {\em Perspectives on
Sentence Processing}, C. Clifton, Jr., L. Frazier and K. Rayner, editors.
Lawrence Erlbaum Associates, 199
Parsing as Reduction
We reduce phrase-representation parsing to dependency parsing. Our reduction
is grounded on a new intermediate representation, "head-ordered dependency
trees", shown to be isomorphic to constituent trees. By encoding order
information in the dependency labels, we show that any off-the-shelf, trainable
dependency parser can be used to produce constituents. When this parser is
non-projective, we can perform discontinuous parsing in a very natural manner.
Despite the simplicity of our approach, experiments show that the resulting
parsers are on par with strong baselines, such as the Berkeley parser for
English and the best single system in the SPMRL-2014 shared task. Results are
particularly striking for discontinuous parsing of German, where we surpass the
current state of the art by a wide margin.
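The core idea of the head-ordered encoding can be illustrated with a toy sketch. This is my own simplification, not the paper's exact label scheme: the sentence, constituent tree, head rules, and `NONTERMINAL#level` labels below are all assumed for illustration. Each word attaches to the lexical head of the smallest constituent containing both, and the arc label records that constituent's nonterminal plus its level on the head's spine, so nested constituents can be read back off the dependency tree.

```python
from collections import defaultdict

# Toy encoding for "the dog barks loudly" with assumed constituent tree
# (S (NP the dog) (VP barks (ADVP loudly))) and assumed head rules
# NP -> dog, VP/S -> barks.
deps = {
    # word: (head, "NONTERMINAL#level")
    "the":    ("dog",   "NP#1"),
    "loudly": ("barks", "VP#1"),
    "dog":    ("barks", "S#2"),
    "barks":  (None,    "ROOT"),
}

def constituents(deps):
    """Recover (nonterminal, span) pairs from the order-annotated arcs:
    the level-k constituent around a head contains the head plus the
    subtrees of all its level <= k dependents. (Single-word constituents
    like ADVP are dropped in this simplification.)"""
    children = defaultdict(list)
    for word, (head, label) in deps.items():
        if head is not None:
            nt, lvl = label.split("#")
            children[head].append((int(lvl), nt, word))

    def subtree(word):
        words = {word}
        for _, _, child in children.get(word, []):
            words |= subtree(child)
        return words

    spans = []
    for head, kids in children.items():
        for lvl in sorted({l for l, _, _ in kids}):
            nt = next(n for l, n, _ in kids if l == lvl)
            words = {head}
            for l, _, child in kids:
                if l <= lvl:
                    words |= subtree(child)
            spans.append((nt, tuple(sorted(words))))
    return spans
```

Running `constituents(deps)` recovers the NP, VP, and S spans from the labeled dependencies alone, which is the sense in which the two representations carry the same information; with a non-projective parser, the recovered spans need not be contiguous, giving discontinuous constituents.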
Frequency vs. Association for Constraint Selection in Usage-Based Construction Grammar
A usage-based Construction Grammar (CxG) posits that slot-constraints
generalize from common exemplar constructions. But what is the best model of
constraint generalization? This paper evaluates competing frequency-based and
association-based models across eight languages using a metric derived from the
Minimum Description Length paradigm. The experiments show that
association-based models produce better generalizations across all languages by
a significant margin.
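The MDL idea behind such a metric can be sketched in toy form. This is a simplified two-part code of my own, not the paper's actual metric: a grammar is scored by the bits needed to state it plus the bits needed to encode the corpus with it, so a stored construction is only worthwhile if it pays for its own description cost in compression.

```python
import math
from collections import Counter

def mdl(grammar, corpus):
    """Toy two-part MDL score (lower is better).
    L(G): every word slot in a stored construction costs log2(|vocab|) bits.
    L(D|G): the corpus is segmented greedily (longest match first) into
    stored constructions and single words; each unit occurrence then costs
    -log2 of its relative frequency among all emitted units."""
    vocab = {w for seq in corpus for w in seq}
    l_grammar = sum(len(u) for u in grammar) * math.log2(len(vocab))

    units = []
    for seq in corpus:
        i = 0
        while i < len(seq):
            for n in range(len(seq) - i, 0, -1):
                cand = tuple(seq[i:i + n])
                if cand in grammar or n == 1:
                    units.append(cand)
                    i += n
                    break
    counts = Counter(units)
    total = len(units)
    l_data = sum(-c * math.log2(c / total) for c in counts.values())
    return l_grammar + l_data

# A construction is kept only if it pays for itself in compression:
corpus = [("give", "me", "that")] * 4 + [("give", "me", "this")] * 4
assert mdl({("give", "me")}, corpus) < mdl(set(), corpus)
```

On this toy corpus the grammar that stores the frequent chunk ("give", "me") achieves a shorter total description than the single-word grammar, which is the shape of the argument the paper's metric makes when comparing frequency-based and association-based slot constraints.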