Using semantic cues to learn syntax
We present a method for dependency grammar induction that utilizes sparse annotations of semantic relations. This induction set-up is attractive because such annotations provide useful
clues about the underlying syntactic structure, and they are readily available in many domains (e.g., info-boxes and HTML markup). Our method is based on the intuition that syntactic realizations of the same semantic predicate exhibit some degree of consistency. We incorporate this intuition in
a directed graphical model that tightly links the syntactic and semantic structures. This design enables us to exploit syntactic regularities while still allowing for variations. Another strength of the model lies in its ability to capture non-local dependency relations. Our results demonstrate that even a small amount of semantic annotations greatly improves the accuracy of learned dependencies when tested on both in-domain and out-of-domain texts.

Funding: United States. Defense Advanced Research Projects Agency (DARPA Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0172); U.S. Army Research Laboratory (contract no. W911NF-10-1-0533)
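The paper's actual model is a directed graphical model estimated over full corpora, which is beyond a short sketch. As a toy illustration of the core intuition only, the snippet below enumerates all dependency trees over a tiny hypothetical sentence and keeps those in which an annotated semantic predicate-argument pair is connected by a direct dependency edge; the sentence, the annotation, and all function names are invented for illustration.

```python
from itertools import product

def all_trees(n):
    """Enumerate dependency trees over n words.

    heads[i] is the head of word i+1; head 0 denotes ROOT. A head
    assignment is a tree iff every word's head chain reaches ROOT
    without revisiting a node.
    """
    trees = []
    for heads in product(range(n + 1), repeat=n):
        ok = True
        for i in range(1, n + 1):
            seen, cur = set(), i
            while cur != 0:
                if cur in seen:          # cycle: never reaches ROOT
                    ok = False
                    break
                seen.add(cur)
                cur = heads[cur - 1]
            if not ok:
                break
        if ok:
            trees.append(heads)
    return trees

def consistent(heads, sem_pairs):
    """Keep trees where each annotated (predicate, argument) pair is
    linked by a direct dependency edge, in either direction."""
    return all(heads[a - 1] == p or heads[p - 1] == a for p, a in sem_pairs)

# Hypothetical sentence "Mary founded Acme" (words 1..3), with one sparse
# semantic annotation: founded (word 2) takes Acme (word 3) as an argument.
trees = all_trees(3)
filtered = [t for t in trees if consistent(t, [(2, 3)])]
print(len(trees), len(filtered))
```

For this three-word sentence there are 16 well-formed trees, of which 8 contain an edge linking the annotated pair, so a single annotation already halves the hypothesis space. The real model treats such cues softly and probabilistically rather than as hard filters.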
Variability, negative evidence, and the acquisition of verb argument constructions
We present a hierarchical Bayesian framework for modeling the acquisition of verb argument constructions. It embodies a domain-general approach to learning higher-level knowledge in the form of inductive constraints (or overhypotheses), and has been used to explain other aspects of language development such as the shape bias in learning object names. Here, we demonstrate that the same model captures several phenomena in the acquisition of verb constructions. Our model, like adults in a series of artificial language learning experiments, makes inferences about the distributional statistics of verbs on several levels of abstraction simultaneously. It also produces the qualitative learning patterns displayed by children over the time course of acquisition. These results suggest that the patterns of generalization observed in both children and adults could emerge from basic assumptions about the nature of learning. They also provide an example of a broad class of computational approaches that can resolve Baker's Paradox.
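The idea of an overhypothesis can be sketched with a minimal beta-binomial hierarchy (this is an illustration of the general modeling style, not the paper's actual model): each verb has its own rate of appearing in construction A versus B, and the learner simultaneously infers a shared concentration parameter that says how variable verbs are. All counts and the parameter grid below are made up.

```python
import math

def beta_binom_loglik(k, n, a, b):
    """log P(k of n successes | rate ~ Beta(a, b)): beta-binomial marginal."""
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + k) + math.lgamma(b + n - k)
            - math.lgamma(a + b + n)
            + math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1))

def posterior_over_concentration(verbs, alphas):
    """Grid posterior over the shared concentration alpha (the
    overhypothesis), with the mean rate fixed at 0.5 for simplicity.
    verbs is a list of (k, n): uses in construction A out of n uses."""
    logp = [sum(beta_binom_loglik(k, n, a * 0.5, a * 0.5) for k, n in verbs)
            for a in alphas]
    m = max(logp)
    w = [math.exp(l - m) for l in logp]
    z = sum(w)
    return [x / z for x in w]

# Each observed verb sticks to a single construction:
consistent_data = [(10, 10), (0, 10), (10, 10), (0, 10)]
# Verbs alternate freely between constructions:
variable_data = [(5, 10), (6, 10), (4, 10), (5, 10)]

alphas = [0.1, 1.0, 10.0]   # low alpha = verbs are extreme and idiosyncratic
print(posterior_over_concentration(consistent_data, alphas))
print(posterior_over_concentration(variable_data, alphas))
```

With consistent verbs the posterior concentrates on low alpha, so a new verb heard once in construction A is expected to stay in A (restricting generalization without explicit negative evidence); with freely alternating verbs it concentrates on high alpha, licensing generalization to the unheard construction.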
Lexical acquisition in elementary science classes
The purpose of this study was to further researchers' understanding of lexical acquisition in the beginning primary schoolchild by investigating word learning in small-group elementary science classes. Two experiments were conducted to examine the role of semantic scaffolding (e.g., use of synonymous terms) and physical scaffolding (e.g., pointing to referents) in children's acquisition of novel property terms. Children's lexical knowledge was assessed using multiple tasks (naming, comprehension, and definitional). Children struggled to acquire meanings of adjectives without semantic or physical scaffolding (Experiment 1), but they were successful in acquiring extensive lexical knowledge when offered semantic scaffolding (Experiment 2). Experiment 2 also shows that semantic scaffolding used in combination with physical scaffolding helped children acquire novel adjectives, and that children who correctly named pictures of adjectives had acquired definitions.
On the Evaluation of Semantic Phenomena in Neural Machine Translation Using Natural Language Inference
We propose a process for investigating the extent to which sentence representations arising from neural machine translation (NMT) systems encode distinct semantic phenomena. We use these representations as features to train a natural language inference (NLI) classifier based on datasets recast from existing semantic annotations. In applying this process to a representative NMT system, we find its encoder appears most suited to supporting inferences at the syntax-semantics interface, as compared to anaphora resolution requiring world knowledge. We conclude with a discussion on the merits and potential deficiencies of the existing process, and how it may be improved and extended as a broader framework for evaluating semantic coverage.

Comment: To be presented at NAACL 2018; 11 pages
Morphological Cues for Lexical Semantics
Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.
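The method's core mechanism, a fixed affix-to-semantics table applied to surface word forms, can be sketched as below. The affix inventory and semantic labels here are hypothetical stand-ins, not the paper's actual correspondences.

```python
# Hypothetical table of derivational suffixes and the lexical semantic
# information they signal; illustrative, not the paper's inventory.
AFFIX_SEMANTICS = {
    "-er":   "agent or instrument noun (e.g., 'teacher', 'blender')",
    "-able": "capability adjective (e.g., 'readable')",
    "-ize":  "causative/inchoative verb (e.g., 'modernize')",
    "-ness": "state/quality noun (e.g., 'happiness')",
}

def guess_lexical_semantics(word):
    """Return candidate semantic classes using only surface affixes."""
    return [desc for affix, desc in AFFIX_SEMANTICS.items()
            if word.endswith(affix.lstrip("-"))]

print(guess_lexical_semantics("washable"))
```

Because the method reads only surface characteristics, it needs no parsed corpora or annotated lexicons, which is exactly the portability advantage the abstract claims; the trade-off is that affixes are ambiguous (e.g., '-er' also marks comparatives), so such cues are best treated as defeasible evidence.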