Combining numeric and symbolic learning techniques
Incremental learning from examples in a noisy domain is a difficult problem in Machine Learning. In this paper we divide the task into two subproblems and present a combination of numeric and symbolic approaches that yields robust learning of boolean characterizations. Our method has been implemented in a computer program, and we plot its empirical learning performance in the presence of varying amounts of noise.
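The numeric/symbolic split described above can be pictured with a minimal sketch (the names and the thresholding scheme here are hypothetical illustrations, not the paper's actual method): a numeric pass estimates how often each feature holds among positive examples, and a symbolic pass keeps a literal in the learned boolean conjunction only when its frequency clears a noise-tolerance threshold.

```python
def learn_conjunction(examples, threshold=0.9):
    """Learn a boolean conjunction from possibly noisy labeled examples.

    Numeric subproblem: estimate, for each feature, how often it is true
    among the positive examples.  Symbolic subproblem: keep feature i as
    a literal in the conjunction only if that frequency clears the
    noise-tolerance threshold.  (Hypothetical scheme, for illustration.)
    """
    positives = [x for x, label in examples if label]
    if not positives:
        return []
    n_features = len(positives[0])
    concept = []
    for i in range(n_features):
        freq = sum(x[i] for x in positives) / len(positives)
        if freq >= threshold:
            concept.append(i)
    return concept  # indices of features the conjunction requires


def predict(concept, x):
    """Classify x as positive iff every required feature is true."""
    return all(x[i] for i in concept)
```

With a threshold below 1.0, a mislabeled or corrupted positive example cannot veto a genuinely required literal, which is what makes the counting step tolerant to noise.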
Transfer Learning for Neural Semantic Parsing
The goal of semantic parsing is to map natural language to a machine-interpretable meaning representation language (MRL). One of the constraints that limits full exploration of deep learning technologies for semantic parsing is the lack of sufficient annotated training data. In this paper, we propose using sequence-to-sequence models in a multi-task setup for semantic parsing with a focus on transfer learning. We explore three multi-task architectures for sequence-to-sequence modeling and compare their performance with an independently trained model. Our experiments show that the multi-task setup aids transfer learning from an auxiliary task with large labeled data to a target task with smaller labeled data. We see absolute accuracy gains ranging from 1.0% to 4.4% on our in-house data set, and we also see good gains ranging from 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and semantic auxiliary tasks.

Comment: Accepted for ACL RepL4NLP 2017
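One way to picture the one-to-many multi-task setup this abstract describes is a single shared encoder paired with per-task decoders, so that training on the large auxiliary task shapes the representation reused by the low-resource target task. The sketch below is a toy with hypothetical names (the paper's models are neural sequence-to-sequence networks, not this memorizing stand-in):

```python
class SharedEncoder:
    """Shared across all tasks: every task's training updates it."""

    def __init__(self):
        self.vocab = {}  # shared vocabulary, grown by every task

    def encode(self, tokens):
        # Register tokens in the shared vocabulary, then return a
        # bag-of-tokens "state" standing in for an RNN encoder state.
        for tok in tokens:
            self.vocab.setdefault(tok, len(self.vocab))
        return frozenset(self.vocab[tok] for tok in tokens)


class TaskDecoder:
    """Task-specific: each task keeps its own output parameters."""

    def __init__(self, task_name):
        self.task_name = task_name
        self.rules = {}  # toy stand-in for decoder parameters

    def train_step(self, state, target_mrl):
        self.rules[state] = target_mrl  # memorize (toy "training")

    def decode(self, state):
        return self.rules.get(state, "<unk>")


class MultiTaskSeq2Seq:
    def __init__(self, tasks):
        self.encoder = SharedEncoder()  # one encoder for all tasks
        self.decoders = {t: TaskDecoder(t) for t in tasks}

    def train_step(self, task, tokens, target_mrl):
        self.decoders[task].train_step(self.encoder.encode(tokens), target_mrl)

    def parse(self, task, tokens):
        return self.decoders[task].decode(self.encoder.encode(tokens))
```

The transfer effect in the real models comes from gradient updates to the shared encoder; here the shared vocabulary merely marks which component both tasks touch.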
Integrated speech and morphological processing in a connectionist continuous speech understanding for Korean
A new tightly coupled speech and natural language integration model is presented for a TDNN-based continuous, possibly large-vocabulary speech recognition system for Korean. Unlike the popular n-best techniques developed mainly for integrating HMM-based speech recognition and natural language processing at the word level, which are clearly inadequate for morphologically complex agglutinative languages, our model constructs a spoken language system based on morpheme-level speech and language integration. With this integration scheme, the spoken Korean processing engine (SKOPE) is designed and implemented using a TDNN-based diphone recognition module integrated with Viterbi-based lexical decoding and symbolic phonological/morphological co-analysis. Our experimental results show that speaker-dependent continuous eojeol (Korean word) recognition and integrated morphological analysis can be achieved with an over 80.6% success rate directly from speech inputs for middle-level vocabularies.

Comment: LaTeX source with a4 style, 15 pages, to be published in the Computer Processing of Oriental Languages journal
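The Viterbi-based lexical decoding step mentioned above can be sketched as standard Viterbi dynamic programming: per-frame symbol scores come from the acoustic module, while transition scores encode which symbol sequences the symbolic lexicon allows. This is a generic illustration with hypothetical scores, not SKOPE's actual implementation:

```python
import math


def viterbi(obs_scores, trans, init):
    """Find the best-scoring path through a small decoding lattice.

    obs_scores: list of {symbol: log score} per time step (e.g., diphone
                or morpheme hypotheses from the acoustic model)
    trans:      {(prev, cur): log transition score}; pairs the lexicon
                disallows are simply absent (score -inf)
    init:       {symbol: log initial score}
    """
    # DP table of best log scores per symbol, plus backpointers.
    V = [{}]
    back = [{}]
    for s, sc in obs_scores[0].items():
        V[0][s] = init.get(s, -math.inf) + sc
        back[0][s] = None
    for t in range(1, len(obs_scores)):
        V.append({})
        back.append({})
        for s, sc in obs_scores[t].items():
            best_prev, best = None, -math.inf
            for p in V[t - 1]:
                cand = V[t - 1][p] + trans.get((p, s), -math.inf)
                if cand > best:
                    best_prev, best = p, cand
            V[t][s] = best + sc
            back[t][s] = best_prev
    # Trace the backpointers from the best final symbol.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs_scores) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path)), V[-1][last]
```

Running the decoder over morpheme-level hypotheses rather than whole words is what lets the lexicon constraints handle an agglutinative language, where a single eojeol can combine many morphemes.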
Reassessing second language reading comprehension: Insights from the psycholinguistics notion of sentence processing
Theories and practices in second language reading pedagogy often overlook the description of sentence processing offered by psycholinguistics. Second language reading comprehension is easily associated with vocabulary learning or discourse strategies. Yet such activities can lead to unnatural ways of reading, such as translating vocabulary items or pointing out information as required, whereas authentic reading should encourage a natural stream of ideas to be interpreted from sentence to sentence. As suggested by the psycholinguistic notion of sentence processing, syntax appears to be the key to effective and authentic reading, as opposed to the general belief that semantic or discourse information is the primary concern. This article argues that understanding the architecture of sentence processing, with syntactic parsing at the core of the underlying mechanism, can offer insights into second language reading pedagogy. The concepts of syntactic parsing, reanalysis, and sentence processing models are described to illustrate how sentence processing works. Additionally, a critical review of the differences between L1 and L2 sentence processing is presented, considering the recent debate on individual differences as significant indicators of nativelike L2 sentence processing. Lastly, implications for L2 reading pedagogy and potential implementation in instructional settings are discussed.
An integrated theory of language production and comprehension
Currently, production and comprehension are regarded as quite distinct in accounts of language processing. In rejecting this dichotomy, we instead assert that producing and understanding are interwoven, and that this interweaving is what enables people to predict themselves and each other. We start by noting that production and comprehension are forms of action and action perception. We then consider the evidence for interweaving in action, action perception, and joint action, and explain such evidence in terms of prediction. Specifically, we assume that actors construct forward models of their actions before they execute those actions, and that perceivers of others' actions covertly imitate those actions, then construct forward models of those actions. We use these accounts of action, action perception, and joint action to develop accounts of production, comprehension, and interactive language. Importantly, they incorporate well-defined levels of linguistic representation (such as semantics, syntax, and phonology). We show (a) how speakers and comprehenders use covert imitation and forward modeling to make predictions at these levels of representation, (b) how they interweave production and comprehension processes, and (c) how they use these predictions to monitor the upcoming utterances. We show how these accounts explain a range of behavioral and neuroscientific data on language processing and discuss some of the implications of our proposal.