Learning to select data for transfer learning with Bayesian Optimization
Domain similarity measures can be used to gauge adaptability and select
suitable data for transfer learning, but existing approaches define ad hoc
measures that are deemed suitable for respective tasks. Inspired by work on
curriculum learning, we propose to \emph{learn} data selection measures using
Bayesian Optimization and evaluate them across models, domains and tasks. Our
learned measures outperform existing domain similarity measures significantly
on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We
show the importance of complementing similarity with diversity, and that
learned measures are -- to some degree -- transferable across models, domains,
and even tasks.
Comment: EMNLP 2017. Code available at: https://github.com/sebastianruder/learn-to-select-dat
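To make the idea concrete, here is a minimal sketch of learning a data-selection measure with Bayesian Optimization, using scikit-optimize's gp_minimize. The feature matrix, the train_and_eval placeholder, and all sizes are illustrative assumptions, not the paper's actual setup.

    import numpy as np
    from skopt import gp_minimize  # pip install scikit-optimize

    # Hypothetical stand-ins: 5,000 candidate training examples, each described
    # by 3 precomputed features (e.g. Jensen-Shannon similarity to the target
    # domain, a diversity score, ...).
    rng = np.random.default_rng(0)
    FEATURES = rng.normal(size=(5000, 3))

    def train_and_eval(selected_idx):
        # Placeholder for "train the task model on the selected subset and
        # return dev-set accuracy"; replaced here by a fake score.
        return float(FEATURES[selected_idx].mean())

    def objective(weights):
        scores = FEATURES @ np.asarray(weights)  # learned selection measure
        top_k = np.argsort(scores)[-1000:]       # keep the 1,000 best candidates
        return -train_and_eval(top_k)            # gp_minimize minimizes

    # Bayesian Optimization over one weight per feature, each in [-1, 1].
    result = gp_minimize(objective, dimensions=[(-1.0, 1.0)] * 3,
                         n_calls=30, random_state=0)
    print("learned feature weights:", result.x)

The learned weights directly express how much similarity versus diversity should count, which is what makes such a measure inspectable and, potentially, transferable.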
When is multitask learning effective? Semantic sequence prediction under varying data conditions
Multitask learning (MTL) has been applied successfully to a range of tasks, mostly
morphosyntactic. However, little is known about when MTL works and whether there
are data characteristics that help to determine its success. In this paper we
evaluate a range of semantic sequence labeling tasks in an MTL setup. We examine
different auxiliary tasks, among them a novel setup, and correlate their
impact with data-dependent conditions. Our results show that MTL is not always
effective: significant improvements are obtained for only 1 out of 5 tasks.
When successful, auxiliary tasks with compact and more uniform label
distributions are preferable.
Comment: In EACL 2017
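One data-dependent condition highlighted above is how compact and uniform an auxiliary task's label distribution is. A small sketch (not the paper's code) of one way to quantify this, via normalized label entropy:

    from collections import Counter
    import math

    def label_entropy(labels):
        """Normalized entropy of a label sequence: 1.0 means perfectly
        uniform; values near 0 mean a few labels dominate."""
        counts = Counter(labels)
        total = sum(counts.values())
        probs = [c / total for c in counts.values()]
        h = -sum(p * math.log(p) for p in probs)
        return h / math.log(len(counts)) if len(counts) > 1 else 0.0

    print(label_entropy(["B", "I", "O", "B", "I", "O"]))  # 1.0: uniform
    print(label_entropy(["O"] * 98 + ["B", "I"]))         # ~0.1: heavily skewed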
Keystroke dynamics as signal for shallow syntactic parsing
Keystroke dynamics have been extensively used in psycholinguistic and writing
research to gain insights into cognitive processing. But do keystroke logs
contain actual signal that can be used to learn better natural language
processing models?
We postulate that keystroke dynamics contain information about syntactic
structure that can inform shallow syntactic parsing. To test this hypothesis,
we explore labels derived from keystroke logs as an auxiliary task in a
multi-task bidirectional Long Short-Term Memory (bi-LSTM). We obtain promising
results on two shallow syntactic parsing tasks, chunking and CCG supertagging.
Our setup is simple, has the advantage that data for the two tasks can come
from distinct sources, and produces models that are significantly better than
models trained on the text annotations alone.
Comment: In COLING 2016
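A schematic version of such a multi-task bi-LSTM, written here in PyTorch as an assumption (the paper's own implementation and hyperparameters may differ): one shared encoder, with one head for the main task and one for the keystroke-derived auxiliary labels.

    import torch
    import torch.nn as nn

    class MultiTaskTagger(nn.Module):
        def __init__(self, vocab_size, emb_dim=64, hidden=100,
                     n_main_labels=23, n_aux_labels=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                                   bidirectional=True)              # shared bi-LSTM
            self.main_head = nn.Linear(2 * hidden, n_main_labels)   # e.g. chunk labels
            self.aux_head = nn.Linear(2 * hidden, n_aux_labels)     # keystroke-derived labels

        def forward(self, token_ids):
            states, _ = self.encoder(self.embed(token_ids))
            return self.main_head(states), self.aux_head(states)

    model = MultiTaskTagger(vocab_size=10_000)
    tokens = torch.randint(0, 10_000, (8, 20))  # a batch of 8 sentences
    main_logits, aux_logits = model(tokens)

Because only the encoder is shared, the chunking corpus and the keystroke-log corpus never need to annotate the same sentences: batches from either source update the shared parameters through their own head.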
What to do about non-standard (or non-canonical) language in NLP
Real world data differs radically from the benchmark corpora we use in
natural language processing (NLP). As soon as we apply our technologies to the
real world, performance drops. The reason for this problem is obvious: NLP
models are trained on samples from a limited set of canonical varieties that
are considered standard, most prominently English newswire. However, there are
many dimensions on which texts can differ from the standard, e.g.,
socio-demographics, language, genre, and sentence type. The solution is not obvious: we
cannot control for all factors, and it is not clear how to best go beyond the
current practice of training on homogeneous data from a single domain and
language.
In this paper, I review the notion of canonicity, and how it shapes our
community's approach to language. I argue for leveraging what I call fortuitous
data, i.e., non-obvious data that is hitherto neglected, hidden in plain sight,
or raw data that needs to be refined. If we embrace the variety of this
heterogeneous data by combining it with proper algorithms, we will not only
produce more robust models, but will also enable adaptive language technology
capable of addressing natural language variation.
Comment: KONVENS 2016
Assessing schematic knowledge of introductory probability theory
The ability to identify schematic knowledge is an important goal for both assessment
and instruction. In the current paper, schematic knowledge of statistical probability theory is
explored from the declarative-procedural framework using multiple methods of assessment.
A sample of 90 undergraduate introductory statistics students was required to classify 10
pairs of probability problems as similar or different; to identify whether 15 problems
contained sufficient, irrelevant, or missing information (text-edit); and to solve 10 additional
problems. The complexity of the schema on which the problems were based was also
manipulated. Detailed analyses compared text-editing and solution accuracy as a function of
text-editing category and schema complexity. Results showed that text-editing tends to be
easier than solution and differentially sensitive to schema complexity. While text-editing and
classification were correlated with solution, only text-editing problems with missing
information uniquely predicted success. In light of previous research, these results suggest
that text-editing is suitable for supplementing the assessment of schematic knowledge in
development.
When silver glitters more than gold: Bootstrapping an Italian part-of-speech tagger for Twitter
We bootstrap a state-of-the-art part-of-speech tagger to tag Italian Twitter
data, in the context of the Evalita 2016 PoSTWITA shared task. We show that
training the tagger on native Twitter data, enriched with small amounts of
specifically selected gold data and additional silver-labelled data scraped
from Facebook, yields better results than using large amounts of manually
annotated data from a mix of genres.
Comment: Proceedings of the 5th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2016)
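The bootstrapping recipe reads roughly as follows; Tagger, train, and tag are hypothetical placeholders for whatever tagger and API one uses, not the system from the paper.

    def bootstrap(gold_twitter, gold_selected, raw_facebook, Tagger):
        # 1. Train a seed tagger on native Twitter gold data plus the small,
        #    specifically selected gold portion.
        seed = Tagger()
        seed.train(gold_twitter + gold_selected)
        # 2. Tag raw Facebook text with the seed tagger to obtain silver labels.
        silver_facebook = [(sent, seed.tag(sent)) for sent in raw_facebook]
        # 3. Retrain on the gold and silver data together.
        final = Tagger()
        final.train(gold_twitter + gold_selected + silver_facebook)
        return final

The point of the result above is that this in-genre gold-plus-silver mix beats a much larger but out-of-genre gold training set.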
