2kenize: Tying Subword Sequences for Chinese Script Conversion
Simplified Chinese to Traditional Chinese character conversion is a common
preprocessing step in Chinese NLP. Despite this, current approaches have poor
performance because they do not take into account that a simplified Chinese
character can correspond to multiple traditional characters. Here, we propose a
model that can disambiguate between mappings and convert between the two
scripts. The model is based on subword segmentation, two language models, as
well as a method for mapping between subword sequences. We further construct
benchmark datasets for topic classification and script conversion. Our proposed
method outperforms previous Chinese character conversion approaches by 6 points
in accuracy. These results are further confirmed in a downstream application,
where 2kenize is used to convert a pretraining dataset for topic classification.
An error analysis reveals that our method's particular strengths are in dealing
with code-mixing and named entities.
Comment: Accepted to ACL 2020
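The core disambiguation idea can be illustrated with a toy sketch: each ambiguous simplified character maps to several traditional candidates, and a language model over the traditional script picks the candidate that best fits the context. The mapping table, toy corpus, and bigram-count "language model" below are illustrative assumptions, not the paper's actual subword model or resources.

```python
# Toy sketch of simplified->traditional disambiguation: score each
# traditional candidate of an ambiguous simplified character by its bigram
# count with the previously emitted character. Mapping and corpus are toys.
from collections import Counter

# Illustrative one-to-many mapping (simplified -> traditional candidates).
MAPPING = {"发": ["發", "髮"], "后": ["後", "后"]}

# Tiny traditional-script corpus used to estimate bigram counts.
CORPUS = "理髮店出發之後皇后理髮出發以後"
BIGRAMS = Counter(zip(CORPUS, CORPUS[1:]))

def convert(simplified: str) -> str:
    out = []
    for ch in simplified:
        candidates = MAPPING.get(ch, [ch])
        # Score candidates against the previous output character; with no
        # context, fall back to the first listed candidate.
        prev = out[-1] if out else None
        best = max(candidates, key=lambda c: BIGRAMS[(prev, c)] if prev else 0)
        out.append(best)
    return "".join(out)
```

With this toy data, `convert("理发")` yields "理髮" (hair) while `convert("出发")` yields "出發" (depart), showing how context resolves the one-to-many mapping.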
Tracking Typological Traits of Uralic Languages in Distributed Language Representations
Although linguistic typology has a long history, computational approaches
have only recently gained popularity. The use of distributed representations in
computational linguistics has also become increasingly popular. A recent
development is to learn distributed representations of language, such that
typologically similar languages are spatially close to one another. Although
empirical successes have been shown for such language representations, they
have not been subjected to much typological probing. In this paper, we first
look at whether this type of language representation is empirically useful
for model transfer between Uralic languages in deep neural networks. We then
investigate which typological features are encoded in these representations by
attempting to predict features in the World Atlas of Language Structures, at
various stages of fine-tuning of the representations. We focus on Uralic
languages, and find that some typological traits can be automatically inferred
with accuracies well above a strong baseline.
Comment: Finnish abstract included in the paper
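The probing setup can be sketched minimally: predict a WALS-style categorical feature for a held-out language from its nearest labelled neighbour in the language-representation space. The vectors and feature values below are made up for illustration; the paper trains classifiers on learned representations rather than using raw nearest-neighbour lookup.

```python
# Minimal sketch of typological probing via nearest-neighbour lookup in a
# distributed language-representation space. All data here is illustrative.
import math

LANG_VECS = {
    "fin": [0.9, 0.1, 0.3],     # Finnish (toy vector)
    "est": [0.85, 0.15, 0.25],  # Estonian (toy vector)
    "hun": [0.2, 0.8, 0.6],     # Hungarian (toy vector)
}
# Hypothetical WALS-style feature values for the labelled languages.
WALS_FEATURE = {"fin": "SVO", "hun": "SOV"}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def predict_feature(lang: str) -> str:
    # Take the feature value of the closest labelled language.
    best = max(
        (l for l in WALS_FEATURE if l != lang),
        key=lambda l: cosine(LANG_VECS[lang], LANG_VECS[l]),
    )
    return WALS_FEATURE[best]
```

Here Estonian's toy vector sits closest to Finnish, so the probe transfers Finnish's feature value to it.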
Multi-Task Learning of Keyphrase Boundary Classification
Keyphrase boundary classification (KBC) is the task of detecting keyphrases
in scientific articles and labelling them with respect to predefined types.
Although important in practice, this task is so far underexplored, partly due
to the lack of labelled data. To overcome this, we explore several auxiliary
tasks, including semantic super-sense tagging and identification of multi-word
expressions, and cast the task as a multi-task learning problem with deep
recurrent neural networks. Our multi-task models perform significantly better
than previous state of the art approaches on two scientific KBC datasets,
particularly for long keyphrases.
Comment: ACL 2017
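The hard-parameter-sharing idea behind such multi-task models can be sketched in miniature: both tasks share one feature extractor, and each task trains its own linear head. This stands in for the deep recurrent networks in the paper; the perceptron heads, feature function, and tiny datasets below are illustrative assumptions.

```python
# Toy multi-task sketch: a shared bag-of-words feature extractor feeds two
# task-specific perceptron heads (main keyphrase task + auxiliary task).
from collections import defaultdict

def features(text):
    return text.lower().split()  # shared representation for both tasks

def train_head(examples, epochs=10):
    w = defaultdict(float)  # task-specific weights over shared features
    for _ in range(epochs):
        for text, label in examples:
            score = sum(w[f] for f in features(text))
            pred = 1 if score > 0 else -1
            if pred != label:  # perceptron update on mistakes
                for f in features(text):
                    w[f] += label
    return w

def predict(w, text):
    return 1 if sum(w[f] for f in features(text)) > 0 else -1

# Main task: is the phrase a keyphrase? Auxiliary: is it a multi-word expression?
main = [("neural networks", 1), ("the", -1), ("deep learning", 1), ("of", -1)]
aux = [("neural networks", 1), ("learning", -1), ("deep learning", 1)]
heads = {"kbc": train_head(main), "mwe": train_head(aux)}
```

In a real multi-task network the sharing happens in learned encoder parameters rather than a fixed feature function, but the head-per-task structure is the same.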
Multi-task Learning of Pairwise Sequence Classification Tasks Over Disparate Label Spaces
We combine multi-task learning and semi-supervised learning by inducing a
joint embedding space between disparate label spaces and learning transfer
functions between label embeddings, enabling us to jointly leverage unlabelled
data and auxiliary, annotated datasets. We evaluate our approach on a variety
of sequence classification tasks with disparate label spaces. We outperform
strong single and multi-task baselines and achieve a new state-of-the-art for
topic-based sentiment analysis.
Comment: To appear at NAACL 2018 (long paper)
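The joint label space can be sketched as follows: labels from different tasks receive vectors in one shared embedding space, and a label from one task transfers to another by nearest-neighbour lookup. The hand-made embeddings below are illustrative; in the paper these embeddings and the transfer functions are learned.

```python
# Minimal sketch of tying disparate label spaces via a joint embedding
# space. Label vectors here are hand-set toys, not learned.
import math

# A sentiment task and a stance task share one embedding space.
LABEL_EMB = {
    ("sentiment", "positive"): [1.0, 0.1],
    ("sentiment", "negative"): [-1.0, 0.1],
    ("stance", "favor"): [0.9, 0.2],
    ("stance", "against"): [-0.8, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def transfer(src_task, src_label, tgt_task):
    # Map a source-task label to the closest label of the target task.
    src = LABEL_EMB[(src_task, src_label)]
    return max(
        (lab for task, lab in LABEL_EMB if task == tgt_task),
        key=lambda lab: cosine(src, LABEL_EMB[(tgt_task, lab)]),
    )
```

Because "positive" and "favor" sit close in the shared space, supervision for one can inform the other even though the label sets differ.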
A Supervised Approach to Extractive Summarisation of Scientific Papers
Automatic summarisation is a popular approach to reduce a document to its
main arguments. Recent research in the area has focused on neural approaches to
summarisation, which can be very data-hungry. However, few large datasets exist
and none for the traditionally popular domain of scientific publications, which
opens up challenging research avenues centered on encoding large, complex
documents. In this paper, we introduce a new dataset for summarisation of
computer science publications by exploiting a large resource of author provided
summaries and show straightforward ways of extending it further. We develop
models on the dataset making use of both neural sentence encoding and
traditionally used summarisation features and show that models which encode
sentences as well as their local and global context perform best, significantly
outperforming well-established baseline methods.
Comment: 11 pages, 6 figures
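The traditionally used summarisation features mentioned above can be sketched with a tiny scorer: rank sentences by position, length, and lexical overlap with the title, then keep the top-k in document order. The feature weights below are hand-set for illustration; the paper learns sentence scores (including neural encodings) from author-provided summaries.

```python
# Sketch of feature-based extractive summarisation with hand-set weights.
import re

def summarise(title, sentences, k=1):
    title_words = set(re.findall(r"\w+", title.lower()))

    def score(i, sent):
        words = set(re.findall(r"\w+", sent.lower()))
        position = 1.0 / (i + 1)             # earlier sentences score higher
        overlap = len(words & title_words)   # lexical overlap with the title
        length = min(len(words), 20) / 20.0  # mild preference for longer sentences
        return 2.0 * overlap + position + length

    ranked = sorted(range(len(sentences)), key=lambda i: -score(i, sentences[i]))
    return [sentences[i] for i in sorted(ranked[:k])]  # restore document order
```

Models that also encode each sentence's local and global context, as the abstract notes, can be seen as learned replacements for these fixed features.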
Nightmare at test time: How punctuation prevents parsers from generalizing
Punctuation is a strong indicator of syntactic structure, and parsers trained
on text with punctuation often rely heavily on this signal. Punctuation is a
diversion, however, since human language processing does not rely on
punctuation to the same extent, and in informal texts, we therefore often leave
out punctuation. We also use punctuation ungrammatically for emphatic or
creative purposes, or simply by mistake. We show that (a) dependency parsers
are sensitive to both absence of punctuation and to alternative uses; (b)
neural parsers tend to be more sensitive than vintage parsers; (c) training
neural parsers without punctuation outperforms all out-of-the-box parsers
across all scenarios where punctuation departs from standard punctuation. Our
main experiments are on synthetically corrupted data to study the effect of
punctuation in isolation and avoid potential confounds, but we also show
effects on out-of-domain data.
Comment: Analyzing and interpreting neural networks for NLP workshop, EMNLP 2018
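The synthetic corruption used to isolate punctuation effects can be sketched with two simple modes: deleting punctuation tokens (as in informal text) and replacing sentence-final marks with emphatic variants. This is an illustrative stand-in for the paper's corruption procedure.

```python
# Sketch of synthetic punctuation corruption on a tokenised sentence.
import string

def corrupt(tokens, mode="remove"):
    if mode == "remove":
        # Drop punctuation tokens entirely, as informal writers often do.
        return [t for t in tokens if t not in string.punctuation]
    if mode == "emphatic":
        # Replace sentence-final punctuation with an emphatic variant.
        return [("!!!" if t in ".?!" else t) for t in tokens]
    raise ValueError(f"unknown mode: {mode}")
```

Running a trained parser on both corrupted versions of a treebank sentence and comparing attachments to the original is then enough to measure its punctuation sensitivity.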
Zero-Shot Cross-Lingual Transfer with Meta Learning
Learning what to share between tasks has been a topic of great importance
recently, as strategic sharing of knowledge has been shown to improve
downstream task performance. This is particularly important for multilingual
applications, as most languages in the world are under-resourced. Here, we
consider the setting of training models on multiple different languages at the
same time, when little or no data is available for languages other than
English. We show that this challenging setup can be approached using
meta-learning, where, in addition to training a source language model, another
model learns to select which training instances are the most beneficial to the
first. We experiment using standard supervised, zero-shot cross-lingual, as
well as few-shot cross-lingual settings for different natural language
understanding tasks (natural language inference, question answering). Our
extensive experimental setup demonstrates the consistent effectiveness of
meta-learning for a total of 15 languages. We improve upon the state-of-the-art
for zero-shot and few-shot NLI (on MultiNLI and XNLI) and QA (on the MLQA
dataset). A comprehensive error analysis indicates that the correlation of
typological features between languages can partly explain when parameter
sharing learned via meta-learning is beneficial.
Comment: Accepted as a long paper at the EMNLP 2020 main conference
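The instance-selection idea behind the meta-learner can be sketched simply: score each source-language training instance by its similarity to target-language examples and keep the most transferable ones. Here the "representations" are toy vectors and the scorer is plain cosine similarity, not the learned model described in the abstract.

```python
# Sketch of selecting source-language training instances by their average
# similarity to a target-language set. Vectors below are illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_instances(source, target, k):
    """Keep the k source instances closest (on average) to the target set."""
    def transferability(vec):
        return sum(cosine(vec, t) for t in target) / len(target)
    return sorted(source, key=transferability, reverse=True)[:k]
```

A learned meta-model replaces the fixed similarity with a scorer trained to predict which instances actually improve the target-language loss.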
Stable isotope analysis of human hair and nail samples: the effects of storage on samples
When submitting samples for analysis, maintaining sample integrity is essential. Appropriate packaging must be used to prevent damage, contamination or loss of sample. This is particularly important for stable isotope analysis by isotope ratio mass spectrometry, as this technique is capable of detecting subtle differences in isotopic composition with great precision. In a novel study, scalp hair and fingernail samples were placed in five different types of packaging routinely used in forensic laboratories and stored for 6 weeks and 6 months. Samples were subsequently cleaned and submitted for ¹³C/¹²C, ¹⁵N/¹⁴N, ²H/¹H and ¹⁸O/¹⁶O analysis. Results from ¹³C analysis indicate that the type of packaging can cause slight changes in ¹³C abundance over time. Differences were noted in the ¹⁵N isotope signatures of both hair and nail samples after 6 weeks of storage, but not after 6 months. This apparent discrepancy could be a result of the packaging not being properly sealed in the 6-week study. Fewer differences were noted when analyzing samples for ²H and ¹⁸O abundance.
Issue Framing in Online Discussion Fora
In online discussion fora, speakers often make arguments for or against
something, say birth control, by highlighting certain aspects of the topic. In
social science, this is referred to as issue framing. In this paper, we
introduce a new issue frame annotated corpus of online discussions. We explore
to what extent models trained to detect issue frames in newswire and social
media can be transferred to the domain of discussion fora, using a combination
of multi-task and adversarial training, assuming only unlabeled training data
in the target domain.
Comment: To appear in NAACL-HLT 2019