Learning Robust Representations of Text
Deep neural networks have achieved remarkable results across many language
processing tasks; however, these methods are highly sensitive to noise and
adversarial attacks. We present a regularization-based method for limiting
network sensitivity to its inputs, inspired by ideas from computer vision, thus
learning models that are more robust. Empirical evaluation over a range of
sentiment datasets with a convolutional neural network shows that, compared to
a baseline model and the dropout method, our method achieves superior
performance over noisy inputs and out-of-domain data.
Comment: 5 pages with 2 pages of references, 2 tables, 1 figure
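The abstract above describes penalizing a network's sensitivity to its inputs. A minimal sketch of that general idea is an input-gradient (Jacobian) penalty added to the task loss; the tiny NumPy model, the finite-difference gradient, and all names here are illustrative assumptions, not the paper's actual architecture or method.

```python
import numpy as np

# Hedged sketch: input-sensitivity regularization. We add the squared norm of
# d(output)/d(input) to the classification loss, so the model is discouraged
# from reacting strongly to small input perturbations. This toy model and the
# finite-difference Jacobian are illustrative only.

rng = np.random.default_rng(0)

# Tiny one-hidden-layer net for a binary (sentiment-style) classification.
W1 = rng.normal(0, 0.5, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (4, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output in (0, 1)

def input_jacobian_sqnorm(x, eps=1e-5):
    """Squared L2 norm of d(output)/d(input), via central differences."""
    grads = []
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grads.append((forward(x + e) - forward(x - e)) / (2 * eps))
    return float(np.sum(np.square(grads)))

def regularized_loss(x, y, lam):
    """Cross-entropy plus lam * input-sensitivity penalty."""
    p = float(forward(x))
    ce = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    return ce + lam * input_jacobian_sqnorm(x)

x, y = np.array([0.3, -1.2]), 1
plain = regularized_loss(x, y, lam=0.0)
reg = regularized_loss(x, y, lam=0.1)
print(reg >= plain)  # penalty is non-negative, so the regularized loss is larger
```

In training, the penalty term would be minimized jointly with the task loss (in practice via automatic differentiation rather than finite differences), shrinking the input Jacobian and making predictions less sensitive to input noise.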
Making "fetch" happen: The influence of social and linguistic context on nonstandard word growth and decline
In an online community, new words come and go: today's "haha" may be replaced
by tomorrow's "lol." Changes in online writing are usually studied as a social
process, with innovations diffusing through a network of individuals in a
speech community. But unlike other types of innovation, language change is
shaped and constrained by the system in which it takes part. To investigate the
links between social and structural factors in language change, we undertake a
large-scale analysis of nonstandard word growth in the online community Reddit.
We find that dissemination across many linguistic contexts is a sign of growth:
words that appear in more linguistic contexts grow faster and survive longer.
We also find that social dissemination likely plays a less important role in
explaining word growth and decline than previously hypothesized.
- …