Information Privacy Opinions on Twitter: A Cross-Language Study
The Cambridge Analytica scandal triggered a conversation on Twitter about
data practices and their implications. Our research proposes to leverage this
conversation to extend the understanding of how information privacy is framed
by users worldwide. We collected tweets about the scandal written in Spanish
and English between April and July 2018. We trained a word embedding for each
language to obtain a reduced multi-dimensional representation of the tweets. For
each embedding, we conducted open coding to characterize the semantic contexts
of key concepts: "information", "privacy", "company" and "users" (and their
Spanish translations). Through a comparative analysis, we found a broader
emphasis on privacy-related words associated with companies in English. We also
identified more terms related to data collection in English and fewer
associated with security mechanisms, control, and risks. Our findings hint at
the potential of cross-language comparisons of text to extend the understanding
of worldwide differences in information privacy perspectives.Comment: Proceeding CSCW '19: Conference Companion Publication of the 2019 on
Computer Supported Cooperative Work and Social Computin
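To make the method concrete, here is a minimal sketch of this kind of comparative setup using gensim's Word2Vec; the toy tokenized corpora, hyperparameters, and printed neighbor lists are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: train one embedding per language, then inspect the semantic
# context (nearest neighbors) of a key concept in each. The tiny corpora
# below are stand-ins for the April-July 2018 tweet collections.
from gensim.models import Word2Vec

english_tweets = [
    ["cambridge", "analytica", "harvested", "user", "data"],
    ["privacy", "means", "control", "over", "personal", "information"],
]
spanish_tweets = [
    ["cambridge", "analytica", "recolecto", "datos", "de", "usuarios"],
    ["la", "privacidad", "es", "control", "sobre", "informacion", "personal"],
]

en_model = Word2Vec(english_tweets, vector_size=50, window=5, min_count=1, seed=0)
es_model = Word2Vec(spanish_tweets, vector_size=50, window=5, min_count=1, seed=0)

# The neighbor lists below are what the open coding would then characterize.
for model, concept in [(en_model, "privacy"), (es_model, "privacidad")]:
    print(concept, model.wv.most_similar(concept, topn=5))
```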
Probabilistic Bias Mitigation in Word Embeddings
It has been shown that word embeddings derived from large corpora tend to
incorporate biases present in their training data. Various methods for
mitigating these biases have been proposed, but recent work has demonstrated
that these methods hide but fail to truly remove the biases, which can still be
observed in word nearest-neighbor statistics. In this work, we propose a
probabilistic view of word embedding bias. We leverage this framework to
present a novel bias-mitigation method that relies on probabilistic
observations to yield a more robust algorithm. We demonstrate
that this method effectively reduces bias according to three separate measures
of bias while maintaining embedding quality across various popular benchmark
semantic tasks.
Comment: 4 pages, 4 figures; Workshop on Human-Centric Machine Learning at NeurIPS 2019
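As a hedged illustration of the nearest-neighbor statistics the abstract refers to, the numpy sketch below contrasts a direction-based bias score with a neighborhood-based one; the random stand-in vectors and word list are assumptions, and this shows the measurement, not the authors' mitigation algorithm.

```python
# Sketch of two bias measures: projection onto a gender direction, and the
# skew of a word's nearest-neighbor set. Projection-based debiasing can zero
# the first while the second still exposes residual bias.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["he", "she", "doctor", "nurse", "engineer", "teacher"]
emb = {w: rng.normal(size=50) for w in vocab}            # stand-in vectors
emb = {w: v / np.linalg.norm(v) for w, v in emb.items()}

gender_dir = emb["he"] - emb["she"]
gender_dir /= np.linalg.norm(gender_dir)

def direct_bias(word):
    # Cosine of the word with the gender direction.
    return float(emb[word] @ gender_dir)

def neighbor_bias(word, k=3):
    # Fraction of the k nearest neighbors leaning "male" on the direction;
    # this is the kind of statistic that survives naive projection removal.
    others = sorted((w for w in vocab if w != word),
                    key=lambda w: -float(emb[w] @ emb[word]))
    return float(np.mean([direct_bias(w) > 0 for w in others[:k]]))

print(direct_bias("doctor"), neighbor_bias("doctor"))
```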
Improving Negative Sampling for Word Representation using Self-embedded Features
Although the word-popularity based negative sampler has shown superb
performance in the skip-gram model, the theoretical motivation behind
oversampling popular (non-observed) words as negative samples is still not well
understood. In this paper, we start by investigating the gradient
vanishing issue that arises in the skip-gram model without a proper negative
sampler. Through an analysis from the stochastic gradient descent (SGD)
learning perspective, we demonstrate that, both theoretically and intuitively,
negative samples with larger inner product scores are more informative than
those with lower scores for the SGD learner in terms of both convergence rate
and accuracy. Understanding this, we propose an alternative sampling algorithm
that dynamically selects informative negative samples during each SGD update.
More importantly, the proposed sampler accounts for multi-dimensional
self-embedded features during the sampling process, which essentially makes it
more effective than the original popularity-based (one-dimensional) sampler.
Empirical experiments further verify our observations, and show that our
fine-grained samplers achieve significant improvements over existing ones
without increasing computational complexity.
Comment: Accepted at WSDM 2018
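The numpy sketch below illustrates the core idea of dynamically selecting high-inner-product negatives inside a skip-gram-with-negative-sampling update; it reduces the paper's multi-dimensional self-embedded features to a single inner-product score, and all sizes and learning rates are illustrative assumptions.

```python
# Hedged sketch of dynamic negative selection in SGNS: draw a candidate
# pool, keep the negatives with the largest scores (the informative ones
# carrying the largest gradients), then apply the usual sigmoid updates.
import numpy as np

rng = np.random.default_rng(0)
V, D = 100, 16                           # toy vocabulary size and dimension
W_in = 0.01 * rng.normal(size=(V, D))    # center-word vectors
W_out = 0.01 * rng.normal(size=(V, D))   # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgd_step(center, context, lr=0.05, pool=20, keep=5):
    v = W_in[center].copy()
    # Keep the highest-scoring candidates as negatives.
    cand = rng.integers(0, V, size=pool)
    cand = cand[np.argsort(-(W_out[cand] @ v))][:keep]
    # Positive pair: push sigmoid(u_ctx . v) toward 1.
    g = sigmoid(W_out[context] @ v) - 1.0
    grad_v = g * W_out[context].copy()
    W_out[context] -= lr * g * v
    # Selected negatives: push sigmoid(u_neg . v) toward 0.
    for n in cand:
        g = sigmoid(W_out[n] @ v)
        grad_v += g * W_out[n].copy()
        W_out[n] -= lr * g * v
    W_in[center] -= lr * grad_v

sgd_step(center=3, context=7)
```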
Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning
Much of the recent success in natural language processing (NLP) has been
driven by distributed vector representations of words trained on large amounts
of text in an unsupervised manner. These representations are typically used as
general purpose features for words across a range of NLP problems. However,
extending this success to learning representations of sequences of words, such
as sentences, remains an open problem. Recent work has explored unsupervised as
well as supervised learning techniques with different training objectives to
learn general purpose fixed-length sentence representations. In this work, we
present a simple, effective multi-task learning framework for sentence
representations that combines the inductive biases of diverse training
objectives in a single model. We train this model on several data sources with
multiple training objectives on over 100 million sentences. Extensive
experiments demonstrate that sharing a single recurrent sentence encoder across
weakly related tasks leads to consistent improvements over previous methods. We
present substantial improvements in the context of transfer learning and
low-resource settings using our learned general-purpose representations.
Comment: Accepted at ICLR 2018
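A minimal PyTorch sketch of the framework's central idea follows, assuming toy task names, sizes, and data: a single shared recurrent encoder yields a fixed-length sentence vector, and gradients from several task-specific heads all flow into that one encoder.

```python
# Hedged sketch: shared GRU sentence encoder + per-task linear heads.
# Alternating batches across tasks combines their inductive biases in
# the single shared encoder.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, vocab=10000, dim=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.gru = nn.GRU(dim, hidden, batch_first=True)

    def forward(self, tokens):                  # tokens: (batch, seq)
        _, h = self.gru(self.embed(tokens))
        return h[-1]                            # fixed-length sentence vector

encoder = SharedEncoder()
heads = nn.ModuleDict({                         # one classifier per toy task
    "nli": nn.Linear(512, 3),
    "sentiment": nn.Linear(512, 2),
})
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()))

# One training step on one task; real training alternates tasks per batch.
tokens = torch.randint(0, 10000, (8, 20))
labels = torch.randint(0, 3, (8,))
loss = nn.functional.cross_entropy(heads["nli"](encoder(tokens)), labels)
opt.zero_grad(); loss.backward(); opt.step()
```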
Efficient distributed representations beyond negative sampling
This article describes an efficient method to learn distributed
representations, also known as embeddings. This is accomplished by minimizing
an objective function similar to the one introduced in the Word2Vec algorithm
and later adopted in several works. The computational bottleneck of the
optimization is the calculation of the softmax normalization constants, which
requires a number of operations that scales quadratically with the sample size. This
complexity is unsuited for large datasets and negative sampling is a popular
workaround, allowing one to obtain distributed representations in linear time
with respect to the sample size. Negative sampling, however, changes the loss
function and hence solves a different optimization problem from the one
originally proposed. Our contribution is to show that the softmax
normalization constants can be estimated in linear time, allowing us to design
an efficient optimization strategy to learn distributed representations. We
test our approximation on two popular applications related to word and node
embeddings. The results show performance competitive with negative sampling in
terms of accuracy, with remarkably lower computational time.
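To illustrate how the normalization constants can be estimated in linear time, here is a minimal numpy sketch that replaces the full-vocabulary sum with a uniform subsample; the estimator shown is a generic Monte Carlo stand-in with illustrative sizes, and the paper's actual estimator may differ.

```python
# Hedged sketch: Z_i = sum_j exp(u_i . v_j) over the whole vocabulary costs
# O(V) per word; an unbiased sample-based estimate costs O(m) with m << V.
import numpy as np

rng = np.random.default_rng(0)
V, D, m = 5000, 32, 50
U = 0.1 * rng.normal(size=(V, D))      # input ("center") vectors
Wout = 0.1 * rng.normal(size=(V, D))   # output ("context") vectors

def z_exact(i):
    return float(np.sum(np.exp(U[i] @ Wout.T)))

def z_estimate(i):
    idx = rng.integers(0, V, size=m)   # uniform proposal over the vocabulary
    return float((V / m) * np.sum(np.exp(U[i] @ Wout[idx].T)))

print(z_exact(0), z_estimate(0))       # the estimate concentrates around Z_0
```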