12,498 research outputs found
The automated evaluation of inferred word classifications
Although automatically inferring classifications of words has been attempted by many researchers recently, no formal attempts have been made to evaluate the results. Instead, researchers have relied on an intuitive 'looks good to me' self-evaluation. We outline a method by which automated word classification techniques can be fairly compared. The process by which words are automatically grouped into classes involves a number of decision points. Our experiments select a set of options for many of these decision points and rate each combination of factors so that the most successful approach can be found. We directly compare the approaches adopted by other researchers with the combination of factors that produced the most linguistically plausible classification in our experiments. The evaluation method is also shown to be a valuable aid in highlighting inefficient approaches.
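As a minimal sketch of how such a comparison can be made concrete (the metrics and toy word classes below are illustrative assumptions, not the evaluation method proposed in the paper), one can score each induced clustering against a reference classification with standard clustering-agreement measures:

```python
# Illustrative sketch: score two hypothetical induced word clusterings
# against a gold-standard classification using clustering-agreement metrics.
from sklearn.metrics import adjusted_rand_score, v_measure_score

# Hypothetical gold-standard word classes and two competing induced clusterings.
gold = {"dog": "NOUN", "cat": "NOUN", "run": "VERB", "eat": "VERB", "red": "ADJ"}
system_a = {"dog": 0, "cat": 0, "run": 1, "eat": 1, "red": 1}
system_b = {"dog": 0, "cat": 1, "run": 1, "eat": 2, "red": 2}

words = sorted(gold)
gold_labels = [gold[w] for w in words]

for name, system in [("system A", system_a), ("system B", system_b)]:
    pred = [system[w] for w in words]
    print(name,
          "ARI=%.2f" % adjusted_rand_score(gold_labels, pred),
          "V=%.2f" % v_measure_score(gold_labels, pred))
```

The same loop can be run over every combination of decision-point options, so the best-scoring configuration is identified by a number rather than by inspection.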
On the accuracy of language trees
Historical linguistics aims at inferring the most likely language phylogenetic tree starting from information concerning the evolutionary relatedness of languages. The available information typically consists of lists of homologous (lexical, phonological, syntactic) features or characters for many different languages.

From this perspective the reconstruction of language trees is an example of an inverse problem: starting from present-day, incomplete and often noisy information, one aims at inferring the most likely past evolutionary history. A fundamental issue in inverse problems is the evaluation of the inference made. A standard way of dealing with this question is to generate data with artificial models in order to have full access to the evolutionary process one is going to infer. This procedure presents an intrinsic limitation: when dealing with real data sets, one typically does not know which model of evolution is most suitable for them. A possible way out is to compare algorithmic inference with expert classifications. This is the point of view we take here, conducting a thorough survey of the accuracy of reconstruction methods as compared with the Ethnologue expert classifications. We focus in particular on state-of-the-art distance-based methods for phylogeny reconstruction using worldwide linguistic databases.

In order to assess the accuracy of the inferred trees we introduce and characterize two generalizations of standard definitions of distances between trees. Based on these scores we quantify the relative performance of the distance-based algorithms considered. Further, we quantify how the completeness and the coverage of the available databases affect the accuracy of the reconstruction. Finally, we draw some conclusions about where the accuracy of reconstruction in historical linguistics stands and about the leading directions for improving it.

Comment: 36 pages, 14 figures
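For readers unfamiliar with the distance-based setting, the following sketch shows the basic reconstruction step (the character matrix, Hamming distance and UPGMA-style linkage are illustrative assumptions, not the specific algorithms or tree-distance scores studied in the paper):

```python
# Illustrative distance-based reconstruction: languages are encoded as
# binary character vectors, pairwise distances are computed, and an
# average-linkage (UPGMA-style) tree is built from the distance matrix.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, to_tree

# Hypothetical presence/absence characters for four languages.
languages = ["L1", "L2", "L3", "L4"]
characters = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 0, 1, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
])

dist = pdist(characters, metric="hamming")  # fraction of differing characters
tree = linkage(dist, method="average")      # UPGMA-style agglomeration
root = to_tree(tree)

def newick(node):
    """Render the scipy cluster tree in Newick format."""
    if node.is_leaf():
        return languages[node.id]
    return "(%s,%s)" % (newick(node.get_left()), newick(node.get_right()))

print(newick(root) + ";")
```

Evaluating such an inferred tree then amounts to computing a tree-to-tree distance against the expert classification, which is where the paper's generalized scores come in.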
Like trainer, like bot? Inheritance of bias in algorithmic content moderation
The internet has become a central medium through which 'networked publics' express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated for offence. While such systems could help encourage more civil debate, they must navigate inherently normatively contestable boundaries, and are subject to the idiosyncratic norms of the human raters who provide the training data. An important objective for platforms implementing such measures might be to ensure that they are not unduly biased towards or against particular norms of offence. This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence. We train classifiers on comments labelled by different demographic subsets (men and women) to understand how differences in conceptions of offence between these groups might affect the performance of the resulting models on various test sets. We conclude by discussing some of the ethical choices facing the implementers of algorithmic moderation systems, given various desired levels of diversity of viewpoints amongst discussion participants.

Comment: 12 pages, 3 figures, 9th International Conference on Social Informatics (SocInfo 2017), Oxford, UK, 13-15 September 2017 (forthcoming in Springer Lecture Notes in Computer Science)
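A rough sketch of the cross-group comparison described in this abstract is below; the toy corpus, group labels and pipeline are purely illustrative assumptions, not the paper's dataset or models. One classifier is trained per annotator subset and each model is then scored on comments labelled by each subset.

```python
# Train one offence classifier per annotator group and cross-evaluate,
# to see how models inherit the labelling norms of their raters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Hypothetical (comment, offensive?) pairs annotated by each group.
data = {
    "men": (
        ["you are an idiot", "nice point", "shut up loser", "thanks for sharing"],
        [1, 0, 1, 0],
    ),
    "women": (
        ["you are an idiot", "interesting idea", "go away troll", "well argued"],
        [1, 0, 1, 0],
    ),
}

models = {}
for group, (texts, labels) in data.items():
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(texts, labels)
    models[group] = clf

# Cross-evaluate: rows = training group, columns = test group.
for train_group, clf in models.items():
    for test_group, (texts, labels) in data.items():
        acc = accuracy_score(labels, clf.predict(texts))
        print(f"trained on {train_group}, tested on {test_group}: acc={acc:.2f}")
```

Divergence between the diagonal and off-diagonal scores of the resulting matrix is one simple way to surface group-specific norms of offence.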
Time-Sensitive Bayesian Information Aggregation for Crowdsourcing Systems
Crowdsourcing systems commonly face the problem of aggregating multiple judgments provided by potentially unreliable workers. In addition, several aspects of the design of efficient crowdsourcing processes, such as defining workers' bonuses, fair prices and time limits for the tasks, involve knowledge of the likely duration of the task at hand. Bringing this together, in this work we introduce a new time-sensitive Bayesian aggregation method that simultaneously estimates a task's duration and obtains reliable aggregations of crowdsourced judgments. Our method, called BCCTime, builds on the key insight that the time taken by a worker to perform a task is an important indicator of the likely quality of the produced judgment. To capture this, BCCTime uses latent variables to represent the uncertainty about the workers' completion times, the tasks' durations and the workers' accuracy. To relate the quality of a judgment to the time a worker spends on a task, our model assumes that each task is completed within a latent time window within which all workers with a propensity to genuinely attempt the labelling task (i.e., no spammers) are expected to submit their judgments. In contrast, workers with a lower propensity to valid labelling, such as spammers, bots or lazy labellers, are assumed to perform tasks considerably faster or slower than the time required by normal workers. Specifically, we use efficient message-passing Bayesian inference to learn approximate posterior probabilities of (i) the confusion matrix of each worker, (ii) the propensity to valid labelling of each worker, (iii) the unbiased duration of each task and (iv) the true label of each task. Using two real-world public datasets for entity linking tasks, we show that BCCTime produces up to 11% more accurate classifications and up to 100% more informative estimates of a task's duration compared to state-of-the-art methods.
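The key intuition can be illustrated with a much simpler stand-in than the full model; the sketch below is not BCCTime itself (which learns the window, worker propensities and true labels jointly with message-passing Bayesian inference), but it shows, under assumed toy data, why completion time is a useful quality signal.

```python
# Simplified illustration: judgments submitted far outside a task's typical
# completion-time window are treated as likely spam and down-weighted
# before aggregating the labels.
from collections import Counter
import numpy as np

# Hypothetical judgments: (worker, label, seconds spent on the task).
judgments = [
    ("w1", "A", 42.0),
    ("w2", "A", 57.0),
    ("w3", "B", 2.0),    # suspiciously fast: likely spammer/bot
    ("w4", "A", 49.0),
    ("w5", "B", 380.0),  # suspiciously slow: likely distracted labeller
]

times = np.array([t for _, _, t in judgments])
lo, hi = np.percentile(times, [10, 90])  # crude stand-in for the latent window

weighted = Counter()
for worker, label, t in judgments:
    weight = 1.0 if lo <= t <= hi else 0.1  # down-weight out-of-window workers
    weighted[label] += weight

print("aggregated label:", weighted.most_common(1)[0][0])
```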
An analysis of the user occupational class through Twitter content
Social media content can be used as a complementary source to traditional methods for extracting and studying collective social attributes. This study focuses on predicting the occupational class of a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user's occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy, which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications.
- …