A Wikipedia Literature Review
This paper was originally designed as the literature review for a doctoral
dissertation focusing on Wikipedia. The exposition describes the structure of
Wikipedia and surveys the latest trends in Wikipedia research.
Are anonymity-seekers just like everybody else? An analysis of contributions to Wikipedia from Tor
User-generated content sites routinely block contributions from users of
privacy-enhancing proxies like Tor because of a perception that proxies are a
source of vandalism, spam, and abuse. Although these blocks might be effective,
collateral damage in the form of unrealized valuable contributions from
anonymity seekers is invisible. One of the largest and most important
user-generated content sites, Wikipedia, has attempted to block contributions
from Tor users since as early as 2005. We demonstrate that these blocks have
been imperfect and that thousands of attempts to edit on Wikipedia through Tor
have been successful. We draw upon several data sources and analytical
techniques to measure and describe the history of Tor editing on Wikipedia over
time and to compare contributions from Tor users to those from other groups of
Wikipedia users. Our analysis suggests that although Tor users who slip through
Wikipedia's ban contribute content that is more likely to be reverted and to
revert others, their contributions are otherwise similar in quality to those
from other unregistered participants and to the initial contributions of
registered users.

Comment: To appear in the IEEE Symposium on Security & Privacy, May 2020
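As a rough illustration of the kind of group-level comparison the abstract describes, the Python sketch below computes revert rates per editor group from edit-level records. The toy data, column names, and grouping are assumptions for illustration, not the authors' actual dataset or pipeline.

```python
import pandas as pd

# Toy stand-in for an edit-level dataset; a real analysis would join
# Wikipedia edit histories against historical lists of Tor exit nodes.
edits = pd.DataFrame({
    "editor_group": ["tor", "tor", "ip", "ip", "registered_first"],
    "was_reverted": [1, 0, 0, 1, 0],   # edit was later undone
    "is_revert":    [1, 0, 0, 0, 0],   # edit undid someone else's work
})

summary = (
    edits.groupby("editor_group")[["was_reverted", "is_revert"]]
         .mean()
         .rename(columns={"was_reverted": "revert_rate",
                          "is_revert": "reverting_rate"})
)
print(summary)
```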
Semantic Sort: A Supervised Approach to Personalized Semantic Relatedness
We propose and study a novel supervised approach to learning statistical
semantic relatedness models from subjectively annotated training examples. The
proposed semantic model consists of parameterized co-occurrence statistics
associated with textual units of a large background knowledge corpus. We
present an efficient algorithm for learning such semantic models from a
training sample of relatedness preferences. Our method is corpus independent
and can essentially rely on any sufficiently large (unstructured) collection of
coherent texts. Moreover, the approach facilitates the fitting of semantic
models for specific users or groups of users. We present the results of
an extensive range of experiments, from small to large scale, indicating that
the proposed method is effective and competitive with the state of the art.

Comment: 37 pages, 8 figures. A short version of this paper was already
published at ECML/PKDD 201
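To make the idea of fitting a co-occurrence-based relatedness model to annotated preferences concrete, here is a minimal Python sketch. The two features (a PMI-style score and a raw co-occurrence count) and the perceptron-style update are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def features(w1, w2, cooc, counts, total):
    """Illustrative co-occurrence features for a word pair."""
    c12 = cooc.get((w1, w2), 0) + cooc.get((w2, w1), 0)
    pmi = np.log((c12 + 1) * total / ((counts[w1] + 1) * (counts[w2] + 1)))
    return np.array([pmi, np.log(c12 + 1)])

def fit(prefs, cooc, counts, total, epochs=10, lr=0.1):
    """Fit feature weights from triples (a, b, c) meaning
    'a is more related to b than to c' for this annotator."""
    w = np.zeros(2)
    for _ in range(epochs):
        for a, b, c in prefs:
            diff = (features(a, b, cooc, counts, total)
                    - features(a, c, cooc, counts, total))
            if w @ diff <= 0:      # preference violated: nudge weights
                w += lr * diff
    return w
```

Because the weights are fit to a given sample of relatedness preferences, the same machinery can yield different models for different users or groups, as the abstract suggests.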
Automatic Text Summarization Approaches to Speed up Topic Model Learning Process
The number of documents available on the Internet grows every day. For this
reason, processing this amount of information effectively and expressively
becomes a major concern for companies and scientists. Methods that represent a
textual document by a topic representation are widely used in Information
Retrieval (IR) to process big data such as Wikipedia articles. One of the main
difficulties in applying topic models to huge document collections is the
computational resources (CPU time and memory) required for model estimation.
To deal with this issue, we propose to build topic spaces from summarized
documents. In this paper, we present a study of topic space representation in
the context of big data. The behavior of the topic space representation is
analyzed across different languages. Experiments show that topic spaces
estimated from text summaries are as relevant as those estimated from the
complete documents. The real advantage of such an approach is the gain in
processing time: we show that processing time can be drastically reduced by
using summarized documents (by more than 60% in general). This study finally
points out the differences between thematic representations of documents
depending on the targeted language, such as English or Latin-derived
languages.

Comment: 16 pages, 4 tables, 8 figures
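A minimal sketch of the core idea, estimating a topic space from summaries rather than full documents, might look as follows in Python with gensim. The lead-sentence truncation is a crude stand-in for the paper's summarization systems, and the corpus and parameters are placeholders.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def lead_summary(text, n_sentences=2):
    # Crude extractive "summary": keep only the first n sentences.
    return ". ".join(text.split(". ")[:n_sentences])

docs = [
    "Topic models compress large corpora. They are widely used in IR. "
    "Wikipedia is a common test bed. Estimation is costly on full text.",
    "Summaries keep the salient content. They are much shorter. "
    "Training on them can save CPU time and memory.",
]  # placeholders; real experiments would use full article collections

summaries = [lead_summary(d) for d in docs]
tokens = [s.lower().split() for s in summaries]
vocab = Dictionary(tokens)
bows = [vocab.doc2bow(t) for t in tokens]

# Topic space estimated from the (much smaller) summarized corpus.
lda = LdaModel(bows, id2word=vocab, num_topics=5)
```

The claimed processing-time gain comes simply from the reduced token count seen by the estimator.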
Noisy-parallel and comparable corpora filtering methodology for the extraction of bi-lingual equivalent data at sentence level
Text alignment and text quality are critical to the accuracy of Machine
Translation (MT) systems, some NLP tools, and any other text processing tasks
requiring bilingual data. This research proposes a language-independent
bi-sentence filtering approach, validated in experiments translating from
Polish (not a position-sensitive language) into English. The cleaning approach
was developed on the TED Talks corpus and initially tested on the Wikipedia
comparable corpus,
but it can be used for any text domain or language pair. The proposed approach
implements various heuristics for sentence comparison. Some of them leverage
synonyms and semantic and structural analysis of text as additional
information. The method is designed to minimize data loss. An improvement in
MT system scores on text processed with the tool is discussed.

Comment: arXiv admin note: text overlap with arXiv:1509.09093,
arXiv:1509.0888
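The flavor of such sentence-comparison heuristics can be sketched as below in Python; the length-ratio and lexicon-overlap tests, the thresholds, and the toy lexicon are assumptions for illustration rather than the paper's actual rules.

```python
def keep_pair(src, tgt, lexicon, max_len_ratio=2.0, min_overlap=0.3):
    """Decide whether a candidate bilingual sentence pair looks parallel."""
    s, t = src.split(), tgt.split()
    if not s or not t:
        return False
    # Heuristic 1: wildly mismatched lengths suggest misalignment.
    if max(len(s), len(t)) / min(len(s), len(t)) > max_len_ratio:
        return False
    # Heuristic 2: enough source words should have a known translation
    # (or synonym) occurring in the target sentence.
    hits = sum(1 for w in s if lexicon.get(w, set()) & set(t))
    return hits / len(s) >= min_overlap

lexicon = {"kot": {"cat"}, "pije": {"drinks"}, "mleko": {"milk"}}
print(keep_pair("kot pije mleko", "the cat drinks milk", lexicon))  # True
```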