Fourth International Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse
© ACM, 2011. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM SIGIR Forum (2011), http://doi.acm.org/10.1145/1988852.1988860

The Fourth International Workshop on Uncovering Plagiarism, Authorship, and Social
Software Misuse (PAN 10) was held in conjunction with the 2010 Conference on Multilingual
and Multimodal Information Access Evaluation (CLEF-10) in Padua, Italy. The workshop
was organized as a competition covering two tasks: plagiarism detection and Wikipedia
vandalism detection. This report gives a short overview of the plagiarism detection task.
Detailed analyses of both tasks have been published as CLEF Notebook Papers [3, 6], which
can be downloaded at www.webis.de/publications.

Our special thanks go to the participants of the competition for their devoted work. We also
thank Yahoo! Research for their sponsorship. This work is partially funded by CONACYT-Mexico
and the MICINN project TEXT-ENTERPRISE 2.0 TIN2009-13391-C04-03 (Plan I+D+i).

Stein, B.; Rosso, P.; Stamatatos, E.; Potthast, M.; Barrón Cedeño, LA.; Koppel, M. (2011). Fourth International Workshop on Uncovering Plagiarism, Authorship, and Social Software Misuse. ACM SIGIR Forum. 45(1):45-48. https://doi.org/10.1145/1988852.1988860
Identifying Clickbait: A Multi-Strategy Approach Using Neural Networks
Online media outlets, in a bid to expand their reach and subsequently
increase revenue through ad monetisation, have begun adopting clickbait
techniques to lure readers to click on articles that then fail to fulfill
the promise made by the headline. Traditional methods for clickbait detection
have relied heavily on feature engineering which, in turn, is dependent on the
dataset it is built for. The application of neural networks for this task has
only been explored partially. We propose a novel approach considering all
information found in a social media post. We train a bidirectional LSTM with an
attention mechanism to learn the extent to which a word contributes to the
post's clickbait score in a differential manner. We also employ a Siamese net
to capture the similarity between source and target information. Information
gleaned from images has not been considered in previous approaches. We learn
image embeddings from large amounts of data using Convolutional Neural Networks
to add another layer of complexity to our model. Finally, we concatenate the
outputs of the three separate components, which serve as input to a fully
connected layer. We conduct experiments over a test corpus of 19538 social
media posts, attaining an F1 score of 65.37%, bettering the previous
state of the art as well as other proposed approaches, feature-engineered
or otherwise.

Comment: Accepted at SIGIR 2018 as a Short Paper
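The attention mechanism described above weights each word by how much it contributes to the post-level clickbait score. The following is a minimal, hypothetical sketch of that attention-pooling idea in pure Python with scalar stand-ins for the BiLSTM hidden states; the paper's actual model operates on learned hidden vectors, not hand-set numbers.

```python
import math

def attention_pool(hidden_states, attn_logits):
    """Combine per-word values into one post-level score via attention.

    hidden_states: per-word scalar values (stand-ins for LSTM hidden states).
    attn_logits: unnormalized attention scores, one per word.
    """
    # Softmax over the attention logits
    exps = [math.exp(a) for a in attn_logits]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # Weighted sum: words with higher attention contribute more to the score
    return sum(a * h for a, h in zip(alphas, hidden_states))

# Toy example: the third word receives most of the attention mass,
# so it dominates the pooled clickbait score.
score = attention_pool([0.1, 0.2, 0.9], [0.0, 0.0, 2.0])
```

In the full model, the pooled representation would then be concatenated with the Siamese-net and image-embedding outputs before the fully connected layer.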
Technologies for Reusing Text from the Web
Texts from the web can be reused individually or in large quantities. The former is called text reuse and the latter language reuse. We first present a comprehensive overview of the different ways in which text and language is reused today, and how exactly information retrieval technologies can be applied in this respect. The remainder of the thesis then deals with specific retrieval tasks. In general, our contributions consist of models and algorithms, their evaluation, and for that purpose, large-scale corpus construction.
The thesis divides into two parts. The first part introduces technologies for text reuse detection, and our contributions are as follows: (1) A unified view of projecting-based and embedding-based fingerprinting for near-duplicate detection and the first time evaluation of fingerprint algorithms on Wikipedia revision histories as a new, large-scale corpus of near-duplicates. (2) A new retrieval model for the quantification of cross-language text similarity, which gets by without parallel corpora. We have evaluated the model in comparison to other models on many different pairs of languages. (3) An evaluation framework for text reuse and particularly plagiarism detectors, which consists of tailored detection performance measures and a large-scale corpus of automatically generated and manually written plagiarism cases. The latter have been obtained via crowdsourcing. This framework has been successfully applied to evaluate many different state-of-the-art plagiarism detection approaches within three international evaluation competitions.
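Contribution (1) concerns fingerprinting for near-duplicate detection. As an illustration only, here is a minimal sketch of the underlying shingling idea in pure Python; the thesis itself evaluates more sophisticated projecting-based and embedding-based fingerprint algorithms, which this toy hash-set version does not reproduce.

```python
def fingerprint(text, n=3):
    """Hash-based word n-gram (shingle) fingerprint of a text."""
    tokens = text.lower().split()
    shingles = {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}
    return {hash(s) for s in shingles}

def resemblance(a, b, n=3):
    """Jaccard similarity of two fingerprints; near 1.0 for near-duplicates."""
    fa, fb = fingerprint(a, n), fingerprint(b, n)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)
```

Two revisions of the same Wikipedia article would share most of their shingles and score close to 1.0, while unrelated texts share none.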
The second part introduces technologies that solve three retrieval tasks based on language reuse, and our contributions are as follows: (4) A new model for the comparison of textual and non-textual web items across media, which exploits web comments as a source of information about the topic of an item. In this connection, we identify web comments as a largely neglected information source and introduce the rationale of comment retrieval. (5) Two new algorithms for query segmentation, which exploit web n-grams and Wikipedia as a means of discerning the user intent of a keyword query. Moreover, we crowdsource a new corpus for the evaluation of query segmentation which surpasses existing corpora by two orders of magnitude. (6) A new writing assistance tool called Netspeak, which is a search engine for commonly used language. Netspeak indexes the web in the form of web n-grams as a source of writing examples and implements a wildcard query processor on top of it.
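Contribution (5) scores candidate segmentations of a keyword query by how frequently each segment occurs as a web n-gram. A minimal, hypothetical sketch of that idea is a dynamic program over split points, shown here with a made-up frequency table standing in for real web n-gram counts; the thesis's actual algorithms additionally use Wikipedia and more refined scoring.

```python
def segment(query, ngram_freq):
    """Split a keyword query into segments maximizing summed segment frequency.

    ngram_freq: dict mapping candidate segments to (toy) web n-gram counts.
    """
    words = query.split()
    n = len(words)
    # best[i] = (score, segmentation) for the prefix words[:i]
    best = [(0.0, [])] + [None] * n
    for i in range(1, n + 1):
        for j in range(i):
            seg = " ".join(words[j:i])
            # Unknown segments get a tiny score so known phrases win out
            gain = ngram_freq.get(seg, 0.01)
            cand = (best[j][0] + gain, best[j][1] + [seg])
            if best[i] is None or cand[0] > best[i][0]:
                best[i] = cand
    return best[n][1]

# Toy frequency table: "new york" + "times square" outscores "new york times"
freq = {"new york": 500.0, "times square": 300.0, "new york times": 400.0}
segments = segment("new york times square", freq)  # ["new york", "times square"]
```

The example shows why frequency evidence matters: the greedy reading "new york times" is itself a frequent phrase, but the globally best segmentation keeps "times square" together.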
Paraphrase Acquisition from Image Captions
We propose to use image captions from the Web as a previously underutilized
resource for paraphrases (i.e., texts with the same "message") and to create
and analyze a corresponding dataset. When an image is reused on the Web, an
original caption is often assigned. We hypothesize that different captions for
the same image naturally form a set of mutual paraphrases. To demonstrate the
suitability of this idea, we analyze captions in the English Wikipedia, where
editors frequently relabel the same image for different articles. The paper
introduces the underlying mining technology, the resulting Wikipedia-IPC
dataset, and compares known paraphrase corpora with respect to their syntactic
and semantic paraphrase similarity to our new resource. In this context, we
introduce characteristic maps along the two similarity dimensions to identify
the style of paraphrases coming from different sources. An annotation study
demonstrates the high reliability of the algorithmically determined
characteristic maps.
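The characteristic maps place each paraphrase pair along a syntactic and a semantic similarity dimension. As a rough, hypothetical illustration of producing such coordinates, the sketch below uses an order-sensitive token alignment ratio for the syntactic axis and plain vocabulary overlap as a crude stand-in for the semantic axis; the paper's semantic dimension relies on learned similarity models, which this does not replicate.

```python
import difflib

def similarity_coordinates(text_a, text_b):
    """Place a paraphrase pair on a (syntactic, lexical) similarity map.

    Syntactic axis: order-sensitive token alignment ratio.
    Lexical axis: order-insensitive vocabulary overlap (Jaccard),
    a crude proxy for semantic similarity.
    """
    ta, tb = text_a.lower().split(), text_b.lower().split()
    # SequenceMatcher rewards tokens appearing in the same order
    syntactic = difflib.SequenceMatcher(None, ta, tb).ratio()
    va, vb = set(ta), set(tb)
    lexical = len(va & vb) / len(va | vb) if va | vb else 0.0
    return syntactic, lexical
```

A pair that reuses the same words in a different order lands high on the lexical axis but lower on the syntactic one, which is exactly the kind of distinction a characteristic map visualizes.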
Task-Oriented Paraphrase Analytics
Since paraphrasing is an ill-defined task, the term "paraphrasing" covers
text transformation tasks with different characteristics. Consequently,
existing paraphrasing studies have applied quite different (explicit and
implicit) criteria as to when a pair of texts is to be considered a paraphrase,
all of which amount to postulating a certain level of semantic or lexical
similarity. In this paper, we conduct a literature review and propose a
taxonomy to organize the 25 identified paraphrasing (sub-)tasks. Using
classifiers trained to identify the tasks that a given paraphrasing instance
fits, we find that the distributions of task-specific instances in the known
paraphrase corpora vary substantially. This means that the use of these
corpora, without the respective paraphrase conditions being clearly defined
(which is the normal case), must lead to incomparable and misleading results.

Comment: Accepted at LREC-COLING 2024
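The abstract's point that corpora postulate different similarity levels can be made concrete with a toy lexical criterion: the very same text pair counts as a paraphrase under one threshold and not under another. The threshold values and the overlap measure below are illustrative assumptions, not taken from the paper.

```python
def lexical_overlap(a, b):
    """Word-level Jaccard overlap between two texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def is_paraphrase(a, b, threshold):
    """Toy criterion: a pair 'counts' as a paraphrase above a threshold."""
    return lexical_overlap(a, b) >= threshold

pair = ("the film was widely praised", "critics widely praised the film")
# The same pair passes a lenient criterion but fails a strict one, which
# is why results over corpora with unstated criteria are incomparable.
lenient = is_paraphrase(*pair, threshold=0.3)  # True
strict = is_paraphrase(*pair, threshold=0.9)   # False
```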