Link anchors in images: is there truth?
While automatic linking in text collections is well understood, little is known about links in images. In this work, we investigate two aspects of anchors, the origin of a link, in images: 1) the requirements of users for such anchors, e.g. the things users would like more information on, and 2) possible evaluation methods assessing anchor selection algorithms. To investigate these aspects, we perform a study with 102 users. We find that 59% of the required anchors are image segments, as opposed to the whole image, and that most users require information on displayed persons. The agreement of users on the required anchors is too low (often below 30%) for a ground-truth-based evaluation, which is the standard IR evaluation method. As an alternative, we propose a novel evaluation method based on improved search performance and user experience.
Interactive document summarisation.
This paper describes the Interactive Document Summariser (IDS), a dynamic document summarisation system, which can help users of digital libraries to access on-line documents more effectively. IDS provides dynamic control over summary characteristics, such as length and topic focus, so that changes made by the user are instantly reflected in an on-screen summary. A range of 'summary-in-context' views support seamless transitions between summaries and their source documents. IDS creates summaries by extracting keyphrases from a document with the Kea system, scoring sentences according to the keyphrases that they contain, and then extracting the highest scoring sentences. We report an evaluation of IDS summaries, in which human assessors identified suitable summary sentences in source documents, against which IDS summaries were judged. We found that IDS summaries were better than baseline summaries, and identified the characteristics of Kea keyphrases that lead to the best summaries.
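The extraction step the abstract describes — score sentences by the keyphrases they contain, then take the top scorers — can be sketched as follows. This is a minimal illustration assuming Kea has already produced a set of weighted keyphrases; the function name and the weighting scheme are hypothetical, not taken from the paper:

```python
def summarise(sentences, keyphrases, n=3):
    """Hypothetical sketch of IDS-style extraction: score each sentence
    by the total weight of the keyphrases it contains, then return the
    n highest-scoring sentences in their original document order.

    sentences  -- list of sentence strings from the source document
    keyphrases -- dict mapping a lower-case keyphrase to its weight
    """
    def score(sentence):
        lower = sentence.lower()
        return sum(weight for phrase, weight in keyphrases.items()
                   if phrase in lower)

    # Rank sentence indices by score (descending), keep the top n,
    # then restore document order so the summary reads coherently.
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)
    chosen = sorted(ranked[:n])
    return [sentences[i] for i in chosen]
```

Because the user can change the summary length `n` or the keyphrase weights at any time, re-running this cheap scoring pass is what makes the on-screen summary update instantly.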
Stop Clickbait: Detecting and Preventing Clickbaits in Online News Media
Most online news media outlets rely heavily on the revenue generated from the clicks made by their readers, and due to the presence of numerous such outlets, they need to compete with each other for reader attention. To attract readers to click on an article and subsequently visit the media site, the outlets often come up with catchy headlines accompanying the article links, which lure the readers to click. Such headlines are known as clickbaits. While these baits may trick readers into clicking, in the long run clickbaits usually don't live up to the readers' expectations and leave them disappointed.
In this work, we attempt to automatically detect clickbaits and then build a browser extension which warns the readers of different media sites about the possibility of being baited by such headlines. The extension also offers each reader an option to block clickbaits she doesn't want to see. Then, using such reader choices, the extension automatically blocks similar clickbaits during her future visits. We run extensive offline and online experiments across multiple media sites and find that the proposed clickbait detection and personalized blocking approaches perform very well, achieving 93% accuracy in detecting and 89% accuracy in blocking clickbaits.
Comment: 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)
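The two mechanisms the abstract describes — flagging likely clickbaits, and then blocking headlines similar to ones a reader has already blocked — can be illustrated with a deliberately crude sketch. The cue patterns and the word-overlap threshold below are hypothetical stand-ins; the abstract does not specify the paper's actual features, classifier, or similarity measure:

```python
import re

# Hypothetical cue patterns; the real system uses a trained classifier
# whose features the abstract does not spell out.
CLICKBAIT_CUES = [
    r"\byou won'?t believe\b",
    r"\bwhat happens next\b",
    r"\bshocking\b",
    r"^\d+\s+(things|reasons|ways)\b",
]

def looks_like_clickbait(headline, threshold=1):
    """Flag a headline that matches at least `threshold` cue patterns
    (a crude stand-in for the trained detector)."""
    h = headline.lower()
    return sum(1 for pat in CLICKBAIT_CUES if re.search(pat, h)) >= threshold

def similar_to_blocked(headline, blocked_headlines, threshold=0.5):
    """Personalized blocking: block a headline whose word-set Jaccard
    similarity to any previously blocked headline reaches `threshold`."""
    words = set(headline.lower().split())
    for prev in blocked_headlines:
        prev_words = set(prev.lower().split())
        overlap = len(words & prev_words) / len(words | prev_words)
        if overlap >= threshold:
            return True
    return False
```

In this sketch the detector runs on every headline, while the similarity check runs only against the reader's own block list, which is what makes the blocking behaviour personal to each user.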
Human evaluation of Kea, an automatic keyphrasing system.
This paper describes an evaluation of the Kea automatic keyphrase extraction algorithm. Tools that automatically identify keyphrases are desirable because document keyphrases have numerous applications in digital library systems, but are costly and time-consuming to assign manually. Keyphrase extraction algorithms are usually evaluated by comparison to author-specified keywords, but this methodology has several well-known shortcomings. The results presented in this paper are based on subjective evaluations of the quality and appropriateness of keyphrases by human assessors, and make a number of contributions. First, they validate previous evaluations of Kea that rely on author keywords. Second, they show Kea's performance is comparable to that of similar systems that have been evaluated by human assessors. Finally, they justify the use of author keyphrases as a performance metric by showing that authors generally choose good keywords.