149,628 research outputs found

    Second-Level Digital Divide: Mapping Differences in People's Online Skills

    Much of the existing research on the digital divide suffers from an important limitation: it is based on a binary classification of Internet use, considering only whether someone is or is not an Internet user. To remedy this shortcoming, this project looks at differences in people's level of skill at finding information online. Findings suggest that people search for content in a myriad of ways and that there is large variance in how long people take to find various types of information online. Data are collected to examine how user demographics, users' social support networks, people's experience with the medium, and their autonomy of use influence their level of user sophistication.
    Comment: 29th TPRC Conference, 200

    Supporting aspect-based video browsing - analysis of a user study

    In this paper, we present a novel video search interface based on the concept of aspect browsing. The proposed strategy is to assist the user in exploratory video search by actively suggesting new query terms and video shots. Our approach has the potential to narrow the "Semantic Gap" issue by allowing users to explore the data collection. First, we describe a clustering technique to identify potential aspects of a search. Then, we use the results to propose suggestions that help the user in their search task. Finally, we analyse this approach using the log files and feedback from a user study.
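    A minimal sketch of how aspect identification via clustering might look, assuming shot-level text (e.g. transcripts or metadata) is available; the function name suggest_aspects and the use of TF-IDF plus k-means are illustrative assumptions, not the paper's actual method.

```python
# Hypothetical sketch: cluster shot-level text to surface search "aspects"
# and return top terms per cluster as query suggestions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def suggest_aspects(shot_texts, n_aspects=5, terms_per_aspect=3):
    """Cluster shot descriptions; return the top terms of each cluster as suggestions."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(shot_texts)
    km = KMeans(n_clusters=n_aspects, n_init=10, random_state=0).fit(X)
    terms = np.array(vectorizer.get_feature_names_out())
    suggestions = []
    for centroid in km.cluster_centers_:
        top = terms[np.argsort(centroid)[::-1][:terms_per_aspect]]
        suggestions.append(list(top))
    return suggestions  # one list of suggested query terms per aspect

# Example on a toy news-video collection: two aspects emerge (protests vs. weather).
print(suggest_aspects([
    "protest crowd city square", "interview politician studio",
    "flood rescue river", "weather forecast map", "protest police march",
], n_aspects=2))
```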

    What is Usability? A Characterization based on ISO 9241-11 and ISO/IEC 25010

    According to Brooke*, "Usability does not exist in any absolute sense; it can only be defined with reference to particular contexts." That is, one cannot speak of usability without specifying what that particular usability is characterized by. Driven by the feedback of a reviewer at an international conference, I explore how one can precisely specify the kind of usability being investigated in a given setting. I then propose a formalism that defines usability as a quintuple comprising five elements: level of usability metrics, product, users, goals, and context of use. Providing concrete values for these elements then constitutes the investigated type of usability. The use of this formalism is demonstrated in two case studies. * J. Brooke. SUS: A "quick and dirty" usability scale. In P. W. Jordan, B. Thomas, B. A. Weerdmeester, and A. L. McClelland, editors, Usability Evaluation in Industry. Taylor and Francis, 1996.
    Comment: Technical Report; Department of Computer Science, Technische Universität Chemnitz; also available from https://www.tu-chemnitz.de/informatik/service/ib/2015.php.e
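    As a small illustration, the quintuple could be represented as a typed record whose five fields must be given concrete values; the field names and example values below are assumptions, not the report's notation.

```python
# Illustrative sketch of the usability quintuple described above.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Usability:
    metrics: Tuple[str, ...]   # level of usability metrics, e.g. effectiveness, efficiency
    product: str               # the product or system under study
    users: str                 # the user group considered
    goals: str                 # the goals those users pursue
    context_of_use: str        # physical, social, and technical context

# Providing concrete values for the five elements pins down
# the investigated type of usability.
checkout_usability = Usability(
    metrics=("task completion rate", "time on task"),
    product="web shop checkout",
    users="first-time customers",
    goals="complete a purchase",
    context_of_use="mobile browser, at home",
)
```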

    Beyond 2D-grids: a dependence maximization view on image browsing

    Ideally, one would like to perform image search using an intuitive and friendly approach. Many existing image search engines, however, present users with sets of images arranged in some default order on the screen, typically ordered only by relevance to a query. While this certainly has its advantages, arguably a more flexible and intuitive way would be to sort images into arbitrary structures such as grids, hierarchies, or spheres, so that images that are visually or semantically alike are placed together. This paper focuses on designing such a navigation system for image browsers. This is a challenging task because an arbitrary layout structure makes it difficult -- if not impossible -- to compute cross-similarities between images and structure coordinates, the main ingredient of traditional layout approaches. For this reason, we resort to a recently developed machine learning technique: kernelized sorting. It is a general technique for matching pairs of objects from different domains without requiring cross-domain similarity measures and hence elegantly allows sorting images into arbitrary structures. Moreover, we extend it so that some images can be preselected, for instance to form the top of a hierarchy, allowing users to subsequently navigate through the search results at the lower levels in an intuitive way.
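    A minimal sketch of the dependence-maximization idea behind kernelized sorting, assuming a kernel matrix over image features and another over layout positions: maximize an HSIC-style objective tr(K P L P^T) over permutations P by repeated linearization and linear assignment. This simplified iteration is an assumption for illustration, not the paper's exact algorithm or its extension to preselected images.

```python
# Sketch: assign n images to n layout positions by maximizing dependence
# between the image kernel K_img and the position kernel K_pos.
import numpy as np
from scipy.optimize import linear_sum_assignment

def center(K):
    """Center a kernel matrix, as required by the HSIC objective."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernelized_sorting(K_img, K_pos, n_iter=30):
    K, L = center(K_img), center(K_pos)
    n = K.shape[0]
    P = np.eye(n)                      # start from the identity assignment
    for _ in range(n_iter):
        # Linearize tr(K P L P^T) around the current P and solve the
        # resulting linear assignment problem (maximization).
        gain = K @ P @ L
        rows, cols = linear_sum_assignment(-gain)
        P_new = np.zeros_like(P)
        P_new[rows, cols] = 1.0
        if np.allclose(P_new, P):
            break
        P = P_new
    return P.argmax(axis=1)            # image i is placed at position result[i]
```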

    DeltaPhish: Detecting Phishing Webpages in Compromised Websites

    The large-scale deployment of modern phishing attacks relies on the automatic exploitation of vulnerable websites in the wild, to maximize profit while hindering attack traceability, detection and blacklisting. To the best of our knowledge, this is the first work that specifically leverages this adversarial behavior for detection purposes. We show that phishing webpages can be accurately detected by highlighting HTML code and visual differences with respect to other (legitimate) pages hosted within a compromised website. Our system, named DeltaPhish, can be installed as part of a web application firewall to detect the presence of anomalous content on a website after compromise, and eventually prevent access to it. DeltaPhish is also robust against adversarial attempts in which the HTML code of the phishing page is carefully manipulated to evade detection. We empirically evaluate it on more than 5,500 webpages collected in the wild from compromised websites, showing that it is capable of detecting more than 99% of phishing webpages, while misclassifying less than 1% of legitimate pages. We further show that the detection rate remains higher than 70% even under very sophisticated attacks carefully designed to evade our system.
    Comment: Preprint version of the work accepted at ESORICS 201
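    A toy sketch of the underlying idea: compare a candidate page against a known-legitimate page from the same site and classify the "delta". The tag-set Jaccard similarity and helper names below are illustrative assumptions, not the HTML and visual features actually used by DeltaPhish.

```python
# Hypothetical delta-based phishing check: pages injected into a compromised
# site tend to differ structurally from the site's own legitimate pages.
from html.parser import HTMLParser
from sklearn.ensemble import RandomForestClassifier

class TagCollector(HTMLParser):
    """Collect the set of HTML tags appearing in a page."""
    def __init__(self):
        super().__init__()
        self.tags = set()
    def handle_starttag(self, tag, attrs):
        self.tags.add(tag)

def tag_jaccard(html_a, html_b):
    a, b = TagCollector(), TagCollector()
    a.feed(html_a)
    b.feed(html_b)
    union = len(a.tags | b.tags) or 1
    return len(a.tags & b.tags) / union

def delta_features(candidate_html, homepage_html):
    # Low similarity to the site's own pages hints at injected phishing content.
    return [tag_jaccard(candidate_html, homepage_html)]

# Training on labelled (candidate, homepage) pairs would then look like:
# clf = RandomForestClassifier().fit([delta_features(c, h) for c, h in pairs], labels)
```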