
    Evaluating the End-User Experience of Private Browsing Mode

    Nowadays, all major web browsers have a private browsing mode. However, its benefits and limitations are not well understood. Through survey studies, prior work has found that most users are either unaware of private browsing or do not use it. Further, those who do use private browsing generally have misconceptions about what protection it provides. However, prior work has not investigated why users misunderstand the benefits and limitations of private browsing. In this work, we do so by designing and conducting a three-part study: (1) an analytical approach combining cognitive walkthrough and heuristic evaluation to inspect the user interface of private mode in different browsers; (2) a qualitative, interview-based study to explore users' mental models of private browsing and its security goals; (3) a participatory design study to investigate why existing browser disclosures, the in-browser explanations of private browsing mode, do not communicate the security goals of private browsing to users. Participants critiqued the browser disclosures of three web browsers (Brave, Firefox, and Google Chrome) and then designed new ones. We find that the user interface of private mode in different web browsers violates several well-established design guidelines and heuristics. Further, most participants had incorrect mental models of private browsing, which influenced their understanding and usage of private mode. Additionally, we find that existing browser disclosures are not only vague but also misleading. None of the three studied browser disclosures communicates or explains the primary security goal of private browsing. Drawing on the results of our user study, we extract a set of design recommendations that we encourage browser designers to validate, in order to design more effective and informative browser disclosures related to private mode.

    ImageSieve: Exploratory search of museum archives with named entity-based faceted browsing

    Over the last few years, faceted search has emerged as an attractive alternative to the traditional "text box" search and has become one of the standard ways of interacting on many e-commerce sites. However, these applications of faceted search are limited to domains where the objects of interest have already been classified along several independent dimensions, such as price, year, or brand. While automatic approaches to generating faceted search interfaces have been proposed, it is not yet clear to what extent the automatically produced interfaces will be useful to real users, and whether their quality can match or surpass their manually produced predecessors. The goal of this paper is to introduce an exploratory search interface called ImageSieve, which shares many features with traditional faceted browsing, but can function without traditional faceted metadata. ImageSieve uses automatically extracted and classified named entities, which play important roles in many domains (such as news collections and image archives). We describe one specific application of ImageSieve for image search. Here, named entities extracted from the descriptions of the retrieved images are used to organize a faceted browsing interface, which then helps users make sense of and further explore the retrieved images. The results of a user study of ImageSieve demonstrate that a faceted search system based on named entities can help users explore large collections and find relevant information more effectively.
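
    As a rough illustration of the idea behind ImageSieve (not the authors' implementation), the following Python sketch extracts named entities from free-text image descriptions with spaCy and groups images into facets keyed by entity type and entity text; the sample records and function names are assumptions made for this example.

        # Hypothetical sketch of named-entity-based faceting; assumes spaCy and
        # its en_core_web_sm model are installed. Not the ImageSieve codebase.
        from collections import defaultdict
        import spacy

        nlp = spacy.load("en_core_web_sm")

        # Toy image records: an id plus a free-text description.
        images = [
            {"id": 1, "desc": "Vincent van Gogh painted this scene in Arles in 1888."},
            {"id": 2, "desc": "A portrait by Rembrandt held by the Rijksmuseum in Amsterdam."},
        ]

        def build_facets(records):
            # Map entity type (PERSON, GPE, DATE, ...) -> entity text -> image ids.
            facets = defaultdict(lambda: defaultdict(set))
            for rec in records:
                for ent in nlp(rec["desc"]).ents:
                    facets[ent.label_][ent.text].add(rec["id"])
            return facets

        def filter_by_facet(facets, label, value):
            # Return ids of images whose descriptions mention the chosen entity.
            return sorted(facets.get(label, {}).get(value, set()))

        facets = build_facets(images)
        print(sorted(facets["PERSON"]))                     # facet values for people
        print(filter_by_facet(facets, "GPE", "Amsterdam"))  # images matching a place facet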

    User-Thesaurus Interaction in a Web-Based Database: An Evaluation of Users' Term Selection Behaviour

    A major challenge faced by users during the information search and retrieval process is the selection of search terms for query formulation and expansion. Thesauri are recognised as one source of search terms that can assist users in query construction and expansion. As the number of electronic thesauri attached to information retrieval systems has grown, a range of interface facilities and features has been developed to aid users in formulating their queries. The pilot study reported here aimed to explore and evaluate how a thesaurus-enhanced search interface assisted end-users in selecting search terms. Specifically, it focused on evaluating users' attitudes toward both the thesaurus and its interface as tools for facilitating search term selection for query expansion. The thesaurus-based searching and browsing behaviours adopted by users while interacting with a thesaurus-enhanced search interface were also examined.
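
    The study evaluates how users select terms rather than any particular algorithm, but a minimal sketch of thesaurus-assisted query expansion may help fix the idea. The Python example below uses WordNet (via NLTK) purely as a stand-in for the thesaurus attached to the search interface; the function name and shortlist size are illustrative assumptions.

        # Minimal sketch of suggesting expansion terms from a thesaurus; WordNet
        # stands in for the domain thesaurus. Requires nltk plus the 'wordnet'
        # corpus (nltk.download('wordnet')).
        from nltk.corpus import wordnet as wn

        def suggest_expansions(term, max_terms=5):
            # Collect synonyms and narrower terms (hyponyms) as candidate
            # expansion terms for the user to review and select.
            candidates = []
            for syn in wn.synsets(term):
                candidates.extend(l.replace("_", " ") for l in syn.lemma_names())
                for hypo in syn.hyponyms():
                    candidates.extend(l.replace("_", " ") for l in hypo.lemma_names())
            unique = [t for t in dict.fromkeys(candidates) if t.lower() != term.lower()]
            return unique[:max_terms]

        # Prints a shortlist of candidate expansion terms for the user to pick from.
        print(suggest_expansions("archive"))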

    Overview of VideoCLEF 2009: New perspectives on speech-based multimedia content enrichment

    VideoCLEF 2009 offered three tasks related to enriching video content for improved multimedia access in a multilingual environment. For each task, video data (Dutch-language television, predominantly documentaries) accompanied by speech recognition transcripts were provided. The Subject Classification Task involved automatic tagging of videos with subject theme labels. The best performance was achieved by approaching subject tagging as an information retrieval task and using both speech recognition transcripts and archival metadata. Alternatively, classifiers were trained using either the training data provided or data collected from Wikipedia or via general Web search. The Affect Task involved detecting narrative peaks, defined as points where viewers perceive heightened dramatic tension. The task was carried out on the “Beeldenstorm” collection, containing 45 short-form documentaries on the visual arts. The best runs exploited affective vocabulary and audience-directed speech. Other approaches included using topic changes, elevated speaking pitch, increased speaking intensity and radical visual changes. The Linking Task, also called “Finding Related Resources Across Languages,” involved linking video to material on the same subject in a different language. Participants were provided with a list of multimedia anchors (short video segments) in the Dutch-language “Beeldenstorm” collection and were expected to return target pages drawn from English-language Wikipedia. The best-performing methods used the transcript of the speech spoken during the multimedia anchor to build a query to search an index of the Dutch-language Wikipedia. The Dutch Wikipedia pages returned were used to identify related English pages. Participants also experimented with pseudo-relevance feedback, query translation and methods that targeted proper names.
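
    To make the linking idea concrete, here is a hedged Python sketch of the general approach described above: treat the anchor's speech transcript as a query over an index of Dutch Wikipedia page texts, then map the top-ranked Dutch titles to English titles through interlanguage links. The toy pages and the langlinks table are illustrative stand-ins, not VideoCLEF resources.

        # Sketch only: scikit-learn TF-IDF over a toy "Dutch Wikipedia" index,
        # with a hand-made interlanguage-link table. Illustrative data.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        nl_pages = {
            "Rembrandt": "Rembrandt van Rijn was een Nederlandse kunstschilder en etser.",
            "De Nachtwacht": "De Nachtwacht is een schilderij van Rembrandt uit 1642.",
        }
        langlinks = {"Rembrandt": "Rembrandt", "De Nachtwacht": "The Night Watch"}

        titles = list(nl_pages)
        vectorizer = TfidfVectorizer()
        index = vectorizer.fit_transform(nl_pages[t] for t in titles)

        def link_anchor(transcript, k=1):
            # Rank Dutch pages by cosine similarity to the anchor transcript,
            # then follow interlanguage links to English page titles.
            scores = cosine_similarity(vectorizer.transform([transcript]), index)[0]
            ranked = sorted(zip(titles, scores), key=lambda pair: pair[1], reverse=True)
            return [(langlinks.get(title, title), round(score, 3)) for title, score in ranked[:k]]

        print(link_anchor("een schilderij van Rembrandt uit de zeventiende eeuw"))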

    Mobile access to personal digital photograph archives

    Handheld computing devices are becoming highly connected and now offer high-capacity storage, which allows them to support storage of, and access to, personal photo archives. However, the only means for mobile device users to browse such archives is typically a simple one-by-one scroll through image thumbnails in the order in which they were taken, or manual organisation into folders. In this paper we describe a system for context-based browsing of personal digital photo archives. Photos are labelled with the GPS location and time at which they are taken, and this is used to derive other context-based metadata such as weather and daylight conditions. We present our prototype system for mobile digital photo retrieval, together with an experimental evaluation illustrating the utility of location information for effective personal photo retrieval.
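
    As a hedged illustration of deriving context metadata of this kind (not the authors' system), the Python sketch below classifies a photo's capture time as daylight or darkness from its GPS fix, using the astral package to compute local sunrise and sunset; the photo record and label names are assumptions made for this example. Weather conditions would similarly be looked up from an external service using the same location and timestamp.

        # Sketch: daylight/darkness label from GPS position and capture time.
        # Assumes the 'astral' package (2.x); all times are handled in UTC.
        import datetime as dt
        from astral import LocationInfo
        from astral.sun import sun

        def daylight_label(lat, lon, taken_utc):
            # Compute sunrise/sunset (UTC) for the photo's location and date,
            # then check whether the capture time falls between them.
            observer = LocationInfo(latitude=lat, longitude=lon).observer
            times = sun(observer, date=taken_utc.date())
            return "daylight" if times["sunrise"] <= taken_utc <= times["sunset"] else "darkness"

        photo = {"lat": 53.35, "lon": -6.26,
                 "taken": dt.datetime(2009, 6, 21, 14, 30, tzinfo=dt.timezone.utc)}
        print(daylight_label(photo["lat"], photo["lon"], photo["taken"]))  # -> daylight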