
    User interface considerations for browser-based just-in-time-retrieval

    With the availability of free online enrichment services, the injection of additional, external resources into existing Web content is becoming more and more widespread. For the specific area of just-in-time retrieval of digital resources based on web page content, there are no specific guidelines for how to design and integrate the additional user interface components. In this paper, we conceptualise the related user interface issues, investigating two central questions: (i) how can a user be visually notified that additional results are available, and (ii) with which user interface elements should the results be presented? Concretely, we identified four different notification styles and six different result presentation styles. In a survey-based study with 75 participants we elicited the users' preferences, revealing a clear preference for one result presentation style (split pane) and a strong preference for three notification styles (notification bubble, icon appearance, and change of an icon's appearance). The latter preferences are related to the preferred browser. The results can serve as a guideline for designing web-based user interfaces for just-in-time retrieval.

    Towards a Feature-Rich Data Set for Personalized Access to Long-Tail Content

    Personalized data access has become one of the core challenges for intelligent information access, especially for non-mainstream long-tail content, as can be found in digital libraries. One of the main reasons that personalization remains a difficult task is the lack of standardized test corpora. In this paper we provide a comprehensive analysis of feature requirements for personalization, together with a data collection tool for generating user models and collecting data for personalizing search and optimizing recommender systems in the long tail. Based on the feature analysis, we provide a feature-rich, publicly available data set covering web content consumption and creation tasks. Our data set contains user models for eight users, including performed tasks, relevant topics for each task, relevance ratings, and relations between focus text and search queries. Altogether, the data set consists of 217 tasks, 4,562 queries and over 15,000 ratings. On this data we perform automatic query prediction from web page content, achieving an accuracy of 89% using term identity, capitalization and part-of-speech tags as features. The results of the feature analysis can serve as a guideline for feature collection for long-tail content personalization, and the provided data set as a gold standard for learning and evaluating user models as well as for optimizing recommender or search engines for long-tail domains.
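    As a rough illustration of the kind of query prediction described in this abstract, the following minimal Python sketch labels each page token as query-relevant or not, using term identity, capitalization and part-of-speech tags as features. The tokenizer, classifier and toy training pairs are illustrative assumptions, not the authors' actual pipeline.

    import nltk
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    nltk.download("punkt", quiet=True)
    nltk.download("averaged_perceptron_tagger", quiet=True)

    def build_examples(page_text, query):
        # One training example per page token: predict whether the token
        # also occurs in the query the user issued for that page.
        tokens = nltk.word_tokenize(page_text)
        pos_tags = [tag for _, tag in nltk.pos_tag(tokens)]
        query_terms = {t.lower() for t in nltk.word_tokenize(query)}
        features = [
            {"term": tok.lower(), "capitalized": tok[:1].isupper(), "pos": pos}
            for tok, pos in zip(tokens, pos_tags)
        ]
        labels = [int(tok.lower() in query_terms) for tok in tokens]
        return features, labels

    # Hypothetical (page text, issued query) pairs standing in for the data set.
    pairs = [
        ("The Long Tail describes niche content in digital libraries.",
         "long tail digital libraries"),
        ("Personalized access requires explicit user models.",
         "personalized user models"),
    ]

    X, y = [], []
    for text, query in pairs:
        feats, labels = build_examples(text, query)
        X.extend(feats)
        y.extend(labels)

    vectorizer = DictVectorizer()
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(vectorizer.fit_transform(X), y)

    With real data, held-out accuracy of such a token classifier could then be compared against the 89% reported above.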

    Guidance in Radiology Report Summarization: An Empirical Evaluation and Error Analysis

    Automatically summarizing radiology reports into a concise impression can reduce the manual burden on clinicians and improve the consistency of reporting. Previous work aimed to enhance content selection and factuality through guided abstractive summarization. However, two key issues persist. First, current methods rely heavily on domain-specific resources to extract the guidance signal, limiting their transferability to domains and languages where those resources are unavailable. Second, while automatic metrics like ROUGE show progress, we lack a good understanding of the errors and failure modes in this task. To bridge these gaps, we first propose a domain-agnostic guidance signal in the form of variable-length extractive summaries. Our empirical results on two English benchmarks demonstrate that this guidance signal improves upon unguided summarization while being competitive with domain-specific methods. Additionally, we run an expert evaluation of four systems according to a taxonomy of 11 fine-grained errors. We find that the most pressing differences between automatic summaries and those of radiologists relate to content selection, including omissions (up to 52%) and additions (up to 57%). We hypothesize that latent reporting factors and corpus-level inconsistencies may prevent models from reliably learning content selection from the available data, presenting promising directions for future work.
    Comment: Accepted at INLG202
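    The following minimal Python sketch conveys the general idea of such an extractive guidance signal; it is not the paper's implementation. It selects a variable number of salient sentences from a report's findings and prepends them to the model input so that a generic abstractive summarizer can condition on them. The sentence-scoring heuristic and the separator token are assumptions made for illustration.

    import re
    from collections import Counter

    def extractive_guidance(findings):
        # Split the findings into sentences and score each one by the mean
        # within-report frequency of its words (a stand-in salience heuristic).
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", findings) if s.strip()]
        word_counts = Counter(w.lower() for w in re.findall(r"\w+", findings))

        def score(sentence):
            words = re.findall(r"\w+", sentence)
            return sum(word_counts[w.lower()] for w in words) / max(len(words), 1)

        scores = [score(s) for s in sentences]
        mean_score = sum(scores) / max(len(scores), 1)
        # Keep every sentence at or above the mean, so the number of guidance
        # sentences varies with the report rather than being fixed in advance.
        return [s for s, sc in zip(sentences, scores) if sc >= mean_score]

    def build_model_input(findings, sep="<guidance>"):
        # Prepend the extracted sentences to the findings; the downstream
        # abstractive summarizer sees both and is nudged toward that content.
        return " ".join(extractive_guidance(findings)) + f" {sep} " + findings

    findings = ("Heart size is normal. There is a small left pleural effusion. "
                "No pneumothorax is seen.")
    print(build_model_input(findings))

    In practice the guidance would be produced by a learned extractor rather than a frequency heuristic, but the interface to the abstractive model stays the same: guidance sentences, a separator, then the source report.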