
    Group Recommendations: Survey and Perspectives

    The popularity of group recommender systems has increased in recent years. More and more social activity is generated by users over the Web, so group recommendation is no longer researched only in domains such as TV, music, or holidays; collaborative learning support, digital libraries, and other domains also appear promising for group recommendation. Moreover, principles of group recommenders can be used to overcome some shortcomings of single-user recommendation, such as the cold-start problem. Numerous group recommenders have been proposed; they differ in their application domains, which are specific in their group characteristics. Today's group recommenders do not yet exploit the power of social aspects (group structure, social status, etc.) that can be extracted and derived from the group. We provide a survey of group recommendation principles for the Web domain and discuss trends and perspectives in this field.
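
    As an illustration of one core principle such surveys typically cover, aggregating individual predictions into a group recommendation, here is a minimal Python sketch of two common aggregation strategies (average and least misery); the function, users, and scores are illustrative assumptions, not taken from the survey.

        from typing import Dict

        def aggregate_group_scores(
            individual_scores: Dict[str, Dict[str, float]],
            strategy: str = "average",
        ) -> Dict[str, float]:
            """Combine per-user predicted ratings into one group score per item."""
            items = {item for scores in individual_scores.values() for item in scores}
            group_scores = {}
            for item in items:
                ratings = [scores.get(item, 0.0) for scores in individual_scores.values()]
                if strategy == "average":          # balance everyone's preferences
                    group_scores[item] = sum(ratings) / len(ratings)
                elif strategy == "least_misery":   # avoid items someone strongly dislikes
                    group_scores[item] = min(ratings)
                else:
                    raise ValueError(f"unknown strategy: {strategy}")
            return group_scores

        # Hypothetical example: two users, two candidate movies
        scores = {"alice": {"movie_a": 4.5, "movie_b": 2.0},
                  "bob":   {"movie_a": 3.0, "movie_b": 4.0}}
        print(aggregate_group_scores(scores, "least_misery"))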

    Blog Style Classification: Refining Affective Blogs

    In the constantly growing blogosphere, with no restrictions on form or topic, a number of writing styles and genres have emerged. Recognition and classification of these styles have become significant for information processing, with the aim of improving blog search or sentiment mining. One of the main issues in this field is the detection of informative and affective articles. However, such a differentiation no longer suffices. In this paper we extend the differentiation and suggest a fine-grained set of subcategories for affective articles. We propose and evaluate a classification method employing novel lexical, morphological, lightweight syntactic, and structural features of written text. The results show that our method outperforms existing approaches.
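
    For readers unfamiliar with feature-based style classification, the sketch below shows how lexical and simple structural features of a post can be combined in a scikit-learn pipeline; the features, labels, and training examples are illustrative placeholders, not the feature set or data evaluated in the paper.

        import numpy as np
        from sklearn.base import BaseEstimator, TransformerMixin
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import FeatureUnion, Pipeline

        class StructuralFeatures(BaseEstimator, TransformerMixin):
            """Toy structural features: post length and exclamation-mark density."""
            def fit(self, X, y=None):
                return self
            def transform(self, X):
                return np.array([[len(text), text.count("!") / max(len(text), 1)]
                                 for text in X])

        pipeline = Pipeline([
            ("features", FeatureUnion([
                ("lexical", TfidfVectorizer(ngram_range=(1, 2))),
                ("structural", StructuralFeatures()),
            ])),
            ("classifier", LogisticRegression(max_iter=1000)),
        ])

        # Tiny illustrative training set: affective vs. informative posts
        posts = ["I am so thrilled about this!!!", "The library exposes a REST API."]
        labels = ["affective", "informative"]
        pipeline.fit(posts, labels)
        print(pipeline.predict(["This release makes me really happy!"]))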

    Poster: Discovering Code Dependencies by Harnessing Developer's Activity

    Monitoring a software developer's interactions in an integrated development environment can reveal new information about developers and the software being developed. In this paper we present an approach for identifying potential source code dependencies solely from interaction data. We identify three kinds of potential dependencies and additionally assign them to the developer's activity, to reveal detailed task-related connections in the source code. Interaction data as a source allows us to identify these dependency candidates even for dynamically typed programming languages, or across multiple languages in the source code. After initial evaluations with positive results, we continue collecting data in a professional environment of Web developers and evaluating our approach. Index Terms—Source code dependency, interaction data, task context, implicit feedback, dynamic typing.
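
    A minimal sketch of the general idea, treating files touched within the same task context as candidate dependencies, is shown below; the event format, threshold, and file names are hypothetical and do not reproduce the authors' three dependency kinds.

        from collections import defaultdict
        from itertools import combinations

        # Each interaction event: (task_id, file_path) -- e.g., a file edited or
        # selected in the IDE while the developer works on a task.
        events = [
            ("TASK-1", "cart/checkout.js"),
            ("TASK-1", "cart/price_rules.js"),
            ("TASK-1", "api/orders.py"),
            ("TASK-2", "cart/checkout.js"),
            ("TASK-2", "api/orders.py"),
        ]

        def candidate_dependencies(events, min_shared_tasks=2):
            """Files touched together in enough tasks become dependency candidates,
            regardless of the language they are written in."""
            files_per_task = defaultdict(set)
            for task, path in events:
                files_per_task[task].add(path)

            pair_counts = defaultdict(int)
            for files in files_per_task.values():
                for a, b in combinations(sorted(files), 2):
                    pair_counts[(a, b)] += 1

            return {pair: n for pair, n in pair_counts.items() if n >= min_shared_tasks}

        print(candidate_dependencies(events))
        # {('api/orders.py', 'cart/checkout.js'): 2}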

    Automated, not Automatic: Needs and Practices in European Fact-checking Organizations as a basis for Designing Human-centered AI Systems

    To mitigate the negative effects of false information more effectively, the development of automated AI (artificial intelligence) tools assisting fact-checkers is needed. Despite the existing research, there is still a gap between fact-checking practitioners' needs and pains and the current AI research. We aspire to bridge this gap by employing methods of information behavior research to identify implications for designing better human-centered AI-based supporting tools. In this study, we conducted semi-structured in-depth interviews with Central European fact-checkers. The information behavior and the requirements for desired supporting tools were analyzed using iterative bottom-up content analysis, drawing on techniques from grounded theory. The most significant needs were validated with a survey extended to fact-checkers from across Europe, in which we collected 24 responses from 20 European countries, i.e., 62% of active European IFCN (International Fact-Checking Network) signatories. Our contributions are theoretical as well as practical. First, by mapping our findings about the needs of fact-checking organizations to the relevant tasks for AI research, we have shown that the methods of information behavior research are relevant for studying the processes in these organizations and that these methods can be used to bridge the gap between the users and AI researchers. Second, we have identified fact-checkers' needs and pains, focusing on previously unexplored dimensions and emphasizing the needs of fact-checkers from Central and Eastern Europe as well as from low-resource language groups, which has implications for the development of new resources (datasets) as well as for the focus of AI research in this domain.

    Considering temporal aspects in recommender systems: a survey

    The widespread use of temporal aspects in user modeling indicates their importance, and their consideration has proven highly effective in various domains related to user modeling, especially in recommender systems. Still, past and ongoing research, spread over several decades, has produced multiple ad-hoc solutions but no common understanding of the issue. There is no standardization, and there is often little commonality in how temporal aspects are considered in different applications. This may ultimately lead to application developers defining ad-hoc solutions for the problems at hand, sometimes missing or neglecting aspects that have proved effective in similar cases. Therefore, a comprehensive survey of the consideration of temporal aspects in recommender systems is required. In this work, we provide an overview of various time-related aspects, categorize existing research, present a temporal abstraction, and point to gaps that require future research. We anticipate this survey will become a reference point for researchers and practitioners alike when considering the potential application of temporal aspects in their personalized applications.
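
    As one concrete example of a temporal aspect commonly considered in recommender systems, the sketch below applies an exponential time decay so that older ratings contribute less to an item's score; the half-life parameter and data structures are illustrative assumptions, not taken from the survey.

        import math
        from dataclasses import dataclass

        @dataclass
        class Rating:
            item: str
            value: float       # e.g., 1-5 stars
            age_days: float    # how long ago the rating was given

        def time_decayed_item_scores(ratings, half_life_days=30.0):
            """Weight each rating by exp(-lambda * age), with lambda set by a half-life."""
            decay = math.log(2) / half_life_days
            weighted_sum, weight_total = {}, {}
            for r in ratings:
                w = math.exp(-decay * r.age_days)
                weighted_sum[r.item] = weighted_sum.get(r.item, 0.0) + w * r.value
                weight_total[r.item] = weight_total.get(r.item, 0.0) + w
            return {item: weighted_sum[item] / weight_total[item] for item in weighted_sum}

        # Recent feedback dominates; the 120-day-old rating barely moves the score.
        ratings = [Rating("news_site", 5.0, age_days=2),
                   Rating("news_site", 2.0, age_days=120),
                   Rating("cooking_blog", 4.0, age_days=10)]
        print(time_decayed_item_scores(ratings))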

    Disinformation Capabilities of Large Language Models

    Automated disinformation generation is often listed as one of the risks of large language models (LLMs). The theoretical ability to flood the information space with disinformation content might have dramatic consequences for democratic societies around the world. This paper presents a comprehensive study of the disinformation capabilities of the current generation of LLMs to generate false news articles in English. In our study, we evaluated the capabilities of 10 LLMs using 20 disinformation narratives. We evaluated several aspects of the LLMs: how good they are at generating news articles, how strongly they tend to agree or disagree with the disinformation narratives, how often they generate safety warnings, etc. We also evaluated the ability of detection models to detect these articles as LLM-generated. We conclude that LLMs are able to generate convincing news articles that agree with dangerous disinformation narratives.
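
    A rough sketch of what such an evaluation loop could look like is given below; the `generate` callback, the prompt template, and the keyword-based safety-warning heuristic are placeholders and do not reflect the paper's actual protocol or metrics.

        from typing import Callable, Dict, List

        SAFETY_MARKERS = ["i cannot", "i can't", "as an ai", "misinformation"]

        def evaluate_models(
            models: List[str],
            narratives: List[str],
            generate: Callable[[str, str], str],  # (model_name, prompt) -> generated text
        ) -> List[Dict[str, object]]:
            """Prompt every model with every narrative and record simple indicators."""
            results = []
            for model in models:
                for narrative in narratives:
                    prompt = f"Write a news article about: {narrative}"
                    article = generate(model, prompt)
                    results.append({
                        "model": model,
                        "narrative": narrative,
                        "length_words": len(article.split()),
                        "safety_warning": any(m in article.lower() for m in SAFETY_MARKERS),
                    })
            return results

        # Usage with a stub generator (a real run would call an actual LLM API here)
        stub = lambda model, prompt: "I cannot help create misinformation."
        print(evaluate_models(["model-a"], ["vaccines cause illness"], stub))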

    Eye-tracking en masse: Group user studies, lab infrastructure, and practices

    The costs of eye-tracking technologies are steadily decreasing, which allows research institutions to obtain multiple eye-tracking devices. Several laboratories with multiple eye-trackers have already been established, and researchers are beginning to recognize the subfield of group eye-tracking. In comparison to single-participant eye-tracking, group eye-tracking brings new technical and methodological challenges, and solutions to these challenges are far from established within the research community. In this paper, we present the Group Studies system, which manages the infrastructure of the group eye-tracking laboratory at the User Experience and Interaction Research Center (UXI) at the Slovak University of Technology in Bratislava. We discuss the functional and architectural characteristics of the system. Furthermore, we illustrate our infrastructure with one of our past studies. With this paper, we also publish the source code and the documentation of our system for re-use.
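
    To illustrate one architectural concern of group eye-tracking, collecting samples from several trackers onto a common timeline, here is a toy Python sketch; it simulates trackers with threads and random data and is not based on the Group Studies system's actual implementation.

        import queue
        import random
        import threading
        import time

        # A shared queue into which every tracker thread pushes timestamped samples,
        # so a group session can later be analyzed on one common timeline.
        samples = queue.Queue()

        def tracker_client(participant_id: str, duration_s: float = 0.2):
            """Stand-in for one eye-tracker; a real client would read device data."""
            end = time.monotonic() + duration_s
            while time.monotonic() < end:
                samples.put({
                    "participant": participant_id,
                    "timestamp": time.monotonic(),   # shared clock across the lab
                    "x": random.random(),            # normalized gaze coordinates
                    "y": random.random(),
                })
                time.sleep(0.02)

        threads = [threading.Thread(target=tracker_client, args=(f"P{i}",)) for i in range(3)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(f"collected {samples.qsize()} samples from 3 simulated trackers")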