
    What makes a reporter human? : A research agenda for augmented journalism

    In French: Qu’est-ce qui fait qu’un journaliste est humain ? Un programme de recherche pour un journalisme augmenté

    Social Epistemology as a New Paradigm for Journalism and Media Studies

    Journalism and media studies lack robust theoretical concepts for studying journalistic knowledge generation. More specifically, conceptual challenges attend the emergence of big data and algorithmic sources of journalistic knowledge. A family of frameworks apt to this challenge is provided by “social epistemology”: a young philosophical field which regards society’s participation in knowledge generation as inevitable. Social epistemology offers the best of both worlds for journalists and media scholars: a thorough familiarity with the biases and failures of obtaining knowledge, and a strong orientation toward best practices in knowledge acquisition and truth-seeking. This paper articulates the lessons of social epistemology for two central nodes of knowledge acquisition in contemporary journalism: human-mediated knowledge and technology-mediated knowledge.

    Concordance as evidence in the Watson for Oncology decision-support system

    Machine learning platforms have emerged as a new promissory technology that some argue will revolutionize work practices across a broad range of professions, including medical care. During the past few years, IBM has been testing its Watson for Oncology platform at several oncology departments around the world. Published reports, news stories, as well as our own empirical research show that in some cases, the levels of concordance over recommended treatment protocols between the platform and human oncologists have been quite low. Other studies supported by IBM claim concordance rates as high as 96%. We use the Watson for Oncology case to examine the practice of using concordance levels between tumor boards and a machine learning decision-support system as a form of evidence. We address a challenge related to the epistemic authority between oncologists on tumor boards and the Watson for Oncology platform by arguing that the use of concordance levels as a form of evidence of quality or trustworthiness is problematic. Although the platform provides links to the literature from which it draws its conclusions, it obfuscates the scoring criteria that it uses to value some studies over others. In other words, the platform “black boxes” the values that are coded into its scoring system.

    Human vs. AI: Investigating Consumers’ Context-Dependent Purchase Intentions for Algorithm-Created Content

    Increasingly digitalized media consumption is pressuring profitability in the content industry. Technological advancements in Artificial Intelligence (AI) offer the potential to cut costs by using algorithms to create content. Yet before implementing algorithm-created content, content providers should be aware of the impact of algorithmic authorship on consumers’ intention to purchase that content. Accordingly, this study investigates user attitudes toward algorithmic content creation and their dependence on the underlying utilitarian or hedonic consumption context. In our online experiment (N=298), we find evidence for a positive effect of algorithmic authorship on consumers’ purchase intention. Although overall purchase intention is context-dependent, this algorithm appreciation is independent of the content consumption context. Our study thus suggests that consumers appreciate algorithm-created content, and our results provide insights into the benefits of leveraging algorithms to maintain content providers’ profitability.

    Falling for fake news: investigating the consumption of news via social media

    In the so-called ‘post-truth’ era, characterized by a loss of public trust in various institutions and the rise of ‘fake news’ disseminated via the internet and social media, individuals may face uncertainty about the veracity of available information, whether it be satire or malicious hoax. We investigate attitudes to news delivered by social media, and the verification strategies subsequently applied, or not applied, by individuals. A survey reveals that two thirds of respondents regularly consumed news via Facebook, and that one third had at some point come across fake news that they initially believed to be true. An analysis task involving news presented via Facebook reveals a diverse range of judgement-forming strategies, with participants relying on personal judgements of plausibility and scepticism about sources and journalistic style. This reflects a shift away from traditional methods of accessing the news, and highlights the difficulties in combating the spread of fake news.

    The Role of Large Language Models in the Recognition of Territorial Sovereignty: An Analysis of the Construction of Legitimacy

    We examine the potential impact of Large Language Models (LLMs) on the recognition of territorial sovereignty and its legitimization. We argue that while technology tools such as Google Maps and LLMs like OpenAI's ChatGPT are often perceived as impartial and objective, this perception is flawed, as AI algorithms reflect the biases of their designers or of the data they are built on. We also stress the importance of evaluating the actions and decisions of AI systems and of the multinational companies that offer them, which play a crucial role in legitimizing and establishing ideas in the collective imagination. Our paper highlights the case of three controversial territories: Crimea, the West Bank, and Transnistria, by comparing the responses of ChatGPT against Wikipedia information and United Nations resolutions. We contend that the emergence of AI-based tools like LLMs is leading to a new scenario in which emerging technology consolidates power and influences our understanding of reality. Therefore, it is crucial to monitor and analyze the role of AI in the construction of legitimacy and the recognition of territorial sovereignty.

    Blurring Boundaries: Exploring Tweets as a Legitimate Journalism Artifact

    This study explores journalists’ use of Twitter and what it means for their craft. Based on 8 weeks of newsroom observation and more than a dozen in-depth interviews with reporters and editors at a large metro newspaper, the study found that journalists held conflicting views on whether to accept tweets, a form of snippet artifact, as a legitimate journalism artifact, blurring the artifact boundary. Relatedly, journalists faced uncertainties and ambiguities regarding the implications of such snippet artifacts for the journalism craft and its core mission of informing the public.