
    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill in gaps in their ability, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with more system information through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) People trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating the system is not confident in its prediction; (2) Four different types of system information all increased people's trust in recommendations; and (3) Math and logic skills may be as important as ML for decision-makers working with ML recommendations.

    Dead Angles of Personalization, Integrating Curation Algorithms in the Fabric of Design

    The amount of information available on the web is too vast for individuals to be able to process it all. To cope with this issue, digital platforms started relying on algorithms to curate, filter and recommend content to their users. This problem has generally been envisioned from a technical perspective, as an optimization issue, and has been mostly untouched by design considerations. Through 16 interviews with daily users of platforms, we analyze how curation algorithms influence their daily experience and the strategies they use to try to adapt them to their own needs. Based on these empirical findings, we propose a set of four speculative design alternatives to explore how we can integrate curation algorithms as part of the larger fabric of design on the web. By exploring interactions to counter the binary nature of curation algorithms, their uniqueness, their anti-historicity and their implicit data collection, we provide tools to bridge the current divide between curation algorithms and people.

    Shared visiting in Equator city

    In this paper we describe an infrastructure and prototype system for sharing of visiting experiences across multiple media. The prototype supports synchronous co-visiting by physical and digital visitors, with digital access via either the World Wide Web or 3-dimensional graphics.

    Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For

    Algorithms, particularly machine learning (ML) algorithms, are increasingly important to individuals' lives, but have caused a range of concerns revolving mainly around unfairness, discrimination and opacity. Transparency in the form of a "right to an explanation" has emerged as a compellingly attractive remedy since it intuitively promises to open the algorithmic "black box" to promote challenge, redress, and hopefully heightened accountability. Amidst the general furore over algorithmic bias we describe, any remedy in a storm has looked attractive. However, we argue that a right to an explanation in the EU General Data Protection Regulation (GDPR) is unlikely to present a complete remedy to algorithmic harms, particularly in some of the core "algorithmic war stories" that have shaped recent attitudes in this domain. Firstly, the law is restrictive, unclear, or even paradoxical concerning when any explanation-related right can be triggered. Secondly, even navigating this, the legal conception of explanations as "meaningful information about the logic of processing" may not be provided by the kind of ML "explanations" computer scientists have developed, partially in response. ML explanations are restricted by the type of explanation sought, the dimensionality of the domain and the type of user seeking an explanation. However, "subject-centric explanations" (SCEs) focussing on particular regions of a model around a query show promise for interactive exploration, as do explanation systems based on learning a model from outside rather than taking it apart (pedagogical versus decompositional explanations) in dodging developers' worries of intellectual property or trade secrets disclosure. Based on our analysis, we fear that the search for a "right to an explanation" in the GDPR may be at best distracting, and at worst nurture a new kind of "transparency fallacy." But all is not lost. We argue that other parts of the GDPR related (i) to the right to erasure ("right to be forgotten") and the right to data portability; and (ii) to privacy by design, Data Protection Impact Assessments and certification and privacy seals, may have the seeds we can use to make algorithms more responsible, explicable, and human-centered.
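
    The pedagogical, subject-centric approach the abstract contrasts with decompositional explanation can be made concrete with a minimal sketch: learn a shallow, human-readable surrogate from a black-box model's own predictions in a neighbourhood around one query, without opening the model itself. Everything below (the random-forest stand-in, the neighbourhood width, the feature names) is an illustrative assumption, not code from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Train an opaque "black box" on synthetic data (a stand-in for any model).
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Subject-centric step: sample points around a single query and fit a
# shallow tree to the black box's predictions in that region only.
rng = np.random.default_rng(0)
query = X[0]
neighbourhood = query + rng.normal(scale=0.5, size=(500, X.shape[1]))
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighbourhood, black_box.predict(neighbourhood))

# The tree approximates the model's behaviour near the query from the
# outside (pedagogical), rather than by taking it apart (decompositional).
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```

    Because the surrogate is trained only on the model's inputs and outputs, nothing about the model's internals, and hence no trade secret, needs to be disclosed, which is the IP-related advantage the abstract alludes to.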

    Culture Organism or Techno-Feudalism: How Growing Addictions and Artificial Intelligence Shape Contemporary Society

    The book describes our tech-driven society as the Culture Organism, whose most significant social challenge is the repression of the individual by corrupt social agents. This is connected to the appearance of light and mild addictions, discovered through quantitative inquiry and put into a wider context: the outcomes identified include social polarization, the appearance of echo chambers, the spread of misinformation and fake news, the rise of populist leaders and decreased democratic capacity. Theories of anomie, alienation and mass society are presented as a basis for the research. The nature of media is examined in the context of addiction intensity, leading to the conclusion that new media, such as smartphones, are more addictive than older media, perhaps because new media have more reality-mimicking features. The study concludes that AI recommender algorithms are the most powerful social force and a new mass medium, as they decide, on an individual level, which content billions of people across the globe are exposed to. That is why AI recommender algorithms may be considered a public good.

    Evaluating the scale, growth, and origins of right-wing echo chambers on YouTube

    Although it is understudied relative to other social media platforms, YouTube is arguably the largest and most engaging online media consumption platform in the world. Recently, YouTube's outsize influence has sparked concerns that its recommendation algorithm systematically directs users to radical right-wing content. Here we investigate these concerns with large-scale longitudinal data of individuals' browsing behavior spanning January 2016 through December 2019. Consistent with previous work, we find that political news content accounts for a relatively small fraction (11%) of consumption on YouTube, and is dominated by mainstream and largely centrist sources. However, we also find evidence for a small but growing "echo chamber" of far-right content consumption. Users in this community show higher engagement and greater "stickiness" than users who consume any other category of content. Moreover, YouTube accounts for an increasing fraction of these users' overall online news consumption. Finally, while the size, intensity, and growth of this echo chamber present real concerns, we find no evidence that they are caused by YouTube recommendations. Rather, consumption of radical content on YouTube appears to reflect broader patterns of news consumption across the web. Our results emphasize the importance of measuring consumption directly rather than inferring it from recommendations.
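
    As a hypothetical illustration of the direct-measurement approach the authors advocate, the sketch below computes each user's monthly share of far-right news time from labelled browsing logs, the kind of per-user, per-month quantity whose growth such a study would track. The column names and toy data are assumptions for illustration, not the study's actual pipeline.

```python
import pandas as pd

# Toy browsing log: one row per visit, with a content label already attached.
logs = pd.DataFrame({
    "user": ["a", "a", "b", "b"],
    "timestamp": pd.to_datetime(["2016-01-05", "2016-01-20",
                                 "2019-11-02", "2019-11-10"]),
    "channel_type": ["centrist", "far_right", "far_right", "centrist"],
    "minutes": [10.0, 5.0, 30.0, 20.0],
})

# Sum minutes per user, month, and content type, then take the far-right
# share of each user's total news time; growth appears as a rising share.
logs["month"] = logs["timestamp"].dt.to_period("M")
by_type = logs.pivot_table(index=["user", "month"], columns="channel_type",
                           values="minutes", aggfunc="sum", fill_value=0.0)
share = by_type["far_right"] / by_type.sum(axis=1)
print(share)
```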