    “Tinder Will Know You Are A 6”: Users’ Perceptions of Algorithms on Tinder

    Through in-depth interviews with 22 Tinder users, we explore how users interpret their algorithmically mediated experience on the platform. We find that users hold varied explanations of whether and how Tinder uses algorithms, and that they differ in how certain they are of those explanations. Users report acting in particular ways in light of their explanations and their degree of certainty. We discuss how users, as part of their sensemaking practices around how algorithms work, engage in forms of improvisation. We further argue that algorithm awareness leads to a more nuanced acknowledgement of inequality and power, including the power-laden roles of the platforms themselves.

    Explaining how algorithms work reduces consumers' concerns regarding the collection of personal data and promotes AI technology adoption

    Consumers' concerns about how companies gather and use their personal data can impede the widespread adoption of artificial intelligence (AI) technologies. This study demonstrates that mechanistic explanations of how AI algorithms work can inhibit such data collection concerns. Four independent online experiments show a negative effect of detailed mechanistic explanations on data collection concerns (Studies 1a and 1b), a mediating role of subjective understanding of how AI algorithms work (Study 2), and an increased likelihood of adopting AI technologies once data collection concerns have been mitigated (Study 3). These findings contribute to research on consumer privacy concerns and the adoption of AI technologies by identifying (1) a new inhibitor of data collection concerns, namely mechanistic explanations of AI algorithms; (2) the psychological mechanism underlying the effects of such explanations; and (3) how diminished data collection concerns promote AI technology adoption. These insights can help companies design more effective communication strategies that reduce the perceived opacity of AI algorithms, reassure consumers, and encourage adoption of AI technologies.

    Understanding Consumer Preferences for Explanations Generated by XAI Algorithms

    Explaining firm decisions made by algorithms in customer-facing applications is increasingly required by regulators and expected by customers. While the emerging field of Explainable Artificial Intelligence (XAI) has mainly focused on developing algorithms that generate such explanations, there has not yet been sufficient consideration of customers' preferences for the various types and formats of explanations. We discuss theoretically, and study empirically, people's preferences for explanations of algorithmic decisions. We focus on three main attributes that describe automatically generated explanations from existing XAI algorithms (format, complexity, and specificity), and capture differences across contexts (online targeted advertising vs. loan applications) as well as heterogeneity in users' cognitive styles. Despite their popularity among academics, we find that counterfactual explanations are not popular among users unless they follow a negative outcome (e.g., a denied loan application). We also find that users are willing to tolerate some complexity in explanations. Finally, our results suggest that preferences for specific (vs. more abstract) explanations are related to the level at which the user construes the decision, and to the deliberateness of the user's cognitive style.
    Comment: 18 pages, 1 appendix, 3 figures, 4 tables