
    Learning to Generate Natural Language Rationales for Game Playing Agents

    Many computer games feature non-player character (NPC) teammates and companions; however, playing with or against NPCs can be frustrating when they perform unexpectedly. These frustrations can be avoided if the NPC has the ability to explain its actions and motivations. When NPC behavior is controlled by a black-box AI system, it can be hard to generate the necessary explanations. In this paper, we present a system that generates human-like, natural language explanations—called rationales—of an agent's actions in a game environment regardless of how the decisions are made by a black-box AI. We outline a robust data collection and neural network training pipeline that can be used to gather think-aloud data and train a rationale generation model for any similar sequential, turn-based decision-making task. A human-subject study shows that our technique produces believable rationales for an agent playing the game Frogger. We conclude with insights about how people perceive automatically generated rationales.
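    The pipeline described above frames rationale generation as translating a serialized game state and chosen action into natural language. The sketch below is only a rough illustration of that idea, assuming a PyTorch encoder-decoder with placeholder vocabularies and toy tensors; it is not the authors' implementation, and every name, size, and hyperparameter here is a stand-in.

```python
# Minimal sketch of a sequence-to-sequence rationale generator (assumed setup,
# not the paper's actual architecture or training details).
import torch
import torch.nn as nn

class RationaleGenerator(nn.Module):
    """Encode a tokenized (game state, action) sequence and decode a
    natural-language rationale one token at a time."""

    def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src, tgt_in):
        _, h = self.encoder(self.src_emb(src))               # summarize state + action
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)   # teacher-forced decoding
        return self.out(dec_out)                             # logits over rationale vocab

# Toy training step on hypothetical think-aloud data:
# src = tokenized game state + chosen action, tgt = annotator's spoken rationale.
model = RationaleGenerator(src_vocab=200, tgt_vocab=500)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

src = torch.randint(0, 200, (8, 20))       # batch of 8 state+action sequences
tgt = torch.randint(0, 500, (8, 12))       # matching rationale token sequences
optimizer.zero_grad()
logits = model(src, tgt[:, :-1])           # predict each next rationale token
loss = loss_fn(logits.reshape(-1, 500), tgt[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```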

    The Algorithmic Imprint

    When algorithmic harms emerge, a reasonable response is to stop using the algorithm to resolve concerns related to fairness, accountability, transparency, and ethics (FATE). However, just because an algorithm is removed does not imply its FATE-related issues cease to exist. In this paper, we introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences. We operationalize this concept and its implications through the 2020 events surrounding the algorithmic grading of the General Certificate of Education (GCE) Advanced (A) Level exams, an internationally recognized UK-based high school diploma exam administered in over 160 countries. While the algorithmic standardization was ultimately removed due to global protests, we show how the removal failed to undo the algorithmic imprint on the sociotechnical infrastructures that shape students', teachers', and parents' lives. These events provide a rare chance to analyze the state of the world both with and without algorithmic mediation. We situate our case study in Bangladesh to illustrate how algorithms made in the Global North disproportionately impact stakeholders in the Global South. Chronicling more than a year-long community engagement consisting of 47 interviews, we present the first coherent timeline of "what" happened in Bangladesh, contextualizing "why" and "how" they happened through the lenses of the algorithmic imprint and situated algorithmic fairness. Analyzing these events, we highlight how the contours of the algorithmic imprints can be inferred at the infrastructural, social, and individual levels. We share conceptual and practical implications around how imprint-awareness can (a) broaden the boundaries of how we think about algorithmic impact, (b) inform how we design algorithms, and (c) guide us in AI governance. Comment: Accepted to ACM FAccT 2022.

    Participation versus scale: Tensions in the practical demands on participatory AI

    Ongoing calls from academic and civil society groups and regulatory demands for the central role of affected communities in development, evaluation, and deployment of artificial intelligence systems have created the conditions for an incipient "participatory turn" in AI. This turn encompasses a wide range of approaches — from legal requirements for consultation with civil society groups and community input in impact assessments, to methods for inclusive data labeling and co-design. However, more work remains in adapting the methods of participation to the scale of commercial AI. In this paper, we highlight the tensions between the localized engagement of community-based participatory methods and the globalized operation of commercial AI systems. Namely, the scales of commercial AI and participatory methods tend to differ along the fault lines of (1) centralized to distributed development; (2) calculable to self-identified publics; and (3) instrumental to intrinsic perceptions of the value of public input. However, a close look at these differences in scale demonstrates that these tensions are not irresolvable but contingent. We note that beyond its reference to the size of any given system, scale serves as a measure of the infrastructural investments needed to extend a system across contexts. To scale for a more participatory AI, we argue that these same tensions become opportunities for intervention, and we offer case studies that illustrate how infrastructural investments have supported participation in AI design and governance. Just as scaling commercial AI has required significant investments, we argue that scaling participation accordingly will require the creation of infrastructure dedicated to the practical dimension of achieving the participatory tradition's commitment to shifting power.