
    Player-AI Interaction: What Neural Network Games Reveal About AI as Play

    The advent of artificial intelligence (AI) and machine learning (ML) brings human-AI interaction to the forefront of HCI research. This paper argues that games are an ideal domain for studying and experimenting with how humans interact with AI. Through a systematic survey of neural network games (n = 38), we identified the dominant interaction metaphors and AI interaction patterns in these games. In addition, we applied existing human-AI interaction guidelines to further shed light on player-AI interaction in the context of AI-infused systems. Our core finding is that AI as play can expand current notions of human-AI interaction, which are predominantly productivity-based. In particular, our work suggests that game and UX designers should consider flow to structure the learning curve of human-AI interaction, incorporate discovery-based learning that lets players experiment with the AI and observe the consequences, and offer users an invitation to play to explore new forms of human-AI interaction.

    How Can Organizations Design Purposeful Human-AI Interactions: A Practical Perspective From Existing Use Cases and Interviews

    Artificial intelligence (AI) currently makes a tangible impact in many industries and in humans' daily lives. With humans interacting with AI agents more regularly, there is a need to examine human-AI interactions in order to design them purposefully. Thus, we draw on existing AI use cases and perceptions of human-AI interactions from 25 interviews with practitioners to elaborate on these interactions. From this practical lens on existing human-AI interactions, we introduce nine characteristic dimensions to describe human-AI interactions and distinguish five interaction types according to AI agents' characteristics in the human-AI interaction. In addition, we provide initial design guidelines to stimulate both research and practice in creating purposeful designs for human-AI interactions.

    Human participants in AI research: Ethics and transparency in practice

    In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, around 12% and 6% of publications at recent AAAI and NeurIPS conferences, respectively, indicate the collection of original human data. Yet AI and ML researchers lack guidelines for ethical, transparent research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers provide details of ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by exploring normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research raises several specific concerns (namely, participatory design, crowdsourced dataset development, and an expansive role of corporations) that necessitate a contextual ethics framework. To address these concerns, this paper outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. These guidelines can be found in Section 4 on pp. 4–7.

    From Commercial Agreements to the Social Contract: Human-Centered AI Guidelines for Public Services

    Human-centered Artificial Intelligence (HCAI) is a term frequently used in the discourse on how to guide the development and deployment of AI in responsible and trustworthy ways. Major technology actors including Microsoft, Apple and Google are fostering their own AI ecosystems, also providing HCAI guidelines, which operationalize theoretical concepts to inform the practice of AI development. Yet, their commonality seems to be an orientation to commercial contexts. This paper focuses on AI for public services and on the special relationship between governmental organizations and the public. Approaching human-AI interaction through the lens of social contract theory, we identify amendments to improve the suitability of an existing HCAI framework for the public sector. Following the Action Design Research methodological approach, we worked with a public organization to apply, assess, and adapt the "Google PAIR guidelines", a well-known framework for human-centered AI development. The guidelines informed the design of an interactive prototype for AI in public services, and through this process we revealed gaps and potential enhancements. Specifically, we found that it is important to a) articulate a clear value proposition by weighing the public good vs. the individual benefit, b) define boundaries for repurposing public data given the relationship between citizens and their government, and c) accommodate user group diversity by considering the different levels of technical and administrative literacy of citizens. We aim to shift the perspective within human-AI interaction, acknowledging that exchanges are not always subject to commercial agreements but can also be based on the mechanisms of a social contract.

    Towards Human-AI Interaction in Medical Emergency Call Handling

    Call-takers in emergency medical dispatch centers typically rely on decision-support systems that help to structure emergency call dialogues and propose appropriate responses. Current research investigates whether such systems should follow a hybrid intelligent approach, which requires their extension with interfaces and mechanisms that enable an interaction between call-takers and artificial intelligence (AI). It remains unclear, however, how these interfaces and mechanisms should be designed to foster call-handling performance while making efficient use of call-takers' often strained mental capacities. This paper moves towards closing this gap by 1) deriving required artifacts for human-AI interaction and 2) proposing an iterative procedure for their design and evaluation. For 1), we apply the guidelines for human-AI interaction and conduct workshops with domain experts. For 2), we argue that performing a full evaluation of the artifacts is too extensive at earlier iterations of the design process, and therefore propose to enact use-case-driven lightweight evaluations instead.

    AI Governance in Healthcare: Explainability Standards, Safety Protocols, and Human-AI Interaction Dynamics in Contemporary Medical AI Systems

    The fast-growing incorporation of artificial intelligence (AI) into the modern healthcare industry necessitates immediate consideration of its legal and ethical dimensions. In this research, we focused on three principal areas requiring specific, contextual direction from both governmental entities and industry participants to guide the responsible and ethical progression of AI in healthcare. First, the research discusses standards for explainability. Within healthcare, understanding AI-driven decisions is vital because of their profound implications for human health. Various participants, from patients to oversight bodies, require differing levels of transparency and explanation from AI systems. Next, we examine safety protocols. Given that employing AI in healthcare could result in decisions that carry severe ramifications, we argue for evaluating its objective criteria, search parameters, training applicability, risk of poor data, and possible risks. Finally, we discuss the dynamics of human-AI interaction. Optimal interaction necessitates the creation of AI systems that augment human capabilities and acknowledge human cognitive processes. The involvement of AI system users in healthcare, defined through tiers of understanding, contribution, and oversight, spans from elementary to advanced engagements. Each tier relates to the depth of comprehension, the scope of data contribution, and the level of oversight exercised by the healthcare specialist regarding the AI instrument. This research emphasizes the necessity for specific guidelines for each of the three dimensions to guarantee the secure, ethical, and efficient utilization of AI in healthcare.

    Diverse Humans and Human-AI Interaction: What Cognitive Style Disaggregation Reveals

    Although guidelines for human-AI interaction (HAI) provide important advice on how to help improve user experiences with AI products, little is known about HAI for diverse users' experiences with such systems. Without understanding how diverse users' experiences with AI products differ, designers lack the information they need to make AI products that serve users equitably. To investigate, we disaggregated data from 1,016 human participants according to five cognitive styles: their attitudes toward risk, their motivations, their learning styles (by process vs. by tinkering), their information processing styles, and their computer self-efficacy. Our results revealed situations in which applying existing HAI guidelines helped these cognitively diverse participants equitably, situations in which applying them helped participants inequitably, and situations in which stubborn inequity problems persisted despite applying the guidelines. The results also revealed that these situations pervaded 15 of the 16 experiments and arose for all five of the cognitive style spectra. Finally, the results revealed the cognitive style disaggregation's impacts by participants' demographics, showing statistical clusterings not only by gender but also by intersectional gender-age groups.
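    The core analysis step this abstract describes, splitting aggregate outcomes by a cognitive-style facet before comparing groups, can be illustrated with a minimal sketch. The facet name, records, and scores below are illustrative assumptions, not data from the study:

    ```python
    # Minimal sketch of cognitive-style disaggregation (illustrative data only):
    # group participant outcomes by one cognitive-style facet, then compare groups.
    from statistics import mean

    # Hypothetical per-participant records: (risk_attitude, task_success_rate)
    participants = [
        ("risk_averse", 0.62), ("risk_tolerant", 0.81),
        ("risk_averse", 0.58), ("risk_tolerant", 0.77),
        ("risk_averse", 0.65), ("risk_tolerant", 0.84),
    ]

    # Disaggregate: collect outcomes per facet value instead of pooling them.
    groups = {}
    for style, score in participants:
        groups.setdefault(style, []).append(score)

    # Compare group means; a large gap flags a potential inequity to investigate.
    means = {style: mean(scores) for style, scores in groups.items()}
    gap = abs(means["risk_averse"] - means["risk_tolerant"])
    print(means, round(gap, 2))
    ```

    Pooling all six scores would hide the difference; disaggregating surfaces it per group, which is the point of the paper's method.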

    Fostering Effective Human-AI Collaboration: Bridging the Gap Between User-Centric Design and Ethical Implementation

    The synergy between humans and artificial intelligence (AI) systems has become pivotal in contemporary technological landscapes. This research paper delves into the multifaceted domain of Human-AI collaboration, aiming to decipher the intricate interplay between user-centric design and ethical implementation. As AI systems continue to permeate various facets of society, the significance of seamless interaction and ethical considerations has emerged as a critical axis for exploration. This study critically examines the pivotal components of successful Human-AI collaboration, emphasizing the importance of user experience design that prioritizes intuitive interfaces and transparent interactions. Furthermore, ethical implications encompassing privacy, fairness, bias mitigation, and accountability in AI decision-making are thoroughly investigated, emphasizing the imperative need for responsible AI deployment. The paper presents an analysis of diverse scenarios where Human-AI collaboration manifests, elucidating the impact on various sectors such as education, healthcare, workforce augmentation, and problem-solving domains. Insights into the cognitive augmentation offered by AI systems and the consequential implications on human decision-making processes are also probed, offering a comprehensive understanding of collaborative problem-solving and decision support mechanisms. Through an integrative approach merging user-centric design philosophies and ethical frameworks, this research advocates for a paradigm shift in AI development. It underscores the necessity of incorporating user feedback, participatory design methodologies, and transparent ethical guidelines into the development life cycle of AI systems. Ultimately, the paper proposes a roadmap towards fostering a symbiotic relationship between humans and AI, fostering trust, reliability, and enhanced performance in collaborative endeavors. 
This abstract outlines the scope, key areas of investigation, and proposed outcomes of a research paper centered on Human-AI collaboration, providing a glimpse into the depth and breadth of the study.

    AI Through Ethical Lenses: A Discourse Analysis of Guidelines for AI in Healthcare.

    While the technologies that enable Artificial Intelligence (AI) continue to advance rapidly, there are increasing promises regarding AI's beneficial outputs and concerns about the challenges of human-computer interaction in healthcare. To address these concerns, institutions have increasingly resorted to publishing AI guidelines for healthcare, aiming to align AI with ethical practices. However, guidelines as a form of written language can be analyzed to recognize the reciprocal links between their textual communication and underlying societal ideas. From this perspective, we conducted a discourse analysis to understand how these guidelines construct, articulate, and frame ethics for AI in healthcare. We included eight guidelines and identified three prevalent and interwoven discourses: (1) AI is unavoidable and desirable; (2) AI needs to be guided with (some forms of) principles; and (3) trust in AI is instrumental and primary. These discourses signal an over-spillage of technical ideals into AI ethics, such as over-optimism and resulting hyper-criticism. This research provides insights into the underlying ideas present in AI guidelines and into how guidelines influence the practice and alignment of AI with the ethical, legal, and societal values expected to shape AI in healthcare.