
    Technology Affordances and IT Identity

    This study examines the impact of technology affordances on identifying the self with technology (IT identity). It further seeks to understand the role of experiences in mediating the relationship between technology affordances and IT identity. To answer our research questions, we will conduct a cross-sectional survey.
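    As an illustration only, the sketch below shows one common way such a mediation (affordances -> experiences -> IT identity) could be tested once survey data are collected; the variable names, sample size, and simulated data are assumptions, not the authors' instrument or analysis plan.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300                                  # assumed survey sample size
affordances = rng.normal(size=n)         # predictor (X): technology affordances
experiences = 0.5 * affordances + rng.normal(size=n)                      # mediator (M)
it_identity = 0.4 * experiences + 0.2 * affordances + rng.normal(size=n)  # outcome (Y)

# Path a: affordances -> experiences
a = sm.OLS(experiences, sm.add_constant(affordances)).fit().params[1]

# Paths b and c': experiences and affordances -> IT identity
yx = sm.OLS(it_identity,
            sm.add_constant(np.column_stack([experiences, affordances]))).fit()
b, c_prime = yx.params[1], yx.params[2]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```

    In practice the indirect effect would typically be judged with bootstrapped confidence intervals rather than the point estimate alone.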

    Facilitating Employee Intention to Work with Robots

    Organizations are adopting and integrating robots to work with and alongside their human employees. However, those employees are not necessarily happy about this new work arrangement, in part because of growing fears that robots will eventually take their jobs. Organizations therefore face the challenge of integrating robots into their workforce by encouraging humans to work with their robotic teammates. To address this issue, this study draws on similarity-attraction theory to examine how to encourage humans to work with and alongside a robotic co-worker. Our research model asserts that surface- and deep-level similarity with the robot will affect a human’s willingness to work with it. We also examine whether risk moderates the importance of both surface- and deep-level similarity. To empirically examine this model, the proposal presents an experimental design. Results of the study should provide new insights into the benefits and limitations of similarity for encouraging humans to work with and alongside their robot co-workers.
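    A minimal sketch, with placeholder variable names and simulated responses, of the moderated regression the research model implies (risk moderating the effect of surface- and deep-level similarity on willingness); it is not the proposed study's analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200                            # assumed number of participants
surface = rng.integers(0, 2, n)    # surface-level similarity (0 = low, 1 = high)
deep = rng.integers(0, 2, n)       # deep-level similarity
risk = rng.integers(0, 2, n)       # risk condition (0 = low, 1 = high)
willingness = (0.3 * surface + 0.5 * deep + 0.4 * deep * risk
               + rng.normal(scale=0.5, size=n))

# Willingness regressed on similarity, risk, and similarity x risk interactions
X = sm.add_constant(np.column_stack([surface, deep, risk,
                                     surface * risk, deep * risk]))
fit = sm.OLS(willingness, X).fit()
print(fit.params)                  # the last two coefficients carry the moderation
```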

    Emotional Attachment, Performance, and Viability in Teams Collaborating with Embodied Physical Action (EPA) Robots

    Although different types of teams increasingly employ embodied physical action (EPA) robots as a collaborative technology to accomplish their work, we know very little about what makes such teams successful. This paper has two objectives: the first is to examine whether a team’s emotional attachment to its robots can lead to better team performance and viability; the second is to determine whether robot and team identification can promote a team’s emotional attachment to its robots. To achieve these objectives, we conducted a between-subjects experiment with 57 teams working with robots. Teams performed better and were more viable when they were emotionally attached to their robots. Both robot and team identification increased a team’s emotional attachment to its robots. Results of this study have implications for collaboration using EPA robots specifically and for collaboration technology in general.
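    The following is an illustrative sketch, with simulated team-level data and assumed variable names, of how the two reported relationships (identification -> emotional attachment, and attachment -> performance) could be estimated; it is not the study's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_teams = 57                               # matches the reported sample of teams
robot_id = rng.normal(size=n_teams)        # robot identification (assumed measure)
team_id = rng.normal(size=n_teams)         # team identification (assumed measure)
attachment = 0.4 * robot_id + 0.3 * team_id + rng.normal(size=n_teams)
performance = 0.5 * attachment + rng.normal(size=n_teams)

# Robot and team identification -> emotional attachment
ident = sm.OLS(attachment,
               sm.add_constant(np.column_stack([robot_id, team_id]))).fit()
# Emotional attachment -> team performance (viability would be modeled the same way)
perf = sm.OLS(performance, sm.add_constant(attachment)).fit()
print(ident.params, perf.params)
```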

    ICIS 2019 SIGHCI Workshop Panel Report: Human–Computer Interaction Challenges and Opportunities for Fair, Trustworthy and Ethical Artificial Intelligence

    Artificial Intelligence (AI) is rapidly changing every aspect of our society—including amplifying our biases. Fairness, trust and ethics are at the core of many of the issues underlying the implications of AI. Despite this, research on AI in relation to fairness, trust and ethics in the information systems (IS) field is still scarce. This panel brought together perspectives from academia, business and government to discuss these challenges and identify potential solutions. This panel report presents eight themes based on the discussion of two questions: (1) What are the biggest challenges to designing, implementing and deploying fair, ethical and trustworthy AI? and (2) What are the biggest challenges to policy and governance for fair, ethical and trustworthy AI? The eight themes are: (1) identifying AI biases; (2) drawing attention to AI biases; (3) addressing AI biases; (4) designing transparent and explainable AI; (5) AI fairness, trust, ethics: old wine in a new bottle?; (6) AI accountability; (7) AI laws, policies, regulations and standards; and (8) frameworks for fair, ethical and trustworthy AI. Based on the results of the panel discussion, we present research questions for each theme to guide future research in the area of human–computer interaction.

    Shocking the Crowd: The Effect of Censorship Shocks on Chinese Wikipedia

    Collaborative crowdsourcing has become a popular approach to organizing work across the globe. Being global also means being vulnerable to shocks -- unforeseen events that disrupt crowds -- that can originate from any country. In this study, we examine changes in the collaborative behavior of editors of Chinese Wikipedia that arose due to the 2005 government censorship in mainland China. Using the exogenous variation in the fraction of editors blocked across different articles due to the censorship, we examine the impact of the reduction in group size, which we denote the shock level, on three collaborative behavior measures: volume of activity, centralization, and conflict. We find that activity and conflict drop on articles that face a shock, whereas centralization increases. The impact of a shock on activity increases with shock level, whereas the impact on centralization and conflict is higher for moderate shock levels than for very small or very high shock levels. These findings provide support for threat rigidity theory -- originally introduced in the organizational theory literature -- in the context of large-scale collaborative crowds.
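    A hedged illustration of the reported pattern: a roughly monotonic effect of shock level on activity alongside an inverted-U-shaped effect on centralization. The article-level data are simulated and the specification is an assumption, not the paper's estimation strategy.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_articles = 1000                               # simulated articles
shock = rng.uniform(0, 1, n_articles)           # fraction of an article's editors blocked
activity_change = -0.8 * shock + rng.normal(scale=0.2, size=n_articles)
centralization_change = (1.2 * shock - 1.2 * shock**2
                         + rng.normal(scale=0.2, size=n_articles))

# Including a quadratic shock term lets the fit capture an inverted-U shape
X = sm.add_constant(np.column_stack([shock, shock**2]))
print(sm.OLS(activity_change, X).fit().params)        # roughly linear decline
print(sm.OLS(centralization_change, X).fit().params)  # positive linear, negative quadratic
```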

    Considerations for Task Allocation in Human-Robot Teams

    In human-robot teams where agents collaborate, tasks need to be clearly allocated to agents. Task allocation can help achieve the presumed benefits of human-robot teams, such as improved team performance. Many task allocation methods have been proposed that include factors such as agent capability, availability, workload, fatigue, and task- and domain-specific parameters. In this paper, selected work on task allocation is reviewed. In addition, some areas for continued and further consideration in task allocation are discussed. These areas include level of collaboration, novel tasks, unknown and dynamic agent capabilities, negotiation and fairness, and ethics. Where applicable, we also mention some of our work on task allocation. Through continued efforts and considerations in task allocation, human-robot teaming can be improved. Comment: Presented at the AI-HRI symposium as part of AAAI-FSS 2022 (arXiv:2209.14292).
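    As a toy example of the kind of factors the review covers (capability and workload), the sketch below implements a simple greedy allocator; the data structures and scoring rule are illustrative assumptions, not a method proposed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    capabilities: set        # task types the agent can perform
    workload: float = 0.0    # accumulated cost of assigned tasks

@dataclass
class Task:
    name: str
    required: str            # capability the task requires
    cost: float

def allocate(tasks, agents):
    """Greedily assign each task to the capable agent with the lowest workload."""
    assignment = {}
    for task in sorted(tasks, key=lambda t: -t.cost):     # costliest tasks first
        capable = [a for a in agents if task.required in a.capabilities]
        if not capable:
            assignment[task.name] = None                  # no feasible agent
            continue
        best = min(capable, key=lambda a: a.workload)
        best.workload += task.cost
        assignment[task.name] = best.name
    return assignment

agents = [Agent("human", {"inspect", "plan"}), Agent("robot", {"carry", "inspect"})]
tasks = [Task("inspect_part", "inspect", 2.0),
         Task("carry_box", "carry", 3.0),
         Task("plan_route", "plan", 1.0)]
print(allocate(tasks, agents))
```

    A fuller allocator would also weigh availability, fatigue, fairness, and negotiation, which is exactly where the paper points to open questions.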

    Introduction to the Special Issue on AI Fairness, Trust, and Ethics

    It is our pleasure to welcome you to this AIS Transactions on Human Computer Interaction special issue on artificial intelligence (AI) fairness, trust, and ethics. This special issue received research papers that unpacked the potential, challenges, impacts, and theoretical implications of AI. It contains four papers that integrate research across diverse fields of study, such as social science, computer science, engineering, design, values, and other topics related to AI fairness, trust, and ethics broadly conceptualized. This issue contains three of the four papers (along with a regular paper of the journal); the fourth and final paper of the special issue is forthcoming in March 2021. We hope that you enjoy these papers and, like us, look forward to similar research published in AIS Transactions on Human Computer Interaction.

    Look Who's Talking Now: Implications of AV's Explanations on Driver's Trust, AV Preference, Anxiety and Mental Workload

    Explanations given by automation are often used to promote automation adoption. However, it remains unclear whether explanations promote acceptance of automated vehicles (AVs). In this study, we conducted a within-subject experiment in a driving simulator with 32 participants, using four conditions: (1) no explanation, (2) an explanation given before the AV acted, (3) an explanation given after the AV acted, and (4) an explanation with the option for the driver to approve or disapprove the AV's action after hearing it. We examined four AV outcomes: trust, preference for the AV, anxiety and mental workload. Results suggest that explanations provided before the AV acted were associated with higher trust in and preference for the AV, but there were no differences in anxiety and workload. These results have important implications for the adoption of AVs. Comment: 42 pages, 5 figures, 3 tables.
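    A minimal sketch of one way a within-subject outcome such as trust could be compared across the four explanation conditions; the simulated ratings and choice of tests are assumptions, not the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 32                                # participants in the within-subject design
no_expl = rng.normal(3.0, 0.5, n)     # condition 1: no explanation
before  = rng.normal(3.8, 0.5, n)     # condition 2: explanation before the AV acts
after   = rng.normal(3.4, 0.5, n)     # condition 3: explanation after the AV acts
approve = rng.normal(3.5, 0.5, n)     # condition 4: explanation plus approve/disapprove

# Non-parametric omnibus test across the four repeated conditions
print(stats.friedmanchisquare(no_expl, before, after, approve))
# Focused paired contrast: explanation-before vs. no explanation
print(stats.ttest_rel(before, no_expl))
```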

    Intelligence Augmentation: Human Factors in AI and Future of Work

    The availability of parallel and distributed processing at a reasonable cost and the diversity of data sources have contributed to advanced developments in artificial intelligence (AI). These developments in the AI computing environment are not concomitant with changes in the social, legal, and political environment. When considering AI deployment, the deployment context and the end goal of augmenting human intelligence in that specific context have surfaced as significant factors for professionals, organizations, and society. In this research commentary, we highlight some important socio-technical aspects associated with the recent growth in AI systems. We elaborate on the intricacies of human-machine interaction that form the foundation of augmented intelligence. We also highlight the ethical considerations that relate to these interactions and explain how augmented intelligence can play a key role in shaping the future of human work.