Shocking the Crowd: The Effect of Censorship Shocks on Chinese Wikipedia
Collaborative crowdsourcing has become a popular approach to organizing work across the globe. Being global also means being vulnerable to shocks -- unforeseen events that disrupt crowds -- that originate from any country. In this study, we examine changes in the collaborative behavior of editors of Chinese Wikipedia that arise due to the 2005 government censorship in mainland China. Using the exogenous variation in the fraction of editors blocked across different articles due to the censorship, we examine the impact of the reduction in group size, which we denote as the shock level, on three collaborative behavior measures: volume of activity, centralization, and conflict. We find that activity and conflict drop on articles that face a shock, whereas centralization increases. The impact of a shock on activity increases with shock level, whereas the impact on centralization and conflict is higher for moderate shock levels than for very small or very high shock levels. These findings provide support for threat rigidity theory -- originally introduced in the organizational theory literature -- in the context of large-scale collaborative crowds.
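The abstract's key constructs can be made concrete with a small sketch. The definitions below are assumptions for illustration, not the paper's exact operationalization: shock level is taken as the fraction of an article's editors blocked, and centralization is measured with a Gini coefficient over per-editor edit counts (one plausible choice among several).

```python
def shock_level(blocked_editors, total_editors):
    """Fraction of an article's editors removed by the shock (assumed definition)."""
    return blocked_editors / total_editors if total_editors else 0.0

def gini(edit_counts):
    """Gini coefficient over per-editor edit counts; 0 = edits spread evenly,
    values near 1 = activity concentrated in few editors (high centralization)."""
    xs = sorted(edit_counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n, with 1-based rank i
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

print(shock_level(3, 10))             # 0.3
print(round(gini([5, 5, 5, 5]), 3))   # 0.0  (decentralized)
print(round(gini([0, 0, 0, 20]), 3))  # 0.75 (highly centralized)
```

With article-level shock levels and before/after centralization scores in hand, the study's comparison reduces to regressing the change in each behavioral measure on shock level.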
ICIS 2019 SIGHCI Workshop Panel Report: Human–Computer Interaction Challenges and Opportunities for Fair, Trustworthy and Ethical Artificial Intelligence
Artificial Intelligence (AI) is rapidly changing every aspect of our society—including amplifying our biases. Fairness, trust and ethics are at the core of many of the issues underlying the implications of AI. Despite this, research on AI in relation to fairness, trust and ethics in the information systems (IS) field is still scarce. This panel brought together academic, business and government perspectives to discuss the challenges and identify potential solutions to address them. This panel report presents eight themes based around the discussion of two questions: (1) What are the biggest challenges to designing, implementing and deploying fair, ethical and trustworthy AI?; and (2) What are the biggest challenges to policy and governance for fair, ethical and trustworthy AI? The eight themes are: (1) identifying AI biases; (2) drawing attention to AI biases; (3) addressing AI biases; (4) designing transparent and explainable AI; (5) AI fairness, trust, ethics: old wine in a new bottle?; (6) AI accountability; (7) AI laws, policies, regulations and standards; and (8) frameworks for fair, ethical and trustworthy AI. Based on the results of the panel discussion, we present research questions for each theme to guide future research in the area of human–computer interaction.
Examining the effects of emotional valence and arousal on takeover performance in conditionally automated driving
In conditionally automated driving, drivers have difficulty with takeover transitions as they become increasingly decoupled from the operational level of driving. Factors influencing takeover performance, such as takeover lead time and engagement in non-driving-related tasks, have been studied in the past. However, despite the important role emotions play in human-machine interaction and in manual driving, little is known about how emotions influence drivers’ takeover performance. This study, therefore, examined the effects of emotional valence and arousal on drivers’ takeover timeliness and quality in conditionally automated driving. We conducted a driving simulation experiment with 32 participants. Movie clips were played for emotion induction. Participants with different levels of emotional valence and arousal were required to take over control from automated driving, and their takeover time and quality were analyzed. Results indicate that positive valence led to better takeover quality in the form of a smaller maximum resulting acceleration and a smaller maximum resulting jerk. However, high arousal did not yield an advantage in takeover time. This study contributes to the literature by demonstrating how emotional valence and arousal affect takeover performance. The benefits of positive emotions carry over from manual driving to conditionally automated driving, while the benefits of arousal do not.
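The takeover-quality metrics named in the abstract, maximum resulting acceleration and maximum resulting jerk, are standard kinematic quantities. A minimal sketch, assuming acceleration is sampled on longitudinal and lateral axes and jerk is approximated by finite differences (the toy trace below is hypothetical data, not from the study):

```python
import math

def max_resulting_accel(ax, ay):
    """Peak magnitude of combined longitudinal/lateral acceleration (m/s^2)."""
    return max(math.hypot(x, y) for x, y in zip(ax, ay))

def max_resulting_jerk(ax, ay, dt):
    """Peak magnitude of the rate of change of acceleration (m/s^3),
    approximated by finite differences over sample interval dt."""
    jx = [(b - a) / dt for a, b in zip(ax, ax[1:])]
    jy = [(b - a) / dt for a, b in zip(ay, ay[1:])]
    return max(math.hypot(x, y) for x, y in zip(jx, jy))

# Hypothetical 10 Hz trace of a takeover braking maneuver
ax = [0.0, -1.0, -3.0, -4.0, -3.5]   # longitudinal accel, m/s^2
ay = [0.0, 0.2, 0.5, 0.3, 0.1]       # lateral accel, m/s^2
print(round(max_resulting_accel(ax, ay), 2))      # 4.01
print(round(max_resulting_jerk(ax, ay, 0.1), 2))  # 20.22
```

Lower peaks on both metrics indicate a smoother, better-controlled takeover, which is the sense in which positive valence improved takeover quality.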
Considerations for Task Allocation in Human-Robot Teams
In human-robot teams where agents collaborate, there needs to be a clear allocation of tasks to agents. Task allocation can aid in achieving the presumed benefits of human-robot teams, such as improved team performance. Many task allocation methods have been proposed that include factors such as agent capability, availability, workload, fatigue, and task- and domain-specific parameters. In this paper, selected work on task allocation is reviewed. In addition, some areas for continued and further consideration in task allocation are discussed. These areas include level of collaboration, novel tasks, unknown and dynamic agent capabilities, negotiation and fairness, and ethics. Where applicable, we also mention some of our work on task allocation. Through continued efforts and considerations in task allocation, human-robot teaming can be improved. Comment: Presented at AI-HRI symposium as part of AAAI-FSS 2022 (arXiv:2209.14292)
Mechanisms Underlying Social Loafing in Technology Teams: An Empirical Analysis
Prior research has identified team size and dispersion as important antecedents of social loafing in technology-enabled teams. However, the underlying mechanisms through which team size and team dispersion cause individuals to engage in social loafing are significantly understudied and need to be researched. To address this exigency, we use Bandura’s Theory of Moral Disengagement to explain why individuals under conditions of increasing team size and dispersion engage in social loafing behavior. We identify three mechanisms—advantageous comparison, displacement of responsibility, and moral justification—that mediate the relationship between team size, dispersion and social loafing. Herein, we present the theory development and arguments for our hypotheses. We also present the initial findings from this study. Implications of the expected research findings are also discussed.
Introduction to the Special Issue on AI Fairness, Trust, and Ethics
It is our pleasure to welcome you to this AIS Transactions on Human Computer Interaction special issue on artificial intelligence (AI) fairness, trust, and ethics. This special issue received research papers that unpacked the potential, challenges, impacts, and theoretical implications of AI. This special issue contains four papers that integrate research across diverse fields of study, such as social science, computer science, engineering, design, and values, and other diverse topics related to AI fairness, trust, and ethics broadly conceptualized. This issue contains three of the four papers (along with a regular paper of the journal). The fourth and final paper of this special issue is forthcoming in March 2021. We hope that you enjoy these papers and, like us, look forward to similar research published in AIS Transactions on Human Computer Interaction.
E-profiles, Conflict, and Shared Understanding in Distributed Teams
In this research, we examine the efficacy of a technological intervention in shaping distributed team members’ perceptions about their teammates. We argue that, by exposing distributed team members to electronic profiles (e-profiles) with information emphasizing their personal similarities with one another, distributed teams should experience lower levels of relational and task conflict. In turn, reductions in conflict should facilitate a shared understanding among team members, which should increase their team effectiveness. The results of a laboratory experiment with 46 distributed teams generally support these assertions. Specifically, we found that a simple technological intervention can reduce task conflict in distributed teams, which, in turn, improves shared understanding and team effectiveness. We also uncovered important differences in the antecedents and impacts of relational and task conflict. Although we found that the e-profile intervention was effective in accounting for variance in task conflict (R2 = .41), it was quite poor in accounting for variance in relational conflict (R2 = .04). The model accounts for 33% and 43% of the variance in shared understanding and team effectiveness, respectively. Taken together, the results of this research suggest that the information shared about team members in distributed team settings has important implications for their ability to collaborate, achieve a common understanding of their work, and accomplish their task effectively. We suggest that e-profiles may be a useful intervention for management to enhance effectiveness in distributed teams.
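The R2 values the abstract reports quantify the proportion of variance in an outcome explained by the model. A minimal sketch of the computation, using hypothetical outcome scores and model predictions (not data from the study):

```python
def r_squared(actual, predicted):
    """Proportion of variance in `actual` explained by `predicted`:
    R^2 = 1 - SS_residual / SS_total."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((y - mean) ** 2 for y in actual)
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1 - ss_res / ss_tot

# Hypothetical team-level conflict scores vs. model predictions
actual = [2.0, 3.0, 4.0, 5.0, 6.0]
predicted = [2.2, 2.9, 4.3, 4.8, 5.8]
print(round(r_squared(actual, predicted), 3))  # 0.978
```

On this scale, the abstract's R2 = .41 for task conflict means the intervention model explains 41% of the between-team variance, while R2 = .04 for relational conflict means it explains almost none.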
A Review of Personality in Human Robot Interactions
Personality has been identified as a vital factor in understanding the quality of human robot interactions. Despite this, the research in this area remains fragmented and lacks a coherent framework. This makes it difficult to understand what we know and to identify what we do not. As a result, our knowledge of personality in human robot interactions has not kept pace with the deployment of robots in organizations or in our broader society. To address this shortcoming, this paper reviews 83 articles and 84 separate studies to assess the current state of human robot personality research. This review: (1) highlights major thematic research areas, (2) identifies gaps in the literature, (3) derives and presents major conclusions from the literature, and (4) offers guidance for future research. Comment: 70 pages, 2 figures