    Nobody's Perfect: On Trust in Social Robot Failures

    With robots increasingly succeeding in exhibiting more human-like behaviours, humans may be more likely to ‘forgive’ their errors and continue to trust them as a result of ascribing higher, more human-like intelligence to them. If an integral aspect of successful HRI is to accurately communicate the competence of a robot, it can be argued that the technical success of the robot in exhibiting human-like behaviour can, in some cases, lead to a failure of the interaction by resulting in misperceived human-like competence. We highlight this through the example of speech in robots, and discuss the implications of failures and their role in HRI design.

    Call it robot: anthropomorphic framing and failure of self-service technologies

    Purpose: The purpose of this paper is to test the effect that anthropomorphic framing (i.e. robot vs automatic machine) has on consumers’ responses in case of service failure. Specifically, the authors hypothesize that consumers hold an unconscious association between the word “robot” and agency, and that the higher agency attributed to self-service machines framed as robots (vs automatic machines) leads, in turn, to a more positive service evaluation in case of service failure.

    Design/methodology/approach: The authors conducted four experimental studies to test the framework presented in this paper. In Studies 1a and 1b, the authors used an Implicit Association Test to test for the unconscious association held by consumers about robots as intelligent machines (i.e. agency). In Studies 2 and 3, the authors tested the effect that framing technology as robots (vs automatic machines) has on consumers’ responses to service failure using two online experiments across different consumption contexts (hotel, restaurant) and different dependent variables (service evaluation, satisfaction and word-of-mouth).

    Findings: The authors show that consumers evaluate a service failure involving a self-service technology framed as a robot more positively than one framed as an automatic machine. They provide evidence that this effect is driven by higher perceptions of agency and that the association between technology and agency held by consumers is an unconscious one.

    Originality/value: This paper investigates a novel driver of consumers’ perception of the agency of technology, namely, how the technology is framed. Moreover, this study sheds light on consumers’ responses to a technology’s service failure.

    Empathy in the Digital Administrative State

    Humans make mistakes. Humans make mistakes especially while filling out tax returns, benefit applications, and other government forms, which are often laden with complex language, demanding requirements, and short deadlines. However, the uniquely human capacity to forgive these mistakes is disappearing with the digitalization of government services and the automation of government decision-making. While the role of empathy has long been controversial in law, empathic measures have helped public authorities balance administrative values with citizens’ needs and deliver fair and legitimate decisions. The empathy of public servants has been particularly important for vulnerable citizens (for example, disabled individuals, seniors, and underrepresented minorities). When empathy is threatened in the digital administrative state, vulnerable citizens are at risk of not being able to exercise their rights because they cannot engage with digital bureaucracy.

    This Article argues that empathy, understood here as the ability to relate to others and understand a situation from multiple perspectives, is a key value of administrative law deserving of legal protection in the digital administrative state. Empathy can contribute to the advancement of procedural due process, the promotion of equal treatment, and the legitimacy of automation. The concept of administrative empathy does not aim to create arrays of exceptions, nor to imbue law with emotions and individualized justice. Instead, it suggests avenues for humanizing digital government and automated decision-making through a more complete understanding of citizens’ needs. This Article explores the role of empathy in the digital administrative state at two levels. First, it argues that empathy can be a partial response to some of the shortcomings of digital bureaucracy; at this level, administrative empathy acknowledges that citizens have different skills and needs, which requires the redesign of pre-filled application forms, government platforms, and algorithms, as well as the provision of assistance. Second, empathy should also operate ex post as a humanizing measure that can help ensure that administrative mistakes made in good faith can be forgiven under limited circumstances and that vulnerable individuals are given second chances to exercise their rights.

    Drawing on comparative examples of empathic measures employed in the United States, the Netherlands, Estonia, and France, this Article’s contribution is twofold: first, it offers an interdisciplinary reflection on the role of empathy in administrative law and public administration for the digital age, and second, it operationalizes the concept of administrative empathy. These goals combine to advance the position of vulnerable citizens in the administrative state.

    Claim success, but blame the bot? User reactions to service failure and recovery in interactions with humanoid service robots

    Service robots are changing the nature of service delivery in the digital economy. However, frequently occurring service failures represent a great challenge to achieving service robot acceptance. To understand how different service outcomes in interactions with service robots affect usage intentions, this research investigates (1) how users attribute failures committed by humanoid service robots and (2) whether responsibility attribution varies depending on service robot design. In a 3 (success vs. failure vs. failure with recovery) × 2 (warm vs. competent service robot design) between-subjects online experiment, this research finds evidence for the self-serving bias in a service robot context, that is, attributing successes to oneself but blaming others for failures. This effect emerges independently of service robot design. Furthermore, recovery through human intervention can mitigate the consequences of failure only for robots with a warm design. The authors discuss consequences for applications of humanoid service robots and implications for further research.

    The Effects of a Prayer Intervention on the Process of Forgiveness

    A vast amount of research examining forgiveness has now been reported, as has a sizable amount of research on prayer, but these two constructs have rarely been examined together. This experimental intervention study investigated the potential benefits of prayer among Christians seeking to forgive an interpersonal offense. Participants consisted of 411 undergraduate students from private Christian colleges across the United States, randomly assigned to a prayer group, a devotional attention group, or a no-contact control group. The prayer group participated in a 16-day devotional reading and prayer intervention focused on forgiveness, whereas those in the devotional attention group meditated on devotional readings not related to forgiveness. Those in the prayer and devotional attention groups showed significant changes in state forgiveness. Participants in the prayer intervention group also showed significant changes in empathy toward their offender. Implications are considered.

    Restoring Justice: The Moderating Role of AI Agent in Consumers’ Reactions to Service Recovery

    Service failure is inevitable, and service providers have a stake in minimizing its adverse consequences. As companies increasingly deploy Artificial Intelligence (AI) agents to augment or substitute conventional human customer service agents, there are growing scholarly attempts to elucidate the role of AI agents in shaping consumers’ reactions to service recovery. Synthesizing the extant literature on service failure and recovery with restorative justice, this study contextualizes restorative justice to service recovery and examines the interplay of recovery components with agent type (AI vs. human) on restorative justice. We then conducted a scenario-based online experiment to validate our hypothesized relationships. The findings point to the positive effects of empathy and remorse on affective restorative justice, but these relationships are attenuated when empathy and remorse are conveyed by AI agents. Insights from this study hence extend our understanding of AI deployment in customer service and yield practical guidelines for AI agent developers.

    Paul's conception of the atonement

    Thesis (M.A.)--Boston University.
