
    Trusting organizations: The institutionalization of trust in interorganizational relationships

    Recent research on interorganizational trust has stressed the need to develop a deeper understanding of the multi-level nature of this construct. This article focuses on trust on different analytical levels in an interorganizational context, and on the hitherto underspecified connections between them. Based on an institutionalization approach, it revisits the classic question: (how) can organizations trust each other? To do so, we consider organizations as objects of trust and reappraise the transferral from interpersonal to interorganizational trust in "facework" (Giddens, 1990). We also examine the conflicts and struggles of trust and power that can arise from this process between boundary spanners and their organizational constituents. Next, we consider organizations as subjects of trust in interorganizational relationships. We detail the institutionalization of trust and its reproduction on an organizational level, and how it can be transmitted to new generations of organizational actors, creating path-dependent histories of trust which are truly interorganizational. Taking up the theme of trust and power, we analyze ways in which the institutionalization of trust can entail that of power, too, and examine the implications of this from a critical point of view. We conclude that in interorganizational trust, both the subject and object of trust move across analytical levels, and further, that this movement demonstrates the significance of the organization as a distinct entity that can be both trusted and trusting.

    Review of Research on Human Trust in Artificial Intelligence

    Artificial Intelligence (AI) represents today’s most advanced technologies that aim to imitate human intelligence. Whether AI can successfully be integrated into society depends on whether it can gain users’ trust. We conduct a comprehensive review of recent research on human trust in AI and uncover the significant role of AI’s transparency, reliability, performance, and anthropomorphism in developing trust. We also review how trust is built and calibrated in diverse ways, and how human and environmental factors affect human trust in AI. Based on the review, the most promising future research directions are proposed.

    Exploring the Chemistry of Datafication Control – Pathways for a Trust-Enabling Use of Smart Workplace Technology

    Organizations have experimented with how smart technology can be used to manage employees since before COVID-19, and the possibilities seem almost limitless. However, the question of how this can be achieved without impairing the much-needed trust inside organizations has yet to be answered. Hence, in this study, we employ a crisp-set QCA (qualitative comparative analysis) to investigate what trust-enabling datafication control configurations look like. Drawing on unique survey data from Switzerland, we show that datafication control can go hand in hand with trust if organizations make efforts towards employee-centricity. Further, we reveal four distinct ways in which organizations can implement employee-centricity to mitigate possible trust-impairing signals that stem from augmented data-gathering and analysis capabilities. Our results contribute to the still-heated debate on the duality of control and trust. They also help leaders navigate the unmanageable multitude of possible, and even trust-toxic, combinations.
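
    The method named above, crisp-set QCA, can be made concrete with a minimal sketch. The conditions, cases, and consistency threshold below are hypothetical illustrations, not the Swiss survey data or the conditions used in the study; the sketch only shows the core mechanics of grouping binarized cases into truth-table configurations and checking how consistently each configuration co-occurs with the outcome (here, preserved trust).

        # Minimal crisp-set QCA sketch. Conditions are hypothetical:
        # EC = employee-centricity, TR = transparency of data use,
        # OUT = trust preserved despite datafication control.
        from collections import defaultdict

        cases = [
            # (EC, TR, OUT) -- invented cases for illustration
            (1, 1, 1),
            (1, 0, 1),
            (0, 1, 0),
            (0, 0, 0),
            (1, 1, 1),
            (0, 1, 1),  # deliberately contradictory configuration member
        ]

        # Group cases into truth-table configurations.
        table = defaultdict(list)
        for ec, tr, out in cases:
            table[(ec, tr)].append(out)

        # A configuration counts as sufficient for the outcome when its
        # consistency (share of member cases showing the outcome) clears
        # a chosen threshold, commonly around 0.8-0.9.
        for config, outcomes in sorted(table.items()):
            consistency = sum(outcomes) / len(outcomes)
            print(f"EC={config[0]} TR={config[1]}: n={len(outcomes)}, "
                  f"consistency={consistency:.2f}")

    In the full method, the sufficiently consistent configurations would then be logically minimized (e.g., via Quine-McCluskey) into the kind of distinct pathways the study reports.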

    Does A Loss of Social Credibility Impact Robot Safety?

    This position paper discusses the safety-related functions performed by assistive robots and explores the relationship between trust and effective safety risk mitigation. We identify a measure of the robot’s social effectiveness, termed social credibility, and present a discussion of how social credibility may be gained and lost. This paper’s contribution is the identification of a link between social credibility and safety-related performance. Accordingly, we draw on analyses of existing systems to demonstrate how an assistive robot’s safety-critical functionality can be impaired by a loss of social credibility. In addition, we present a discussion of some of the consequences of prioritising either safety-related functionality or social engagement. We propose the identification of a mixed-criticality scheduling algorithm in order to maximise both safety-related performance and social engagement.
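
    The mixed-criticality scheduler proposed in the final sentence can be sketched as follows. This is a minimal illustration under assumed task names and timing budgets, not the paper's algorithm: safety-critical tasks are always admitted, while social-engagement tasks only fill whatever slack remains in each control cycle.

        # Hypothetical mixed-criticality dispatch loop for an assistive
        # robot: safety-critical tasks always run; social tasks fill the
        # slack left in each control cycle. Budgets are in milliseconds.
        from dataclasses import dataclass

        @dataclass
        class Task:
            name: str
            cost_ms: float
            safety_critical: bool

        CYCLE_MS = 100.0

        tasks = [
            Task("obstacle_avoidance", 30.0, True),
            Task("fall_detection",     20.0, True),
            Task("eye_contact",        25.0, False),
            Task("verbal_greeting",    40.0, False),
        ]

        def schedule(tasks, cycle_ms=CYCLE_MS):
            """Return tasks admitted this cycle, safety-critical first."""
            admitted, used = [], 0.0
            # not t.safety_critical sorts True (critical) before False.
            for task in sorted(tasks, key=lambda t: not t.safety_critical):
                if task.safety_critical or used + task.cost_ms <= cycle_ms:
                    admitted.append(task)
                    used += task.cost_ms
            return admitted

        for task in schedule(tasks):
            print(task.name)  # social tasks drop out first under load

    Under this design choice, social engagement degrades gracefully under load while safety-related functionality is never sacrificed, which is the trade-off the paper discusses.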

    Humans' Perception of a Robot Moving Using a Slow in and Slow Out Velocity Profile

    Humans need to understand and trust the robots they are working with. We hypothesize that how a robot moves can impact people’s perception of it and their trust in it. We present a methodology for a study to explore people’s perception of a robot that uses the animation principle of slow in, slow out to shape its velocity profile, versus a robot moving with a linear velocity profile. Study participants will interact with the robot within a home context to complete a task while the robot moves around the house. The participants’ perceptions of the robot will be recorded using the Godspeed Questionnaire. A pilot study shows that it is possible to notice the difference between the linear and the slow in, slow out velocity profiles, so the full experiment planned with participants will allow us to compare their perceptions based on the two observable behaviors.
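
    The two velocity profiles compared in the study are easy to make concrete. A common realization of slow in, slow out is cubic smoothstep easing; the sketch below uses that curve as an assumption, since the abstract does not specify the exact profile, and compares it with a linear profile over the same distance and duration.

        # Compare a linear velocity profile with "slow in, slow out"
        # realized as cubic smoothstep easing. Parameters are illustrative.
        DISTANCE_M = 2.0   # assumed total travel distance
        DURATION_S = 4.0   # assumed total travel time

        def linear_velocity(t):
            """Constant speed covering DISTANCE_M in DURATION_S."""
            return DISTANCE_M / DURATION_S

        def slow_in_slow_out_velocity(t):
            """Derivative of the smoothstep position s(u) = 3u^2 - 2u^3
            with u = t / DURATION_S, scaled to the same distance/time."""
            u = t / DURATION_S
            return (6 * u - 6 * u * u) * DISTANCE_M / DURATION_S

        # Both profiles cover 2 m in 4 s; the eased one starts and ends
        # near zero velocity and peaks at 1.5x the average speed.
        for i in range(5):
            t = i * DURATION_S / 4
            print(f"t={t:.1f}s  linear={linear_velocity(t):.3f} m/s  "
                  f"eased={slow_in_slow_out_velocity(t):.3f} m/s")

    Both profiles cover the same distance in the same time; the eased robot starts and stops gently and peaks mid-motion, the kind of observable difference the pilot study found noticeable.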

    Challenges in Collaborative HRI for Remote Robot Teams

    Collaboration between human supervisors and remote teams of robots is highly challenging, particularly in high-stakes, distant, hazardous locations, such as off-shore energy platforms. In order for these teams of robots to truly be beneficial, they need to be trusted to operate autonomously, performing tasks such as inspection and emergency response, thus reducing the number of personnel placed in harm's way. As remote robots are generally trusted less than robots in close proximity, we present a solution to instil trust in the operator through a 'mediator robot' that can exhibit social skills, alongside sophisticated visualisation techniques. In this position paper, we present general challenges and then take a closer look at one challenge in particular, discussing an initial study which investigates the relationship between the level of control the supervisor hands over to the mediator robot and how this affects their trust. We show that the supervisor is more likely to have higher trust overall if their initial experience involves handing over control of the emergency situation to the robotic assistant. We discuss this result here, as well as other challenges and interaction techniques for human-robot collaboration.

    Towards Trust-Aware Human-Automation Interaction: An Overview of the Potential of Computational Trust Models

    Several computational models have been proposed to quantify trust and its relationship to other system variables. However, these models are still under-utilised in human-machine interaction settings due to the gap between modellers’ intent to capture a phenomenon and the requirements for employing the models in a practical context. Our work amalgamates insights from the system modelling, trust, and human-autonomy teaming literature to address this gap. We explore the potential of computational trust models in the development of trust-aware systems by investigating three research questions: (1) at which stages of development can trust models be used by designers? (2) How can trust models contribute to trust-aware systems? (3) Which factors should be incorporated within trust models to enhance their effectiveness and usability? We conclude with future research directions.
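
    As one concrete instance of the class of computational trust models the paper surveys, a common minimal form treats trust as a scalar updated from observed automation performance. The asymmetric update rule and rates below are illustrative textbook-style assumptions, not a model proposed in this paper.

        # Minimal dynamic trust model: trust is a scalar in [0, 1],
        # nudged after each interaction by the automation's observed
        # success (1) or failure (0). Rates are assumed, not from the paper.
        def update_trust(trust, outcome, gain=0.10, loss=0.30):
            """Asymmetric update: trust is lost faster than gained, a
            pattern widely reported in human-automation research."""
            rate = gain if outcome == 1 else loss
            target = 1.0 if outcome == 1 else 0.0
            return trust + rate * (target - trust)

        trust = 0.5                      # neutral prior
        outcomes = [1, 1, 1, 0, 1, 1]    # hypothetical task results
        for outcome in outcomes:
            trust = update_trust(trust, outcome)
            print(f"outcome={outcome} -> trust={trust:.3f}")

    A trust-aware system could read such an estimate at run time and adapt, for example, its level of automation or its explanations, which is the kind of use the paper's research questions target.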

    A Need for Trust in Conversational Interface Research

    Across several branches of conversational interaction research, including interactions with social robots, embodied agents, and conversational assistants, users have identified trust as a critical part of those interactions. Nevertheless, there is little agreement on what trust means within these sorts of interactions or how trust can be measured. In this paper, we explore some of the dimensions of trust as it has been understood in previous work, and we outline some of the ways trust has been measured, in the hopes of furthering discussion of the concept across the field.