15 research outputs found

    Trusting Robots in Teams: Examining the Impacts of Trusting Robots on Team Performance and Satisfaction

    Despite the widespread use of robots in teams, there is still much to learn about what facilitates better performance in teams working with robots. Although trust has been shown to be a strong predictor of performance in all-human teams, we do not yet know whether trust plays the same critical role in teams working with robots. This study examines how to facilitate trust and its importance to the performance of teams working with robots. A 2 (robot identification vs. no robot identification) × 2 (team identification vs. no team identification) between-subjects experiment with 54 teams working with robots was conducted. Results indicate that robot identification increased trust in robots and team identification increased trust in one’s teammates. Trust in robots increased team performance, while trust in teammates increased satisfaction.
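    The abstract reports a 2 × 2 between-subjects design but does not describe the statistical analysis; the sketch below is only a generic illustration of how such a design is commonly analyzed (a two-way ANOVA on simulated data), with all column names, cell means, and effect sizes invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Simulated team-level data for a 2x2 between-subjects design (the study had
# 54 teams); condition labels, means, and effect sizes here are illustrative.
rows = []
for robot_id in ("yes", "no"):
    for team_id in ("yes", "no"):
        mean_trust = 5.5 if robot_id == "yes" else 4.2      # assumed cell means
        trust = mean_trust + rng.normal(0, 0.6, 14)
        performance = 60 + 4 * trust + rng.normal(0, 5, 14)  # assumed link
        for t, p in zip(trust, performance):
            rows.append({"robot_identification": robot_id,
                         "team_identification": team_id,
                         "trust_in_robots": t,
                         "team_performance": p})
df = pd.DataFrame(rows)

# Two-way between-subjects ANOVA: main effects and interaction of the
# two manipulations on trust in robots.
model = smf.ols(
    "trust_in_robots ~ C(robot_identification) * C(team_identification)",
    data=df,
).fit()
print(anova_lm(model, typ=2))

# Simple follow-up check: does trust in robots predict team performance?
print(smf.ols("team_performance ~ trust_in_robots", data=df).fit().summary())
```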

    Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance

    If a user is presented with an AI system that purports to explain how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? This question entails some key concepts of measurement, such as explanation goodness and trust. We present methods for enabling developers and researchers to: (1) assess the a priori goodness of explanations, (2) assess users' satisfaction with explanations, (3) reveal the user's mental model of an AI system, (4) assess the user's curiosity or need for explanations, (5) assess whether the user's trust and reliance on the AI are appropriate, and finally, (6) assess how the human-XAI work system performs. The methods we present derive from our integration of extensive research literatures and our own psychometric evaluations. We point to the previous research that led to the measurement scales, which we aggregated and tailored specifically for the XAI context. Scales are presented in sufficient detail to enable their use by XAI researchers. For mental model assessment and work system performance, XAI researchers have choices. We point to a number of methods, expressed in terms of their strengths and weaknesses, and discuss pertinent measurement issues.
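    The paper's own scales are given in the full text; purely as a generic illustration of how self-report scales of this kind are typically scored, the sketch below reverse-scores negatively worded items and averages the responses. The item count, reverse-scored indices, and scale range are assumptions, not the paper's actual instrument.

```python
import numpy as np

LIKERT_MAX = 5          # 1 = strongly disagree ... 5 = strongly agree (assumed)
REVERSE_ITEMS = {2, 5}  # hypothetical negatively worded items (0-based indices)

def score_trust_scale(responses: list[int]) -> float:
    """Return the mean item score after reverse-scoring negative items."""
    adjusted = [
        (LIKERT_MAX + 1 - r) if i in REVERSE_ITEMS else r
        for i, r in enumerate(responses)
    ]
    return float(np.mean(adjusted))

# Example: one participant's answers to a hypothetical 6-item trust scale
print(score_trust_scale([4, 5, 2, 4, 3, 1]))  # higher = more self-reported trust
```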

    Individual Differences in Attributes of Trust in Automation: Measurement and Application to System Design

    Computer-based automation of sensing, analysis, memory, decision-making, and control in industrial, business, medical, scientific, and military applications is becoming increasingly sophisticated, employing various techniques of artificial intelligence for learning, pattern recognition, and computation. Research has shown that proper use of automation is highly dependent on operator trust. As a result, the topic of trust has become an active subject of research and discussion in the applied disciplines of human factors and human-systems integration. While various papers have pointed to the many factors that influence trust, there currently exists no consensual definition of trust. This paper reviews previous studies of trust in automation with emphasis on its meaning and on the factors determining subjective assessments of trust and of automation trustworthiness (which is sometimes, but not always, regarded as an objectively measurable property of the automation). The paper asserts that certain attributes normally associated with human morality can usefully be applied to computer-based automation as it becomes more intelligent and more responsive to its human user. The paper goes on to suggest that the automation, based on its own experience with the user, can develop reciprocal attributes that characterize its own trust of the user and adapt accordingly. This situation can be modeled as a formal game in which the automation user and the automation (computer) engage one another according to a payoff matrix of utilities (benefits and costs). While this is a concept paper lacking empirical data, it offers hypotheses by which future researchers can test for individual differences in the detailed attributes of trust in automation, and determine criteria for adjusting automation design to best accommodate these user differences.
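    The abstract proposes modeling user-automation interaction as a formal game over a payoff matrix of utilities. The sketch below illustrates that idea with an invented 2 × 2 reliance game and finds its pure-strategy equilibria; the action labels and payoff values are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of a payoff-matrix game between a user (who chooses whether
# to rely on the automation) and the automation (which chooses whether to act
# autonomously or defer to the user). All utility values are invented.
USER_ACTIONS = ["rely", "not_rely"]
AUTOMATION_ACTIONS = ["act", "defer"]

# payoff[user_action][automation_action] = (utility to user, utility to automation)
payoff = {
    "rely":     {"act": (3, 3), "defer": (1, 2)},
    "not_rely": {"act": (0, 1), "defer": (2, 2)},
}

def best_response_user(automation_action: str) -> str:
    """User's best action given the automation's action."""
    return max(USER_ACTIONS, key=lambda u: payoff[u][automation_action][0])

def best_response_automation(user_action: str) -> str:
    """Automation's best action given the user's action."""
    return max(AUTOMATION_ACTIONS, key=lambda a: payoff[user_action][a][1])

# A pure-strategy Nash equilibrium is a pair where each side best-responds
# to the other's choice.
for u in USER_ACTIONS:
    for a in AUTOMATION_ACTIONS:
        if best_response_user(a) == u and best_response_automation(u) == a:
            print(f"equilibrium: user={u}, automation={a}, payoffs={payoff[u][a]}")
```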

    MyDigitalFootprint.ORG: Young People and the Proprietary Ecology of Everyday Data

    Young people are the canaries in our contemporary data mine. They are at the forefront of complex negotiations over privacy, property, and security in environments saturated with information systems. The productive and entertaining promises of proprietary media have led to widespread adoption among youth, whose daily activities now generate troves of data that are mined for governance and profit. As they text, email, network, and search within these proprietary ecologies, young people's identity configurations link up with modes of capitalist production. The MyDigitalFootprint.ORG Project was thus initiated to unpack and engage young people's material social relations with/in proprietary ecologies through participatory action design research. The project began by interviewing New Yorkers ages 14-19. Five of these interviewees then participated as co-researchers in a Youth Design and Research Collective (YDRC) to analyze interview findings through the collaborative design of an open source social network. In taking a medium as our method, co-researchers took on the role of social network producers and gained new perspectives otherwise mystified to consumers. Considering my work with the YDRC, I argue that involving youth in designing information ecologies fosters critical capacities for participating in acts of research and knowledge production. More critical participation in these ecologies, even proprietary ones, is necessary for opening opaque aspects of our environment and orienting data circulation toward more equitable and just ends.

    Skin lesions classification using convolutional neural networks in clinical images

    Undergraduate final project (Trabalho de Conclusão de Curso), Universidade de Brasília, Faculdade UnB Gama, 2018. Skin lesions are conditions that appear on a patient due to many different reasons. One of these can be an abnormal growth in skin tissue, defined as cancer. This disease afflicts more than 14.1 million patients and has been the cause of more than 8.2 million deaths worldwide. A solution capable of aiding early diagnosis may therefore save lives and cut treatment costs. This work proposes the construction of a classification model for 12 lesions, 4 of them malignant, including Malignant Melanoma and Basal Cell Carcinoma. We use a pre-trained ResNet-152 architecture, which was then fine-tuned on 88,090 augmented images produced with different transformations. The predictions were analyzed with the GradCAM method to generate visual explanations, which were consistent with prior knowledge and general good practices for explanations. Finally, the network was tested on 956 original images and achieved an area under the curve (AUC) of 0.96 for Melanoma and 0.91 for Basal Cell Carcinoma, comparable to state-of-the-art results.
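    As a rough sketch of the pipeline the abstract describes, assuming PyTorch/torchvision as the framework: a pre-trained ResNet-152 whose classification head is replaced for 12 lesion classes, together with an illustrative augmentation pipeline and a single training step. The augmentation choices, hyperparameters, and batch are placeholders, not the thesis's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Pre-trained ResNet-152 with its 1000-class ImageNet head replaced by a
# 12-class skin-lesion head (as in the approach the abstract describes).
model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 12)

# Example augmentation pipeline for training images (illustrative only).
train_tfms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
model.train()
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 12, (4,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```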

    Technology with Embodied Physical Actions: Understanding Interactions and Effectiveness Gains in Teams Working with Robots

    Teams in different areas are increasingly adopting robots to perform various mission operations. The inclusion of robots in teams has drawn consistent attention from scholars in relevant fields such as human-computer interaction (HCI) and human-robot interaction (HRI). Yet the current literature has not fully addressed issues regarding teamwork, focusing mainly on the collaboration between a single robot and an individual. The limited scope of human-robot collaboration in the existing research hinders uncovering the mechanism of performance gains in teams that involve multiple robots and people. This dissertation research is an effort to address the issue by achieving two goals. First, this dissertation examines the impacts of interaction among human teammates alone and interaction between humans and robots on outcomes in teams working with robots. Second, I provide insight into the development of teams working with robots by examining ways to promote a team member’s intention to work with robots. In this dissertation, I conducted three studies in an endeavor to accomplish these goals. The first study, in Chapter 2, turns to the theory of trust in teams to explain outcome gains in teams working with robots. This study reports results from a lab experiment in which two people completed a collaborative task using two robots. The results show that trust in robots and trust in teammates can be enhanced by a robot-building activity and team identification, respectively. The enhanced trust revealed unique impacts on different team outcomes: trust in robots increased only team performance, while trust in teammates increased only satisfaction. Theoretical and practical contributions of the findings are discussed in the chapter. The second study, in Chapter 3, uncovers how team members’ efficacy beliefs interplay with team diversity to promote performance in teams working with robots. Results from a lab experiment reveal that an individual operator’s performance is enhanced by team potency perception only when the team is ethnically diverse. This study contributes to theory by identifying team diversity as a limiting condition of performance gains for robot operators in teams. The third study, in Chapter 4, focuses on factors leading to the development of teams working with robots. I conducted an online experiment to examine how surface-level and deep-level similarity contribute to trust in a robotic partner, and how that trust affects a team member’s intention to work with the robot under varying degrees of danger. This study generally shows that the possibility of danger regulates not only the positive link between surface-level similarity and trust in the robot but also the link between intention to work with the robot and intention to replace a human teammate with the robot. Chapter 5, as the concluding chapter of this dissertation, discusses the theoretical and practical implications drawn from the three studies.
    PhD, Information, University of Michigan, Horace H. Rackham School of Graduate Studies
    https://deepblue.lib.umich.edu/bitstream/2027.42/138514/1/sangyou_1.pd