7,386 research outputs found

    Thinking Technology as Human: Affordances, Technology Features, and Egocentric Biases in Technology Anthropomorphism

    Advanced information technologies (ITs) are increasingly assuming tasks that previously required human capabilities, such as learning and judgment. What drives this technology anthropomorphism (TA), i.e., the attribution of humanlike characteristics to IT? What is it about users, IT, and their interactions that influences the extent to which people think of technology as humanlike? And while TA can have positive effects, such as increasing user trust in technology, what are its negative consequences? To provide a framework for addressing these questions, we advance a theory of TA that integrates the general three-factor theory of anthropomorphism from social and cognitive psychology with the needs-affordances-features perspective from the information systems (IS) literature. The theory we construct helps to explain and predict which technology features and affordances are likely (1) to satisfy users’ psychological needs and (2) to lead to TA. More importantly, we problematize some negative consequences of TA. Technology features and affordances contributing to TA can intensify users’ anchoring on their elicited agent knowledge and psychological needs, and can weaken the adjustment process in TA under cognitive load. The intensified anchoring and weakened adjustment increase egocentric biases that lead to negative consequences. Finally, we propose a research agenda for TA and egocentric biases.

    Representing and Parameterizing Agent Behaviors

    The last few years have seen great maturation in understanding how to use computer graphics technology to portray 3D embodied characters, or virtual humans. Unlike the off-line, animator-intensive methods used in the special effects industry, real-time embodied agents are expected to exist and interact with us live. They can represent other people or function as autonomous helpers, teammates, or tutors, enabling novel interactive educational and training applications. We should be able to interact and communicate with them through modalities we already use, such as language, facial expressions, and gesture. Various aspects of and issues in real-time virtual humans are discussed, including consistent parameterizations for gesture and facial actions using movement observation principles, and the representational basis for character believability, personality, and affect. We also describe a Parameterized Action Representation (PAR) that allows an agent to act, plan, and reason about its actions or the actions of others. Besides embodying the semantics of human action, the PAR is designed for building future behaviors into autonomous agents and for controlling the animation parameters that portray personality, mood, and affect in an embodied agent.
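
    To make the idea of a parameterized action concrete, here is a minimal sketch of what a PAR-like action record for an embodied agent might look like. The field and method names are illustrative assumptions, not the published PAR schema.

```python
# Hypothetical sketch of a PAR-like action record. Field and method names
# are illustrative assumptions, not the published PAR schema.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class PAR:
    """A parameterized action an agent can execute, plan with, or reason about."""
    name: str                                    # e.g. "wave", "open-door"
    agent: str                                   # the performing character
    objects: List[str] = field(default_factory=list)       # other participants
    preconditions: Optional[Callable[[], bool]] = None     # must hold before acting
    termination: Optional[Callable[[], bool]] = None       # signals completion
    subactions: List["PAR"] = field(default_factory=list)  # decomposition for planning
    manner: dict = field(default_factory=dict)   # animation modifiers: mood, effort, speed

    def ready(self) -> bool:
        """The agent may start the action once its preconditions hold."""
        return self.preconditions() if self.preconditions else True

# Example: a cheerful greeting gesture an embodied tutor could perform.
wave = PAR(name="wave", agent="tutor", manner={"mood": "cheerful", "speed": 1.2})
assert wave.ready()
```

    Packaging preconditions, decomposition, and manner modifiers in one record is what lets the same representation serve both planning (via subactions and preconditions) and expressive animation (via manner).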

    Robotic Psychology. What Do We Know about Human-Robot Interaction and What Do We Still Need to Learn?

    “Robotization”, the integration of robots into human life, will change human life drastically. In many situations, such as in the service sector, robots will become an integral part of our lives. Thus, it is vital to learn from extant research on human-robot interaction (HRI). This article introduces robotic psychology, which aims to bridge the gap between humans and robots by providing insights into the particularities of HRI. It presents a conceptualization of robotic psychology and provides an overview of research on service-focused human-robot interaction. Theoretical concepts relevant to understanding HRI are reviewed, and major achievements, shortcomings, and propositions for future research are discussed.

    From Affect Theoretical Foundations to Computational Models of Intelligent Affective Agents

    The links between emotions and rationality have been extensively studied and discussed. Several computational approaches have also been proposed to model these links. However, is it possible to build generic computational approaches and languages that can be "adapted" when a specific affective phenomenon is being modeled? Would these approaches be sufficiently and properly grounded? In this work, we aim to provide the means for developing such generic approaches and languages by making a horizontal analysis, inspired by philosophical and psychological theories, of the main affective phenomena that are traditionally studied. Unfortunately, not all affective theories can be adapted for use in computational models; therefore, it is necessary to analyze the most suitable theories. In this analysis, we identify and classify the main processes and concepts that can be used in a generic affective computational model, and we propose a theoretical framework that includes all the processes and concepts that a model of an affective agent with practical reasoning could use. Our generic theoretical framework supports incremental research whereby future proposals can improve on previous ones. The framework also supports evaluating the coverage of current computational approaches according to the processes they model and to how they integrate practical reasoning with affect-related issues. The framework is being used in the development of the GenIA3 architecture.
    This work is partially supported by Spanish Government project PID2020-113416RB-I00, GVA-CEICE project PROMETEO/2018/002, and TAILOR, a project funded by the EU Horizon 2020 research and innovation programme under GA No 952215.
    Alfonso, B.; Taverner-Aparicio, J. J.; Vivancos, E.; Botti, V. (2021). From Affect Theoretical Foundations to Computational Models of Intelligent Affective Agents. Applied Sciences, 11(22), 1-29. https://doi.org/10.3390/app112210874
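
    As a rough illustration of how such a framework might interleave appraisal processes with practical reasoning, consider the sketch below. The stage names (appraise, update_affect, deliberate) and the mood-update rule are assumptions drawn from generic appraisal theories, not the GenIA3 architecture's actual design.

```python
# Illustrative affective practical-reasoning loop; stage names and the
# mood-update rule are assumptions, not the GenIA3 architecture's API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AffectiveState:
    mood: float = 0.0                              # crude valence in [-1, 1]
    emotions: dict = field(default_factory=dict)   # emotion label -> intensity

class AffectiveAgent:
    def __init__(self) -> None:
        self.beliefs: dict = {}        # event -> appraisal-relevant belief
        self.affect = AffectiveState()

    def appraise(self, event: str) -> float:
        # Appraisal: judge an event's valence against current beliefs/goals.
        return 1.0 if self.beliefs.get(event) == "desirable" else -0.5

    def update_affect(self, valence: float) -> None:
        # Mood decays toward neutral and shifts with each new appraisal.
        self.affect.mood = max(-1.0, min(1.0, 0.8 * self.affect.mood + 0.2 * valence))

    def deliberate(self, options: List[str]) -> str:
        # Practical reasoning biased by affect: a negative mood selects the
        # cautious option (listed first), a positive mood the ambitious one.
        return options[0] if self.affect.mood < 0 else options[-1]

# One perceive-appraise-act cycle.
agent = AffectiveAgent()
agent.beliefs["user_smiled"] = "desirable"
agent.update_affect(agent.appraise("user_smiled"))
print(agent.deliberate(["ask_politely", "suggest_enthusiastically"]))
```

    The point of a generic framework is that each stage here (appraisal, affect dynamics, affect-biased deliberation) can be swapped for a module grounded in a different affective theory without changing the surrounding loop.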

    A matter of consequences: Understanding the effects of robot errors on people's trust in HRI

    A review of the literature on acceptance and trust in human-robot interaction (HRI) reveals a number of open questions that need to be addressed in order to establish effective collaborations between humans and robots in real-world applications. In particular, we identified four principal open areas that should be investigated to create guidelines for the successful deployment of robots in the wild: 1) the robot's abilities and limitations, in particular when it makes errors with consequences of differing severity; 2) individual differences; 3) the dynamics of human-robot trust; and 4) the interaction between humans and robots over time. In this paper, we present two very similar studies, one with a virtual robot with human-like abilities and one with a physical Care-O-bot 4 robot. In the first study, we created an immersive narrative using an interactive storyboard to collect the responses of 154 participants. In the second study, 6 participants had repeated interactions with a physical robot over three weeks. We summarise and discuss the findings of our investigations into the effects of robots' errors on people's trust in robots, with a view to designing mechanisms that allow robots to recover from a breach of trust. In particular, we observed that robots' errors had a greater impact on people's trust in the robot when the errors were made at the beginning of the interaction and had severe consequences. Our results also provide insights into how these effects vary according to individuals’ personalities, expectations, and previous experiences.
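
    A toy update rule makes the two reported effects concrete: errors cost more trust when they occur early and when their consequences are severe. The functional form and constants below are illustrative assumptions, not a model fitted in these studies.

```python
# Toy trust-update rule illustrating the primacy and severity effects
# reported above; form and constants are illustrative assumptions only.
def update_trust(trust: float, step: int, error: bool, severity: float = 0.0) -> float:
    """Return trust in [0, 1] after one interaction step (step is 1-indexed)."""
    if error:
        primacy = 1.0 / step               # early errors (small step) weigh more
        trust -= 0.4 * severity * primacy  # severity in [0, 1] scales the penalty
    else:
        trust += 0.05 * (1.0 - trust)      # slow recovery over error-free steps
    return min(1.0, max(0.0, trust))

# A severe error at step 1 costs far more trust than the same error at step 10.
print(update_trust(0.8, step=1, error=True, severity=1.0))   # 0.4
print(update_trust(0.8, step=10, error=True, severity=1.0))  # 0.76
```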

    Negative Consequences of Anthropomorphized Technology: A Bias-Threat-Illusion Model

    Attributing human-like traits to information technology (IT), which leads to what is called anthropomorphized technology (AT), is increasingly common among users of IT. Previous IS research has offered varying perspectives on AT, although it has primarily focused on the positive consequences. This paper aims to clarify the construct of AT and proposes a “bias–threat–illusion” model to classify the negative consequences of AT. Drawing on the “three-factor theory of anthropomorphism” from social psychology and integrating self-regulation theory, we propose that failing to regulate the use of elicited agent knowledge and to control the intensified psychological needs (i.e., sociality and effectance) when interacting with AT leads to three negative consequences: transferring human bias, inducing threat to human agency, and creating illusionary relationships. Based on this bias–threat–illusion model, we propose theory-driven remedies to attenuate these negative consequences. We conclude with implications for IS theory and practice.