5,512 research outputs found

    ABC-EBDI: A cognitive-affective framework to support the modeling of believable intelligent agents.

    The Advanced Interfaces Research Group (AffectiveLab) is a group recognized by the Government of Aragón (T60-20R) whose activity falls within the area of Human-Computer Interaction (HCI). In recent years, its research activity has focused on four main topics: natural interaction, affective computing, accessibility, and interfaces based on intelligent agents, the last of which frames this doctoral thesis. More specifically, this thesis was carried out within the national research projects JUGUEMOS (TIN2015-67149-C3-1R) and PERGAMEX (RTI2018-096986-B-C31). One of the group's research lines centers on the development of cognitive-affective architectures to support the affective modeling of intelligent agents. The AffectiveLab has solid experience in the use of embodied interface agents that exhibit bodily and facial affective expressions (Baldassarri et al., 2008). In recent years, it has focused on modeling the behavior of intelligent agents (Pérez et al., 2017). The definition of an intelligent agent is a controversial topic, but it can be described as an autonomous entity that receives dynamic information from the environment through sensors and acts on the environment through actuators, exhibiting goal-directed behavior (Russell et al., 2003). The modeling of cognitive processes in intelligent agents is based on different theories (Moore, 1980; Newell, 1994; Bratman, 1987) that explain, from different points of view, the workings of the human mind. Intelligent agents implemented on the basis of a cognitive theory are known as cognitive agents. The most developed are those based on cognitive architectures, such as Soar (Laird et al., 1987), ACT-R (Anderson, 1993), and BDI (Rao and Georgeff, 1995).
    Compared with Soar and other complex architectures, BDI stands out for its simplicity and versatility. BDI offers several features that make it popular, such as its ability to explain the agent's behavior at every moment, making dynamic interaction with the environment possible. Owing to its growing popularity, the BDI framework has been widely used to support the modeling of intelligent agents (Larsen, 2019; Cranefield and Dignum, 2019). In recent years, BDI proposals that integrate affective aspects have also appeared. Intelligent agents built on the BDI architecture that also incorporate affective capabilities are known as EBDI (Emotional BDI) agents and are the focus of this thesis. The main objective of this thesis has been to propose a BDI-based cognitive-affective framework that supports the cognitive-affective modeling of intelligent agents. The aim is to reproduce believable human behavior in complex situations where human behavior is varied and rather unpredictable. The proposed objective has been successfully achieved in the terms described below:
    • A comprehensive state of the art has been compiled on the affective models most widely used to model affective aspects in intelligent agents.
    • BDI architectures and previous EBDI proposals have been studied. The study, which resulted in a publication (Sánchez-López and Cerezo, 2019), made it possible to identify the open issues in the area and the need to consider all aspects of affectivity (emotions, mood, personality) and their influence on all cognitive stages. The framework resulting from this doctoral work also includes the modeling of conduct and communicative behavior, which had not previously been considered in the modeling of intelligent agents. These aspects place the resulting framework among the most advanced EBDI frameworks in the literature.
    • A BDI-based framework, named ABC-EBDI (Sánchez et al., 2020; Sánchez et al., 2019), has been designed and implemented to support the cognitive, affective, and behavioral modeling of intelligent agents. It is the first application of a well-known psychological model, Ellis's ABC model, to the simulation of realistic human-like intelligent agents. This application involves:
    o The extension of the concept of beliefs. The framework considers three types of beliefs: basic beliefs, context beliefs, and operant behaviors. Basic beliefs represent the general information the agent has about itself and the environment. Operant behaviors make it possible to model the agent's reactive conduct through learned behaviors. Context beliefs, represented as cold and hot cognitions, are processed and classified into irrational and rational beliefs following Ellis's ideas. It is this consideration of irrational/rational beliefs that opens the door to the simulation of realistic human reactions.
    o The unified management of the consequences of events, in terms of both affective and behavioral (conduct) consequences. Rational context beliefs lead to functional emotions and adaptive conduct, whereas irrational context beliefs lead to dysfunctional emotions and maladaptive conduct. This functional/dysfunctional character of emotions had never before been used in the context of BDI. In addition, behavioral modeling has been extended with the modeling of communicative styles, based on the Satir model, which had also not previously been applied to the modeling of intelligent agents. The Satir model considers body gestures, facial expressions, voice, intonation, and linguistic structures.
    • A use case, "I wish I had better news", was chosen for the application of the proposed framework, and two types of evaluation were carried out, by experts and by users. The evaluation confirmed the great potential of the proposed framework for reproducing realistic and believable human behavior in complex situations.
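    The rational/irrational appraisal described in the abstract can be sketched in a few lines. This is an illustrative toy only: the class names, the boolean classification, and the appraisal rule are assumptions for exposition, not the thesis's implementation of ABC-EBDI.

```python
# Hypothetical sketch of an ABC-EBDI-style appraisal step: a context belief,
# classified as rational or irrational following Ellis's ABC model, leads to a
# functional/dysfunctional emotion and adaptive/maladaptive conduct.
# All names and rules here are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class ContextBelief:
    content: str
    irrational: bool  # result of the rational/irrational classification


@dataclass
class Agent:
    basic_beliefs: dict = field(default_factory=dict)      # general self/world knowledge
    operant_behaviors: dict = field(default_factory=dict)  # learned reactive conduct

    def appraise(self, belief: ContextBelief) -> tuple:
        # Rational belief -> functional emotion + adaptive conduct;
        # irrational belief -> dysfunctional emotion + maladaptive conduct.
        if belief.irrational:
            return ("dysfunctional", "maladaptive")
        return ("functional", "adaptive")


agent = Agent()
emotion, conduct = agent.appraise(ContextBelief("I must never fail", irrational=True))
print(emotion, conduct)  # dysfunctional maladaptive
```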

    Representing decision-makers using styles of behavior: an approach designed for group decision support systems

    Supporting decision-making processes when the members of a group are geographically dispersed and on a tight schedule is a complex task. Aiming to support decision-makers anytime and anywhere, Web-based group decision support systems have been studied. However, the limitations in the decision-makers' interactions associated with this scenario bring new challenges. In this work, we propose a set of behavioral styles from which decision-makers' intentions can be modelled into agents. The goal is that, besides having agents represent typical preferences of the decision-makers (towards alternatives and criteria), they can also represent their intentions. To do so, we conducted a survey with 64 participants in order to find homogeneous operating values so as to numerically define the proposed behavioral styles in four dimensions. In addition, we also propose a communication model that simulates the dialogues made by decision-makers in face-to-face meetings. We developed a prototype to simulate decision scenarios and found that agents are capable of acting according to the decision-makers' intentions and fundamentally benefit from different possible behavioral styles, just as a face-to-face meeting benefits from the heterogeneity of its participants. This work was supported by the COMPETE Programme (operational programme for competitiveness) within Project POCI-01-0145-FEDER-007043, by National Funds through the FCT – Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within the Projects UID/CEC/00319/2013 and UID/EEA/00760/2013, and by the Ph.D. grants SFRH/BD/89697/2012 and SFRH/BD/89465/2012 attributed to João Carneiro and Pedro Saraiva, respectively.
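    The idea of defining a behavioral style numerically in four dimensions can be sketched as follows. The dimension names and values below are invented placeholders, not the dimensions or operating values found in the paper's survey; the sketch only illustrates matching a decision-maker's profile to its nearest style.

```python
# Illustrative toy: behavioral styles defined as points in four numeric
# dimensions, and a decision-maker profile assigned to the closest style.
# Dimension names and values are made-up assumptions, not the paper's.
from dataclasses import dataclass


@dataclass(frozen=True)
class BehavioralStyle:
    name: str
    assertiveness: float   # each dimension is a value in [0, 1]
    cooperativeness: float
    activity: float
    openness: float


DIMENSIONS = ("assertiveness", "cooperativeness", "activity", "openness")


def closest_style(profile, styles):
    """Pick the style whose four dimensions best match the profile (squared distance)."""
    def dist(style):
        return sum((getattr(style, d) - profile[d]) ** 2 for d in DIMENSIONS)
    return min(styles, key=dist)


styles = [
    BehavioralStyle("dominating", 0.9, 0.2, 0.8, 0.4),
    BehavioralStyle("obliging",   0.2, 0.9, 0.5, 0.6),
]
profile = {"assertiveness": 0.8, "cooperativeness": 0.3, "activity": 0.7, "openness": 0.5}
print(closest_style(profile, styles).name)  # dominating
```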

    Embedding Intelligence. Designerly reflections on AI-infused products

    Artificial intelligence is more-or-less covertly entering our lives and houses, embedded into products and services that are acquiring novel roles and agency on users. Products such as virtual assistants represent the first wave of materialization of artificial intelligence in the domestic realm and beyond. They are new interlocutors in an emerging redefined relationship between humans and computers. They are agents, with miscommunicated or unclear properties, performing actions to reach human-set goals. They embed capabilities that industrial products never had. They can learn users’ preferences and accordingly adapt their responses, but they are also powerful means to shape people’s behavior and build new practices and habits. Nevertheless, the way these products are used is not fully exploiting their potential, and frequently they entail poor user experiences, relegating their role to gadgets or toys. Furthermore, AI-infused products need vast amounts of personal data to work accurately, and the gathering and processing of this data are often obscure to end-users. As well, how, whether, and when it is preferable to implement AI in products and services is still an open debate. This condition raises critical ethical issues about their usage and may dramatically impact users’ trust and, ultimately, the quality of user experience. The design discipline and the Human-Computer Interaction (HCI) field are just beginning to explore the wicked relationship between Design and AI, looking for a definition of its borders, still blurred and ever-changing. The book approaches this issue from a human-centered standpoint, proposing designerly reflections on AI-infused products. It addresses one main guiding question: what are the design implications of embedding intelligence into everyday objects?

    Uncovering Drivers for the Integration of Dark Patterns in Conversational Agents

    Today, organizations increasingly utilize conversational agents (CAs), which are smart technologies that converse in a human-to-human interaction style. CAs are very effective in guiding users through digital environments. However, this makes them natural targets for dark patterns, which are user interface design elements that infringe on user autonomy by fostering uninformed decisions. Integrating dark patterns into CAs has tremendous impacts on supposedly free user choices in the digital space. Thus, we conducted a qualitative study consisting of semi-structured interviews with developers to investigate drivers of dark patterns in CAs. Our findings reveal six drivers for the implementation of dark patterns. The technical drivers include heavy guidance by CAs during the conversation and the CAs' data collection potential. The organizational drivers are assertive stakeholder dominance and time pressure during the development process. The team drivers comprise a deficient user understanding and an inexperienced team.

    Negative Consequences of Anthropomorphized Technology: A Bias-Threat-Illusion Model

    Attributing human-like traits to information technology (IT), leading to what is called anthropomorphized technology (AT), is increasingly common among users of IT. Previous IS research has offered varying perspectives on AT, although it primarily focuses on the positive consequences. This paper aims to clarify the construct of AT and proposes a “bias–threat–illusion” model to classify the negative consequences of AT. Drawing on the “three-factor theory of anthropomorphism” from social psychology and integrating self-regulation theory, we propose that failing to regulate the use of elicited agent knowledge and to control the intensified psychological needs (i.e., sociality and effectance) when interacting with AT leads to negative consequences: “transferring human bias,” “inducing threat to human agency,” and “creating illusionary relationships.” Based on this bias–threat–illusion model, we propose theory-driven remedies to attenuate these negative consequences. We conclude with implications for IS theories and practice.

    Conversational commerce: entering the next stage of AI-powered digital assistants

    Digital assistants are a recent advancement enabled by data-driven innovation. Although digital assistants have become an integral part of user conversations, there is no theory that relates user perception to this AI-powered technology. The purpose of this research is to investigate the role of technology attitude and AI attributes in enhancing purchase intention through digital assistants. A conceptual model is proposed after identifying three major AI factors, namely perceived anthropomorphism, perceived intelligence, and perceived animacy. To test the model, the study employed structural equation modeling with a sample of 440. The results indicate that perceived anthropomorphism plays the most significant role in building a positive attitude and purchase intention through digital assistants. Although the study is built using technology-related variables, the hypotheses are proposed based on various psychology-related theories, such as uncanny valley theory, the theory of mind, developmental psychology, and cognitive psychology theory. The study’s theoretical contributions are discussed within the scope of these theories. Beyond its theoretical contribution, the study also offers illuminating practical implications for the benefit of developers and marketers.

    Developing a Personality Model for Speech-based Conversational Agents Using the Psycholexical Approach

    We present the first systematic analysis of personality dimensions developed specifically to describe the personality of speech-based conversational agents. Following the psycholexical approach from psychology, we first report on a new multi-method approach to collect potentially descriptive adjectives from 1) a free description task in an online survey (228 unique descriptors), 2) an interaction task in the lab (176 unique descriptors), and 3) a text analysis of 30,000 online reviews of conversational agents (Alexa, Google Assistant, Cortana) (383 unique descriptors). We aggregate the results into a set of 349 adjectives, which are then rated by 744 people in an online survey. A factor analysis reveals that the commonly used Big Five model for human personality does not adequately describe agent personality. As an initial step to developing a personality model, we propose alternative dimensions and discuss implications for the design of agent personalities, personality-aware personalisation, and future research.
    Comment: 14 pages, 2 figures, 3 tables, CHI'2
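    The multi-method aggregation step described above can be illustrated in miniature: descriptors from the three sources are normalized and deduplicated into one candidate set before rating. The adjectives below are invented examples, not the paper's data; in the study, 228 + 176 + 383 unique descriptors were aggregated into a final set of 349.

```python
# Toy illustration of aggregating adjective descriptors from three sources
# (free description, lab interaction, review mining) into one candidate set.
# The words are made-up examples.
free_description = {"Friendly", "helpful", "robotic"}
lab_interaction = {"helpful", "slow", "polite"}
review_mining = {"ROBOTIC", "creepy", "polite"}


def normalize(descriptors):
    """Lowercase and strip descriptors so duplicates across sources collapse."""
    return {d.strip().lower() for d in descriptors}


candidates = normalize(free_description) | normalize(lab_interaction) | normalize(review_mining)
print(sorted(candidates))
# ['creepy', 'friendly', 'helpful', 'polite', 'robotic', 'slow']
```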

    Conversational AI Agents: Investigating AI-Specific Characteristics that Induce Anthropomorphism and Trust in Human-AI Interaction

    The investment in AI agents has steadily increased over the past few years, yet the adoption of these agents has been uneven. Industry reports show that the majority of people do not trust AI agents with important tasks. While existing IS theories explain users’ trust in IT artifacts, several new studies have raised doubts about the applicability of current theories in the context of AI agents. At first glance, an AI agent might seem like any other technological artifact. However, a more in-depth assessment exposes some fundamental characteristics that make AI agents different from previous IT artifacts. The aim of this dissertation, therefore, is to identify the AI-specific characteristics and behaviors that hinder and contribute to trust and distrust, thereby shaping users’ behavior in human-AI interaction. Using a custom-developed conversational AI agent, this dissertation extends the human-AI literature by introducing and empirically testing six new constructs, namely, AI indeterminacy, task fulfillment indeterminacy, verbal indeterminacy, AI inheritability, AI trainability, and AI freewill.

    Be a Miracle - Designing Conversational Agents to Influence Users’ Intention Regarding Organ Donation

    The increasing need for organ donations remains a worldwide challenge as transplant waiting lists grow and donation rates persist at constant levels. The increasing popularity of conversational agents (CAs) has prompted new strategies for educating and persuading individuals to adjust their cognitive and behavioral beliefs and become donors. However, how CAs should be designed to modify uninformed users’ intention to donate remains unclear. Against this background, we conducted an online experiment (N=134) to examine the impact of a human-like CA design on users’ intention to become organ donors. Based on the three-factor theory of anthropomorphism and the elaboration likelihood model, we derive three theoretical mechanisms to understand the influence of a CA’s human-like design on users’ intention to donate. The findings show that perceived anthropomorphism does not directly impact persuasion and empathy but is mediated via perceived usefulness to influence the intention to donate.