10 research outputs found

    A software framework for simulation studies of interaction models in agent teamwork.

    This thesis proposes a new software framework that facilitates the study of agent interaction models in early development stages from a designer's perspective. Its purpose is to help reduce the design decision space through simulation experiments that provide early feedback on the comparative performance of alternative solutions. This is achieved through interactive concurrent simulation of multiple teams in a representative microworld context. The generic simulator architecture accommodates an open class of different microworlds and permits multiple communication mechanisms. It also supports interoperability with other software tools, distributed simulation, and various extensions. The framework was validated in the context of two different research projects on helpful behavior in agent teams: the Mutual Assistance Protocol, based on rational criteria for help, and the Empathic Help Model, based on a concept of empathy for artificial agents. The results show that the framework meets its design objectives and provides the flexibility needed for research experimentation. --Leaf i. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b184472

    A model of empathy for artificial agent teamwork.

    Get PDF
    This thesis introduces a model of empathy as a basis for helpful behaviour in teams consisting purely of artificial agents that collaborate on practical problem-solving tasks, and investigates whether the performance of such teams can benefit from empathic help between members, as the analogy with human teams might suggest. Guided by existing models of natural empathy in psychology and neuroscience, it identifies the potential empathy factors for artificial agents, as well as the mechanisms by which they produce affective and behavioural responses. The performance of empathic agent teams situated in a microworld similar to the Coloured Trails game is studied through simulation experiments, with the model parameters optimized by a genetic algorithm. For low to moderate levels of random disturbance in the environment, empathic help is superior to random help, and it outperforms rational help as rational decision complexity grows, in particular at higher levels of environmental disturbance. --P. ii. The original print copy of this thesis may be available here: http://wizard.unbc.ca/record=b180582
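The genetic-algorithm parameter optimization mentioned in the abstract can be illustrated with a minimal, generic sketch. This is not the thesis's actual implementation: `fitness` is a stand-in for its team-performance simulation, parameters are assumed to lie in [0, 1], and there are assumed to be at least two of them.

```python
import random

def optimize(fitness, n_params, pop_size=20, generations=50,
             mut_rate=0.1, mut_sigma=0.1, seed=0):
    """Maximize fitness over parameter vectors in [0, 1]^n_params
    with an elitist genetic algorithm (assumes n_params >= 2)."""
    rng = random.Random(seed)
    # Initial population: random parameter vectors.
    pop = [[rng.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_params)
            child = a[:cut] + b[cut:]         # one-point crossover
            # Gaussian mutation per gene, clipped back into [0, 1].
            child = [min(1.0, max(0.0, g + rng.gauss(0, mut_sigma)))
                     if rng.random() < mut_rate else g
                     for g in child]
            children.append(child)
        pop = elite + children                # elitism: the best always survives
    return max(pop, key=fitness)
```

For example, with `fitness = lambda g: -sum((x - 0.5) ** 2 for x in g)` the loop drives the parameter vector toward (0.5, 0.5, ...); in the thesis's setting, evaluating `fitness` would instead mean running a team simulation with those help-model parameters.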

    Extended loneliness. When hyperconnectivity makes us feel alone

    In this paper, I analyse a specific kind of loneliness that can be experienced in the networked life, namely "extended loneliness". I claim that loneliness, conceived of as stemming from a lack of satisfying relationships with others, can arise from an abundance of connections in the online sphere. Extended loneliness, in these cases, does not result from a lack of connections to other people. On the contrary, it consists in the complex affective experience of both lacking and longing for meaningful relationships while being connected to many people online. The recursive interaction with a digital assistant in a smart flat is my key example for defining the contours of this specific kind of loneliness that emerges when hyperconnectivity becomes pervasive in the user's daily life. Drawing on Sherry Turkle's work and employing the conceptual framework of the extended mind, I analyse the specific characteristics of extended loneliness and explore its phenomenology.

    Influence of anthropomorphic agent on human empathy through games

    The social acceptance of AI agents, including intelligent virtual agents and physical robots, is becoming more important for the integration of AI into human society. Although the agents used in human society share various tasks with humans, their cooperation may frequently reduce task performance. One way to improve the relationship between humans and AI agents is to have humans empathize with the agents. By empathizing, humans feel positively and kindly toward agents, which makes it easier to accept them. In this study, we focus on tasks in which humans and agents have various interactions together, and we investigate the properties of agents that significantly influence human empathy toward the agents. To investigate the effects of task content, difficulty, task completion, and an agent's expression on human empathy, two experiments were conducted. The results of the two experiments showed that human empathy toward the agent was difficult to maintain with task factors alone, and that the agent's expression was able to maintain human empathy. In addition, a higher task difficulty reduced the decrease in human empathy, regardless of task content. These results demonstrate that an AI agent's properties play an important role in helping humans accept them. Comment: 17 pages, 12 figures, 5 tables, submitted to IEEE Access. arXiv admin note: substantial text overlap with arXiv:2206.0612

    Facilitation of human empathy through self-disclosure of anthropomorphic agents

    As AI technologies progress, the social acceptance of AI agents, including intelligent virtual agents and robots, is becoming even more important for broader applications of AI in human society. One way to improve the relationship between humans and anthropomorphic agents is to have humans empathize with the agents. By empathizing, humans take positive and kind actions toward agents, and empathizing makes it easier for humans to accept agents. In this study, we focused on self-disclosure from agents to humans in order to realize anthropomorphic agents that elicit empathy from humans. We then experimentally investigated the possibility that an agent's self-disclosure facilitates human empathy. We formulate hypotheses and experimentally analyze and discuss the conditions under which humans have more empathy for agents. The experiment used a three-way mixed design, and the factors were the agents' appearance (human, robot), self-disclosure (high-relevance self-disclosure, low-relevance self-disclosure, no self-disclosure), and empathy before and after a video stimulus. An analysis of variance was performed using data from 576 participants. As a result, we found that the appearance factor did not have a main effect, and that self-disclosure highly relevant to the scenario used facilitated more human empathy, with a statistically significant difference. We also found that no self-disclosure suppressed empathy. These results support our hypotheses. Comment: 20 pages, 8 figures, 2 tables, submitted to PLOS ONE Journal

    Evaluation of a User-adaptive Light-based Interior Concept for Supporting Mobile Office Work during Highly Automated Driving

    Automated driving promises that users can devote their travel time to activities like relaxing or mobile office (MO) work. We present an interior light concept for supporting MO work and evaluate it in a driving simulator study. A vehicle mock-up was equipped as an MO, including light elements for focus and ambient illumination. Based on these, an adaptive (i.e., adapting to user activities) and an adaptable (i.e., changeable by the user according to preference) light set-up were created and compared to a baseline version. Regarding user experience, the adaptive variant was rated best on hedonic aspects, while the adaptable variant scored highest on pragmatic facets. In addition, the adaptable set-up was ranked best on preference, before the adaptive and baseline versions. This suggests that adapting the interior light to non-driving-related activities improves user experience. Future studies should evaluate combinations of the adaptive and the adaptable variants tested here.

    Measuring perceived empathy in dialogue systems

    Dialogue systems (DSs), from Virtual Personal Assistants such as Siri, Cortana, and Alexa to state-of-the-art systems such as BlenderBot3 and ChatGPT, are already widely available, used in a variety of applications, and are increasingly part of many people's lives. However, the task of enabling them to use empathetic language more convincingly is still an emerging research topic. Such systems generally make use of complex neural networks to learn the patterns of typical human language use, and the interactions in which the systems participate are usually mediated either via interactive text-based or speech-based interfaces. In human–human interaction, empathy has been shown to promote prosocial behaviour and improve interaction. In the context of dialogue systems, to advance the understanding of how perceptions of empathy affect interactions, it is necessary to bring greater clarity to how empathy is measured and assessed. Assessing the way dialogue systems create perceptions of empathy brings together a range of technological, psychological, and ethical considerations that merit greater scrutiny than they have received so far. However, there is currently no widely accepted evaluation method for determining the degree of empathy that any given system possesses (or, at least, appears to possess). Currently, different research teams use a variety of automated metrics, alongside different forms of subjective human assessment such as questionnaires, self-assessment measures, and narrative engagement scales. This diversity of evaluation practice means that, given two DSs, it is usually impossible to determine which of them conveys the greater degree of empathy in its dialogic exchanges with human users. Acknowledging this problem, the present article provides an overview of how empathy is measured in human–human interactions and considers some of the ways it is currently measured in human–DS interactions.
    Finally, it introduces a novel third-person analytical framework, called the Empathy Scale for Human–Computer Communication (ESHCC), to support greater uniformity in how perceived empathy is measured during interactions with state-of-the-art DSs.

    Empatia em agentes artificiais: proposta de um novo instrumento de avaliação [Empathy in artificial agents: proposal of a new assessment instrument]

    Master's thesis, Cognitive Science, 2021, Universidade de Lisboa, Faculdade de Ciências. This research aims to contribute to the discussions on the development of empathetic artificial agents, especially regarding their evaluation. To date, there are no validated instruments to measure empathy in artificial agents, and most studies in this area use measures created or adapted for each specific case. Validated measures are essential for generating reliable data on what is intended to be measured. The main goal of this study is to propose a valid assessment instrument that can be applied to both artificial agents and human beings. The proposed instrument assesses the empathy perceived by a third person after observing an interaction. It was written in Portuguese and based on the Interpersonal Reactivity Index from Davis, the Toronto Empathy Questionnaire, and non-validated measures applied in studies on empathic artificial agents. The instrument was administered through the Qualtrics platform, and all data was collected via the internet. The interactions were presented in four subtitled videos: three in which artificial agents (the chatbot Wysa, the virtual character Autotutor, and a NAO robot) interact with human beings, and one in which two human beings interact with each other. Participants were invited to take part in the research voluntarily, via email and social networks. The Wysa chatbot was evaluated by 95 people, the Autotutor by 96, the NAO robot by 100, and the human being by 99. All participants declared being 18 years old or older and fluent in Portuguese; 132 were Brazilian, 50 Portuguese, and one Argentine. Among the participants, 103 declared themselves female and 73 male.
    The final version of the instrument has eight items, which reflect cognitive, affective, and behavioural components of empathy and are consonant with the definition of empathy proposed by Hoffman (1985). The factor analysis pointed to a single factor, and Cronbach's alpha coefficients were all above 0.8, indicating that the instrument proved to be valid and reliable. Except for the Autotutor, all agents, including the human being, were well evaluated as empathetic.
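The reliability statistic cited above can be computed directly from raw item scores. As an illustrative sketch (not code from the thesis), Cronbach's alpha for a k-item scale is k/(k-1) * (1 - sum of item variances / variance of respondent totals):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondents,
    each given as a list of k item ratings."""
    k = len(scores[0])

    def var(xs):  # unbiased sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([resp[i] for resp in scores]) for i in range(k)]
    total_var = var([sum(resp) for resp in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When all items move together across respondents (e.g. ratings [1, 1], [2, 2], [3, 3] on a two-item scale), alpha is 1.0; values above 0.8, as reported for the eight-item instrument, are conventionally read as good internal consistency.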

    Towards a Legal and Ethical Framework for Personal Care Robots. Analysis of Person Carrier, Physical Assistant and Mobile Servant Robots.

    Technology is rapidly developing, and regulators and robot creators inevitably have to come to terms with new and unexpected scenarios. A thorough analysis of this new and continuously evolving reality could be useful to better understand the current situation and pave the way for the future creation of a legal and ethical framework. This is clearly a wide and complex goal, considering the variety of new technologies available today and those under development. Therefore, this thesis focuses on evaluating the impacts of personal care robots. In particular, it analyzes how roboticists adjust their creations to the existing regulatory framework for legal-compliance purposes. By carrying out an impact assessment, existing regulatory gaps and lack of regulatory clarity can be highlighted. These gaps should then be considered by lawmakers for a future legal framework for personal care robots. This assessment should be made first against regulations. If the creators of the robot do not encounter any limitations, they can proceed with its development. If there are limitations, robot creators will either (1) adjust the robot to comply with the existing regulatory framework; (2) start a negotiation with regulators to change the law; or (3) carry out the original plan and risk being non-compliant. The regulator can discuss existing (or missing) regulations with robot developers and give a legal response accordingly. In an ideal world, robots are free of impacts, so threats can be met with prevention and opportunities with facilitation. In reality, the impacts of robots are often uncertain and less clear, especially when they are inserted into care applications. Therefore, regulators will have to address uncertain risks, ambiguous impacts, and as-yet unknown effects.