    Assessing Acceptance of Assistive Social Agent Technology by Older Adults: the Almere Model

    This paper proposes a model of technology acceptance developed specifically to test the acceptance of assistive social agents by elderly users. The research develops and tests an adaptation and theoretical extension of the Unified Theory of Acceptance and Use of Technology (UTAUT), explaining intent to use not only in terms of variables related to functional evaluation, such as perceived usefulness and perceived ease of use, but also in terms of variables related to social interaction. The new model was tested using controlled experiments and longitudinal data collected on three different social agents at elderly care facilities and in the homes of older adults. The model was strongly supported, accounting for 59-79% of the variance in usage intentions and 49-59% of the variance in actual use. These findings contribute to our understanding of how elderly users accept assistive social agents.

    Landscape of Machine Implemented Ethics

    This paper surveys the state of the art in machine ethics, that is, considerations of how to implement ethical behaviour in robots, unmanned autonomous vehicles, and software systems. The emphasis is on covering the breadth of ethical theories being considered by implementors, as well as the implementation techniques being used. There is no consensus on which ethical theory is best suited to any particular domain, nor is there any agreement on which technique is best placed to implement a particular theory. Another unresolved problem in these implementations of ethical theories is how to validate the implementations objectively. The paper discusses the dilemmas being used as validating 'whetstones' and whether any alternative validation mechanism exists. Finally, it speculates that an intermediate step of creating domain-specific ethics might be a possible stepping stone towards creating machines that exhibit ethical behaviour.

    The influence of social presence on enjoyment and intention to use of a robot and screen agent by elderly users

    When using a robot or a screen agent, elderly users might feel more enjoyment if they experience a stronger social presence. In two experiments with a robotic agent and a screen agent (both n=30), this relationship between the two concepts was established. Moreover, both studies showed that social presence correlates with the intention to use the system, although there were some differences between the agents. This implies that factors that influence social presence are relevant when designing assistive agents for elderly people.

    Artificial Intelligence: Robots, Avatars, and the Demise of the Human Mediator

    Published in cooperation with the American Bar Association Section of Dispute Resolution.

    Robotic Psychology. What Do We Know about Human-Robot Interaction and What Do We Still Need to Learn?

    “Robotization”, the integration of robots into human life, will change human life drastically. In many situations, such as in the service sector, robots will become an integral part of our lives. Thus, it is vital to learn from extant research on human-robot interaction (HRI). This article introduces robotic psychology, which aims to bridge the gap between humans and robots by providing insights into the particularities of HRI. It presents a conceptualization of robotic psychology and provides an overview of research on service-focused human-robot interaction. Theoretical concepts relevant to understanding HRI are reviewed. Major achievements, shortcomings, and propositions for future research are discussed.

    Artificial Intelligence: Robots, Avatars and the Demise of the Human Mediator

    As technology has advanced, many have wondered whether (or simply when) artificially intelligent devices will replace the humans who perform complex, interactive, interpersonal tasks such as dispute resolution. Has science now progressed to the point that artificial intelligence devices can replace human mediators, arbitrators, dispute resolvers and problem solvers? Can humanoid robots, attractive avatars and other relational agents create the requisite level of trust and elicit the truthful, perhaps intimate or painful, disclosures often necessary to resolve a dispute or solve a problem? This article will explore these questions. Regardless of whether the reader is convinced that the demise of the human mediator or arbitrator is imminent, one cannot deny that artificial intelligence now has the capability to assume many of the responsibilities currently performed by alternative dispute resolution (ADR) practitioners. It is fascinating (and perhaps unsettling) to realize the complexity and seriousness of tasks currently delegated to avatars and robots. This article will review some of those delegations and suggest how the artificial intelligence developed to complete those assignments may be relevant to dispute resolution and problem solving. “Relational agents,” which can have a physical presence such as a robot, be embodied in an avatar, or have no detectable form whatsoever and exist only as software, are able to create long-term socio-emotional relationships with users built on trust, rapport and therapeutic goals. Relational agents are interacting with humans in circumstances that have significant consequences in the physical world. These interactions provide insights into how robots and avatars can participate productively in dispute resolution processes. Can human mediators and arbitrators be replaced by robots and avatars that not only physically resemble humans, but also act, think, and reason like humans? And, to raise a particularly interesting question, can robots, avatars and other relational agents look, move, act, think, and reason even “better” than humans?