    Responses to human-like artificial agents : effects of user and agent characteristics

    Defining Intended Learning Outcomes (ILO's) of inter-program CBL towards achieving constructive alignment in the context of ISBEP

    We present a framework that connects identified competence areas with Intended Learning Outcomes (ILOs). Such a framework is likely to be useful for the design of inter-program Challenge-Based Learning (CBL) in engineering education. The framework was developed out of a need to increase the constructive alignment (CA) of ILOs, learning/teaching activities, and assessment of the Innovation Space Bachelor End Project (ISBEP), an inter-program CBL initiative at Eindhoven University of Technology (TU/e). The framework was developed in a co-creation session and is organized around the definition of ILOs as the starting point for reaching CA. We contribute a comprehensive framework listing the ILOs associated with inter-program CBL at the third-year bachelor level, and identify three categories related to design and research processes, professional skills, and professional identity and self-directed learning. Furthermore, we illustrate our findings with practices from ISBEP, highlighting the influence of ILOs on our efforts to reach alignment. Finally, we discuss the implications for CBL design, propose future work, and draw attention to possible limitations in the use of the framework.

    Dynamic perceptions of human-likeness while interacting with a social robot

    In human-robot interaction research, much attention is given to the development of socially assistive robots that can have natural interactions with their users. One crucial aspect of such natural interactions is that the robot is perceived as human-like. Much research has investigated perceptions of the human-likeness of social robots, but the duration of the interaction is often overlooked. In an experiment, we show that people's perceptions of a social robot's human-likeness change substantially over time, demonstrating the importance of taking repeated measurements of perceived human-likeness.

    Ambiguous agents: the influence of consistency of an artificial agent’s social cues on emotion recognition, recall, and persuasiveness

    This article explores the relation between consistency of social cues and persuasion by an artificial agent. Including (minimal) social cues in Persuasive Technology (PT) increases the probability that people attribute human-like characteristics to that technology, which in turn can make that technology more persuasive (see, e.g., Nass, Steuer, Tauber, & Reeder, 1993). PT in the social actor role can be equipped with a variety of social cues to create opportunities for applying social influence strategies (for an overview, see Fogg, 2003). However, multiple social cues may not always be perceived as consistent, which could decrease the agent's perceived human-likeness and persuasiveness. Findings of two studies show that consistency of social cues increases people's recognition and recall of artificial agents' emotional expressions and makes those agents more persuasive. These findings underline the importance of the combined meaning of social cues in the design of persuasive artificial agents.

    Enhancing trust in autonomous vehicles through intelligent user interfaces that mimic human behavior

    Autonomous vehicles use sensors and artificial intelligence to drive themselves. Surveys indicate that people are fascinated by the idea of autonomous driving, but are hesitant to relinquish control of the vehicle. Lack of trust seems to be the core reason for these concerns. To address this, an intelligent agent approach was implemented, as it has been argued that human traits increase trust in interfaces. Where other approaches mainly use anthropomorphism to shape appearances, the current approach uses anthropomorphism to shape the interaction, applying Gricean maxims (i.e., guidelines for effective conversation). The contribution of this approach was tested in a simulator that employed both a graphical and a conversational user interface, which were rated on likability, perceived intelligence, trust, and anthropomorphism. Results show that the conversational interface was trusted, liked, and anthropomorphized more, and perceived as more intelligent, than the graphical user interface. Additionally, an interface portrayed as confident in making decisions scored higher on all four constructs than one portrayed as having low confidence. Together, these results indicate that equipping autonomous vehicles with interfaces that mimic human behavior may help increase people's trust in, and consequently their acceptance of, such vehicles.

    Bridging the gap between the home and the lab : a qualitative study of acceptance of an avatar feedback system

    The current study provides a first step in the design and development of a persuasive agent in the natural context of the household. We conducted two probe studies, one paper-based and one email-based, on the use, experience, and effectiveness of persuasive agents. Participants used these prototypes for a week, after which their experiences were explored in in-depth interviews and a focus group. Results indicated that a persuasive agent in the household is experienced as fairly pleasant, but that important issues need to be solved before it can effectively influence behavior.