
    No Grice: Computers that Lie, Deceive and Conceal

    In the future, our daily-life interactions with other people, with computers, robots, and smart environments will be recorded and interpreted by computers or by intelligence embedded in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusing such information and reasoning about it makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including the determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example when they are part of a mobile social robot or of devices we carry around, or when they are embedded in our clothes or body.

    Our daily-life behavior and interactions are thus recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments, and do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, want to keep information about their intentions and emotions hidden from these smart environments. On the other hand, their artificial interaction partners may have similar reasons not to give away all the information they have, or to treat their human partner as an opponent rather than as someone to be supported by smart technology.

    This is elaborated in the paper. We survey examples of human-computer interaction in which there is not necessarily a goal to be explicit about intentions and feelings. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or educational environments), and (5) games and entertainment applications.

    How Do You Like Me in This: User Embodiment Preferences for Companion Agents

    We investigate the relationship between the embodiment of an artificial companion and users’ perception of and interaction with it. In a Wizard of Oz study, 42 users interacted with one of two embodiments, a physical robot or a virtual agent on a screen, through a role-play of secretarial tasks in an office, with the companion providing essential assistance. Findings showed that participants in both condition groups, when given the choice, would prefer to interact with the robot companion, mainly for its greater physical or social presence. Subjects also found the robot less annoying and talked to it more naturally. However, this preference for the robotic embodiment is not reflected in the users’ actual rating of the companion or in their interaction with it. We reflect on this contradiction and conclude that in a task-based context a user focuses much more on a companion’s behaviour than on its embodiment. This underlines the feasibility of our efforts to create companions that migrate between embodiments while maintaining a consistent identity from the user’s point of view.

    Combining goal inference and natural-language dialogue for human-robot joint action

    We demonstrate how combining the reasoning components from two existing systems designed for human-robot joint action produces an integrated system with greater capabilities than either of the individual systems. One of the systems supports primarily non-verbal interaction and uses dynamic neural fields to infer the user’s goals and to suggest appropriate system responses; the other emphasises natural-language interaction and uses a dialogue manager to process user input and select appropriate system responses. Combining these two methods of reasoning results in a robot that is able to coordinate its actions with those of the user while employing a wide range of verbal and non-verbal communicative actions.
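    To make the kind of integration described above concrete, the sketch below shows one possible arbitration loop in which a goal-inference component (standing in for the paper's dynamic-neural-field reasoner) and a dialogue manager each propose an action, and the more confident proposal is executed. This is only an illustrative Python sketch under assumed names (GoalInference, DialogueManager, select_action); it is not the integration architecture used by the authors.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Proposal:
        action: str        # e.g. "hand_over(axle)" or "say('Please attach the wheel.')"
        confidence: float  # how strongly the component backs this proposal

    class GoalInference:
        """Non-verbal channel: infers the user's goal from observed actions."""
        def propose(self, observation: dict) -> Optional[Proposal]:
            # A real system would use dynamic neural fields here; this is a placeholder rule.
            if observation.get("user_reached_for") == "wheel":
                return Proposal("hand_over(axle)", 0.8)
            return None

    class DialogueManager:
        """Verbal channel: maps user utterances to system responses."""
        def propose(self, utterance: Optional[str]) -> Optional[Proposal]:
            if utterance and "what next" in utterance.lower():
                return Proposal("say('Please attach the wheel to the axle.')", 0.9)
            return None

    def select_action(observation: dict, utterance: Optional[str]) -> str:
        """One action per turn: take the more confident proposal, else ask for clarification."""
        channels = (GoalInference().propose(observation), DialogueManager().propose(utterance))
        proposals = [p for p in channels if p is not None]
        if not proposals:
            return "say('What would you like to do next?')"
        return max(proposals, key=lambda p: p.confidence).action

    if __name__ == "__main__":
        print(select_action({"user_reached_for": "wheel"}, None))   # non-verbal channel wins
        print(select_action({}, "What next?"))                      # verbal channel wins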

    A Review of Verbal and Non-Verbal Human-Robot Interactive Communication

    In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects of human-robot interaction. Following a historical introduction and a motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis for both recent and future research on human-robot communication. The ten desiderata are then examined in detail, culminating in a unifying discussion and a forward-looking conclusion.

    Rules for Responsive Robots: Using Human Interactions to Build Virtual Interactions

    Computers seem to be everywhere and to be able to do almost anything. Automobiles have Global Positioning Systems to give advice about travel routes and destinations. Virtual classrooms supplement, and sometimes replace, face-to-face classroom experiences with web-based systems (such as Blackboard) that allow postings and virtual discussion sections with virtual whiteboards, as well as continuous access to course documents, outlines, and the like. Various forms of “bots” search for information about intestinal diseases, plan airline reservations to Tucson, and inform us of the release of new movies that might fit our cinematic preferences. Instead of talking to the agent at AAA, the professor, the librarian, the travel agent, or the cinephile two doors down, we are interacting with electronic social agents. Some entrepreneurs are even trying to create toys that are sufficiently responsive to engender emotional attachments between the toy and its owner.

    The influence of visual feedback and gender dynamics on performance, perception and communication strategies in CSCW

    The effects of gender in human communication and human-computer interaction are well known, yet little is understood about how gender influences performance in the complex, collaborative, computer-mediated tasks – referred to as Computer-Supported Cooperative Work (CSCW) – that are increasingly fundamental to the way people work. In such tasks, visual feedback about objects and events is particularly valuable because it facilitates joint reference and attention and enables the monitoring of people’s actions and task progress. As such, software to support CSCW frequently provides a shared visual workspace. While numerous studies describe and explain the impact of visual feedback in CSCW, research has not considered whether there are differences in how females and males use it, are aided by it, or are affected by its absence. To address these knowledge gaps, this study explores the effect of gender, and of its interactions within pairs, in CSCW with and without visual feedback. An experimental study is reported in which mixed-gender and same-gender pairs communicate to complete a collaborative navigation task, with one of the participants under the impression that s/he is interacting with a robot (to avoid gender-related social preconceptions). The study analyses performance, perceptions, and communication strategies. As predicted, there was a significant benefit associated with visual feedback in terms of language economy and efficiency. However, visual feedback may also be disruptive to task performance, because it relaxes the users’ precision criteria and inflates their assumptions of a shared perspective. While no actual performance difference was found between males and females in the navigation task, females rated their own performance less positively than males did. In terms of communication strategies, males had a strong tendency to introduce novel vocabulary when communication problems occurred, while females exhibited more conservative behaviour. When visual feedback was removed, females adapted their strategies drastically and effectively, increasing the quality and specificity of the verbal interaction and repeating and re-using vocabulary, while the behaviour of males remained consistent. These results are used to produce design recommendations for CSCW systems that suit users of both genders and enable effective collaboration.
