4,571 research outputs found

    Challenging the Computational Metaphor: Implications for How We Think

    This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes to replace the conventional metaphor--a sequence of steps--with the notion of a community of interacting entities, and examines the ramifications of such a shift for these various ways in which we think.

    No Grice: Computers that Lie, Deceive and Conceal

    In the future, our daily interactions with other people, with computers, robots, and smart environments will be recorded and interpreted by computers or by intelligence embedded in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behavior, and our interactions. Fusing and reasoning about such information makes it possible, using computational models of human behavior and activities, to provide context- and person-aware interpretations of human behavior and activities, including determination of attitudes, moods, and emotions. Sensors include cameras, microphones, eye trackers, position and proximity sensors, tactile or smell sensors, et cetera. Sensors can be embedded in an environment, but they can also move around, for example, as part of a mobile social robot or of devices we carry around or that are embedded in our clothes or body. Our daily behavior and interactions are thus recorded and interpreted. How can we use such environments, and how can such environments use us? Do we always want to cooperate with these environments; do these environments always want to cooperate with us? In this paper we argue that there are many reasons why users, or rather the human partners of these environments, want to keep information about their intentions and their emotions hidden from these smart environments. On the other hand, the artificial interaction partner may have similar reasons not to give away all the information it has, or to treat its human partner as an opponent rather than as someone to be supported by smart technology. We elaborate on this by surveying examples of human-computer interaction in which being explicit about intentions and feelings is not necessarily a goal. In subsequent sections we look at (1) the computer as a conversational partner, (2) the computer as a butler or diary companion, (3) the computer as a teacher or trainer acting in a virtual training environment (a serious game), (4) sports applications (which are not necessarily different from serious-game or educational environments), and games and entertainment applications.

    Internet of robotic things: converging sensing/actuating, hyperconnectivity, artificial intelligence and IoT Platforms

    The Internet of Things (IoT) concept is evolving rapidly and influencing new developments in various application domains, such as the Internet of Mobile Things (IoMT), the Autonomous Internet of Things (A-IoT), the Autonomous System of Things (ASoT), the Internet of Autonomous Things (IoAT), Internet of Things Clouds (IoT-C), and the Internet of Robotic Things (IoRT), all of which are advancing by using IoT technology. The IoT's influence presents new development and deployment challenges in areas such as seamless platform integration, context-based cognitive network integration, new mobile sensor/actuator network paradigms, things identification (addressing and naming in IoT), dynamic things discoverability, and many others. The IoRT presents new convergence challenges that need to be addressed, among them the programmability and communication of multiple heterogeneous mobile/autonomous/robotic things for cooperation, and their coordination, configuration, exchange of information, security, safety, and protection. Developments in IoT heterogeneous parallel processing/communication and in dynamic systems based on parallelism and concurrency require new ideas for integrating intelligent “devices”, such as collaborative robots (COBOTS), into IoT applications. Dynamic maintainability, self-healing, self-repair of resources, changing resource states, (re-)configuration, and context-based IoT systems for service implementation and integration with IoT network service composition are of paramount importance as new “cognitive devices” become active participants in IoT applications. This chapter provides an overview of the IoRT concept, technologies, architectures, and applications, and comprehensive coverage of future challenges, developments, and applications.
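    As a toy illustration of the "things identification and dynamic discoverability" theme mentioned in this abstract, the sketch below registers robotic things with capability tags and discovers them at runtime; the registry class, data model, and all names are illustrative assumptions, not an API from the chapter.

# Toy sketch: robotic "things" announce themselves with an identifier and
# capability tags, and other participants discover them dynamically.
# The data model and methods are illustrative assumptions only.

from dataclasses import dataclass, field
from typing import Dict, List, Set


@dataclass
class RoboticThing:
    thing_id: str                                         # unique name/address of the device
    capabilities: Set[str] = field(default_factory=set)   # e.g. {"camera", "gripper"}


class ThingRegistry:
    def __init__(self) -> None:
        self._things: Dict[str, RoboticThing] = {}

    def register(self, thing: RoboticThing) -> None:
        # Add or update a thing; re-registration models a changing resource state.
        self._things[thing.thing_id] = thing

    def deregister(self, thing_id: str) -> None:
        self._things.pop(thing_id, None)

    def discover(self, required: Set[str]) -> List[str]:
        # Return IDs of things offering all required capabilities.
        return [t.thing_id for t in self._things.values() if required <= t.capabilities]


if __name__ == "__main__":
    registry = ThingRegistry()
    registry.register(RoboticThing("cobot-1", {"gripper", "camera"}))
    registry.register(RoboticThing("sensor-7", {"temperature"}))
    print(registry.discover({"camera"}))   # ['cobot-1']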

    Sharing emotions and space - empathy as a basis for cooperative spatial interaction

    Boukricha H, Nguyen N, Wachsmuth I. Sharing emotions and space - empathy as a basis for cooperative spatial interaction. In: Kopp S, Marsella S, Thorisson K, Vilhjalmsson HH, eds. Proceedings of the 11th International Conference on Intelligent Virtual Agents (IVA 2011). LNAI. Vol 6895. Berlin, Heidelberg: Springer; 2011: 350-362.
    Empathy is believed to play a major role as a basis for humans’ cooperative behavior. Recent research shows that humans empathize with each other to different degrees depending on several modulation factors including, among others, their social relationships, their mood, and the situational context. In human spatial interaction, partners share and sustain a space that is equally and exclusively reachable to them, the so-called interaction space. In a cooperative interaction scenario of relocating objects in interaction space, we introduce an approach for triggering and modulating a virtual human’s cooperative spatial behavior by its degree of empathy with its interaction partner. That is, spatial distances, such as object distances as well as distances of arm and body movements while relocating objects in interaction space, are modulated by the virtual human’s degree of empathy. In this scenario, the virtual human’s empathic emotion is generated as a hypothesis about the partner’s emotional state as related to the physical effort needed to perform a goal-directed spatial behavior.
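    A minimal sketch of the modulation idea described in this abstract, assuming a simple linear mapping from the degree of empathy to how far the virtual human reaches into the shared interaction space; the function, parameter names, and scaling are illustrative assumptions, not the authors' model.

# Illustrative sketch (not the IVA 2011 model): modulate how far the virtual human
# is willing to reach into the shared interaction space as a function of its
# degree of empathy with the partner; the linear scaling is an assumption.

def cooperative_reach_distance(base_distance: float,
                               max_extra_distance: float,
                               empathy_degree: float) -> float:
    """Distance (in meters) the virtual human covers when relocating an object.

    empathy_degree is assumed to lie in [0, 1]; higher empathy means the agent
    absorbs more of the partner's physical effort by reaching further itself.
    """
    empathy_degree = min(max(empathy_degree, 0.0), 1.0)  # clamp to [0, 1]
    return base_distance + empathy_degree * max_extra_distance


if __name__ == "__main__":
    print(cooperative_reach_distance(0.3, 0.5, 0.0))  # 0.3: no empathy, only its own comfortable reach
    print(cooperative_reach_distance(0.3, 0.5, 0.9))  # ~0.75: high empathy, extends toward the partner's side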

    The ITALK project: A developmental robotics approach to the study of individual, social, and linguistic learning

    This is the peer-reviewed version of the following article: Frank Broz et al., “The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning”, Topics in Cognitive Science, Vol 6(3): 534-544, June 2014, which has been published in final form at doi: http://dx.doi.org/10.1111/tops.12099. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. Copyright © 2014 Cognitive Science Society, Inc.
    This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capabilities creates the context, conditions, and requisites for learning in each domain. Challenges and insights identified as a result of this research program are discussed with regard to possible and actual contributions to cognitive science and language ontogeny. In conclusion, directions for future work are suggested that continue to develop this approach toward an integrated framework for understanding these mutually scaffolding processes as a basis for language development in humans and robots.

    The role of trust and relationships in human-robot social interaction

    Can a robot understand a human's social behavior? Moreover, how should a robot act in response to a human's behavior? If the goals of artificial intelligence are to understand, imitate, and interact with human-level intelligence, then researchers must also explore the social underpinnings of this intellect. Our endeavor is buttressed by work in biology, neuroscience, social psychology, and sociology. Initially developed by Kelley and Thibaut, social psychology's interdependence theory serves as a conceptual skeleton for the study of social situations, a computational process of social deliberation, and relationships (Kelley & Thibaut, 1978). We extend and expand their original work to explore the challenge of interaction with an embodied, situated robot. This dissertation investigates the use of outcome matrices as a means for computationally representing a robot's interactions. We develop algorithms that allow a robot to create these outcome matrices from perceptual information and then to use them to reason about the characteristics of its interactive partner. This work goes on to introduce algorithms that afford a means for reasoning about a robot's relationships and the trustworthiness of a robot's partners. Overall, this dissertation embodies a general, principled approach to human-robot interaction, which results in a novel and scientifically meaningful approach to topics such as trust and relationships.
    Ph.D. Committee Chair: Arkin, Ronald C.; Committee Member: Christensen, Henrik I.; Committee Member: Fisk, Arthur D.; Committee Member: Ram, Ashwin; Committee Member: Thomaz, Andre
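    The outcome-matrix representation borrowed from interdependence theory can be pictured with a small sketch; the 2x2 payoffs, the action names, and the crude trust cue below are hypothetical illustrations, not the algorithms developed in the dissertation.

# Illustrative sketch of an interdependence-theory outcome matrix: keys pair the
# robot's action with the partner's action, and each cell holds the
# (robot outcome, partner outcome) pair. Values and the trust cue are assumptions.

from typing import Dict, Tuple

Action = str
Cell = Tuple[float, float]                     # (robot outcome, partner outcome)
OutcomeMatrix = Dict[Tuple[Action, Action], Cell]

matrix: OutcomeMatrix = {
    ("help", "cooperate"):   (3.0, 3.0),
    ("help", "defect"):      (0.0, 4.0),
    ("ignore", "cooperate"): (4.0, 0.0),
    ("ignore", "defect"):    (1.0, 1.0),
}


def best_robot_action(outcomes: OutcomeMatrix, expected_partner: Action) -> Action:
    # Pick the robot action that maximizes the robot's own outcome, assuming the
    # partner plays the expected action.
    candidates = {robot: cell[0]
                  for (robot, partner), cell in outcomes.items()
                  if partner == expected_partner}
    return max(candidates, key=candidates.get)


def partner_favored_robot(outcomes: OutcomeMatrix, observed: Tuple[Action, Action]) -> bool:
    # Crude trust cue: given the robot's action, did the partner's observed choice
    # leave the robot with at least as much outcome as any alternative partner action?
    robot_action, _ = observed
    alternatives = [cell[0] for (robot, _p), cell in outcomes.items() if robot == robot_action]
    return outcomes[observed][0] >= max(alternatives)


if __name__ == "__main__":
    print(best_robot_action(matrix, "cooperate"))                 # 'ignore' under these toy payoffs
    print(partner_favored_robot(matrix, ("help", "cooperate")))   # True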

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.