
    The implications of embodiment for behavior and cognition: animal and robotic case studies

    In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded in the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations. (Comment: Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5)
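
    The 'forward model' idea mentioned at the end of the abstract is, computationally, a predictor of the sensory consequences of motor commands. As a reading aid only, here is a minimal sketch of such a predictor; the linear form, dimensions, and learning rule are illustrative assumptions and not the chapter's own formulation.

```python
import numpy as np

# Minimal illustrative forward model (not the authors' implementation).
# It predicts the next sensory state from the current state and a motor
# command, and reports the prediction error against the actual sensation.
class ForwardModel:
    def __init__(self, state_dim, motor_dim, learning_rate=0.01):
        # Assumed linear form: s_pred = A @ s + B @ m
        self.A = np.zeros((state_dim, state_dim))
        self.B = np.zeros((state_dim, motor_dim))
        self.lr = learning_rate

    def predict(self, state, motor_command):
        """Predicted sensory consequence of issuing motor_command in state."""
        return self.A @ state + self.B @ motor_command

    def update(self, state, motor_command, observed_next_state):
        """Adjust the model with a simple gradient step on the prediction error."""
        error = observed_next_state - self.predict(state, motor_command)
        self.A += self.lr * np.outer(error, state)
        self.B += self.lr * np.outer(error, motor_command)
        return error  # persistently large errors signal a model/body mismatch


# Usage: learn to anticipate the sensory effect of a simple 2D movement.
model = ForwardModel(state_dim=2, motor_dim=2)
rng = np.random.default_rng(0)
true_A, true_B = np.eye(2), 0.5 * np.eye(2)   # assumed ground-truth dynamics
for _ in range(1000):
    s = rng.normal(size=2)
    m = rng.normal(size=2)
    model.update(s, m, true_A @ s + true_B @ m)
print(np.round(model.B, 2))  # approaches 0.5 * I as the prediction error shrinks
```

    On the embodied view sketched in the abstract, a persistent mismatch between predicted and observed sensations is the kind of signal that would prompt an update of the body schema.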

    NASA space station automation: AI-based technology review

    Research and Development projects in automation for the Space Station are discussed. Artificial Intelligence (AI) based automation technologies are planned to enhance crew safety through reduced need for EVA, increase crew productivity through the reduction of routine operations, increase space station autonomy, and augment space station capability through the use of teleoperation and robotics. AI technology will also be developed for the servicing of satellites at the Space Station, system monitoring and diagnosis, space manufacturing, and the assembly of large space structures.

    Perceiving Sociable Technology: Exploring the Role of Anthropomorphism and Agency Perception on Human-Computer Interaction (HCI)

    With the arrival of personal assistants and other AI-enabled autonomous technologies, social interactions with smart devices have become a part of our daily lives. Therefore, it becomes increasingly important to understand how these social interactions emerge, and why users appear to be influenced by them. For this reason, I explore what the extant literature, from fields ranging from information systems to social neuroscience, describes as the antecedents and consequences of this phenomenon, known as anthropomorphism. I critically analyze those empirical studies directly measuring anthropomorphism and those referring to it without a corresponding measurement. Through a grounded theory approach, I identify common themes and use them to develop models for the antecedents and consequences of anthropomorphism. The results suggest anthropomorphism possesses both conscious and non-conscious components with varying implications. While conscious attributions are shown to vary based on individual differences, non-conscious attributions emerge whenever a technology exhibits apparent reasoning, such as through non-verbal behavior like peer-to-peer mirroring, or verbal paralinguistic and backchanneling cues. Anthropomorphism has been shown to affect users' self-perceptions, their perceptions of the technology, how they interact with the technology, and their performance. Examples include changes in a user's trust in the technology, conformity effects, bonding, and displays of empathy. I argue these effects emerge from changes in users' perceived agency and in their self- and social identity, similarly to interactions between humans. Afterwards, I critically examine current theories of anthropomorphism and present propositions about its nature based on the results of the empirical literature. Subsequently, I introduce a two-factor model of anthropomorphism which proposes that how an individual anthropomorphizes a technology depends on how the technology was initially perceived (top-down and rational, or bottom-up and automatic) and on whether it exhibits a capacity for agency or experience. I propose that where a technology lies along this spectrum determines how individuals relate to it, creating shared-agency effects or changing the user's social identity. For this reason, anthropomorphism is a powerful tool that can be leveraged to support future interactions with smart technologies.
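
    Purely as an illustration of the structure of the proposed two-factor model (not the dissertation's own formalization), the 2x2 space of initial perception mode and perceived capacity, together with the relational outcomes the abstract associates with it, could be encoded as follows; all labels and the mapping are paraphrased assumptions.

```python
from enum import Enum

# Illustrative encoding of the two-factor structure described above.
# Labels and the mapping to outcomes paraphrase the abstract; they are
# assumptions, not the author's own model or code.
class PerceptionMode(Enum):
    TOP_DOWN = "rational, deliberate attribution"
    BOTTOM_UP = "automatic, non-conscious attribution"

class PerceivedCapacity(Enum):
    AGENCY = "appears to plan and act"
    EXPERIENCE = "appears to feel and sense"

def expected_effect(mode: PerceptionMode, capacity: PerceivedCapacity) -> str:
    """Rough mapping from the two factors to the outcomes named in the abstract."""
    route = ("deliberate, varies with individual differences"
             if mode is PerceptionMode.TOP_DOWN
             else "automatic, triggered by apparent reasoning cues")
    outcome = ("shared-agency effects (e.g. conformity, trust calibration)"
               if capacity is PerceivedCapacity.AGENCY
               else "changes in the user's social identity (e.g. bonding, empathy)")
    return f"{route} -> {outcome}"

print(expected_effect(PerceptionMode.BOTTOM_UP, PerceivedCapacity.AGENCY))
```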

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is the key to any product's acceptance; computer applications and video games offer a unique opportunity to tailor the environment to each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
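
    To make the approach concrete, the following is a minimal sketch of fuzzy-logic emotion estimation in the spirit of the FLAME-based component described above; the game-play signals, membership functions, rule base, and defuzzification step are invented for illustration and are not the authors' implementation.

```python
# Minimal fuzzy-logic emotion estimator, loosely in the spirit of the
# FLAME-based component described above. Signals, membership functions,
# and rules are illustrative assumptions, not the authors' implementation.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def estimate_satisfaction(damage_taken, kills_per_minute):
    """Map two normalized game-play signals (0..1) to a satisfaction score (0..1)."""
    # Fuzzify the inputs.
    dmg_low, dmg_high = tri(damage_taken, -0.5, 0.0, 0.6), tri(damage_taken, 0.4, 1.0, 1.5)
    kpm_low, kpm_high = tri(kills_per_minute, -0.5, 0.0, 0.6), tri(kills_per_minute, 0.4, 1.0, 1.5)

    # Simple rule base: (rule strength, satisfaction level implied by the rule).
    rules = [
        (min(dmg_low, kpm_high), 0.9),   # doing well and unhurt -> satisfied
        (min(dmg_high, kpm_low), 0.1),   # taking damage without success -> frustrated
        (min(dmg_high, kpm_high), 0.6),  # hard-fought success -> moderately satisfied
        (min(dmg_low, kpm_low), 0.4),    # nothing happening -> bored
    ]

    # Defuzzify with a weighted average of rule outputs.
    total = sum(strength for strength, _ in rules)
    return sum(strength * level for strength, level in rules) / total if total else 0.5

print(round(estimate_satisfaction(damage_taken=0.2, kills_per_minute=0.8), 2))
```

    In play tests of the kind described, such an estimate would be computed from in-game events alone, which is what allows the method to work without physiological sensors.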

    Towards a framework for socially interactive robots

    In recent decades, research in the field of social robotics has grown considerably. Different types of robots are being developed and their roles within society are gradually expanding. Robots endowed with social skills are intended for a variety of applications; for example, as interactive teachers and educational assistants, to support diabetes management in children, to help elderly people with special needs, as interactive actors in the theatre, or even as assistants in hotels and shopping centres. The RSAIT research team has been working in several areas of robotics, in particular control architectures, robot exploration and navigation, machine learning, and computer vision. The work presented here aims to add a new layer to that previous development: the human-robot interaction layer, which focuses on the social capabilities a robot should exhibit when interacting with people, such as expressing and perceiving emotions, sustaining high-level dialogue, learning models of other agents, establishing and maintaining social relationships, using natural means of communication (gaze, gestures, etc.), displaying distinctive personality and character, and learning social competences.
    In this doctoral thesis, we try to contribute our grain of sand to the basic questions that arise when we think about social robots: (1) How do we humans communicate with (or operate) social robots? and (2) How should social robots act with us? Along those lines, the work has been carried out in two phases: in the first, we focused on exploring, from a practical point of view, several ways in which humans communicate with robots in a natural manner; in the second, we also investigated how social robots should act with the user.
    Regarding the first phase, we developed three natural user interfaces intended to make interaction with social robots more natural. To test these interfaces, two applications with different uses were developed: guide robots, and a humanoid robot control system for entertainment purposes. Working on these applications allowed us to endow our robots with some basic skills, such as navigation, robot-to-robot communication, and speech recognition and understanding capabilities. In the second phase, we focused on identifying and developing the basic behaviour modules that this kind of robot needs in order to be socially believable and trustworthy while acting as a social agent. A framework for socially interactive robots was developed that allows robots to express different types of emotions and to display natural, human-like body language according to the task at hand and the environmental conditions.
    The different development stages of our social robots have been validated through public performances. Exposing our robots to the public in these performances has become an essential tool for qualitatively measuring the social acceptance of the prototypes we are developing. In the same way that robots need a physical body to interact with the environment and become intelligent, social robots need to engage socially in the real tasks for which they have been developed, so that they can improve their sociability.
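
    As an illustration of what one behaviour-selection step inside such a framework might look like (this is not the thesis framework itself), the sketch below chooses an emotional expression and body language from the current task and environmental conditions; all state names, thresholds, and rules are assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch of a behaviour-selection step of the kind a framework
# for socially interactive robots might contain. State names and rules are
# assumptions for illustration; this is not the thesis framework.

@dataclass
class Context:
    task: str            # e.g. "guide", "perform", "idle"
    noise_level: float   # 0..1, ambient noise
    person_detected: bool

@dataclass
class Expression:
    emotion: str         # label passed to a face/LED subsystem (assumed interface)
    gesture: str         # label passed to a body-language subsystem (assumed interface)
    speech_volume: float # 0..1

def select_expression(ctx: Context) -> Expression:
    """Pick an emotional expression and body language from task and environment."""
    volume = min(1.0, 0.4 + 0.6 * ctx.noise_level)  # speak louder in noisy rooms
    if not ctx.person_detected:
        return Expression("neutral", "scan_surroundings", volume)
    if ctx.task == "guide":
        return Expression("friendly", "beckon_and_point", volume)
    if ctx.task == "perform":
        return Expression("excited", "wave_arms", volume)
    return Expression("calm", "face_person", volume)

print(select_expression(Context(task="guide", noise_level=0.7, person_detected=True)))
```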

    Haptic Media Scenes

    The aim of this thesis is to apply new media phenomenological and enactive embodied cognition approaches to explain the role of haptic sensitivity and communication in personal computer environments for productivity. Prior theory has given little attention to the role of the haptic senses in influencing cognitive processes, and does not frame the richness of haptic communication in interaction design, as haptic interactivity in HCI has historically tended to be designed and analyzed from a perspective on communication as transmission, the sending and receiving of haptic signals. The haptic sense may mediate not only contact confirmation and affirmation but also rich semiotic and affective messages; yet there is a strong contrast between this inherent ability of haptic perception and current-day support for such haptic communication interfaces. I therefore ask: How do the haptic senses (touch and proprioception) impact our cognitive faculties when mediated through digital and sensor technologies? How may these insights be employed in interface design to facilitate rich haptic communication? To answer these questions, I use theoretical close readings that embrace two research fields, new media phenomenology and enactive embodied cognition. The theoretical discussion is supported by neuroscientific evidence and tested empirically through case studies centered on digital art. I use these insights to develop the concept of the haptic figura, an analytical tool for framing the communicative qualities of haptic media. The concept gauges rich machine-mediated haptic interactivity and communication in systems with a material solution that supports active haptic perception and the mediation of semiotic and affective messages that are understood and felt. As such, the concept may function as a design tool for developers, but also for media critics evaluating haptic media. The tool is used to frame a discussion of the opportunities and shortcomings of haptic interfaces for productivity, differentiating between media systems for the hand and for the full body. The significance of this investigation lies in demonstrating that haptic communication is an underutilized element in personal computer environments for productivity, and in providing an analytical framework for a more nuanced understanding of haptic communication as enabling the mediation of a range of semiotic and affective messages, beyond notification and confirmation interactivity.

    Intuitive Robot Teleoperation through Multi-Sensor Informed Mixed Reality Visual Aids

    © 2021 The Author(s). This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/. Mobile robotic systems have evolved to include sensors capable of describing robot status and the operating environment more accurately and reliably than ever before. Exploiting these sensor data effectively is challenging, however, because of the cognitive load operators are exposed to, owing to the large amount of data and its time-dependency constraints. This paper addresses this challenge in remote-vehicle teleoperation by proposing an intuitive way to present sensor data to users by means of mixed reality and visual aids within the user interface. We propose a method for organizing information presentation and a set of visual aids to facilitate the visual communication of data in teleoperation control panels. The resulting sensor-information presentation appears coherent and intuitive, making it easier for an operator to grasp and comprehend the meaning of the information. This increases situational awareness and speeds up decision-making. Our method is implemented on a real mobile robotic system operating outdoors, equipped with on-board internal and external sensors, GPS, and a reconstructed 3D graphical model provided by an assistant drone. Experimentation verified feasibility, while intuitive and comprehensive visual communication was confirmed through a qualitative assessment, which encourages further developments. Peer reviewed
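
    The general idea of organizing sensor information so that the operator is not overloaded can be sketched roughly as below; the sensor names, thresholds, priority rule, and overlay/panel split are invented for illustration and do not reproduce the paper's method.

```python
from dataclasses import dataclass
from typing import List

# Rough sketch of prioritising sensor readings for a teleoperation interface,
# so only the most decision-relevant items are promoted to prominent visual
# aids. Sensor names, thresholds, and the priority rule are illustrative assumptions.

@dataclass
class Reading:
    name: str        # e.g. "battery_remaining", "obstacle_distance_m"
    value: float
    warn_at: float   # threshold at which the reading becomes urgent
    higher_is_worse: bool = True

def urgency(r: Reading) -> float:
    """Roughly 0 when comfortably within limits, >= 1 when past the warning threshold."""
    return r.value / r.warn_at if r.higher_is_worse else r.warn_at / max(r.value, 1e-6)

def layout(readings: List[Reading], max_prominent: int = 3) -> dict:
    """Split readings into prominent overlays and a collapsed side panel."""
    ranked = sorted(readings, key=urgency, reverse=True)
    prominent = [r.name for r in ranked[:max_prominent] if urgency(r) >= 0.8]
    return {
        "prominent_overlay": prominent,
        "side_panel": [r.name for r in ranked if r.name not in prominent],
    }

readings = [
    Reading("obstacle_distance_m", value=0.6, warn_at=1.0, higher_is_worse=False),
    Reading("battery_remaining", value=0.75, warn_at=0.2, higher_is_worse=False),
    Reading("motor_temperature_c", value=55.0, warn_at=80.0),
]
print(layout(readings))  # the close obstacle is promoted, the rest stay collapsed
```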

    Advances in Human-Robot Interaction

    Rapid advances in the field of robotics have made it possible to use robots not just in industrial automation but also in entertainment, rehabilitation, and home service. Since robots will likely affect many aspects of human existence, fundamental questions of human-robot interaction must be formulated and, if at all possible, resolved. Some of these questions are addressed in this collection of papers by leading HRI researchers.