
    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Computer Simulation of Human-Robot Collaboration in the Context of Industry Revolution 4.0

    The essential role of robot simulation for industrial robots, in particular collaborative robots, is presented in this chapter. We begin by discussing robot utilization in industry, which includes mobile robots, arm robots, and humanoid robots, with an emphasis on the application of collaborative robots in the context of Industry 4.0. We then present how collaborative robots can be utilized in industry through computer simulation, by means of virtual robots in simulated environments. The robot simulation presented here is based on the Open Dynamics Engine (ODE) using anyKode Marilou. The author surveys the use of dynamic simulations for collaborative robots moving toward Industry 4.0. Given the challenging problems related to humanoid robots as collaborative robots and to behavior in human-robot collaboration, robot simulation may open opportunities for collaborative robotics research in the context of Industry 4.0. Developing a real collaborative robot is still expensive and time-consuming, and access to commercial collaborative robots is relatively limited; the development of robot simulation is therefore an option for collaborative robotics research and education.
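    The simulation workflow described here (a virtual robot stepped by a dynamics engine such as ODE) follows a standard pattern: step the physics at a fixed rate, read the simulated state, and apply control torques. The sketch below is a minimal, library-agnostic illustration of that loop; the `SimulatedArm` class and PD gains are hypothetical stand-ins, not the ODE or anyKode Marilou API.

    ```python
    import math
    from dataclasses import dataclass

    @dataclass
    class JointState:
        angle: float = 0.0
        velocity: float = 0.0

    class SimulatedArm:
        """One-joint arm integrated with explicit Euler -- enough to show the loop.
        A real engine such as ODE replaces this with its own rigid-body solver."""
        def __init__(self, inertia=0.1, damping=0.05):
            self.inertia, self.damping = inertia, damping
            self.joint = JointState()

        def step(self, torque, dt):
            accel = (torque - self.damping * self.joint.velocity) / self.inertia
            self.joint.velocity += accel * dt
            self.joint.angle += self.joint.velocity * dt

    def pd_torque(target, joint, kp=2.0, kd=0.4):
        """PD control: a common way a virtual collaborative robot tracks a setpoint."""
        return kp * (target - joint.angle) - kd * joint.velocity

    arm, dt = SimulatedArm(), 0.01            # 100 Hz physics step
    for _ in range(500):                      # 5 simulated seconds
        tau = pd_torque(math.pi / 4, arm.joint)
        arm.step(tau, dt)
    print(f"final angle: {arm.joint.angle:.3f} rad (target {math.pi / 4:.3f})")
    ```

    The same step/sense/actuate loop scales up to full humanoid models once the toy integrator is swapped for the engine's solver.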

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning in an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions into a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge through different conceptual spaces, and to perform complex interactions with the human operator.
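    A conceptual space, in the sense used here, is a metric space in which perceived postures are points and posture categories are regions around prototypes. As a rough illustration only (the feature axes and prototype values below are invented, not taken from the paper), posture recognition then reduces to nearest-prototype classification:

    ```python
    import numpy as np

    # Hypothetical posture prototypes in a 3-D "conceptual space" whose axes might
    # encode, e.g., mean finger flexion, thumb opposition, and hand aperture.
    PROTOTYPES = {
        "open_hand":   np.array([0.1, 0.2, 0.9]),
        "power_grasp": np.array([0.9, 0.7, 0.2]),
        "pinch":       np.array([0.6, 0.9, 0.3]),
    }

    def classify_posture(features: np.ndarray) -> str:
        """Nearest-prototype rule: the observed posture belongs to the closest region."""
        return min(PROTOTYPES, key=lambda name: np.linalg.norm(features - PROTOTYPES[name]))

    observed = np.array([0.85, 0.65, 0.25])   # features extracted by the vision system
    print(classify_posture(observed))          # -> "power_grasp"
    ```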

    Becoming Human with Humanoid

    Nowadays, our expectations of robots have significantly increased. The robot, which initially only did simple jobs, is now expected to be smarter and more dynamic. People want a robot that resembles a human (a humanoid) and has the emotional intelligence to perform action-reaction interactions. This book consists of two sections: the first focuses on emotional intelligence, while the second discusses the control of robots. The contents of the book present the outcomes of research conducted by scholars in robotics to accommodate the needs of society and industry.

    Machine Performers: Agents in a Multiple Ontological State

    In this thesis, the author explores and develops new attributes for machine performers and merges the trans-disciplinary fields of the performing arts and artificial intelligence. The main aim is to redefine the term “embodiment” for robots on the stage and to demonstrate that this term requires broadening in various fields of research. This redefinition has required a multifaceted theoretical analysis of embodiment in the field of artificial intelligence (e.g. the uncanny valley), as well as the construction of new robots for the stage by the author. It is hoped that these practical experimental examples will generate more research by others in similar fields. Even though the historical lineage of robotics is engraved with theatrical strategies and dramaturgy, further application of constructive principles from the performing arts and evidence from psychology and neurology can shift the perception of robotic agents both on stage and in other cultural environments. In this light, the relation between representation, movement and behaviour of bodies has been further explored to establish links between constructed bodies (as in artificial intelligence) and perceived bodies (as performers on the theatrical stage). In the course of this research, several practical works have been designed and built, and subsequently presented to live audiences and research communities. Audience reactions have been analysed with surveys and discussions. Interviews have also been conducted with choreographers, curators and scientists about the value of machine performers. The main conclusions from this study are that fakery and mystification can be used as persuasive elements to enhance agency. Morphologies can also be applied that tightly couple brain and sensorimotor actions and lead to a stronger stage presence. Indeed, when this presence is missing from human replicants, the result is an “uncanny” lack of agency. Furthermore, the addition of stage presence leads to stronger identification from audiences, even for bodies dissimilar to their own. The author demonstrates that audience reactions are enhanced by building these effects into machine body structures: rather than identification through mimicry, this causes them to have more unambiguously biological associations. Alongside these traits, atmospheres such as those created by a cast of machine performers tend to cause even more intensely visceral responses. In this thesis, “embodiment” has emerged as a paradigm shift in itself, as well as within this shift, and morphological computing has been explored as a method to deepen this visceral immersion. Therefore, this dissertation considers and builds machine performers as “true” performers for the stage, rather than mere objects with an aura. Their singular and customized embodiment can enable the development of non-anthropocentric performances that encompass abstract and conceptual patterns in motion and generate, as from human performers, empathy, identification and experiential reactions in live audiences.

    A Survey of Tactile Human-Robot Interactions

    Robots come into physical contact with humans in both experimental and operational settings. Many potential factors motivate the detection of human contact, ranging from safe robot operation around humans to robot behaviors that depend on human guidance. This article presents a review of current research within the field of Tactile Human–Robot Interactions (Tactile HRI), where physical contact from a human is detected by a robot during the execution or development of robot behaviors. Approaches are presented from two viewpoints: the types of physical interactions that occur between the human and the robot, and the types of sensors used to detect these interactions. We contribute a structure for categorizing Tactile HRI research within each viewpoint. Tactile sensing techniques are grouped into three categories according to what covers the sensors: (i) a hard shell, (ii) a flexible substrate, or (iii) no covering. Three categories of physical HRI are likewise identified, consisting of contact that (i) interferes with robot behavior execution, (ii) contributes to behavior execution, or (iii) contributes to behavior development. We populate each category with the current literature, and furthermore identify the state of the art within each category and promising areas for future research.
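    The survey's two-viewpoint taxonomy maps naturally onto a small data model. A possible sketch, assuming nothing beyond the categories named in the abstract (the field names and the example entry are mine, not the article's):

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class SensorCovering(Enum):      # grouping by what covers the tactile sensors
        HARD_SHELL = "hard shell"
        FLEXIBLE_SUBSTRATE = "flexible substrate"
        NO_COVERING = "no covering"

    class ContactRole(Enum):         # grouping by how contact relates to behavior
        INTERFERES_WITH_EXECUTION = "interferes with behavior execution"
        CONTRIBUTES_TO_EXECUTION = "contributes to behavior execution"
        CONTRIBUTES_TO_DEVELOPMENT = "contributes to behavior development"

    @dataclass
    class TactileHriStudy:
        citation: str                # placeholder; not a reference from the survey
        covering: SensorCovering
        role: ContactRole

    study = TactileHriStudy("Example et al.", SensorCovering.FLEXIBLE_SUBSTRATE,
                            ContactRole.CONTRIBUTES_TO_EXECUTION)
    print(study.covering.value, "/", study.role.value)
    ```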

    Migration from Teleoperation to Autonomy via Modular Sensor and Mobility Bricks

    In this thesis, the teleoperated communications of a Remotec ANDROS robot have been reverse engineered. This research has used the information acquired through the reverse-engineering process to enhance the teleoperation and add intelligence to the initially teleoperated robot. The main contribution of this thesis is the implementation of the mobility brick paradigm, which enables autonomous operations using the commercial teleoperated ANDROS platform. The brick paradigm is a generalized architecture for a modular approach to robotics. This architecture and the contribution of this thesis are a paradigm shift from the proprietary commercial models that exist today. The modular system of sensor bricks integrates the transformed mobility platform and defines it as a mobility brick. In the wall-following application implemented in this work, the mobile robotic system acquires intelligence using the range sensor brick. This application illustrates a way to alleviate the burden on the human operator and delegate certain tasks to the robot. Wall following is one among several examples of giving a degree of autonomy to an essentially teleoperated robot through the Sensor Brick System. Indeed, once the proprietary robot has been altered into a mobility brick, the possibilities for autonomy are numerous and vary with different sensor bricks. The autonomous system implemented is not a fixed-application robot but rather a non-specific, autonomy-capable platform. Meanwhile, the native controller and the computer-interfaced teleoperation remain available when necessary. Rather than trading off by switching from teleoperation to autonomy, this system provides the flexibility to switch between the two at the operator’s command. The contributions of this thesis reside in the reverse engineering of the original robot, its upgrade to a computer-interfaced teleoperated system, the mobility brick paradigm, and the addition of autonomy capabilities. The application of a robot autonomously following a wall is subsequently implemented, tested, and analyzed in this work. The analysis provides the programmer with information on controlling the robot and launching the autonomous function. The results are conclusive and open up possibilities for a variety of autonomous applications for mobility platforms using modular sensor bricks.
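    Wall following from a range sensor brick is typically a simple feedback law: hold the measured side distance at a setpoint by steering proportionally to the error. A minimal sketch of that idea, assuming a left-hand wall and ROS-style sign conventions (the gains and the function interface are illustrative, not the thesis's actual sensor-brick API):

    ```python
    def wall_follow_cmd(side_range_m: float, forward_range_m: float,
                        target_dist_m: float = 0.5, kp: float = 1.5):
        """Return (linear, angular) velocity commands that hug a wall on the left.

        side_range_m:    distance to the wall from the range sensor brick
        forward_range_m: distance straight ahead, used to slow down before corners
        """
        error = target_dist_m - side_range_m      # positive -> too close to the wall
        angular = -kp * error                     # steer away when too close, toward when too far
        linear = min(0.3, 0.5 * forward_range_m)  # slow when an obstacle looms ahead
        return linear, angular

    # One control tick: 0.62 m from the wall, 2.0 m of clearance ahead.
    v, w = wall_follow_cmd(0.62, 2.0)
    print(f"v={v:.2f} m/s, w={w:.2f} rad/s")
    ```

    Run at the sensor brick's update rate, this proportional law is often enough; a derivative term can be added if the platform oscillates about the setpoint.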

    Robot Learning from Human Demonstration: Interpretation, Adaptation, and Interaction

    Robot Learning from Demonstration (LfD) is a research area that focuses on how robots can learn new skills by observing how people perform various activities. As humans, we have a remarkable ability to imitate other humans’ behaviors and adapt to new situations. Endowing robots with these critical capabilities is a significant but very challenging problem, considering the complexity and variation of human activities in highly dynamic environments. This research focuses on how robots can learn new skills by interpreting human activities, adapting the learned skills to new situations, and naturally interacting with humans. This dissertation begins with a discussion of the challenges in each of these three problems. A new unified representation approach is introduced to enable robots to simultaneously interpret the high-level semantic meanings and generalize the low-level trajectories of a broad range of human activities. An adaptive framework based on feature space decomposition is then presented for robots to not only reproduce skills, but also autonomously and efficiently adjust the learned skills to new environments that are significantly different from the demonstrations. To achieve natural Human-Robot Interaction (HRI), this dissertation presents a Recurrent Neural Network based deep perceptual control approach, which is capable of integrating multi-modal perception sequences with actions for robots interacting with humans in long-term tasks. Overall, by combining the above approaches, an autonomous system is created for robots to acquire important skills that can be applied to human-centered applications. Finally, this dissertation concludes with a discussion of future directions that could accelerate the upcoming technological revolution of robot learning from human demonstration.
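    The recurrent perceptual control named here maps a sequence of multi-modal observations to actions through a hidden state that carries context across time steps. The toy forward pass below shows only that structure; the dimensions, random weights, and single-layer design are illustrative stand-ins, not the dissertation's actual network:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    obs_dim, hid_dim, act_dim = 6, 16, 2      # e.g. fused vision + force features -> 2-DoF command

    # Randomly initialized weights stand in for trained parameters.
    W_xh = rng.normal(0, 0.1, (hid_dim, obs_dim))
    W_hh = rng.normal(0, 0.1, (hid_dim, hid_dim))
    W_ha = rng.normal(0, 0.1, (act_dim, hid_dim))

    def control_rollout(observations):
        """Fold a multi-modal observation sequence into actions via a recurrent state."""
        h = np.zeros(hid_dim)
        actions = []
        for x in observations:                # one fused perception vector per time step
            h = np.tanh(W_xh @ x + W_hh @ h)  # hidden state accumulates task context
            actions.append(W_ha @ h)          # action read out from the current state
        return actions

    seq = [rng.normal(size=obs_dim) for _ in range(5)]
    for t, a in enumerate(control_rollout(seq)):
        print(f"t={t}: action {np.round(a, 3)}")
    ```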

    Towards adaptive and autonomous humanoid robots: from vision to actions

    Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios present a clear need to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of this research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable, and robust, and requires only small training sets (it was tested with 5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: it allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
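    Cartesian Genetic Programming encodes a program as a fixed-length list of nodes wired into a feed-forward graph; in CGP-IP the node functions are image operations. The toy sketch below shows only that encoding and its evaluation, under stated assumptions: the three functions are simple stand-ins for CGP-IP's much richer OpenCV-based function set, and the evolutionary loop that mutates genotypes is omitted.

    ```python
    import numpy as np

    # Each node is (function, input_a, input_b); inputs index earlier values,
    # with index 0 reserved for the input image.
    FUNCS = {
        "add":    lambda a, b: np.clip(a + b, 0, 255),
        "sub":    lambda a, b: np.clip(a - b, 0, 255),
        "thresh": lambda a, b: (a > a.mean()) * 255.0,  # ignores b, as CGP nodes may
    }

    def evaluate(genotype, output_node, image):
        """Feed-forward evaluation of the CGP graph on one image."""
        values = [image]                       # node 0 is the program input
        for func, i, j in genotype:
            values.append(FUNCS[func](values[i], values[j]))
        return values[output_node]

    # Node 1 is inactive "junk" (common in CGP genotypes); node 2 produces the output.
    genotype = [("sub", 0, 0), ("thresh", 0, 0)]
    img = np.random.default_rng(1).uniform(0, 255, (8, 8))
    mask = evaluate(genotype, 2, img)          # a crude binary detection mask
    print(mask.astype(int))
    ```

    Because the genotype is just a list of tuples, evolved detectors remain human-readable, which is the property the abstract highlights.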