    Prediction of intent in robotics and multi-agent systems.

    Moving beyond the stimulus contained in observable agent behaviour, i.e. understanding the underlying intent of the observed agent, is of immense interest in a variety of domains that involve collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support and intelligent tutoring. This review paper examines approaches to action recognition and intent prediction from a multi-disciplinary perspective, in both single-robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.
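
    To make the generative view concrete, below is a minimal Python sketch of intent inference as Bayesian filtering over a discrete set of candidate goals; the goal set and the action-likelihood table are illustrative assumptions, not taken from any of the surveyed papers.

        import numpy as np

        # Candidate intents the observer reasons about (hypothetical examples).
        GOALS = ["hand_over_tool", "reach_for_cup", "idle"]

        def action_likelihood(action, goal):
            """P(action | goal): a toy lookup table standing in for a learned
            generative model of how an agent pursuing `goal` tends to act."""
            table = {
                ("extend_arm", "hand_over_tool"): 0.7,
                ("extend_arm", "reach_for_cup"): 0.5,
                ("extend_arm", "idle"): 0.1,
                ("stay_still", "hand_over_tool"): 0.1,
                ("stay_still", "reach_for_cup"): 0.2,
                ("stay_still", "idle"): 0.8,
            }
            return table.get((action, goal), 0.05)

        def update_posterior(prior, action):
            """Bayes update: posterior(goal) proportional to P(action | goal) * prior(goal)."""
            unnorm = np.array([action_likelihood(action, g) * p
                               for g, p in zip(GOALS, prior)])
            return unnorm / unnorm.sum()

        posterior = np.full(len(GOALS), 1.0 / len(GOALS))  # uniform prior over goals
        for observed_action in ["extend_arm", "extend_arm"]:
            posterior = update_posterior(posterior, observed_action)
        print(dict(zip(GOALS, posterior.round(3))))        # most probable intent wins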

    The effects of perspective-taking on perceptual learning

    Research in perceptual psychology and anthropology has demonstrated that experts will literally see objects and events in their domain differently than non-experts. Experts can make distinctions and notice subtleties that a novice does not perceive. Experts also have strategies for looking at data and artifacts in a domain; they know where to look so that they can answer the important questions. An expert perspective can be described as the ways of seeing and experiencing phenomena that are influenced by the specialized knowledge that an expert has. The present paper will survey the existing literature on perspective-taking and learning, with a short discussion at the end of some of the ways that existing technologies have been used to support the sharing of perspectives. Of particular interest in this paper is the potential to use new media technologies to convey the perspective of someone with specialized knowledge or insider information on an important event - a viewpoint that could be termed an "expert perspective".

    ‘Give me a hug’: the effects of touch and autonomy on people's responses to embodied social agents

    Embodied social agents are programmed to display human-like social behaviour to increase the intuitiveness of interacting with these agents. It is not yet clear to what extent people respond to agents’ social behaviours. One example is touch. Despite robots’ embodiment and increasing autonomy, the effect of communicative touch has been a mostly overlooked aspect of human-robot interaction. This video-based, 2×2 between-subjects survey experiment (N=119) found that the combination of touch and proactivity influenced whether people saw the robot as machine-like and dependable. Participants’ attitude towards robots in general also influenced perceived closeness between humans and robots. Results show that communicative touch is considered a more appropriate behaviour for proactive agents than for reactive agents. Also, people who are generally more positive towards robots find robots that interact by touch less machine-like. These effects illustrate that careful consideration is necessary when incorporating social behaviours into agents’ physical interaction design.
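
    As an aside on the method: results from a 2×2 between-subjects design such as this one (touch × proactivity) are typically analysed with a two-way ANOVA that tests both main effects and their interaction. Below is a minimal Python sketch using statsmodels on synthetic placeholder data, not the study's N=119 responses.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        # Synthetic placeholder data for a 2x2 between-subjects design.
        rng = np.random.default_rng(0)
        rows = []
        for touch in ("touch", "no_touch"):
            for autonomy in ("proactive", "reactive"):
                for rating in rng.normal(loc=4.0, scale=1.0, size=30):
                    rows.append({"touch": touch, "autonomy": autonomy,
                                 "dependable": rating})
        df = pd.DataFrame(rows)

        # Two-way ANOVA: main effects of touch and autonomy plus their interaction.
        model = ols("dependable ~ C(touch) * C(autonomy)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))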

    Cognitive Architecture for Mutual Modelling

    In social robotics, robots need to be understood by humans, especially in collaborative tasks where they have to share mutual knowledge. For instance, in an educative scenario, learners share their knowledge and must adapt their behaviour to make sure they are understood by others. Learners display behaviours to show their understanding, and teachers adapt to make sure that the learners' knowledge is the required one. This ability requires a model of one's own mental states as perceived by others: "has the human understood that I (the robot) need this object for the task, or should I explain it once again?" In this paper, we discuss the importance of a cognitive architecture enabling second-order Mutual Modelling for Human-Robot Interaction in an educative context.
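
    A minimal Python sketch of what second-order mutual modelling can look like as a data structure follows: the robot keeps its own beliefs, a first-order model of the learner's beliefs, and a second-order model of what the learner believes about the robot. The predicates and the decision rule are illustrative assumptions, not the architecture proposed in the paper.

        from dataclasses import dataclass, field

        @dataclass
        class MentalModel:
            beliefs: set = field(default_factory=set)

            def knows(self, fact: str) -> bool:
                return fact in self.beliefs

        robot = MentalModel({"needs(robot, tool)"})       # the robot's own beliefs
        robot_model_of_learner = MentalModel()            # first-order model
        learner_model_of_robot = MentalModel()            # second-order model

        def decide(fact: str) -> str:
            """Re-explain only if, according to the second-order model, the
            learner has not yet understood that the robot holds this fact."""
            if learner_model_of_robot.knows(fact):
                return "proceed with the task"
            learner_model_of_robot.beliefs.add(fact)      # assume the explanation lands
            robot_model_of_learner.beliefs.add(fact)
            return f"explain: {fact}"

        print(decide("needs(robot, tool)"))   # -> explain: needs(robot, tool)
        print(decide("needs(robot, tool)"))   # -> proceed with the task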

    Mutual Modelling in Robotics: Inspirations for the Next Steps

    Mutual modelling, the reciprocal ability to establish a mental model of the other, plays a fundamental role in human interactions. This complex cognitive skill is, however, difficult to fully apprehend, as it encompasses multiple neuronal, psychological and social mechanisms that are generally not easily turned into computational models suitable for robots. This article presents several perspectives on mutual modelling from a range of disciplines, and reflects on how these perspectives can be beneficial to the advancement of social cognition in robotics. We gather here both basic tools (concepts, formalisms, models) and exemplary experimental settings and methods that are of relevance to robotics. This contribution is expected to consolidate the corpus of knowledge readily available to human-robot interaction research, and to foster interest in this fundamentally cross-disciplinary field.

    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we finally show how explicit knowledge management, both symbolic and geometric, proves instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
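
    One of the ideas above, maintaining separate knowledge models per agent filled through perspective-taking, can be illustrated with a short Python sketch. The triple representation, the visibility rule and the predicate names are illustrative assumptions, not the paper's actual knowledge base.

        from typing import Dict, Set, Tuple

        Fact = Tuple[str, str, str]  # (subject, predicate, object) triples

        class AgentKB:
            """Symbolic beliefs attributed to one agent (robot or human)."""
            def __init__(self, name: str):
                self.name = name
                self.facts: Set[Fact] = set()

        kbs: Dict[str, AgentKB] = {"robot": AgentKB("robot"), "human": AgentKB("human")}

        def perceive(fact: Fact, visible_to_human: bool) -> None:
            """Ground a perceived fact in the robot's model and, via
            perspective-taking, in the human's model only if they can see it."""
            kbs["robot"].facts.add(fact)
            if visible_to_human:
                kbs["human"].facts.add(fact)

        perceive(("cup", "isOn", "table"), visible_to_human=True)
        perceive(("screwdriver", "isIn", "drawer"), visible_to_human=False)

        # Divergent beliefs the robot may need to verbalise during dialogue.
        print(kbs["robot"].facts - kbs["human"].facts)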

    Survey: Robot Programming by Demonstration

    Robot programming by demonstration (PbD) started about 30 years ago and has grown significantly during the past decade. The rationale for moving from purely preprogrammed robots to very flexible user-based interfaces for training the robot to perform a task is three-fold. First and foremost, PbD, also referred to as imitation learning, is a powerful mechanism for reducing the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution by either starting the search from the observed good solution (a local optimum) or, conversely, by eliminating from the search space what is known to be a bad solution. Imitation learning is thus a powerful tool for enhancing and accelerating learning in both animals and artifacts. Second, imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated. Imitation learning is thus a "natural" means of interacting with a machine that would be accessible to lay people. And third, studying and modeling the coupling of perception and action, which is at the core of imitation learning, helps us to understand the mechanisms by which the self-organization of perception and action could arise during development. The reciprocal interaction of perception and action could explain how competence in motor control can be grounded in the rich structure of perceptual variables and, vice versa, how the processes of perception can develop as a means to create successful actions. The promises of PbD were thus multiple. On the one hand, one hoped that it would make learning faster, in contrast to tedious reinforcement learning or trial-and-error methods. On the other hand, one expected that the methods, being user-friendly, would enhance the application of robots in human daily environments. Recent progress in the field, which we review in this chapter, shows that the field has made a leap forward over the past decade toward these goals and that these promises may be fulfilled very soon.
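
    The core mechanism, fitting a policy to demonstrated state-action pairs rather than hand-coding it, can be sketched in a few lines of Python. The 1-D reaching task and the ridge regressor below are illustrative assumptions; the PbD systems covered by the survey use richer encodings (e.g. probabilistic models or dynamical systems) and higher-dimensional states.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(1)

        # Demonstrations: states are (position, target) pairs, actions are the
        # velocities a (noisy) demonstrator used to move toward the target.
        states = rng.uniform(-1.0, 1.0, size=(500, 2))
        actions = 0.8 * (states[:, 1] - states[:, 0]) + rng.normal(0.0, 0.02, size=500)

        # "Programming by demonstration" as supervised learning of a policy.
        policy = Ridge(alpha=1e-3).fit(states, actions)

        # Roll out the learned policy from an unseen start state and target.
        pos, target = -0.9, 0.6
        for _ in range(20):
            pos += policy.predict([[pos, target]])[0]
        print(round(pos, 3), "vs target", target)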

    Robot Learning From Human Observation Using Deep Neural Networks

    Industrial robots have gained traction in the last twenty years and have become an integral component in any sector embracing automation. Specifically, the automotive industry deploys a wide range of industrial robots in a multitude of assembly lines worldwide. These robots perform tasks with the utmost level of repeatability and incomparable speed. It is that speed and consistency that has always made the robotic task an upgrade over the same task completed by a human. The cost savings provide a strong return on investment, leading corporations to automate and deploy robotic solutions wherever feasible. The cost of commissioning and set-up is the largest deterring factor in any decision regarding robotics and automation. This thesis examines the option of eliminating the programming and commissioning portion of robotic integration. If the environment is dynamic and can undergo various iterations of parts, changes in lighting, and changes in part placement in the cell, then the robot will struggle to function because it is not capable of adapting to these variables. If a couple of cameras can be introduced to capture the operator’s motions and part variability, then Learning from Demonstration (LfD) can be implemented to potentially solve this prevalent issue in today’s automotive industry. With assistance from machine learning algorithms, deep neural networks, and transfer learning, LfD can thrive and become a viable solution. The system developed in this work is a robotic cell that learns from demonstration. The proposed approach is based on computer vision to observe human actions and deep learning to perceive the demonstrator’s actions and manipulated objects.
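
    A minimal Python (PyTorch) sketch of the kind of vision-based LfD model described, a convolutional network mapping a camera frame of the demonstration to a robot action, follows. The architecture, input size and 6-DoF output are illustrative assumptions, not the network actually developed in the thesis.

        import torch
        import torch.nn as nn

        class VisionPolicy(nn.Module):
            """Camera image -> robot action (e.g. a 6-DoF end-effector target)."""
            def __init__(self, action_dim: int = 6):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),          # global pooling to a 32-d feature
                )
                self.head = nn.Sequential(
                    nn.Flatten(), nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, action_dim)
                )

            def forward(self, image: torch.Tensor) -> torch.Tensor:
                return self.head(self.encoder(image))

        # One behavioural-cloning step on a batch of (image, action) pairs;
        # in practice the pairs would come from cameras observing the operator.
        policy = VisionPolicy()
        optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
        images = torch.randn(8, 3, 96, 96)            # placeholder camera frames
        demo_actions = torch.randn(8, 6)              # placeholder demonstrated actions

        loss = nn.functional.mse_loss(policy(images), demo_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        print(float(loss))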