
    POMDP Model Learning for Human Robot Collaboration

    Recent years have seen human-robot collaboration (HRC) quickly emerge as a hot research area at the intersection of control, robotics, and psychology. While most existing work in HRC has focused on either low-level human-aware motion planning or HRC interface design, we are particularly interested in the formal design of HRC for high-level complex missions, where it is critically important to obtain an accurate yet tractable human model. Instead of assuming the human model is given, we ask whether it is reasonable to learn human models from observed perception data, such as the gestures, eye movements, and head motions of the human in question. As an initial step, we adopt a partially observable Markov decision process (POMDP) model in this work, as mounting evidence from psychology studies suggests that human behavior has Markovian properties. In addition, the POMDP provides a general modeling framework for sequential decision making where states are hidden and actions have stochastic outcomes. Distinct from the majority of the POMDP model learning literature, we do not assume that the states, the transition structure, or a bound on the number of states in the POMDP model are given. Instead, we use a Bayesian non-parametric learning approach to identify potential human states from data. We then adopt an approach inspired by probably approximately correct (PAC) learning to obtain not only an estimate of the transition probabilities but also a confidence interval associated with each estimate. As a result, the performance of the control policy derived from the estimated model is guaranteed to be sufficiently close to that under the true model. Finally, data collected from a driver-assistance test-bed are used to train the model, illustrating the effectiveness of the proposed learning method.
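
    A minimal sketch of the PAC-inspired estimation step described above, assuming transition counts have already been collected; the function name and the per-entry Hoeffding bound are illustrative choices, not the paper's implementation.

    import numpy as np

    def estimate_transitions(counts, delta=0.05):
        """counts[s, a, s2] = number of observed (s, a) -> s2 transitions."""
        n_sa = counts.sum(axis=2, keepdims=True)              # visits to each (s, a)
        p_hat = np.divide(counts, n_sa,
                          out=np.zeros(counts.shape), where=n_sa > 0)
        # Hoeffding bound: with probability >= 1 - delta, each entry of the
        # true transition matrix lies within `radius` of its estimate.
        radius = np.sqrt(np.log(2.0 / delta) / (2.0 * np.maximum(n_sa, 1)))
        return p_hat, radius

    counts = np.random.randint(0, 50, size=(3, 2, 3))         # toy: 3 states, 2 actions
    p_hat, radius = estimate_transitions(counts)
    print(p_hat[0, 0], radius[0, 0])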

    FABRIC: A Framework for the Design and Evaluation of Collaborative Robots with Extended Human Adaptation

    A limitation of collaborative robots (cobots) is their lack of ability to adapt to human partners, who typically exhibit an immense diversity of behaviors. We present an autonomous framework that serves as a cobot's real-time decision-making mechanism, anticipating a variety of human characteristics and behaviors, including human errors, toward personalized collaboration. Our framework handles such behaviors at two levels: 1) short-term human behaviors are adapted through our novel Anticipatory Partially Observable Markov Decision Process (A-POMDP) models, covering a human's changing intent (motivation), availability, and capability; 2) long-term changes in human characteristics are adapted through our novel Adaptive Bayesian Policy Selection (ABPS) mechanism, which selects a short-term decision model, e.g., an A-POMDP, according to an estimate of the human's workplace characteristics, such as her expertise and collaboration preferences. To design and evaluate our framework over a diversity of human behaviors, we propose a pipeline in which we first train and rigorously test the framework in simulation over novel human models. We then deploy and evaluate it on our novel physical experiment setup, which induces cognitive load on humans in order to observe their dynamic behaviors, including their mistakes, and their changing characteristics such as expertise. We conduct user studies and show that our framework effectively collaborates non-stop for hours and adapts to various changing human behaviors and characteristics in real time. This increases the efficiency and naturalness of the collaboration, with higher perceived collaboration, positive teammate traits, and human trust. We believe that such extended human adaptation is key to the long-term use of cobots.
    Comment: The article is in review for publication in the International Journal of Robotics Research.
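
    The long-term selection layer can be pictured with a generic bandit-style sketch: keep a Bayesian estimate of how well each short-term decision model fits the current partner, and pick one per episode. This is plain Thompson sampling under assumed policy names, not the paper's ABPS algorithm, which additionally conditions on estimated human types.

    import random

    class PolicySelector:
        def __init__(self, policy_names):
            # Beta(1, 1) prior on each policy's success rate with this partner.
            self.stats = {name: [1.0, 1.0] for name in policy_names}

        def select(self):
            # Thompson sampling: sample each posterior, act on the best draw.
            draws = {n: random.betavariate(a, b) for n, (a, b) in self.stats.items()}
            return max(draws, key=draws.get)

        def update(self, name, success):
            a, b = self.stats[name]
            self.stats[name] = [a + success, b + (1 - success)]

    selector = PolicySelector(["assist_novice", "assist_expert", "hands_off"])
    for _ in range(100):
        policy = selector.select()
        success = int(random.random() < 0.7)   # stand-in for observed task outcome
        selector.update(policy, success)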

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users over the long term both require an understanding of the dynamics of symbol systems, and this is crucially important. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system: a symbol system that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions for research in SER.
    Comment: submitted to Advanced Robotics.
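
    As a toy illustration of the multimodal categorization idea, one can concatenate features from different modalities and let a Dirichlet-process mixture decide how many object categories the data support. Real SER systems use richer models (e.g., multimodal latent Dirichlet allocation), so this sketch only conveys the unsupervised flavour; the feature layout is invented.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    # Each row: one object's visual features concatenated with haptic features.
    category_a = rng.normal([0.0, 0.0, 1.0, 1.0], 0.3, size=(50, 4))
    category_b = rng.normal([3.0, 3.0, -1.0, -1.0], 0.3, size=(50, 4))
    X = np.vstack([category_a, category_b])    # unlabeled observations

    dpgmm = BayesianGaussianMixture(
        n_components=10,                       # upper bound, not the answer
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    ).fit(X)
    print("effective categories:", int(np.sum(dpgmm.weights_ > 0.05)))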

    Optimising Outcomes of Human-Agent Collaboration using Trust Calibration

    As collaborative agents are implemented within everyday environments and the workforce, user trust in these agents becomes critical to consider. Trust affects user decision making, rendering it an essential component to consider when designing for successful Human-Agent Collaboration (HAC). The purpose of this work is to investigate the relationship between user trust and decision making, with the overall aim of providing a trust calibration methodology to achieve the goals and optimise the outcomes of HAC. Recommender systems are used as a testbed for investigation, offering insight into human collaboration in dyadic decision domains. Four studies are conducted, including in-person, online, and simulation experiments. The first study provides evidence of a relationship between user perception of a collaborative agent and trust. Outcomes of the second study demonstrate that initial trust can be used to predict task outcome during HAC, with Signal Detection Theory (SDT) introduced as a method to interpret user decision making in-task. The third study provides evidence that the implementation of different features within a single agent's interface influences user perception and trust, subsequently impacting the outcomes of HAC. Finally, a computational trust calibration methodology harnessing a Partially Observable Markov Decision Process (POMDP) model and SDT is presented and assessed, providing an improved understanding of the mechanisms governing user trust and its relationship with decision making and collaborative task performance during HAC. The contributions of this work address important gaps within the HAC literature. The implications of the proposed methodology and its application to alternative domains are identified and discussed.
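
    The SDT reading of in-task decisions can be made concrete with the standard sensitivity and bias statistics; the counts and the log-linear correction below are illustrative, not taken from the thesis.

    from statistics import NormalDist

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction keeps rates off 0 and 1 so z-scores stay finite.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        d_prime = z(hit_rate) - z(fa_rate)              # sensitivity
        criterion = -0.5 * (z(hit_rate) + z(fa_rate))   # response bias
        return d_prime, criterion

    # e.g., a user accepting/rejecting a recommender's suggestions:
    print(sdt_measures(hits=40, misses=10, false_alarms=15, correct_rejections=35))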

    A systematic literature review of decision-making and control systems for autonomous and social robots

    In recent years, considerable research has been carried out to develop robots that can improve our quality of life during tedious and challenging tasks. In these contexts, robots operating without human supervision open many possibilities to assist people in their daily activities. When autonomous robots collaborate with humans, social skills are necessary for adequate communication and cooperation. Considering these facts, endowing autonomous and social robots with decision-making and control models is critical for appropriately fulfilling their initial goals. This manuscript presents a systematic review of the evolution of decision-making systems and control architectures for autonomous and social robots over the last three decades. These architectures have been incorporating new methods based on biologically inspired models and machine learning to enhance the possibilities these systems offer to developed societies. The review explores the most novel advances in each application area, comparing their most essential features. Additionally, we describe the current challenges of software architectures devoted to action selection, an analysis not provided in similar reviews of behavioural models for autonomous and social robots. Finally, we present the directions that these systems may take in the future.
    The research leading to these results has received funding from the projects Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES), RTI2018-096338-B-I00, funded by the Ministerio de Ciencia, Innovación y Universidades, and Robots sociales para mitigar la soledad y el aislamiento en mayores (SOROLI), PID2021-123941OA-I00, funded by the Agencia Estatal de Investigación (AEI), Spanish Ministerio de Ciencia e Innovación. This publication is part of the R&D&I project PLEC2021-007819 funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR.

    Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

    Building a human-like integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals of artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development will be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture using probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. In this paper, the process of building the WB-PGM and learning from the human brain to build cognitive architectures is described.
    Comment: 55 pages, 8 figures, submitted to Neural Networks.

    World model learning and inference

    Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.
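
    The bound at the heart of the free energy principle can be checked numerically in a few lines: for a discrete hidden state, the variational free energy F(q) = E_q[ln q(s) - ln p(o, s)] always upper-bounds the surprise -ln p(o), with equality when q is the exact posterior. The prior and likelihood below are made up for illustration.

    import numpy as np

    prior = np.array([0.5, 0.5])           # p(s)
    likelihood = np.array([0.9, 0.2])      # p(o | s) for the observed o
    joint = prior * likelihood             # p(o, s)
    posterior = joint / joint.sum()        # exact p(s | o)

    def free_energy(q):
        return float(np.sum(q * (np.log(q) - np.log(joint))))

    surprise = -np.log(joint.sum())                        # -ln p(o)
    print(free_energy(np.array([0.6, 0.4])) >= surprise)   # True for any q
    print(np.isclose(free_energy(posterior), surprise))    # equality at posterior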

    Do You Feel Me?: Learning Language from Humans with Robot Emotional Displays

    In working towards accomplishing a human-level acquisition and understanding of language, a robot must meet two requirements: the ability to learn words from interactions with its physical environment, and the ability to learn language from people in settings for language use, such as spoken dialogue. The second requirement poses a problem: if a robot is capable of asking a human teacher well-formed questions, it will lead the teacher to provide responses that are too advanced for a robot, which requires simple inputs and feedback to build word-level comprehension. In a live interactive study, we tested the hypothesis that emotional displays are a viable solution to this problem of how to communicate without relying on language the robot doesn't (indeed, cannot) actually know. Emotional displays can relate the robot's state of understanding to its human teacher, and are developmentally appropriate for the most common language acquisition setting: an adult interacting with a child. For our study, we programmed a robot to independently explore the world and elicit relevant word references and feedback from the participants, who are confronted with two robot settings: a setting in which the robot displays emotions, and a second setting in which the robot focuses on the task without displaying emotions, which also tests whether emotional displays lead a participant to make incorrect assumptions regarding the robot's understanding. Analyzing the results from the surveys and the Grounded Semantics classifiers, we discovered that the use of emotional displays increases the number of inputs provided to the robot, an effect that is modulated by the ratio of positive to negative emotions that were displayed.