2,647 research outputs found

    Interaction and Task Patterns in Symbiotic, Mixed-Initiative Interaction

    In this paper we explain our concept of Interaction and Task Patterns, and discuss how such patterns can be applied to support mixed initiative in symbiotic human-robot interaction with both service and industrial robotic systems.

    Nonstrict hierarchical reinforcement learning for interactive systems and robots

    Conversational systems and robots that use reinforcement learning for policy optimization in large domains often face the problem of limited scalability. This problem has been addressed either by using function approximation techniques that approximate the true value function of a policy, or by using a hierarchical decomposition of a learning task into subtasks. We present a novel approach for dialogue policy optimization that combines the benefits of both hierarchical control and function approximation and that allows flexible transitions between dialogue subtasks, giving human users more control over the dialogue. To this end, each reinforcement learning agent in the hierarchy is extended with a subtask transition function and a dynamic state space to allow flexible switching between subdialogues. In addition, the subtask policies are represented with linear function approximation in order to generalize decision making to situations unseen in training. Our proposed approach is evaluated in an interactive conversational robot that learns to play quiz games. Experimental results, using simulation and real users, provide evidence that our proposed approach can lead to more flexible (natural) interactions than strict hierarchical control and that it is preferred by human users.
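    The two ingredients the abstract combines can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the feature map, learning rates, and subtask names ("quiz", "chitchat") are invented for the example. Each subtask agent keeps a linear Q-function Q(s, a) = w · phi(s, a) updated by one-step Q-learning, and a subtask transition function lets a user-initiated switch move control to another subdialogue rather than enforcing a strict hierarchy.

```python
import numpy as np

def features(state, action, n_actions):
    """Toy feature vector phi(s, a) for a 1-D continuous state."""
    phi = np.zeros(2 * n_actions)
    phi[2 * action] = 1.0        # action indicator
    phi[2 * action + 1] = state  # state-dependent term
    return phi

class LinearSubtaskAgent:
    """One agent in the hierarchy, with Q(s, a) = w . phi(s, a)."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.9):
        self.n_actions = n_actions
        self.w = np.zeros(2 * n_actions)
        self.alpha, self.gamma = alpha, gamma

    def q(self, state, action):
        return float(self.w @ features(state, action, self.n_actions))

    def update(self, s, a, r, s_next):
        """One step of Q-learning with linear function approximation."""
        target = r + self.gamma * max(self.q(s_next, b) for b in range(self.n_actions))
        td_error = target - self.q(s, a)
        self.w += self.alpha * td_error * features(s, a, self.n_actions)
        return td_error

def subtask_transition(current, user_request, allowed):
    """Non-strict hierarchy: honour a user-initiated switch if allowed."""
    return user_request if user_request in allowed else current

agent = LinearSubtaskAgent(n_actions=2)
agent.update(s=0.5, a=1, r=1.0, s_next=0.2)
print(subtask_transition("quiz", "chitchat", {"quiz", "chitchat"}))  # chitchat
```

    The key design point is that generalization comes from the shared weight vector, while flexibility comes from letting `subtask_transition` override the stack discipline of strict hierarchical control.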

    Dynamic bayesian networks for learning interactions between assistive robotic walker and human users

    Detection of an individual's intentions and actions from a stream of human behaviour is an open problem. Yet for robotic agents to be truly perceived as human-friendly entities, they need to respond naturally to the physical interactions with the surrounding environment, most notably with the user. This paper proposes a generative probabilistic approach in the form of Dynamic Bayesian Networks (DBNs) to seamlessly account for users' attitudes. A model is presented which can learn to recognize a subset of possible actions by the user of a gait-stability support power rollator walker, such as standing up, sitting down or assistive strolling, and adapt the behaviour of the device accordingly. The communication between the user and the device is implicit, without any explicit input such as a keypad or voice. The end result is a decision-making mechanism that best matches the user's cognitive attitude towards a set of assistive tasks, effectively incorporating the evolving activity model of the user in the process. The proposed framework is evaluated in real-life conditions. © 2010 Springer-Verlag Berlin Heidelberg
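    The inference the abstract describes is recursive Bayesian filtering over hidden user actions. A minimal sketch follows; the state names, transition matrix, and observation model are illustrative numbers, not the models learned in the paper, and the full DBN would condition on several sensor streams rather than one discrete reading.

```python
import numpy as np

# Hidden user actions of the walker user (illustrative subset).
STATES = ["standing_up", "sitting_down", "strolling"]

# Transition model P(x_t | x_{t-1}): actions tend to persist.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

# Observation model P(z_t | x_t): rows = states, cols = sensor readings.
O = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.7, 0.1],
              [0.1, 0.1, 0.8]])

def filter_step(belief, z):
    """One predict-then-update step of Bayesian filtering."""
    predicted = T.T @ belief          # predict through the motion model
    updated = O[:, z] * predicted     # weight by observation likelihood
    return updated / updated.sum()    # normalise to a distribution

belief = np.ones(3) / 3               # uniform prior over actions
for z in [0, 0, 2, 2]:                # two 'stand' readings, two 'stroll'
    belief = filter_step(belief, z)
print(STATES[int(np.argmax(belief))])  # strolling
```

    The device then adapts its assistance to whichever action currently carries the most posterior mass, which is what makes the interaction implicit.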

    A probabilistic approach to learn activities of daily living of a mobility aid device user

    © 2014 IEEE. The problem of inferring human behaviour is naturally complex: people interact with the environment and each other in many different ways, and dealing with the often incomplete and uncertain sensed data by which the actions are perceived only compounds the difficulty of the problem. In this paper, we propose a framework whereby these elaborate behaviours can be naturally simplified by decomposing them into smaller activities, whose temporal dependencies can be more efficiently represented via probabilistic hierarchical learning models. In this regard, patterns of a number of activities typically carried out by users of an ambulatory aid device have been identified with the aid of a Hierarchical Hidden Markov Model (HHMM) framework. By decomposing the complex behaviours into multiple layers of abstraction, the approach is shown to be capable of modelling and learning these tightly coupled human-machine interactions. The inference accuracy of the proposed model is shown to compare favourably against more traditional discriminative models, as well as other comparable generative strategies, providing a complete picture that highlights the benefits of the proposed approach and opens the door to more intelligent assistance with a robotic mobility aid.
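    The hierarchical decomposition can be illustrated with a deliberately stripped-down two-level model: a top-level activity emits a sequence of sub-actions governed by its own transition chain, and an observed sequence is attributed to the activity whose sub-model explains it best. The activity names, sub-actions, and probabilities below are invented for the sketch; a real HHMM also learns its parameters and handles sub-model entry/exit states.

```python
import numpy as np

SUB_ACTIONS = {"lean": 0, "push": 1, "pause": 2}

# Per-activity sub-action transition matrices (rows sum to 1).
ACTIVITY_MODELS = {
    "stand_up": np.array([[0.7, 0.2, 0.1],
                          [0.3, 0.4, 0.3],
                          [0.3, 0.3, 0.4]]),
    "stroll":   np.array([[0.2, 0.7, 0.1],
                          [0.1, 0.8, 0.1],
                          [0.2, 0.6, 0.2]]),
}

def log_likelihood(seq, trans):
    """Log P(sub-action sequence | activity) under a first-order chain."""
    n = trans.shape[0]
    ll = np.log(1.0 / n)  # uniform prior over the first sub-action
    for prev, cur in zip(seq, seq[1:]):
        ll += np.log(trans[prev, cur])
    return ll

def classify(seq):
    """Pick the top-level activity whose sub-model best explains seq."""
    return max(ACTIVITY_MODELS, key=lambda a: log_likelihood(seq, ACTIVITY_MODELS[a]))

seq = [SUB_ACTIONS[s] for s in ["push", "push", "push", "pause", "push"]]
print(classify(seq))  # a push-heavy sequence is scored highest by 'stroll'
```

    Splitting behaviours this way keeps each layer's model small, which is the scalability argument the abstract makes for the hierarchical representation.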

    Low-level grounding in a multimodal mobile service robot conversational system using graphical models

    The main task of a service robot with a voice-enabled communication interface is to engage a user in dialogue, providing access to the services it is designed for. In managing such interaction, inferring the user goal (intention) from the request for a service at each dialogue turn is the key issue. Under service-robot deployment conditions, speech recognition limitations with noisy speech input and inexperienced users may jeopardize user-goal identification. In this paper, we introduce a grounding-state-based model motivated by reducing the risk of communication failure due to incorrect user-goal identification. The model exploits the multiple modalities available in the service robot system to provide evidence for reaching grounding states. In order for the speech input to be treated as sufficiently grounded (correctly understood) by the robot, four proposed states have to be reached. Bayesian networks combining speech and non-speech modalities during user-goal identification are used to estimate the probability that each grounding state has been reached. These probabilities serve as a basis for detecting whether the user is attending to the conversation, as well as for deciding on an alternative input modality (e.g., buttons) when the speech modality is unreliable. The Bayesian networks used in the grounding model are specially designed for modularity and computationally efficient inference. The potential of the proposed model is demonstrated by comparing a conversational system for the mobile service robot RoboX employing only speech recognition for user-goal identification with a system equipped with multimodal grounding. The evaluation experiments use component- and system-level metrics for technical (objective) and user-based (subjective) evaluation, with multimodal data collected during conversations of the robot RoboX with users.
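    The grounding decision can be reduced to a toy calculation: fuse per-modality evidence into a probability that a grounding state has been reached, then fall back to a more reliable modality when that probability is too low. The numbers, the two-cue naive-Bayes fusion, and the 0.7 threshold below are assumptions for illustration; the paper's networks combine more variables and four successive states.

```python
def grounding_probability(p_speech, p_gesture, prior=0.5):
    """Naive-Bayes style fusion of two conditionally independent cues
    into P(grounded | evidence)."""
    num = prior * p_speech * p_gesture
    den = num + (1 - prior) * (1 - p_speech) * (1 - p_gesture)
    return num / den

def choose_modality(p_grounded, threshold=0.7):
    """Switch to a more reliable input (e.g. buttons) when uncertain."""
    return "speech" if p_grounded >= threshold else "buttons"

p = grounding_probability(p_speech=0.9, p_gesture=0.8)
print(round(p, 3), choose_modality(p))
```

    With strong agreement between the cues the dialogue proceeds by voice; with a noisy recogniser and ambiguous non-speech evidence the same rule routes the turn to the buttons, which is the failure-reduction behaviour the model is built around.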

    World model learning and inference

    Understanding information processing in the brain, and creating general-purpose artificial intelligence, are long-standing aspirations of scientists and engineers worldwide. The distinctive features of human intelligence are high-level cognition and control in various interactions with the world, including the self, which are not defined in advance and vary over time. The challenge of building human-like intelligent machines, as well as progress in brain science and behavioural analyses, robotics, and their associated theoretical formalisations, speaks to the importance of world-model learning and inference. In this article, after briefly surveying the history and challenges of internal model learning and probabilistic learning, we introduce the free energy principle, which provides a useful framework within which to consider neuronal computation and probabilistic world models. Next, we showcase examples of human behaviour and cognition explained under that principle. We then describe symbol emergence in the context of probabilistic modelling, as a topic at the frontiers of cognitive robotics. Lastly, we review recent progress in creating human-like intelligence by using novel probabilistic programming languages. The striking consensus that emerges from these studies is that probabilistic descriptions of learning and inference are powerful and effective ways to create human-like artificial intelligent machines and to understand intelligence in the context of how humans interact with their world.
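    For reference, the quantity at the heart of the free energy principle can be written in its standard variational form (textbook notation, not taken from this article): with observations $o$, hidden states $s$, a generative model $p(o,s)$ and an approximate posterior $q(s)$,

```latex
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o,s)\right]
  \;=\; D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] \;-\; \ln p(o)
```

    Since the KL divergence is non-negative, $F$ upper-bounds the surprise $-\ln p(o)$; minimising it simultaneously improves the posterior approximation (perception) and the model's fit to observations, which is why the principle serves as a unifying account of world-model learning and inference.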

    Probabilistic Human-Robot Information Fusion

    This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. 
Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots' current beliefs, which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run time based on the uncertainty in the robots' beliefs. A navigation task is used to demonstrate the adjustable-autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study comparing the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability.
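    The query-the-operator rule can be made concrete with a small value-of-information sketch. Everything here is an assumption for illustration: benefit is approximated as the expected reduction in belief entropy from one human answer, the answer model `L` and the cost are invented, and the thesis's own formulation may weigh benefit differently.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a discrete belief."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p[p > 0] * np.log2(p[p > 0])))

def expected_entropy_after_query(belief, likelihoods):
    """Expectation over answers z of H(posterior | z), with P(z)
    computed from the current belief and the answer model."""
    belief = np.asarray(belief, dtype=float)
    h = 0.0
    for col in np.asarray(likelihoods, dtype=float).T:  # one column per answer
        p_z = float(col @ belief)
        posterior = col * belief / p_z
        h += p_z * entropy(posterior)
    return h

def should_query(belief, likelihoods, cost):
    """Ask the operator only when expected information gain exceeds cost."""
    info_gain = entropy(belief) - expected_entropy_after_query(belief, likelihoods)
    return info_gain > cost

# Uncertain belief + an informative human answer model -> worth asking.
L = [[0.9, 0.1], [0.1, 0.9]]  # P(answer | true state), rows = states
print(should_query([0.5, 0.5], L, cost=0.2))
```

    When the robots' belief is already confident, the same rule declines to interrupt the operator, which is exactly the run-time autonomy adjustment described above.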

    A proposal for a global task planning architecture using the RoboEarth cloud based framework

    As robotic systems become more and more capable of assisting in human domains, methods are sought to compose robot-executable plans from abstract human instructions. To cope with the semantically rich and highly expressive nature of human instructions, Hierarchical Task Network (HTN) planning is often employed along with domain knowledge to solve planning problems in a pragmatic way. Commonly, the domain knowledge is specific to the planning problem at hand, impeding re-use. Therefore, this paper conceptualizes a global planning architecture based on the worldwide accessible RoboEarth cloud framework. This architecture allows environmental state inference and plan monitoring at a global level. To enable plan re-use for future requests, the RoboEarth action language has been adapted to allow semantic matching of robot capabilities with previously composed plans.
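    The core HTN idea the abstract relies on is simple to sketch: compound tasks are expanded via methods until only primitive, robot-executable actions remain. The domain below (drink-serving tasks and method table) is hypothetical and has nothing to do with the RoboEarth action language; real HTN planners also check preconditions and backtrack over alternative methods.

```python
# Method table: each compound task maps to a list of candidate
# decompositions; each decomposition is an ordered list of subtasks.
METHODS = {
    "serve_drink": [["fetch_cup", "fill_cup", "deliver_cup"]],
    "fetch_cup":   [["navigate_to_shelf", "grasp_cup"]],
}
PRIMITIVES = {"navigate_to_shelf", "grasp_cup", "fill_cup", "deliver_cup"}

def decompose(task):
    """Depth-first expansion of a task into a primitive action sequence."""
    if task in PRIMITIVES:
        return [task]
    plan = []
    for subtask in METHODS[task][0]:  # first applicable method only
        plan.extend(decompose(subtask))
    return plan

print(decompose("serve_drink"))
# ['navigate_to_shelf', 'grasp_cup', 'fill_cup', 'deliver_cup']
```

    Making such method tables shareable and matching their primitives against a robot's advertised capabilities is what turns a one-off domain description into the globally re-usable plans the paper proposes.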

    Whole brain Probabilistic Generative Model toward Realizing Cognitive Architecture for Developmental Robots

    Building a human-like integrative artificial cognitive system, that is, an artificial general intelligence, is one of the goals in artificial intelligence and developmental robotics. Furthermore, a computational model that enables an artificial cognitive system to achieve cognitive development would be an excellent reference for brain and cognitive science. This paper describes the development of a cognitive architecture using probabilistic generative models (PGMs) to fully mirror the human cognitive system. The integrative model is called a whole-brain PGM (WB-PGM). It is both brain-inspired and PGM-based. In this paper, the process of building the WB-PGM and learning from the human brain to build cognitive architectures is described.
    Comment: 55 pages, 8 figures, submitted to Neural Networks

    Human-Intelligence and Machine-Intelligence Decision Governance Formal Ontology

    Since the beginning of the human race, decision making and rational thinking have played a pivotal role in whether mankind would exist and succeed or fail and become extinct. Self-awareness, cognitive thinking, creativity, and emotional magnitude allowed us to advance civilization and to take further steps toward achieving previously unreachable goals. From the invention of the wheel to rockets, and from the telegraph to satellites, all technological ventures went through many upgrades and updates. Recently, increasing computer CPU power and memory capacity have contributed to smarter and faster computing appliances that, in turn, have accelerated the integration of artificial intelligence (AI) into organizational processes and everyday life. Artificial intelligence can now be found in a wide range of organizational systems, including healthcare and medical diagnosis, automated stock trading, robotic production, telecommunications, space exploration, and homeland security. Self-driving cars and drones are just the latest extensions of AI. This thrust of AI into organizations and daily life rests on the AI community's unstated assumption of its ability to completely replicate human learning and intelligence in AI. Unfortunately, even today the AI community is not close to completely coding and emulating human intelligence in machines. Despite the digital and technological revolution at the application level, there has been little to no research addressing the question of decision-making governance in human-intelligent and machine-intelligent (HI-MI) systems. There also exist no foundational, core reference, or domain ontologies for HI-MI decision governance systems.
Further, in the absence of an expert reference base or body of knowledge (BoK) integrated with an ontological framework, decision makers must rely on best practices or standards that differ from organization to organization and government to government, contributing to system failure in complex, mission-critical situations. It is still debatable whether and when human or machine decision capacity should govern, or when a joint human-intelligence and machine-intelligence (HI-MI) decision capacity is required, in any given decision situation. To address this deficiency, this research establishes a formal, top-level foundational ontology of HI-MI decision governance, in parallel with a grounded-theory-based body of knowledge that forms the theoretical foundation of a systemic HI-MI decision governance framework.