33 research outputs found

    Development of Cognitive Capabilities in Humanoid Robots

    Building intelligent systems with a human level of competence is the ultimate grand challenge for science and technology in general, and for the computational intelligence community in particular. Recent theories of autonomous cognitive systems have focused on the close integration (grounding) of communication with perception, categorisation and action. Cognitive systems are essential for integrated multi-platform systems that are capable of sensing and communicating. This thesis presents a cognitive system for a humanoid robot that integrates abilities such as object detection and recognition, merged with natural language understanding and refined motor control. The work comprises three studies: (1) generic manipulation of objects using the NMFT algorithm, successfully testing the extension of NMFT to the control of robot behaviour; (2) the development of a robotic simulator; (3) robotic simulation experiments showing that a humanoid robot can acquire complex behavioural, cognitive, and linguistic skills through individual and social learning. The robot learns to handle and manipulate objects autonomously, to cooperate with human users, and to adapt its abilities to changes in internal and environmental conditions. The model and the experimental results reported in this thesis emphasise the importance of embodied cognition, i.e. the physical interaction between the humanoid robot's body and the environment.

    Longterm Generalized Actions for Smart, Autonomous Robot Agents

    Creating intelligent artificial systems, and in particular robots, that improve themselves just as humans do is one of the most ambitious goals in robotics and machine learning. The concept of robot experience has existed for some time now, but has not yet fully found its way into autonomous robots. This thesis is devoted both to analyzing the underlying requirements for enabling robot learning from experience and to actually implementing it on real robot hardware. For effective robot learning from experience I present and discuss three main requirements: (a) clearly expressing what a robot should do, on a vague, abstract level: I introduce Generalized Plans as a means to express the intention rather than the actual action sequence of a task, removing as much task-specific knowledge as possible; (b) defining, collecting, and analyzing robot experiences to enable robots to improve: I present Episodic Memories as a container for all collected robot experiences for any arbitrary task and create sophisticated action (effect) prediction models from them, allowing robots to make better decisions; (c) properly abstracting from reality and dealing with failures in the domain they occurred in: I propose failure handling strategies, a failure taxonomy extensible through experience, and discuss the relationship between symbolic/discrete and subsymbolic/continuous systems in terms of robot plans interacting with real-world sensors and actuators. I concentrate on the domain of human-scale robot activities, specifically household chores. Tasks in this domain offer many repeating patterns and are ideal candidates for abstracting, encapsulating, and modularizing robot plans into a more general form. In this way, very similar plan structures are transformed into parameters that change the behavior of the robot while it performs the task, making the plans more flexible. While performing tasks, robots encounter the same or similar situations over and over again. While humans are able to benefit from this and improve at what they do, robots in general lack this ability. This thesis presents techniques for collecting robot experiences and making them accessible to robots and outside observers alike, answering high-level questions such as "What are good spots to stand at for grasping objects from the fridge?" or "Which objects are especially difficult to grasp with two hands while they are in the oven?". By structuring and tapping into a robot's memory, it can make more informed decisions that are based not on manually encoded information but on self-improved behavior. To this end, I present several experience-based approaches for improving a robot's autonomous decisions, such as parameter choices, at execution time. Robots that interact with the real world are bound to encounter unexpected events and must react properly to failures of any kind of action. I present an extensible failure model that suits the structure of Generalized Plans and Episodic Memories and make clear how each module should deal with its own failures rather than directly handing them up to a governing cognitive architecture. In addition, I distinguish between discrete parametrizations of Generalized Plans and continuous low-level components, and show how to translate between the two.
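
    To make the idea of an experience store concrete, the following is a minimal sketch in the spirit of the Episodic Memories described above; all names here (Episode, EpisodicMemory, success_rate) are illustrative assumptions, not the thesis's actual interfaces.

        # Hypothetical episodic-memory store: record task executions and
        # query them for crude effect prediction. Names are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class Episode:
            task: str           # e.g. "grasp-from-fridge"
            parameters: dict    # e.g. {"stand_x": 0.4, "stand_y": -0.2}
            outcome: str        # "success" or a failure-taxonomy label

        @dataclass
        class EpisodicMemory:
            episodes: list = field(default_factory=list)

            def record(self, episode: Episode) -> None:
                self.episodes.append(episode)

            def success_rate(self, task: str, predicate) -> float:
                """How often did parameter choices matching `predicate`
                succeed at `task`? A crude effect-prediction model."""
                matching = [e for e in self.episodes
                            if e.task == task and predicate(e.parameters)]
                if not matching:
                    return 0.0
                wins = sum(e.outcome == "success" for e in matching)
                return wins / len(matching)

        # A question like "what are good spots to stand at?" then becomes
        # a search over recorded parameters ranked by estimated success.
        memory = EpisodicMemory()
        memory.record(Episode("grasp-from-fridge", {"stand_x": 0.4}, "success"))
        memory.record(Episode("grasp-from-fridge", {"stand_x": 0.9}, "out-of-reach"))
        print(memory.success_rate("grasp-from-fridge", lambda p: p["stand_x"] < 0.5))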

    Robots as Powerful Allies for the Study of Embodied Cognition from the Bottom Up

    A large body of compelling evidence has accumulated demonstrating that embodiment – the agent's physical setup, including its shape, materials, sensors and actuators – is constitutive for any form of cognition and that, as a consequence, models of cognition need to be embodied. In contrast to the methods empirical sciences use to study cognition, robots can be freely manipulated, and virtually all key variables of their embodiment and control programs can be systematically varied. As such, they provide an extremely powerful tool of investigation. We present a robotic bottom-up or developmental approach, focusing on three stages: (a) low-level behaviors like walking and reflexes, (b) learning regularities in sensorimotor spaces, and (c) human-like cognition. We also show that robot-based research is not only a productive path to deepening our understanding of cognition, but that robots can strongly benefit from human-like cognition in order to become more autonomous, robust, resilient, and safe.

    Anthropomorphic robot finger with multi-point tactile sensation

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 84-95). The goal of this research is to develop the prototype of a tactile sensing platform for anthropomorphic manipulation research. We investigate this problem through the fabrication and simple control of a planar 2-DOF robotic finger inspired by anatomic consistency, self-containment, and adaptability. The robot is equipped with a tactile sensor array based on optical transducer technology, whereby localized changes in light intensity within an illuminated foam substrate correspond to the distribution and magnitude of forces applied to the sensor surface plane [58]. The integration of tactile perception is a key component in realizing robotic systems that interact organically with the world. Such natural behavior is characterized by compliant performance that can initiate internal, and respond to external, force application in a dynamic environment. However, most current manipulators that support some form of haptic feedback either derive solely proprioceptive sensation or limit tactile sensors to the mechanical fingertips. These constraints are due to the technological challenges involved in high-resolution, multi-point tactile perception. In this work, however, we take the opposite approach, emphasizing the role of full-finger tactile feedback in the refinement of manual capabilities. To this end, we propose and implement a control framework for sensorimotor coordination analogous to infant-level grasping and fixturing reflexes. This thesis details the mechanisms used to achieve these sensory, actuation, and control objectives, along with the design philosophies and biological influences behind them. The results of behavioral experiments with the tactilely-modulated control scheme are also described. The hope is to integrate the modular finger into an engineered analog of the human hand with a complete haptic system. by Jessica Lauren Banks. S.M.
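
    As an illustration of the optical transducer principle, here is a minimal sketch assuming a per-taxel linear calibration from intensity change to force; the grid size, gain constant, and block-averaging step are invented for illustration and are not the thesis's actual pipeline.

        # Illustrative: estimate a coarse force map from the intensity
        # image of an illuminated foam substrate. Constants are assumed.
        import numpy as np

        CELLS = (8, 8)      # tactile array resolution (assumed)
        GAIN = 0.05         # newtons per unit intensity change (assumed)

        def force_map(frame: np.ndarray, baseline: np.ndarray) -> np.ndarray:
            """frame, baseline: 2-D grayscale images of the foam substrate,
            captured with and without contact."""
            # Contact deforms the foam and changes local brightness; use
            # the absolute deviation from the unloaded baseline as signal.
            delta = np.abs(frame.astype(float) - baseline.astype(float))
            # Pool pixels into the taxel grid by block-averaging.
            h, w = delta.shape
            ch, cw = h // CELLS[0], w // CELLS[1]
            pooled = delta[:ch * CELLS[0], :cw * CELLS[1]] \
                .reshape(CELLS[0], ch, CELLS[1], cw).mean(axis=(1, 3))
            return GAIN * pooled  # linear calibration: force ~ intensity

        baseline = np.full((64, 64), 120.0)
        frame = baseline.copy()
        frame[16:32, 16:32] += 40.0   # simulated contact patch
        print(force_map(frame, baseline).max())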

    Cognitive-developmental learning for a humanoid robot: a caregiver's gift

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004. Includes bibliographical references (p. 319-341). Building an artificial humanoid robot's brain, even at an infant's cognitive level, has been a long quest which still lies only in the realm of our imagination. Our efforts towards such a dimly imaginable task are developed according to two alternate and complementary views: cognitive and developmental. The goal of this work is to build a cognitive system for the humanoid robot, Cog, that exploits human caregivers as catalysts to perceive and learn about actions, objects, scenes, people, and the robot itself. This thesis addresses a broad spectrum of machine learning problems across several categorization levels. Actions by embodied agents are used to automatically generate training data for the learning mechanisms, so that the robot develops categorization autonomously. Taking inspiration from the human brain, a framework of algorithms and methodologies was implemented to emulate different cognitive capabilities on the humanoid robot Cog. This framework is effectively applied to a collection of AI, computer vision, and signal processing problems. Cognitive capabilities of the humanoid robot are developmentally created, starting from infant-like abilities for detecting, segmenting, and recognizing percepts over multiple sensing modalities. Human caregivers provide a helping hand for communicating such information to the robot. This is done by actions that create meaningful events (by changing the world in which the robot is situated), thus inducing the "compliant perception" of objects from these human-robot interactions. Self-exploration of the world extends the robot's knowledge concerning object properties. This thesis argues for enculturating humanoid robots, using infant development as a metaphor for building a humanoid robot's cognitive abilities. A human caregiver redesigns a humanoid's brain by teaching the humanoid robot as she would teach a child, using children's learning aids such as books, drawing boards, or other cognitive artifacts. Multi-modal object properties are learned using these tools and inserted into several recognition schemes, which are then applied to developmentally acquire new object representations. The humanoid robot therefore sees the world through the caregiver's eyes. by Artur Miguel Do Amaral Arsenio. Ph.D.

    Teaching an old robot new tricks: learning novel tasks via interaction with people and things

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 169-173). As AI has begun to reach out beyond its symbolic, objectivist roots into the embodied, experientialist realm, many projects are exploring different aspects of creating machines which interact with and respond to the world as humans do. Techniques for visual processing, object recognition, emotional response, gesture production and recognition, etc., are necessary components of a complete humanoid robot. However, most projects invariably concentrate on developing a few of these individual components, neglecting the issue of how all of these pieces would eventually fit together. The focus of the work in this dissertation is on creating a framework into which such specific competencies can be embedded, in a way that they can interact with each other and build layers of new functionality. To be of any practical value, such a framework must satisfy the real-world constraints of functioning in real time with noisy sensors and actuators. The humanoid robot Cog provides an unapologetically adequate platform from which to take on such a challenge. This work makes three contributions to embodied AI. First, it offers a general-purpose architecture for developing behavior-based systems distributed over networks of PCs. Second, it provides a motor-control system that simulates several biological features which impact the development of motor behavior. Third, it develops a framework for a system which enables a robot to learn new behaviors via interacting with itself and the outside world. A few basic functional modules are built into this framework, enough to demonstrate the robot learning some very simple behaviors taught by a human trainer. A primary motivation for this project is the notion that it is practically impossible to build an "intelligent" machine unless it is designed partly to build itself. This work is a proof-of-concept of such an approach to integrating multiple perceptual and motor systems into a complete learning agent. by Matthew J. Marjanović. Ph.D.

    A Robotic System for Learning Visually-Driven Grasp Planning (Dissertation Proposal)

    We use findings in machine learning, developmental psychology, and neurophysiology to guide a robotic learning system's level of representation, both for actions and for percepts. Visually-driven grasping is chosen as the experimental task since it has general applicability and has been extensively researched from several perspectives. An implementation of a robotic system with a gripper, compliant instrumented wrist, arm, and vision is used to test these ideas. Several sensorimotor primitives (vision segmentation and manipulatory reflexes) are implemented in this system and may be thought of as its innate perceptual and motor abilities. Applying empirical learning techniques to real situations raises such important issues as observation sparsity in high-dimensional spaces, arbitrary underlying functional forms of the reinforcement distribution, and robustness to noise in exemplars. The well-established technique of non-parametric projection pursuit regression (PPR) is used to accomplish reinforcement learning by searching for projections of high-dimensional data sets that capture task invariants. We also pursue the following problem: how can we use human expertise and insight into grasping to train a system to select both appropriate hand preshapes and approaches for a wide variety of objects, and then have it verify and refine its skills through trial and error? To accomplish this learning we propose a new class of Density Adaptive reinforcement learning algorithms. These algorithms use statistical tests to identify possibly interesting regions of the attribute space in which the dynamics of the task change. They automatically concentrate the building of high-resolution descriptions of the reinforcement in those areas, and build low-resolution representations in regions that are either not populated in the given task or are highly uniform in outcome. Additionally, the use of any learning process generally implies failures along the way. Therefore, the mechanics of the untrained robotic system must be able to tolerate mistakes during learning and not damage itself. We address this through the use of an instrumented, compliant robot wrist that controls impact forces.
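
    A toy sketch of the density-adaptive idea follows, assuming a simple variance test and recursive binary splitting over a one-dimensional attribute; both the splitting rule and the thresholds are invented here, and the proposal's actual statistical tests may differ.

        # Refine the representation of reinforcement only where outcomes
        # are both well-sampled and non-uniform; stay coarse elsewhere.
        import statistics

        def build_cells(samples, lo, hi, min_samples=8,
                        var_threshold=0.05, depth=0):
            """samples: list of (x, reward) with x a 1-D attribute.
            Returns (lo, hi, mean_reward) cells: fine where the reward
            signal varies, coarse where it is sparse or uniform."""
            inside = [(x, r) for x, r in samples if lo <= x < hi]
            if len(inside) < min_samples or depth >= 6:
                mean = statistics.fmean(r for _, r in inside) if inside else 0.0
                return [(lo, hi, mean)]          # sparse: keep coarse
            rewards = [r for _, r in inside]
            if statistics.pvariance(rewards) < var_threshold:
                return [(lo, hi, statistics.fmean(rewards))]  # uniform
            mid = (lo + hi) / 2                  # heterogeneous: split
            return (build_cells(inside, lo, mid, min_samples,
                                var_threshold, depth + 1)
                    + build_cells(inside, mid, hi, min_samples,
                                  var_threshold, depth + 1))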

    A trajectory and force dual-incremental robot skill learning and generalization framework using improved dynamical movement primitives and adaptive neural network control

    Due to changes in the environment and errors that occur during skill initialization, a robot's operational skills must be modified to adapt to new tasks. Skills learned by methods with fixed features, such as the classical Dynamical Movement Primitive (DMP), are therefore difficult to use when the target cases differ significantly from the demonstrations. In this work, we propose an incremental robot skill learning and generalization framework comprising an incremental DMP (IDMP) for robot trajectory learning and an adaptive neural network (NN) control method, both of which are incrementally updated to enable robots to adapt to new cases. IDMP uses multi-mapping feature vectors, extended from the original feature vector, to rebuild the forcing function of the DMP. In order to maintain the original skills while representing skill changes in a new task, the new feature vector consists of three parts with different usages. The trajectories are thus gradually changed by expanding the feature and weight vectors, and all transition states are also easily recovered. An adaptive NN controller with performance constraints is then proposed to compensate for dynamics errors and changed trajectories after using the IDMP. The new controller is also incrementally updated and can accumulate and reuse learned knowledge to improve learning efficiency. Compared with other methods, the proposed framework achieves higher tracking accuracy, realizes incremental skill learning and modification, achieves multiple stylistic skills, and supports obstacle avoidance at different heights, as verified in three comparative experiments.
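
    For reference, here is a minimal sketch of the classical discrete DMP that IDMP builds on, assuming the standard Ijspeert-style formulation; the gains, basis construction, and integration step are conventional choices rather than this paper's, and fitting the weights from a demonstration is omitted.

        # Classical one-dimensional discrete DMP: a spring-damper system
        # shaped by a learned forcing function over a decaying phase x.
        import numpy as np

        class ClassicalDMP:
            def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25,
                         alpha_x=3.0):
                self.az, self.bz, self.ax = alpha_z, beta_z, alpha_x
                # Basis centers spaced along the phase variable's decay.
                self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
                self.h = 1.0 / np.diff(self.c, append=self.c[-1] / 2) ** 2
                self.w = np.zeros(n_basis)   # weights fit from a demo

            def forcing(self, x, y0, g):
                psi = np.exp(-self.h * (x - self.c) ** 2)  # Gaussians
                return (psi @ self.w) / (psi.sum() + 1e-10) * x * (g - y0)

            def rollout(self, y0, g, tau=1.0, dt=0.01, T=1.0):
                y, z, x, ys = y0, 0.0, 1.0, []
                for _ in range(int(T / dt)):
                    f = self.forcing(x, y0, g)
                    z += dt / tau * (self.az * (self.bz * (g - y) - z) + f)
                    y += dt / tau * z
                    x += dt / tau * (-self.ax * x)   # canonical system
                    ys.append(y)
                return np.array(ys)

        # IDMP, as described above, would instead grow the feature and
        # weight vectors incrementally rather than refit from scratch.
        print(ClassicalDMP().rollout(y0=0.0, g=1.0)[-1])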

    From locomotion to cognition: Bridging the gap between reactive and cognitive behavior in a quadruped robot

    The cognitivistic paradigm, which states that cognition is the result of computation with symbols that represent the world, has been challenged by many. The opponents have primarily criticized the detachment from direct interaction with the world and pointed to some fundamental problems (for instance, the symbol grounding problem). Instead, they have emphasized the constitutive role of embodied interaction with the environment. This has motivated the advancement of synthetic methodologies: the phenomenon of interest (cognition) can be studied by building and investigating whole brain-body-environment systems. Our work is centered around a compliant quadruped robot equipped with a multimodal sensory set. In a series of case studies, we investigate the structure of the sensorimotor space that the robot's application of different actions in different environments brings about. Then, we study how the agent can autonomously abstract the regularities that are induced by the different conditions and use them to improve its behavior. The agent is engaged in path integration, terrain discrimination and gait adaptation, and moving-target following tasks. The nature of the tasks forces the robot to leave the "here-and-now" time scale of simple reactive stimulus-response behaviors and to learn from its experience, thus creating a "minimally cognitive" setting. Solutions to these problems are developed by the agent in a bottom-up fashion. The complete scenarios are then used to illuminate the concepts that are believed to lie at the basis of cognition: sensorimotor contingencies, body schema, and forward internal models. Finally, we discuss how the presented solutions are relevant for applications in robotics, in particular in the area of autonomous model acquisition and adaptation, and, in mobile robots, in dead reckoning and traversability detection.
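
    As a minimal illustration of path integration (dead reckoning), here is a sketch assuming a simple unicycle model; the model and values are assumptions for illustration, not taken from the paper.

        # Dead reckoning: integrate estimated forward speed and turn rate
        # to track the robot's pose over time.
        import math

        def integrate_path(odometry, x=0.0, y=0.0, heading=0.0):
            """odometry: iterable of (v, omega, dt) -- forward speed
            (m/s), turn rate (rad/s), time step (s). Returns final pose.
            Errors in v and omega accumulate, which is why dead reckoning
            must eventually be corrected by other sensory modalities."""
            for v, omega, dt in odometry:
                heading += omega * dt
                x += v * math.cos(heading) * dt
                y += v * math.sin(heading) * dt
            return x, y, heading

        # Quarter turn while moving forward:
        steps = [(0.2, math.pi / 20, 0.1)] * 100
        print(integrate_path(steps))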