
    Symbol acquisition for probabilistic high-level planning

    We introduce a framework that enables an agent to autonomously learn its own symbolic representation of a low-level, continuous environment. Propositional symbols are formalized as names for probability distributions, providing a natural means of dealing with uncertain representations and probabilistic plans. We determine the symbols that are sufficient for computing the probability with which a plan will succeed, and demonstrate the acquisition of a symbolic representation in a computer game domain. (National Science Foundation (U.S.), grant 1420927; United States Office of Naval Research, grant N00014-14-1-0486; United States Air Force Office of Scientific Research, grant FA23861014135; United States Army Research Office, grant W911NF1410433; MIT Intelligence Initiative)
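    A minimal sketch of the core notion of a "symbol as a name for a probability distribution", using an assumed kernel-density representation (the class name, bandwidth, and example data are illustrative, not taken from the paper):

```python
# Sketch only: a propositional symbol as a name for a probability distribution
# over low-level continuous states. The symbol's density is fitted to states
# sampled while the underlying condition held, so "truth" in a new state is a
# matter of probability rather than a hard boolean.
import numpy as np
from sklearn.neighbors import KernelDensity

class ProbabilisticSymbol:
    def __init__(self, name, states, bandwidth=0.2):
        # `states`: (n_samples, state_dim) array of low-level states observed
        # when the condition named by this symbol held.
        self.name = name
        self.density = KernelDensity(bandwidth=bandwidth).fit(states)

    def log_prob(self, state):
        # Log-density of a single low-level state under the symbol's distribution.
        return self.density.score_samples(np.atleast_2d(state))[0]

# Hypothetical usage: the probability a precondition symbol assigns to the
# current state is one factor in a plan's overall success probability.
rng = np.random.default_rng(0)
near_door = ProbabilisticSymbol("near_door", rng.normal([1.0, 2.0], 0.1, size=(200, 2)))
print(near_door.log_prob([1.05, 1.95]))
```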

    DeepSym: Deep Symbol Generation and Rule Learning from Unsupervised Continuous Robot Interaction for Planning

    Autonomous discovery of discrete symbols and rules from continuous interaction experience is a crucial building block of robot AI, but remains a challenging problem. Solving it will overcome the limitations in scalability, flexibility, and robustness of manually designed symbols and rules, and will constitute a substantial advance towards autonomous robots that can learn and reason at abstract levels in open-ended environments. Towards this goal, we propose a novel and general method that finds action-grounded, discrete object and effect categories and builds probabilistic rules over them that can be used in complex action planning. Our robot interacts with single and multiple objects using a given action repertoire and observes the effects created in the environment. In order to form action-grounded object, effect, and relational categories, we employ a binarized bottleneck layer of a predictive, deep encoder-decoder network that takes as input the image of the scene and the action applied, and generates the resulting object displacements in the scene (action effects) in pixel coordinates. The binary latent vector represents a learned, action-driven categorization of objects. To distill the knowledge represented by the neural network into rules useful for symbolic reasoning, we train a decision tree to reproduce its decoder function. From its branches we extract probabilistic rules and represent them in PPDDL, allowing off-the-shelf planners to operate on the robot's sensorimotor experience. Our system is verified in a physics-based 3D simulation environment where a robot arm-hand system learned symbols that can be interpreted as 'rollable', 'insertable', and 'larger-than' from its push and stack actions, and generated effective plans to achieve goals such as building towers from given cubes, balls, and cups using off-the-shelf probabilistic planners.
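    A minimal sketch of the binarized-bottleneck idea described above (assumed architecture details and layer sizes, not the authors' code): an encoder-decoder whose latent code is hard-binarized with a straight-through estimator, so each observation maps to a discrete code while the decoder predicts the effect of an action.

```python
import torch
import torch.nn as nn

class StraightThroughBinarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()      # hard 0/1 code in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output          # identity gradient in the backward pass

class BinaryBottleneckNet(nn.Module):
    def __init__(self, obs_dim, action_dim, effect_dim, code_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                                     nn.Linear(64, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim + action_dim, 64), nn.ReLU(),
                                     nn.Linear(64, effect_dim))

    def forward(self, obs, action):
        code = StraightThroughBinarize.apply(self.encoder(obs))   # learned discrete category
        effect = self.decoder(torch.cat([code, action], dim=-1))  # predicted action effect
        return code, effect

# Illustrative usage: the binary `code` acts as an action-grounded object category;
# a decision tree fitted to (code, action) -> effect can then be read off as rules.
net = BinaryBottleneckNet(obs_dim=32, action_dim=2, effect_dim=4)
code, effect = net(torch.randn(5, 32), torch.randn(5, 2))
```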

    Abstracting Probabilistic Models: Relations, Constraints and Beyond


    Classical Planning in Deep Latent Space

    Current domain-independent, classical planners require symbolic models of the problem domain and instance as input, resulting in a knowledge acquisition bottleneck. Meanwhile, although deep learning has achieved significant success in many fields, the knowledge is encoded in a subsymbolic representation which is incompatible with symbolic systems such as planners. We propose Latplan, an unsupervised architecture combining deep learning and classical planning. Given only an unlabeled set of image pairs showing a subset of transitions allowed in the environment (training inputs), Latplan learns a complete propositional PDDL action model of the environment. Later, when a pair of images representing the initial and the goal states (planning inputs) is given, Latplan finds a plan to the goal state in a symbolic latent space and returns a visualized plan execution. We evaluate Latplan using image-based versions of 6 planning domains: 8-Puzzle, 15-Puzzle, Blocksworld, Sokoban, and two variations of LightsOut. Comment: Under review at the Journal of Artificial Intelligence Research (JAIR).
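    A rough sketch of the central mechanism (assumed layer sizes and names, not the Latplan code): a state autoencoder maps an image to a vector of near-binary propositions via a Gumbel-Softmax bottleneck, so that encoded before/after image pairs can define grounded propositional actions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StateAutoEncoder(nn.Module):
    def __init__(self, image_dim, n_props=36, tau=1.0):
        super().__init__()
        self.tau = tau
        self.enc = nn.Sequential(nn.Linear(image_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_props * 2))  # logits for {false, true}
        self.dec = nn.Sequential(nn.Linear(n_props, 256), nn.ReLU(),
                                 nn.Linear(256, image_dim))

    def forward(self, image):
        logits = self.enc(image).view(image.shape[0], -1, 2)
        # Gumbel-Softmax yields (approximately) one-hot samples per proposition;
        # the "true" channel gives a near-binary propositional state vector.
        props = F.gumbel_softmax(logits, tau=self.tau, hard=True)[..., 1]
        recon = self.dec(props)
        return props, recon

# Illustrative usage: encode a batch of flattened images into propositions.
sae = StateAutoEncoder(image_dim=48 * 48, n_props=36)
props, recon = sae(torch.rand(4, 48 * 48))
```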

    Physical Reasoning for Intelligent Agent in Simulated Environments

    Developing Artificial Intelligence (AI) that is capable of understanding and interacting with the real world in a sophisticated way has long been a grand vision of AI. There is an increasing number of AI agents coming into our daily lives and assisting us with various daily tasks ranging from house cleaning to serving food in restaurants. While different tasks have different goals, the domains of the tasks all obey the physical rules (classical Newtonian physics) of the real world. To successfully interact with the physical world, an agent needs to be able to understand its surrounding environment, to predict the consequences of its actions, and to draw plans that can achieve a goal without causing any unintended outcomes. Much of AI research over the past decades has been dedicated to specific sub-problems such as machine learning and computer vision. Simply plugging in techniques from these subfields is far from creating a comprehensive AI agent that can work well in a physical environment. Instead, it requires an integration of methods from different AI areas that considers specific conditions and requirements of the physical environment. In this thesis, we identified several capabilities that are essential for AI to interact with the physical world, namely, visual perception, object detection, object tracking, action selection, and structure planning. As the real world is a highly complex environment, we started by developing these capabilities in virtual environments with realistic physics simulations. The central part of our methods is the combination of qualitative reasoning and standard techniques from different AI areas. For the visual perception capability, we developed a method that can infer spatial properties of rectangular objects from their minimum bounding rectangles. For the object detection capability, we developed a method that can detect unknown objects in a structure by reasoning about the stability of the structure. For the object tracking capability, we developed a method that can match perceptually indistinguishable objects in visual observations made before and after a physical impact. This method can identify spatial changes of objects in the physical event, and the result of matching can be used for learning the consequence of the impact. For the action selection capability, we developed a method that solves a hole-in-one problem that requires selecting an action out of an infinite number of actions with unknown consequences. For the structure planning capability, we developed a method that can arrange objects to form a stable and robust structure by reasoning about structural stability and robustness.
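    A toy illustration (my own simplification, not the thesis method) of the kind of qualitative stability reasoning mentioned above: in a 1-D approximation, a stack of blocks is stable if the centre of mass of everything resting on a support projects inside that support's top face (equal masses and uniform density assumed).

```python
def is_stack_stable(blocks):
    """blocks: list of (x_left, x_right) intervals, ordered bottom to top."""
    for i in range(len(blocks) - 1):
        above = blocks[i + 1:]
        # Centre of mass of the blocks resting on block i (equal masses assumed).
        com = sum((left + right) / 2.0 for left, right in above) / len(above)
        support_left, support_right = blocks[i]
        if not (support_left <= com <= support_right):
            return False
    return True

print(is_stack_stable([(0.0, 2.0), (0.5, 2.5), (1.0, 3.0)]))  # True: each COM stays over its support
print(is_stack_stable([(0.0, 1.0), (0.9, 2.9)]))              # False: the top block overhangs too far
```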