7 research outputs found

    Toward a Memory Model for Autonomous Topological Mapping and Navigation: the Case of Binary Sensors and Discrete Actions

    We propose a self-organizing database for perceptual experience capable of supporting autonomous goal-directed planning. The main contributions are: (i) a formal demonstration that the database is complex enough in principle to represent the homotopy type of the sensed environment; (ii) some initial steps toward a formal demonstration that the database offers a computationally effective, contractible approximation suitable for motion planning that can be accumulated purely from autonomous sensory experience. The provable properties of an effectively trained database exploit certain notions of convexity that have been recently generalized for application to a symbolic (discrete) representation of subset nesting relations. We conclude by introducing a learning scheme that we conjecture (but cannot yet prove) will be capable of achieving the required training, assuming a rich enough exposure to the environment. For more information: Kod*Lab.
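
    To make the notion of a symbolic representation of subset nesting relations concrete, here is a minimal Python sketch that is not drawn from the paper itself: regions sensed by binary sensors are modeled as finite sets of world states, and the nesting relation is simply strict set containment. All region names and state sets below are invented for illustration.

```python
# Hypothetical illustration: a symbolic (discrete) representation of
# subset nesting relations among sensed regions. Regions are finite sets
# of world states; the nesting relation is strict set containment.
from itertools import combinations

def nesting_relation(regions):
    """Return all strict containment pairs (inner, outer) between named regions."""
    pairs = []
    for (name_a, set_a), (name_b, set_b) in combinations(regions.items(), 2):
        if set_a < set_b:          # strict subset test
            pairs.append((name_a, name_b))
        elif set_b < set_a:
            pairs.append((name_b, name_a))
    return pairs

# Toy environment: binary sensor footprints over discrete world states 0..7.
regions = {
    "room":    {0, 1, 2, 3, 4, 5, 6, 7},
    "alcove":  {0, 1, 2, 3},
    "doorway": {0, 1},
    "corner":  {6, 7},
}

for inner, outer in nesting_relation(regions):
    print(f"{inner} is nested inside {outer}")
```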

    Sensor Interpretation and Task-Directed Planning Using Perceptual Equivalence Classes

    We consider how a robot may interpret its sensors and direct its actions so as to gain more information about the world, and to accomplish manipulation tasks. The key difficulty is uncertainty, in the form of noise in sensors, error in control, and unmodelled or unknown aspects of the environment. Our research focuses on general techniques for coping with uncertainty, specifically, to sense the state of the task, adapt to changes, and reason to select actions to gain information and achieve the goal. Sensors yield partial information about the world. When we interrogate the environment through our sensors, we in effect view a projection of the world onto the space of possible sensor values. We investigate the structure of this sensor space and its relationship to the world. We observe that sensors partition the world into perceptual equivalence classes that can serve as natural "landmarks." By analyzing the properties of these equivalence classes, we develop a "lattice" and a "bundle" structure for the information available to the robot through sensing and action. This yields a framework in which we develop and characterize algorithms for sensor-based planning and reasoning.
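
    The core construction here, sensors partitioning the world into perceptual equivalence classes, is easy to illustrate. The sketch below is ours, with an invented one-dimensional world and a coarse range sensor: the classes are the preimages of each sensor value, and reading two sensors jointly refines the partition, which is where the "lattice" of partitions comes from.

```python
# Illustrative sketch (not the paper's implementation): a sensor maps world
# states to sensor values; perceptual equivalence classes are the preimages.
from collections import defaultdict

def perceptual_classes(world_states, sensor):
    """Partition world states by the sensor value they project to."""
    classes = defaultdict(set)
    for state in world_states:
        classes[sensor(state)].add(state)
    return dict(classes)

# Hypothetical world: positions 0..9, with a wall at position 0.
states = range(10)

def range_sensor(x):
    # Coarse range reading: only three distinguishable values.
    return "near" if x < 3 else ("mid" if x < 7 else "far")

def bump_sensor(x):
    return x == 0  # True only in contact with the wall

print(perceptual_classes(states, range_sensor))
# Reading both sensors jointly refines the partition; the refinement order
# over all such partitions is the lattice structure mentioned above.
print(perceptual_classes(states, lambda x: (range_sensor(x), bump_sensor(x))))
```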

    Algorithmic Robot Design: Label Maps, Procrustean Graphs, and the Boundary of Non-Destructiveness

    This dissertation is focused on the problem of algorithmic robot design. The process of designing a robot or a team of robots that can reliably accomplish a task in an environment requires several key elements. How the problem is formulated can play a big role in the design process. The ability of the model to correctly reflect the environment, the events, and different pieces of the problem is crucial. Another key element is the ability of the model to show the relationship between different designs of a single system. These two elements can enable design algorithms to navigate through the space of all possible designs, and find a set of solutions. In this dissertation, we introduce procrustean graphs, a model for encoding the robot-environment interactions. We also provide a model for navigating through the space of all possible designs, called label maps. Using these models, we focus on answering the following questions: What degradations to the set of sensors or actuators of a robotic system can be tolerated? How do different degradations affect the cost of doing a given task? What sets of resources (that is, sensors and actuators) are minimal for accomplishing a specific given job? And how can such a set be found? To this end, our general approach is to sample, using a variety of sampling methods, over the space of all maps for a given problem, and use different techniques for answering these questions. We use decision tree classifiers to determine the crucial sensors and actuators required for a robotic system to accomplish its job. We present an algorithm based on space bisection to find the boundary between the feasible and infeasible subspaces of possible designs. We present an algorithm to measure the cost of doing a given task, and another algorithm to find the relationship between different degradations of a robotic system and the cost of doing the task. In all these solutions, we use a variety of techniques to scale up each approach to enable it to solve real-world problems. Our experiments show the efficiency of the presented approach.
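
    One step of the pipeline described above, using decision trees to expose which sensors and actuators are crucial, can be sketched briefly. The sketch below is ours, not the dissertation's code: designs are sampled as binary resource-inclusion vectors, labeled by an invented feasibility oracle (in the dissertation this judgment comes from planning over a procrustean graph), and a scikit-learn decision tree then ranks the resources by importance.

```python
# Hedged sketch: identify crucial resources from sampled designs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
resources = ["bumper", "compass", "range_finder", "left_wheel", "right_wheel"]

def feasible(design):
    # Invented oracle: the task needs both wheels and at least one sensor.
    has_sensor = design[0] or design[1] or design[2]
    return bool(has_sensor and design[3] and design[4])

# Sample 500 designs uniformly over the 2^5 inclusion vectors.
X = rng.integers(0, 2, size=(500, len(resources)))
y = np.array([feasible(d) for d in X])

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
for name, importance in zip(resources, clf.feature_importances_):
    print(f"{name}: {importance:.2f}")  # the wheels should dominate
```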

    Contact Sensing: A Sequential Decision Approach to Sensing Manipulation Contact Features

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 1995. Includes bibliographical references (p. 179-186). By Brian Scott Eberman.

    Contact Sensing: A Sequential Decision Approach to Sensing Manipulation Contact

    This paper describes a new statistical, model-based approach to building a contact state observer. The observer uses measurements of the contact force and position, and prior information about the task encoded in a graph, to determine the current location of the robot in the task configuration space. Each node represents what the measurements will look like in a small region of configuration space by storing a predictive, statistical measurement model. This approach assumes that the measurements are statistically block independent conditioned on knowledge of the model, which is a fairly good model of the actual process. Arcs in the graph represent possible transitions between models. Beam Viterbi search is used to match the measurement history against possible paths through the model graph in order to estimate the most likely path for the robot. The resulting approach provides a new decision process that can be used as an observer for event-driven manipulation programming. The decision procedure is significantly more robust than simple threshold decisions because the measurement history is used to make decisions. The approach can be used to enhance the capabilities of autonomous assembly machines and in quality control applications.
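
    The observer's main loop, beam-pruned Viterbi matching of a measurement history against paths through the model graph, can be sketched compactly. Everything below (the three contact models, their Gaussian parameters, the transition arcs, the measurements) is invented for illustration; the paper's models are richer statistical descriptions of force and position signals.

```python
# Hedged sketch of a beam Viterbi contact-state observer.
import math

# Model graph: each node stores a predictive (mean, std) measurement model;
# arcs list the transitions the task graph allows.
models = {"free": (0.0, 0.5), "contact": (5.0, 1.0), "slide": (3.0, 0.8)}
arcs = {"free": ["free", "contact"],
        "contact": ["contact", "slide"],
        "slide": ["slide", "free"]}

def log_lik(node, z):
    # Gaussian log-likelihood, constant term dropped.
    mu, sigma = models[node]
    return -0.5 * ((z - mu) / sigma) ** 2 - math.log(sigma)

def beam_viterbi(measurements, beam=2):
    paths = {("free",): 0.0}  # assume the task starts in free space
    for z in measurements:
        extended = {}
        for path, score in paths.items():
            for node in arcs[path[-1]]:
                extended[path + (node,)] = score + log_lik(node, z)
        # Beam pruning: keep only the `beam` highest-scoring partial paths.
        paths = dict(sorted(extended.items(), key=lambda kv: -kv[1])[:beam])
    return max(paths.items(), key=lambda kv: kv[1])[0]

# Force magnitudes suggesting free motion, then contact, then sliding.
print(beam_viterbi([0.1, 0.3, 4.8, 5.2, 3.1]))
```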

    Constructive Recognizability for Task-Directed Robot Programming

    The primary goal of our research is task-level planning. We approach this goal by utilizing a blend of theory, implementation, and experimentation. We investigate task-level planning for autonomous agents, such as mobile robots, that function in an uncertain environment. These robots typically have very approximate, inaccurate, or minimal models of the environment. For example, although the geometry of its environment is crucial to determining its performance [footnote: i.e., what the geometry is will have a key role in determining the robot's actions or behavior], a mobile robot might only have a partial, or local, "map" of the world. Similarly, the expected effects of a robot's actuators critically influence its selection of actions to accomplish a goal, but a robot may have only a very approximate, or local, predictive ability with regard to forward-simulation of a control strategy. While mobile robots are typically equipped with sensors in order to gain information about the world, and to compensate for errors in actuation and prediction, these sensors are noisy and in turn provide inaccurate information. We investigate an approach whereby the robot attempts to acquire the necessary information about the world by planning a series of experiments [footnote: the robot (not the researchers!) performs the experiments, to gain information about the world] using the robot's sensors and actuators, and building data structures based on the robot's observations of these experiments. A key feature of this approach is that the experiments the robot performs should be driven by the information demands of the task. That is, in performing some task, the robot may enter a state in which making progress towards a goal requires more information about the world (or its own state). In this case, the robot should plan experiments which can disambiguate the situation. When this process is driven by the information demands of the task, we believe it constitutes an important algorithmic technique for task-directed sensing. This introductory survey article discusses:
    1. A theory of sensor interpretation and task-directed planning using perceptual equivalence classes, intended to be applicable in highly uncertain or unmodelled environments, such as for a mobile robot.
    2. Algorithmic techniques for modelling geometric constraints on recognizability, and the building of internal representations (such as maps) using these constraints.
    3. Explicit encoding of the information requirements of a task using a lattice (information hierarchy) of recognizable sets, which allows the robot to perform experiments to recognize a situation or a landmark.
    4. The synthesis of robust mobot programs using the geometric constraints, constructive recognizability experiments, and uncertainty models imposed by the task.
    We discuss how to extend our theory and the geometric theory of planning to overcome challenges of the autonomous mobile robot domain. One of our most important goals is to show how our theory can be made constructive and algorithmic. We propose a framework for mobot programming based on constructive recognizability, and discuss why it should be robust in uncertain environments. Our objective is to demonstrate the following: when recognizability is constructive in this way, we naturally obtain task-directed sensing strategies, driven by the information demands encoded in the structure of the recognizable sets.
    A principled theory of sensing and action is crucial in developing task-level programming for autonomous mobile robots. We propose a framework for such a theory, providing both a precise vocabulary and appropriate computational machinery for working with issues of information flow in and through a robot system equipped with various types of sensors and operating in a dynamic, unstructured environment. We are implementing the theory and testing it on mobile robots in our laboratory.
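
    The idea that experiments should be "driven by the information demands of the task" admits a small schematic sketch, under invented assumptions: the robot's knowledge is a set of hypotheses about where it is, each candidate experiment maps a hypothesis to a predicted outcome, and the robot picks the experiment whose worst-case outcome leaves the fewest hypotheses standing. The landmarks and probes below are hypothetical.

```python
# Schematic sketch of task-directed experiment selection.
from collections import defaultdict

def outcome_partition(hypotheses, experiment):
    """Group hypotheses by the outcome the experiment would produce."""
    groups = defaultdict(set)
    for h in hypotheses:
        groups[experiment(h)].add(h)
    return groups

def best_experiment(hypotheses, experiments):
    # Minimax choice: minimize the largest hypothesis set any outcome leaves.
    def worst_case(exp):
        return max(len(g) for g in outcome_partition(hypotheses, exp).values())
    return min(experiments, key=lambda ne: worst_case(ne[1]))

# Hypothetical landmarks the robot might be at, and two candidate probes.
hypotheses = {"hall_A", "hall_B", "lobby", "dock"}
experiments = [
    ("ping_left_wall", lambda h: h.startswith("hall")),
    ("check_charger",  lambda h: h == "dock"),
]
name, _ = best_experiment(hypotheses, experiments)
print("most disambiguating experiment:", name)  # -> ping_left_wall
```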

    Task-Level Planning and Task-Directed Sensing for Robots in Uncertain Environments

    The primary goal of our research is task-level planning. We approach this goal by utilizing a blend of theory, implementation, and experimentation. We propose to investigate task-level planning for autonomous agents, such as mobile robots, that function in an uncertain environment. These robots typically have very approximate, inaccurate, or minimal models of the environment. For example, although the geometry of its environment is crucial to determining its performance, a mobile robot might only have a partial, or local, "map" of the world. Similarly, the expected effects of a robot's actuators critically influence its selection of actions to accomplish a goal, but a robot may have only a very approximate, or local, predictive ability with regard to forward-simulation of a control strategy. While mobile robots are typically equipped with sensors in order to gain information about the world, and to compensate for errors in actuation and prediction, these sensors are noisy and in turn provide inaccurate information. We propose to investigate an approach whereby the robot attempts to acquire the necessary information about the world by planning a series of experiments using the robot's sensors and actuators, and building data structures based on the robot's observations of these experiments. A key feature of this approach is that the experiments the robot performs should be driven by the information demands of the task. That is, in performing some task, the robot may enter a state in which making progress towards a goal requires more information about the world (or its own state). In this case, the robot should plan experiments which can disambiguate the situation. When this process is driven by the information demands of the task, we believe it constitutes an important algorithmic technique for task-directed sensing. Planned projects focus on:
    1. A theory of sensor interpretation and task-directed planning using perceptual equivalence classes, intended to be applicable in highly uncertain or unmodelled environments, such as for a mobile robot.
    2. Algorithmic techniques for modelling geometric constraints on recognizability, and the building of internal representations (such as maps) using these constraints.
    3. Explicit encoding of the information requirements of a task using a lattice (information hierarchy) of recognizable sets, which allows the robot to perform experiments to recognize a situation or a landmark.
    4. The synthesis of robust mobot programs using the geometric constraints, constructive recognizability experiments, and uncertainty models imposed by the task.
    We propose to (a) continue our research and develop the theory fully, (b) use tools and concepts from the geometric theory of planning where appropriate, and (c) extend our theory and the geometric theory of planning where necessary to overcome challenges of the autonomous mobile robot domain. One of our most important goals is to show how our theory can be made constructive and algorithmic. We propose a framework for mobot programming based on constructive recognizability, and discuss why it should be robust in uncertain environments. Our objective is to demonstrate the following: when recognizability is constructive in this way, we naturally obtain task-directed sensing strategies, driven by the information demands encoded in the structure of the recognizable sets. A principled theory of sensing and action is crucial in developing task-level programming for autonomous mobile robots.
    We propose a framework for such a theory, providing both a precise vocabulary and appropriate computational machinery for working with issues of information flow in and through a robot system equipped with various types of sensors and operating in a dynamic, unstructured environment. We will implement the theory and test it on mobile robots in our laboratory.