
    Specifying Meta-Level Architectures for Rule-Based Systems

    Explicit and declarative representation of control knowledge and well-structured knowledge bases are crucial requirements for the efficient development and maintenance of rule-based systems. The CATWEAZLE rule interpreter allows knowledge engineers to meet these requirements by partitioning rule bases and specifying meta-level architectures for control. Among others, the following problems arise when providing tools for specifying meta-level architectures for control: 1. What is a suitable language for specifying meta-level architectures for control? 2. How can a general and declarative language for meta-level architectures be interpreted efficiently? The thesis outlines the solutions to both research questions provided by the CATWEAZLE rule interpreter: 1. CATWEAZLE provides a small set of concepts based on a separation of control knowledge into control strategies and control tactics, and a further categorization of control strategies. 2. For rule-based systems, it is efficient to extend the RETE algorithm such that control knowledge can be processed, too.
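    The separation of control knowledge into strategies (which rule partition is active) and tactics (which rule of the conflict set fires) can be illustrated with a minimal sketch. The data layout and function names below are hypothetical, not CATWEAZLE's actual interface, and RETE-based matching is replaced by naive condition checking:

    ```python
    def match(rule, facts):
        # A rule is applicable when all its conditions are in the fact base.
        return all(cond in facts for cond in rule["if"])

    def strategy(partitions, facts, fired):
        # Control strategy (illustrative): pick the first partition that
        # contains an applicable, not-yet-fired rule (fixed partition order).
        for rules in partitions.values():
            if any(match(r, facts) and r["name"] not in fired for r in rules):
                return rules
        return None

    def run(partitions, strategy, tactic, facts):
        # Meta-level loop: the strategy selects the active partition,
        # the tactic resolves the conflict set within that partition.
        facts, fired = set(facts), []
        while (rules := strategy(partitions, facts, fired)) is not None:
            conflict_set = [r for r in rules
                            if match(r, facts) and r["name"] not in fired]
            rule = tactic(conflict_set)
            facts |= set(rule["then"])
            fired.append(rule["name"])
        return facts, fired
    ```

    With a two-partition rule base (say, "diagnose" and "repair"), the strategy keeps the interpreter in the diagnosis partition until no diagnostic rule applies, then hands control to the repair partition, without any control flags hidden inside the object-level rules.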

    Furniture models learned from the WWW: using web catalogs to locate and categorize unknown furniture pieces in 3D laser scans

    In this article, we investigate how autonomous robots can exploit the high-quality information already available on the WWW concerning 3-D models of office furniture. Apart from the hobbyist effort in Google 3-D Warehouse, many companies providing office furnishings already offer models for considerable portions of the objects found in our workplaces and homes. In particular, we present an approach that allows a robot to learn generic models of typical office furniture using examples found on the Web. These generic models are then used by the robot to locate and categorize unknown furniture in real indoor environments.

    Envisioning the qualitative effects of robot manipulation actions using simulation-based projections

    Autonomous robots that are to perform complex everyday tasks, such as making pancakes, have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but makes the assumption that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections, thereby allowing a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based, first-order, symbolic, qualitative representations called timelines. The result of the envisioning is a set of detailed narratives, represented by timelines, which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks.
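    The step from a logged sub-symbolic state evolution to an interval-based timeline can be sketched as follows. The fluent names, threshold values, and data layout are illustrative assumptions for a pancake-like scenario, not the paper's actual representation:

    ```python
    def qualitative_state(z, pan_z=0.05):
        # Abstract a numeric height sample into a qualitative fluent.
        # Thresholds are made-up values for illustration.
        if z < 0.0:
            return "on-floor"
        return "on-pan" if z <= pan_z else "in-air"

    def to_timeline(samples):
        """Collapse consecutive time-stamped samples that map to the same
        qualitative value into (start, end, fluent) intervals -- a timeline."""
        timeline = []
        for t, z in samples:
            q = qualitative_state(z)
            if timeline and timeline[-1][2] == q:
                # Same fluent still holds: extend the current interval.
                timeline[-1] = (timeline[-1][0], t, q)
            else:
                timeline.append((t, t, q))
        return timeline
    ```

    A logged trajectory of the manipulated object then yields a compact symbolic narrative (e.g. on-pan, then in-air, then on-pan again for a successful flip) over which qualitative queries can be answered.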

    Cumulative object categorization in clutter

    In this paper, we present an approach based on scene- or part-graphs for geometrically categorizing touching and occluded objects. We use additive RGBD feature descriptors and hashing of graph configuration parameters to describe the spatial arrangement of constituent parts. The presented experiments show quantitatively that this method outperforms our earlier part-voting and sliding-window classification. We evaluated our approach on cluttered scenes, using a 3D dataset containing over 15,000 Kinect scans of more than 100 objects grouped into general geometric categories. Additionally, color, geometric, and combined features were compared for the categorization tasks.
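    The idea of hashing graph configuration parameters can be sketched roughly as follows: the spatial relation between two parts is discretized into a hash key, and each key votes for the categories it co-occurred with during training. The binning scheme, vote counting, and class layout are simplified assumptions for illustration, not the paper's actual descriptors:

    ```python
    import math
    from collections import defaultdict

    def relation_key(p1, p2, dist_bin=0.05):
        # Discretize the pairwise part relation: distance in 5 cm bins,
        # elevation angle in 30 degree bins (made-up bin sizes).
        dx, dy, dz = (b - a for a, b in zip(p1, p2))
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        pitch = math.atan2(dz, math.hypot(dx, dy))
        return (int(dist / dist_bin), int(math.degrees(pitch) // 30))

    class PartGraphHash:
        def __init__(self):
            # hash key -> {category: vote count}
            self.table = defaultdict(lambda: defaultdict(int))

        def train(self, part_centers, category):
            # Hash every pairwise part relation of a training example.
            for i, a in enumerate(part_centers):
                for b in part_centers[i + 1:]:
                    self.table[relation_key(a, b)][category] += 1

        def classify(self, part_centers):
            # Accumulate category votes over all pairwise relations.
            votes = defaultdict(int)
            for i, a in enumerate(part_centers):
                for b in part_centers[i + 1:]:
                    for cat, n in self.table[relation_key(a, b)].items():
                        votes[cat] += n
            return max(votes, key=votes.get) if votes else None
    ```

    Because each part pair votes independently, partially occluded or touching objects can still accumulate votes from their visible parts, which is the motivation for this kind of cumulative scheme.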