
    Decomposing CAD models of objects of daily use and reasoning about their functional parts

    Abstract — Today’s robots are still lacking comprehensive knowledge bases about objects and their properties. Yet, a lot of knowledge is required when performing manipulation tasks to identify abstract concepts like a “handle” or the “blade of a spatula” and to ground them into concrete coordinate frames that can be used to parametrize the robot’s actions. In this paper, we present a system that enables robots to use CAD models of objects as a knowledge source and to perform logical inference about object components that have automatically been identified in these models. The system includes several algorithms for mesh segmentation and geometric primitive fitting which are integrated into the robot’s knowledge base as procedural attachments to the semantic representation. Bottom-up segmentation methods are complemented by top-down, knowledge-based analysis of the identified components. The evaluation on a diverse set of object models, downloaded from the Internet, shows that the algorithms are able to reliably detect several kinds of object parts.
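
    The geometric primitive fitting mentioned in the abstract can be illustrated with a minimal, self-contained sketch (not the authors' implementation): a least-squares plane fit to a patch of mesh vertices, the kind of building block such a segmentation pipeline would combine with cylinder, sphere, and cone fits. The function names and test data below are assumptions for illustration only.

```python
# Hypothetical sketch of one primitive-fitting step: fit a plane to a patch of
# mesh vertices by least squares.  Not the paper's implementation.
import numpy as np

def fit_plane(points):
    """Least-squares plane fit; returns (centroid, unit normal, RMS residual)."""
    centroid = points.mean(axis=0)
    # Singular vectors of the centered points give the principal directions;
    # the direction of smallest variance is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    rms = np.sqrt(np.mean(residuals ** 2))
    return centroid, normal, rms

if __name__ == "__main__":
    # Noisy samples from the plane z = 0.1x + 0.2y (made-up test data).
    rng = np.random.default_rng(0)
    xy = rng.uniform(-1, 1, size=(200, 2))
    z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + rng.normal(0, 0.01, 200)
    pts = np.column_stack([xy, z])
    c, n, err = fit_plane(pts)
    print("normal:", n, "rms residual:", err)
```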

    Behavior-grounded multi-sensory object perception and exploration by a humanoid robot

    Infants use exploratory behaviors to learn about the objects around them. Psychologists have theorized that behaviors such as touching, pressing, lifting, and dropping enable infants to form grounded object representations. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a “question” to the object, which is subsequently “answered” by the sensory stimuli produced during the execution of the behavior. In contrast, most object representations used by robots today rely solely on computer vision or laser scan data, gathered through passive observation. Such disembodied approaches to robotic perception may be useful for recognizing an object using a 3D model database, but nevertheless, will fail to infer object properties that cannot be detected using vision alone. To bridge this gap, this dissertation introduces a framework for object perception and exploration in which the robot’s representation of objects is grounded in its own sensorimotor experience with them. In this framework, an object is represented by sensorimotor contingencies that span a diverse set of exploratory behaviors and sensory modalities. The results from several large-scale experimental studies show that the behavior-grounded object representation enables a robot to solve a wide variety of tasks including recognition of objects based on the stimuli that they produce, object grouping and sorting, and learning category labels that describe objects and their properties.
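
    As a rough illustration of what a behavior-grounded representation might look like in code (a hypothetical sketch, not the dissertation's framework), each object can be summarized by feature vectors indexed by (exploratory behavior, sensory modality), and recognition reduces to nearest-neighbor matching over those contingencies. The behaviors, modalities, and feature values below are invented.

```python
# Hypothetical sketch: objects represented by per-(behavior, modality) feature
# vectors, recognized by nearest-neighbor matching over the shared keys.
import numpy as np

BEHAVIORS = ["lift", "shake", "drop", "scratch"]
MODALITIES = ["audio", "proprioception"]

def distance(repr_a, repr_b):
    """Sum of Euclidean distances over all shared behavior-modality pairs."""
    return sum(
        np.linalg.norm(repr_a[key] - repr_b[key])
        for key in repr_a if key in repr_b
    )

def recognize(unknown, known_objects):
    """Return the label of the known object whose contingencies match best."""
    return min(known_objects, key=lambda label: distance(unknown, known_objects[label]))

# Toy database: two objects, each with an 8-dimensional feature vector per
# (behavior, modality) pair.  Values are random stand-ins for real sensor data.
rng = np.random.default_rng(1)
known = {
    "mug":    {(b, m): rng.normal(0.0, 1.0, 8) for b in BEHAVIORS for m in MODALITIES},
    "bottle": {(b, m): rng.normal(3.0, 1.0, 8) for b in BEHAVIORS for m in MODALITIES},
}
# A new interaction episode that resembles the mug.
query = {key: vec + rng.normal(0, 0.1, 8) for key, vec in known["mug"].items()}
print(recognize(query, known))  # -> "mug"
```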

    Learning affordances for categorizing objects and their properties

    In this paper, we demonstrate that simple interactions with objects in the environment lead to a manifestation of the perceptual properties of objects. This is achieved by deriving a condensed representation of the effects of actions (called effect prototypes in the paper), and investigating the relationship between perceptual features extracted from the objects and the actions that can be applied to them. With this at hand, we show that the agent can categorize (i.e., partition) its raw sensory perceptual feature vector, extracted from the environment, which is an important step for the development of concepts and language. Moreover, after learning how to predict the effect prototypes of objects, the agent can categorize objects based on the predicted effects of actions that can be applied to them.
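
    The notion of an effect prototype can be sketched as follows (an assumption-laden illustration, not the paper's method): effect vectors observed after an action are condensed by clustering, and objects are then categorized by which prototype their predicted effect is closest to. The two-dimensional effect features and the use of k-means are illustrative choices.

```python
# Hypothetical sketch of "effect prototypes": cluster the effect vectors
# observed after an action, then categorize objects by the nearest prototype.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Made-up effect vectors for a "push" action, e.g. (displacement, change in view).
rolled  = rng.normal([0.8, 0.0], 0.05, size=(20, 2))   # objects that roll away
stayed  = rng.normal([0.1, 0.0], 0.05, size=(20, 2))   # objects that barely move
effects = np.vstack([rolled, stayed])

# Condense the raw effects into two prototypes (cluster centers).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(effects)
prototypes = km.cluster_centers_

# Categorize a new object by the prototype closest to its predicted effect.
predicted_effect = np.array([[0.75, 0.02]])
category = km.predict(predicted_effect)[0]
print("prototypes:\n", prototypes, "\nnew object falls in category", category)
```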