8,924 research outputs found

    Robot-object contact perception using symbolic temporal pattern learning

    This paper investigates the application of machine learning to the problem of contact perception between a robot's gripper and an object. The input data comprise a multidimensional time series produced by a force/torque sensor at the robot's wrist, the robot's proprioceptive information, namely the position of the end-effector, as well as the robot's control command. These data are used to train a hidden Markov model (HMM) classifier. The output of the classifier is a prediction of the contact state, which includes no contact, a contact aligned with the central axis of the valve, and an edge contact. To distinguish between contact states, the robot performs exploratory behaviors that produce distinct patterns in the time-series data. The patterns are discovered by first analyzing the data using a probabilistic clustering algorithm that transforms the multidimensional data into a one-dimensional sequence of symbols. The symbols produced by the clustering algorithm are used to train the HMM classifier. We examined two exploratory behaviors: a rotation around the x-axis and a rotation around the y-axis of the gripper. We show that, using these two exploratory behaviors, we can successfully predict a contact state with an accuracy of 88 ± 5% and 81 ± 10%, respectively.
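    The abstract describes a two-stage pipeline: probabilistic clustering turns the multidimensional sensor stream into a one-dimensional symbol sequence, and a discrete HMM per contact state is then trained on those symbols, with classification by maximum likelihood. The following is a minimal sketch of that kind of pipeline, not the authors' implementation; it assumes scikit-learn's GaussianMixture as the probabilistic clustering step and hmmlearn's CategoricalHMM (MultinomialHMM in older hmmlearn releases) as the discrete HMM, with made-up alphabet and state sizes.

```python
# Hypothetical sketch of a symbol-based HMM contact-state classifier.
import numpy as np
from sklearn.mixture import GaussianMixture
from hmmlearn import hmm

N_SYMBOLS = 8        # size of the discrete symbol alphabet (assumed)
N_HIDDEN_STATES = 4  # hidden states per class-specific HMM (assumed)

def fit_symbolizer(train_windows):
    """Cluster force/torque + proprioception samples into discrete symbols."""
    stacked = np.vstack(train_windows)                  # (n_samples, n_dims)
    gmm = GaussianMixture(n_components=N_SYMBOLS, random_state=0)
    gmm.fit(stacked)
    return gmm

def to_symbols(gmm, window):
    """Map one multidimensional time-series window to a 1-D symbol sequence."""
    return gmm.predict(window).reshape(-1, 1)           # (T, 1) integer symbols

def fit_class_hmms(gmm, windows_by_class):
    """Train one discrete HMM per contact state (no contact / aligned / edge)."""
    models = {}
    for label, windows in windows_by_class.items():
        seqs = [to_symbols(gmm, w) for w in windows]
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.CategoricalHMM(n_components=N_HIDDEN_STATES, n_iter=50,
                               random_state=0)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(gmm, models, window):
    """Predict the contact state whose HMM gives the highest log-likelihood."""
    symbols = to_symbols(gmm, window)
    return max(models, key=lambda label: models[label].score(symbols))
```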

    The Mechanics of Embodiment: A Dialogue on Embodiment and Computational Modeling

    Embodied theories are increasingly challenging traditional views of cognition by arguing that the conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established empirical foundation, and the relatively underspecified theories to date, many researchers are extremely interested in embodied cognition but are clamouring for more mechanistic implementations. What is needed at this stage is a push toward explicit computational models that implement sensory-motor grounding as intrinsic to cognitive processes. In this article, six authors from varying backgrounds and approaches address issues concerning the construction of embodied computational models, and illustrate what they view as the critical current and next steps toward mechanistic theories of embodiment. The first part has the form of a dialogue between two fictional characters: Ernest, the 'experimenter', and Mary, the 'computational modeller'. The dialogue consists of an interactive sequence of questions, requests for clarification, challenges, and (tentative) answers, and touches on the most important aspects of grounded theories that should inform computational modeling and, conversely, the impact that computational modeling could have on embodied theories. The second part of the article discusses the most important open challenges for embodied computational modelling.

    Metric Learning for Generalizing Spatial Relations to New Objects

    Human-centered environments are rich with a wide variety of spatial relations between everyday objects. For autonomous robots to operate effectively in such environments, they should be able to reason about these relations and generalize them to objects with different shapes and sizes. For example, having learned to place a toy inside a basket, a robot should be able to generalize this concept using a spoon and a cup. This requires a robot to have the flexibility to learn arbitrary relations in a lifelong manner, making it challenging for an expert to pre-program it with sufficient knowledge to do so beforehand. In this paper, we address the problem of learning spatial relations by introducing a novel method from the perspective of distance metric learning. Our approach enables a robot to reason about the similarity between pairwise spatial relations, thereby enabling it to use its previous knowledge when presented with a new relation to imitate. We show how this makes it possible to learn arbitrary spatial relations from non-expert users, using a small number of examples and in an interactive manner. Our extensive evaluation with real-world data demonstrates the effectiveness of our method in reasoning about a continuous spectrum of spatial relations and generalizing them to new objects. Comment: Accepted at the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems. The new Freiburg Spatial Relations Dataset and a demo video of our approach running on the PR-2 robot are available at our project website: http://spatialrelations.cs.uni-freiburg.de
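    The core idea in this abstract is distance metric learning over pairwise spatial relations: learn a distance under which similar relations are close, so a new relation can be imitated by comparison with previously seen ones. The sketch below is not the paper's formulation; it only illustrates the general technique, assuming each scene is summarized by a hypothetical fixed-length feature vector describing the relation between two objects, and using a simple triplet hinge loss to learn a linear (Mahalanobis-style) metric.

```python
# Hypothetical sketch of triplet-based linear metric learning over
# spatial-relation feature vectors (assumed features, not the paper's).
import numpy as np

def triplet_metric_learning(anchors, positives, negatives,
                            dim_out=8, margin=1.0, lr=1e-2, epochs=200):
    """Learn a linear map L so that ||L(a - p)||^2 + margin < ||L(a - n)||^2
    for triplets (anchor, positive = similar relation, negative = dissimilar)."""
    rng = np.random.default_rng(0)
    dim_in = anchors.shape[1]
    L = rng.normal(scale=0.1, size=(dim_out, dim_in))
    for _ in range(epochs):
        dp = anchors - positives               # (N, dim_in)
        dn = anchors - negatives
        d_pos = np.sum((dp @ L.T) ** 2, axis=1)
        d_neg = np.sum((dn @ L.T) ** 2, axis=1)
        active = d_pos + margin > d_neg        # triplets violating the margin
        if not np.any(active):
            break
        # Gradient of the hinge loss w.r.t. L, averaged over active triplets.
        grad = 2 * (L @ dp[active].T @ dp[active]
                    - L @ dn[active].T @ dn[active]) / active.sum()
        L -= lr * grad
    return L

def relation_distance(L, rel_a, rel_b):
    """Distance between two pairwise spatial relations under the learned metric."""
    diff = rel_a - rel_b
    return float(np.sqrt(np.sum((L @ diff) ** 2)))
```

Under this kind of metric, generalizing a demonstrated relation to new objects amounts to finding the arrangement of the new object pair whose relation feature is closest to the demonstration, which is consistent with the similarity-based reasoning the abstract describes.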