    A future of living machines? International trends and prospects in biomimetic and biohybrid systems

    Research in the fields of biomimetic and biohybrid systems is developing at an accelerating rate. Biomimetics can be understood as the development of new technologies using principles abstracted from the study of biological systems; however, biomimetics can also be viewed from an alternative perspective, as an important methodology for improving our understanding of the world we live in and of ourselves as biological organisms. A biohybrid entity comprises at least one artificial (engineered) component combined with a biological one. With technologies such as microscale mobile computing, prosthetics, and implants, humankind is moving towards a more biohybrid future in which biomimetics helps us to engineer biocompatible technologies. This paper reviews recent progress in the development of biomimetic and biohybrid systems, focusing particularly on technologies that emulate living organisms—living machines. Based on our recent bibliographic analysis [1], we examine how biomimetics is already creating life-like robots and identify some key unresolved challenges that constitute bottlenecks for the field. Drawing on our recent research in biomimetic mammalian robots, including humanoids, we review the future prospects for such machines and consider some of their likely impacts on society, including the existential risk of creating artifacts with significant autonomy that could come to match or exceed humankind in intelligence. We conclude that living machines are more likely to be a benefit than a threat, but that we should also ensure that progress in biomimetics and biohybrid systems is made with broad societal consent. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).

    Learning Sensor Feedback Models from Demonstrations via Phase-Modulated Neural Networks

    In order to robustly execute a task under environmental uncertainty, a robot needs to be able to reactively adapt to changes arising in its environment. Environmental changes are usually reflected as deviations from expected sensory traces. These deviations can be used to drive motion adaptation, and for this purpose a feedback model is required: it maps the deviations in sensory traces to the motion-plan adaptation. In this paper, we develop a general data-driven framework for learning a feedback model from demonstrations. We utilize a variant of a radial basis function network structure—with movement phases as kernel centers—which can generally be applied to represent any feedback model for movement primitives. To demonstrate the effectiveness of our framework, we test it on the task of scraping on a tilt board. In this task, we learn a reactive policy in the form of orientation adaptation, based on deviations of tactile sensor traces. As a proof of concept of our method, we provide evaluations on an anthropomorphic robot. A video demonstrating our approach and its results can be seen at https://youtu.be/7Dx5imy1Kcw (Comment: 8 pages, accepted for publication at the International Conference on Robotics and Automation (ICRA) 201)
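The core idea of the abstract—RBF kernels centered on movement phases, gating a mapping from sensory-trace deviation to motion adaptation—can be sketched as follows. All names and the linear per-kernel mapping are hypothetical simplifications, not the paper's actual network:

```python
import numpy as np

def phase_rbf_features(phase, centers, width):
    """Gaussian RBF activations, with movement phases as kernel centers."""
    return np.exp(-((phase - centers) ** 2) / (2.0 * width ** 2))

class PhaseModulatedFeedbackModel:
    """Maps deviations in sensory traces to a motion-plan adaptation,
    gated by the current movement phase. A simplified sketch of the
    idea only; the paper uses a learned neural network."""

    def __init__(self, n_kernels, n_sensors, n_outputs, width=0.1, seed=0):
        rng = np.random.default_rng(seed)
        # kernel centers spread over the normalized movement phase [0, 1]
        self.centers = np.linspace(0.0, 1.0, n_kernels)
        self.width = width
        # one linear map (sensor deviation -> adaptation) per kernel
        self.W = rng.normal(scale=0.01, size=(n_kernels, n_outputs, n_sensors))

    def adapt(self, phase, sensor_deviation):
        w = phase_rbf_features(phase, self.centers, self.width)
        w = w / w.sum()                         # normalized kernel weights
        per_kernel = self.W @ sensor_deviation  # shape: (n_kernels, n_outputs)
        return (w[:, None] * per_kernel).sum(axis=0)
```

When the sensory trace matches the expectation (zero deviation), the model outputs zero adaptation, which matches the feedback-model role described above.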

    TactileGCN: A Graph Convolutional Network for Predicting Grasp Stability with Tactile Sensors

    Tactile sensors provide useful contact data during interaction with an object, which can be used to learn to accurately determine the stability of a grasp. Most works in the literature represent tactile readings as plain feature vectors or matrix-like tactile images, using them to train machine learning models. In this work, we explore an alternative way of exploiting tactile information to predict grasp stability, by leveraging graph-like representations of tactile data that preserve the actual spatial arrangement of the sensor's taxels and their locality. In our experiments, we trained a Graph Neural Network to classify grasps as stable or slippery. To train such a network and prove its predictive capabilities for the problem at hand, we captured a novel dataset of approximately 5000 three-fingered grasps across 41 objects for training and 1000 grasps with 10 unknown objects for testing. Our experiments show that this novel approach can be effectively used to predict grasp stability.
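The graph-convolutional approach the abstract describes can be illustrated with a minimal NumPy sketch: taxels are graph nodes, their adjacency encodes spatial locality, and stacked graph convolutions followed by pooling yield a single stable-vs-slippery logit. The two-layer architecture and all names here are illustrative assumptions, not the paper's TactileGCN:

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalized adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, as in standard GCN formulations."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, X, W):
    """One graph-convolution layer: ReLU(A_norm @ X @ W)."""
    return np.maximum(0.0, A_norm @ X @ W)

def grasp_stability_logit(A, X, W1, W2, w_out):
    """Two GCN layers over the taxel graph, mean-pooled into a
    graph-level embedding, then a linear readout to one logit
    (positive = stable, negative = slippery, by convention)."""
    A_norm = normalized_adjacency(A)
    H = gcn_layer(A_norm, X, W1)
    H = gcn_layer(A_norm, H, W2)
    return float(H.mean(axis=0) @ w_out)
```

The key design point from the abstract is that `A` preserves the taxels' physical neighborhood structure, which a flat feature vector would discard.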

    Probabilistic movement modeling for intention inference in human-robot interaction

    Intention inference can be an essential step toward efficient human-robot interaction. For this purpose, we propose the Intention-Driven Dynamics Model (IDDM) to probabilistically model the generative process of movements that are directed by the intention. The IDDM allows inferring the intention from observed movements using Bayes' theorem. The IDDM simultaneously finds a latent state representation of noisy and high-dimensional observations, and models the intention-driven dynamics in the latent states. As most robotics applications are subject to real-time constraints, we develop an efficient online algorithm that allows for real-time intention inference. Two human-robot interaction scenarios, i.e., target prediction for robot table tennis and action recognition for interactive humanoid robots, are used to evaluate the performance of our inference algorithm. In both intention inference tasks, the proposed algorithm achieves substantial improvements over support vector machines and Gaussian processes.
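The Bayesian inference step at the heart of this abstract—updating a belief over candidate intentions as movement observations arrive online—reduces to a recursive application of Bayes' theorem. The sketch below shows only that principle over a discrete intention set; the IDDM additionally learns latent-state dynamics, which is omitted here, and the table-tennis numbers are invented for illustration:

```python
import numpy as np

def bayes_intention_update(prior, likelihoods):
    """One recursive Bayes step over a discrete set of candidate
    intentions: posterior ∝ p(observation | intention) × prior."""
    posterior = likelihoods * prior
    return posterior / posterior.sum()

# Hypothetical table-tennis example: three candidate target regions,
# each new ball observation sharpening the belief online.
belief = np.full(3, 1.0 / 3.0)                       # uniform prior
for obs_likelihood in ([0.2, 0.6, 0.2], [0.1, 0.8, 0.1]):
    belief = bayes_intention_update(belief, np.array(obs_likelihood))
```

Because each update is a cheap element-wise product and renormalization, this style of recursion is what makes real-time (online) inference feasible.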