
    A future of living machines? International trends and prospects in biomimetic and biohybrid systems

    Research in the fields of biomimetic and biohybrid systems is developing at an accelerating rate. Biomimetics can be understood as the development of new technologies using principles abstracted from the study of biological systems; however, biomimetics can also be viewed, from an alternative perspective, as an important methodology for improving our understanding of the world we live in and of ourselves as biological organisms. A biohybrid entity comprises at least one artificial (engineered) component combined with a biological one. With technologies such as microscale mobile computing, prosthetics and implants, humankind is moving towards a more biohybrid future in which biomimetics helps us to engineer biocompatible technologies. This paper reviews recent progress in the development of biomimetic and biohybrid systems, focusing particularly on technologies that emulate living organisms: living machines. Based on our recent bibliographic analysis [1], we examine how biomimetics is already creating life-like robots and identify some key unresolved challenges that constitute bottlenecks for the field. Drawing on our recent research in biomimetic mammalian robots, including humanoids, we review the future prospects for such machines and consider some of their likely impacts on society, including the existential risk of creating artifacts with significant autonomy that could come to match or exceed humankind in intelligence. We conclude that living machines are more likely to be a benefit than a threat, but that we should also ensure that progress in biomimetics and biohybrid systems is made with broad societal consent.

    DAC-h3: A Proactive Robot Cognitive Architecture to Acquire and Express Knowledge About the World and the Self

    This paper introduces a cognitive architecture for a humanoid robot to engage in proactive, mixed-initiative exploration and manipulation of its environment, where the initiative can originate from both the human and the robot. The framework, based on a biologically grounded theory of the brain and mind, integrates a reactive interaction engine, a number of state-of-the-art perceptual and motor learning algorithms, as well as planning abilities and an autobiographical memory. The architecture as a whole drives the robot's behavior to solve the symbol grounding problem, acquire language capabilities, execute goal-oriented behavior, and express a verbal narrative of its own experience in the world. We validate our approach in human-robot interaction experiments with the iCub humanoid robot, showing that the proposed cognitive architecture can be applied in real time within a realistic scenario and that it can be used with naive users.
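    The abstract names the main components (a reactive interaction engine, perceptual and motor learning, planning, and an autobiographical memory) but not their interfaces. Purely as an illustration of how such layers might be wired into a single control loop, here is a minimal Python sketch; all class and method names are hypothetical and are not taken from DAC-h3.

    # Minimal sketch of a layered cognitive-architecture loop.
    # All names are hypothetical; DAC-h3's actual interfaces differ.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Episode:
        """One autobiographical-memory entry: what was perceived, done, and said."""
        percepts: dict
        action: Optional[str]
        utterance: Optional[str]

    @dataclass
    class AutobiographicalMemory:
        episodes: List[Episode] = field(default_factory=list)

        def store(self, episode: Episode) -> None:
            self.episodes.append(episode)

        def narrate(self) -> str:
            # Verbalise stored experience in order (greatly simplified).
            return " Then ".join(e.action or e.utterance or "waited" for e in self.episodes)

    class ReactiveEngine:
        """Fast, stateless reactions (e.g., orienting towards a present human)."""
        def react(self, percepts: dict) -> Optional[str]:
            return "look_at_human" if percepts.get("human_present") else None

    class Planner:
        """Slower, goal-oriented behaviour selection."""
        def plan(self, percepts: dict, goal: str) -> str:
            if goal == "grasp":
                return f"reach_for_{percepts.get('named_object', 'unknown')}"
            return "explore"

    def control_step(percepts: dict, goal: str, memory: AutobiographicalMemory) -> str:
        """Reactive layer has priority; otherwise the planner chooses; everything is logged."""
        action = ReactiveEngine().react(percepts) or Planner().plan(percepts, goal)
        memory.store(Episode(percepts=percepts, action=action, utterance=None))
        return action

    memory = AutobiographicalMemory()
    print(control_step({"human_present": True}, "grasp", memory))
    print(control_step({"named_object": "ball"}, "grasp", memory))
    print(memory.narrate())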

    Locomotion through morphology, evolution and learning for legged and limbless robots

    Robot locomotion is concerned with providing autonomous locomotion capabilities to mobile robots. Most present-day robots feature some form of locomotion for navigating their environment. Modalities of robot locomotion include: (i) aerial locomotion, (ii) terrestrial locomotion, and (iii) aquatic locomotion (on or under water). The three main forms of terrestrial locomotion are legged locomotion, limbless locomotion, and wheel-based locomotion. A Modular Robot (MR), on the other hand, is a robotic system composed of several independent unit modules, where each module is a robot in itself. The objective of this thesis is to develop legged locomotion in a humanoid robot, as well as limbless locomotion in modular robotic configurations. Taking inspiration from biology, robot locomotion is investigated from the perspective of the robot's morphology, through evolution, and through learning. Locomotion is one of the key distinguishing characteristics of a zoological organism: almost all animal species, and even some plant species, produce some form of locomotion. In the past few years, robots have been "moving out" of the factory floor and research labs and are becoming increasingly common in everyday life, so providing stable and agile locomotion capabilities for robots to navigate a wide range of environments becomes pivotal. Developing locomotion in robots through biologically inspired methods also helps to further our understanding of how biological processes may function. Connected modules in a configuration exert force on each other as a result of their interactions with each other and with their environment. This phenomenon is studied and quantified, and then used as implicit communication between robot modules for producing locomotion coordination in MRs. Through this, a strong link is established between a robot's morphology and the gait that emerges in it. A variety of locomotion controllers, some based on periodic functions and some on morphology, are developed for MR locomotion and bipedal gait generation. A hybrid Evolutionary Algorithm (EA) is implemented for evolving gaits, both in simulation and in the real world on a physical modular robotic configuration. Limbless gaits in MRs are also learnt by finding optimal control policies through Reinforcement Learning (RL).
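    The abstract mentions periodic-function-based controllers for modular robot locomotion without giving their exact form. A common formulation for chain-type modular robots drives each module with a sinusoid that keeps a fixed phase lag with its neighbour, producing a travelling body wave. The following Python sketch illustrates that generic idea; the parameter values are illustrative and are not the thesis's actual controller.

    # Generic sketch of a periodic-function (sinusoidal oscillator) gait controller
    # for a chain-type modular robot. Amplitude, offset, frequency and phase
    # difference are illustrative values, not the thesis's actual parameters.
    import math

    def joint_angles(t, n_modules=8, amplitude=40.0, offset=0.0,
                     frequency=0.5, phase_diff=60.0):
        """Return the commanded angle (degrees) of each module at time t (seconds).

        Each module i follows A*sin(2*pi*f*t + i*dphi) + O; the constant phase
        lag between neighbours produces a travelling wave along the body,
        which yields a caterpillar-like limbless gait.
        """
        return [
            amplitude * math.sin(2.0 * math.pi * frequency * t
                                 + math.radians(i * phase_diff)) + offset
            for i in range(n_modules)
        ]

    # Example: sample the gait at 10 Hz for one second.
    for step in range(10):
        t = step / 10.0
        print([round(a, 1) for a in joint_angles(t)])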
    The thesis first presents the state of the art in modular robotics, focusing on modular robot locomotion, locomotion controllers, bipedal locomotion, and morphological computation. Five different modular robot configurations used in the thesis are then described, followed by four locomotion controllers: a heterogeneous controller, a periodic-function-based controller, a homogeneous controller, and a morphology-based controller. As part of this work, a linear, periodic, feature-based locomotion controller is developed for bipedal locomotion in humanoid robots; its control parameters are first hand-tuned to reproduce a cart-table model and the controller is evaluated on a simulated humanoid robot, after which optimization of the control parameters with an evolutionary algorithm yields model-free locomotion. The thesis also develops an Embodied Evolution approach, that is, the use of physical modular robots during the evolution phase; the hardware implementation, the experimental setup, and the Evolutionary Algorithm implemented for Embodied Evolution are explained in detail. The work further includes an overview of Reinforcement Learning techniques and Markov Decision Processes, followed by a popular Reinforcement Learning algorithm, Q-Learning, and its adaptation for learning modular robot locomotion; an implementation of the learning algorithm and an experimental evaluation of the resulting locomotion are provided.
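    The Q-Learning adaptation mentioned above is not spelled out in the abstract. As a rough sketch of the underlying technique only, the following tabular Q-Learning loop shows the standard update rule; the state/action encoding and the reward (standing in for measured forward displacement per gait cycle) are placeholders, not the thesis's formulation.

    # Minimal tabular Q-Learning loop for gait learning, as an illustration only.
    # The state/action encoding and the toy reward are placeholders; the thesis's
    # actual formulation is not reproduced here.
    import random
    from collections import defaultdict

    ACTIONS = list(range(4))          # e.g., four discretised phase-offset choices
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

    Q = defaultdict(float)            # Q[(state, action)] -> value

    def simulate(state, action):
        """Stand-in for executing one gait cycle on the robot or simulator.

        Returns (next_state, reward). The 'reward' here is a toy function that
        prefers action 2, standing in for measured forward displacement.
        """
        reward = 1.0 if action == 2 else random.uniform(0.0, 0.3)
        next_state = action           # toy dynamics: next state = last action
        return next_state, reward

    state = 0
    for episode in range(500):
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = simulate(state, action)
        # Q-Learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

    print(max(ACTIONS, key=lambda a: Q[(0, a)]))   # best learned action from state 0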

    Bilinear Time Delay Neural Network System for Humanoid Robot Software


    An integrated probabilistic framework for robot perception, learning and memory

    Learning and perception from multiple sensory modalities are crucial processes for the development of intelligent systems capable of interacting with humans. We present an integrated probabilistic framework for perception, learning and memory in robotics. The core component of our framework is a computational Synthetic Autobiographical Memory model which uses Gaussian Processes as a foundation and mimics the functionalities of human memory. Our memory model, which operates via a principled Bayesian probabilistic framework, is capable of receiving and integrating data flows from multiple sensory modalities, which are combined to improve perception and understanding of the surrounding environment. To validate the model, we implemented our framework on the iCub humanoid robot, which was able to learn and recognise human faces, arm movements and touch gestures through interaction with people. Results demonstrate the flexibility of our method in successfully integrating multiple sensory inputs for accurate learning and recognition. Thus, our integrated probabilistic framework offers a promising core technology for robust intelligent systems that are able to perceive, learn and interact with people and their environments.
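    The abstract states that the Synthetic Autobiographical Memory is built on Gaussian Processes and fuses several sensory streams, but not how the fusion is encoded. As an illustration only (the toy data, the early-fusion feature concatenation and the scikit-learn GP are assumptions, not the authors' implementation), the sketch below shows a Gaussian Process regressor queried with combined visual and tactile features, returning both a prediction and an uncertainty.

    # Illustrative sketch only: Gaussian Process regression over concatenated
    # multimodal features (e.g., vision + touch), standing in for the role GPs
    # play in the Synthetic Autobiographical Memory. Not the authors' code.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Toy "episodes": 4 visual features and 2 tactile features per observation,
    # with a scalar label (e.g., an identity score for a known person/gesture).
    vision = rng.normal(size=(50, 4))
    touch = rng.normal(size=(50, 2))
    X = np.hstack([vision, touch])            # simple early fusion of modalities
    y = X[:, 0] * 0.8 + X[:, 4] * 0.5 + rng.normal(scale=0.05, size=50)

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(X, y)

    # Querying the "memory": prediction plus uncertainty for a new observation.
    x_new = np.hstack([rng.normal(size=(1, 4)), rng.normal(size=(1, 2))])
    mean, std = gp.predict(x_new, return_std=True)
    print(f"predicted label {mean[0]:.2f} +/- {std[0]:.2f}")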

    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that have the capability to interact with the physical environment through their end effectors, understanding the surrounding scene is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene functionality testbed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model can be generalized to novel indoor scenes by cross-validating it with images from the two different datasets.
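    The abstract describes a two-stage detection pipeline (localize candidate regions, then recognize their functionality) without detailing either stage. The sketch below only illustrates that propose-then-classify structure in PyTorch: the sliding-window proposer, the small CNN, and the functional class names are all placeholders, not the paper's network or label set.

    # Generic sketch of a two-stage "propose then classify" pipeline for
    # functional-area detection. The class names and network sizes are
    # illustrative; the paper's architecture and training are not reproduced.
    import torch
    import torch.nn as nn

    FUNCTIONAL_CLASSES = ["sittable", "openable", "placeable", "background"]

    def propose_regions(image, window=64, stride=64):
        """Stage 1 stand-in: dense sliding-window proposals (x1, y1, x2, y2).
        A learned proposal network would replace this in practice."""
        _, H, W = image.shape
        boxes = []
        for y in range(0, H - window + 1, stride):
            for x in range(0, W - window + 1, stride):
                boxes.append((x, y, x + window, y + window))
        return boxes

    class RegionClassifier(nn.Module):
        """Stage 2: small CNN scoring each cropped region for functional classes."""
        def __init__(self, n_classes=len(FUNCTIONAL_CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8))
            self.head = nn.Linear(16 * 8 * 8, n_classes)

        def forward(self, crops):                      # crops: (N, 3, 64, 64)
            return self.head(self.features(crops).flatten(1))

    image = torch.rand(3, 256, 256)                    # dummy indoor image
    boxes = propose_regions(image)
    crops = torch.stack([image[:, y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes])
    scores = RegionClassifier()(crops).softmax(dim=1)  # (num_boxes, num_classes)
    best = scores.argmax(dim=1)
    print([(boxes[i], FUNCTIONAL_CLASSES[best[i]]) for i in range(3)])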

    Social signs processing in a cognitive architecture for an humanoid robot

    A social robot has to recognize human social intentions in order to interact fully with people. These intentions can be inferred by processing verbal and non-verbal communicative signs. In this work we describe an action classification module embedded in a robot's cognitive architecture, contributing to the interpretation of users' behavior.