35 research outputs found

    Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics

    Emerging trends in neuroscience are providing converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information about the feasibility, consequences and understanding of potential actions (of oneself or of others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control such as the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (the body schema) as a critical link enabling the seamless continuum between motor control and motor imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive, goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored. The rationale behind this perspective is articulated in the context of several interdisciplinary studies in motor neuroscience (for example, intracranial depth recordings from the parietal cortex and fMRI studies highlighting a shared cortical basis for action ‘execution, imagination and understanding’), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how coordinated tools are incorporated as an extension of the body schema) and pertinent challenges towards building cognitive robots that can seamlessly “act, interact, anticipate and understand” in unstructured natural living spaces.
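    As a concrete (and purely illustrative) reading of ‘passive goal-oriented animation of the body schema’, the toy sketch below runs the same goal-directed relaxation of an internal two-link arm model whether or not the result is ever sent to the muscles; the kinematic model, gains and parameters are assumptions for illustration, not the article's implementation.

```python
# Illustrative sketch only: goal-directed animation of an internal 2-link arm
# model ("body schema"). The same loop yields either an overt movement or a
# purely "muscleless" mental simulation, depending on whether the resulting
# joint angles are ever forwarded to the actuators.
import numpy as np

L1, L2 = 0.3, 0.25          # assumed link lengths (m), hypothetical values

def forward_kinematics(q):
    """End-effector position of a planar 2-link arm (the internal body model)."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def animate_body_schema(q0, goal, steps=500, dt=0.01, gain=20.0):
    """Relax the internal arm model toward the goal ("action without movement").
    An overt action would additionally stream q to the motors at every step."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        force = gain * (goal - forward_kinematics(q))   # virtual attractor toward the goal
        q = q + dt * (jacobian(q).T @ force)            # passive animation of the schema
    return q                                            # final internal arm configuration

# Covert simulation: predict where the hand would end up, without moving.
q_imagined = animate_body_schema([0.3, 0.5], np.array([0.35, 0.2]))
print(forward_kinematics(q_imagined))                   # imagined hand position near the goal
```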

    Social Cognition for Human-Robot Symbiosis—Challenges and Building Blocks

    The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a “positronic” replica of the human brain: probably, the greatest part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor-intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We would like to stress that our approach is thoroughly non-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the “services” of the animated body schema which, in turn, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and fragments of memory may compete to participate in novel cooperative actions, and so on. At the heart of the system is lifelong training and learning but, unlike conventional learning paradigms in neural networks, where learning is somehow passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what is needed by symbiotic robots, but we believe it is a useful starting point for building a computational framework.
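    To make the idea of five fully bi-directionally connected building blocks concrete, the following sketch (an illustrative assumption, not the authors' software; all class and topic names are hypothetical) wires five named blocks to a shared, non-hierarchical message bus so that any block can prompt the services of any other.

```python
# Hypothetical sketch of a non-hierarchical architecture: five building blocks
# exchange requests over a shared bus, with no block placed above the others.
from collections import defaultdict

class CognitiveBus:
    """Every block can publish to, and subscribe from, every other block."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message, sender):
        for callback in self.subscribers[topic]:
            callback(message, sender)

class Block:
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        bus.subscribe("request", self.on_request)

    def on_request(self, message, sender):
        if sender is not self:                    # ignore own broadcasts
            print(f"{self.name} received '{message}' from {sender.name}")

    def request(self, message):
        self.bus.publish("request", message, self)

bus = CognitiveBus()
blocks = [Block(name, bus) for name in
          ("body-schema", "imitation", "motor-intentions",
           "physical-interaction", "shared-memory")]
# e.g. the imitation block asks the animated body schema to run a simulation:
blocks[1].request("simulate observed reaching action")
```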

    Morphological Development in Robotic Learning: A Survey


    Biologically inspired robotic perception-action for soft fruit harvesting in vertical growing environments

    Multiple interlinked factors, such as demographics, migration patterns, and economics, are presently leading to a critical shortage of the labour available for low-skilled, physically demanding tasks like soft fruit harvesting. This paper presents a biomimetic robotic solution covering the full ‘Perception-Action’ loop, targeting the harvesting of strawberries in a state-of-the-art vertical growing environment. The novelty lies both in dealing with crop/environment variance and in configuring the robot's action system to handle a range of runtime task constraints. Unlike commonly used deep neural networks, the proposed perception system uses conditional Generative Adversarial Networks to identify ripe fruit from synthetic data. The network can be trained effectively on synthetic data using the image-to-image translation concept, thereby avoiding the tedious work of collecting and labelling a real dataset. Once the harvest-ready fruit is localised using point cloud data generated by a stereo camera, our platform's action system coordinates the arm to reach and cut the stem using the Passive Motion Paradigm framework, inspired by studies on the neural control of movement in the brain. Results from field trials for strawberry detection, reaching/cutting the stem of the fruit, and extensions to analysing complex canopy structures and bimanual coordination (searching/picking) are presented. While this article focuses on strawberry harvesting, ongoing research towards adapting the architecture to other crops such as tomatoes and sweet peppers is briefly described.
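    As an illustration of the localisation step, the sketch below shows one plausible way to turn a 2-D ripe-fruit mask and a stereo depth map into a 3-D reach/cut target; the pinhole back-projection and the camera intrinsics (fx, fy, cx, cy) are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch (assumed pinhole model and intrinsics): converting a ripe-fruit
# mask plus a stereo depth map into a 3-D target point for the arm controller.
import numpy as np

fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0    # hypothetical stereo camera intrinsics

def localize_fruit(ripe_mask, depth_map):
    """Back-project masked pixels to 3-D and return the fruit centroid (metres)."""
    v, u = np.nonzero(ripe_mask)                # pixel rows/cols flagged as ripe fruit
    z = depth_map[v, u]
    valid = z > 0                               # discard pixels without stereo depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=1)        # fruit point cloud in the camera frame
    return points.mean(axis=0)                  # centroid used as the reach/cut target

# Synthetic example: a 20x20 pixel blob of "ripe fruit" at roughly 0.6 m depth.
mask = np.zeros((480, 640), dtype=bool); mask[200:220, 300:320] = True
depth = np.zeros((480, 640)); depth[200:220, 300:320] = 0.6
print(localize_fruit(mask, depth))              # approximate 3-D target in metres
```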

    ABC: Adaptive, Biomimetic, Configurable Robots for Smart Farms - From Cereal Phenotyping to Soft Fruit Harvesting

    Currently, numerous factors, such as demographics, migration patterns, and economics, are leading to a critical labour shortage in the low-skilled and physically demanding parts of agriculture. Robotics can therefore be developed for the agricultural sector to address these shortages. This study aims to develop an adaptive, biomimetic, and configurable modular robotics architecture that can be applied to multiple tasks (e.g., phenotyping, cutting, and picking), various crop varieties (e.g., wheat, strawberry, and tomato) and different growing conditions. These robotic solutions cover the entire perception–action–decision-making loop, targeting the phenotyping of cereals and the harvesting of fruit in natural environments. The primary contributions of this thesis are as follows. a) A high-throughput method for imaging field-grown wheat in three dimensions, along with an accompanying unsupervised measuring method for obtaining individual wheat spike data, is presented. The unsupervised method analyses the 3D point cloud of each trial plot, containing hundreds of wheat spikes, and calculates the average size of the wheat spikes and the total spike volume per plot. Experimental results reveal that the proposed algorithm can effectively identify spikes in wheat crops and isolate individual spikes (see the sketch after this abstract). b) Unlike cereals, soft fruit is typically harvested by manual selection and picking. To enable robotic harvesting, the initial perception system uses conditional generative adversarial networks to identify ripe fruit using synthetic data. To determine whether a strawberry is surrounded by obstacles, a cluster-complexity-based perception system is further developed to classify the harvesting complexity of ripe strawberries. c) Once the harvest-ready fruit is localised using point cloud data generated by a stereo camera, the platform's action system coordinates the arm to reach and cut the stem using the passive motion paradigm framework, inspired by studies on the neural control of movement in the brain. Results from field trials for strawberry detection, reaching/cutting the stem of the fruit with a mean error of less than 3 mm, and extensions to analysing complex canopy structures and bimanual coordination (searching/picking) are presented. Although this thesis focuses on strawberry harvesting, ongoing research is heading toward adapting the architecture to other crops. The agricultural food industry remains a labour-intensive, low-margin sector with a business model driven by cost and time efficiency. The concepts presented herein can serve as a reference for future agricultural robots that are adaptive, biomimetic, and configurable.
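    The per-plot spike statistics in contribution a) can be illustrated with a minimal sketch; the clustering method (DBSCAN) and its parameters are assumptions chosen for illustration and may well differ from the thesis' actual unsupervised algorithm.

```python
# Illustrative sketch of unsupervised per-plot spike analysis: cluster a plot-level
# point cloud into candidate spikes and report simple size/volume statistics.
import numpy as np
from sklearn.cluster import DBSCAN   # assumed clustering method, not the thesis' own

def spike_statistics(plot_points, eps=0.02, min_samples=30):
    """Cluster a plot-level point cloud (N x 3, metres) into candidate spikes
    and return per-plot summary statistics."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(plot_points)
    volumes = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        spike = plot_points[labels == label]
        extent = spike.max(axis=0) - spike.min(axis=0)
        volumes.append(np.prod(extent))              # bounding-box volume as a proxy
    volumes = np.array(volumes)
    return {"spike_count": len(volumes),
            "mean_spike_volume": volumes.mean() if len(volumes) else 0.0,
            "total_spike_volume": volumes.sum()}

# Synthetic example: two compact blobs standing in for two spikes.
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal([0.0, 0.0, 0.80], 0.005, (200, 3)),
                   rng.normal([0.1, 0.0, 0.82], 0.005, (200, 3))])
print(spike_statistics(cloud))
```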

    The Future of Humanoid Robots

    This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines and will have the ability to blend perfectly into an environment already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience and machine learning. The book is designed to be accessible and practical, with an emphasis on information useful to those working in robotics, cognitive science, artificial intelligence, computational methods and other fields of science directly or indirectly related to the development and use of future humanoid robots. The editor of the book has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the editing of the book's content.

    Planning and estimation algorithms for human-like grasping

    International Mention in the doctoral degree. The use of robots in human-like environments requires them to be able to sense and model unstructured scenarios, so their success will depend on their versatility in interacting with their surroundings. This interaction often includes the manipulation of objects to accomplish common daily tasks; robots therefore need to sense, understand, plan and perform, and this has to be a continuous loop. This thesis presents a framework that covers most of the phases encountered in a common manipulation pipeline. First, it is shown how to use the Fast Marching Square algorithm and a leader-followers strategy to control a formation of robots, simplifying a high-dimensional path-planning problem. This approach is evaluated with simulations in complex environments in which the formation control technique is applied, and the results are assessed in terms of distance to obstacles (safety) and the required deformation. Then, a framework to perform the grasping action is presented, covering the necessary techniques for environment modelling, grasp synthesis, and path planning and control. For the motion-planning part, the formation concept from the previous chapter is reused and applied to the planning and control of the movement of a complex hand-arm system. Tests using the robot Manfred show the possibilities of the framework when performing in real scenarios. Finally, under the assumption that grasping actions may not always turn out as planned, a Bayesian state-estimation process is introduced to estimate the final in-hand object pose after a grasping action is performed, based on the measurements of proprioceptive and tactile sensors. This approach is evaluated in real experiments with the Reflex TakkTile hand; the results show good performance in general terms, while suggesting the need for a vision system for a more precise outcome.

    Robotics research is advancing with the aim of moving robots into human environments. Today, their use is practically limited to factories, where they work in controlled environments performing repetitive tasks; however, these robots are unable to react to even the smallest change in the environment or in the task to be performed. Over the last 15 years, the RoboticsLab research group has built a mobile manipulator called Manfred, whose goal is to carry out navigation and manipulation tasks in environments designed for human beings. For manipulation and grasping tasks, a robotic hand designed at Gifu University, Japan, was recently acquired. However, at the beginning of this thesis no work had been carried out on the manipulation or grasping of objects, so there was a clear motivation to investigate this field and extend the robot's capabilities, which are the aspects addressed in this thesis. The first part of the thesis shows the application of a three-dimensional robot-formation control system. The system uses a leader-followers scheme and relies on the Fast Marching Square algorithm to compute the leader's trajectory. Then, while the leader traverses the path, the formation adapts to the environment so that the robots avoid collisions with obstacles; the deformation scheme presented is based on the environment information previously computed with Fast Marching Square. The algorithm is tested in several simulations in complex scenarios, and the results are analysed mainly in terms of two characteristics: the amount of deformation required and the safety of the robots' paths. Although the results are satisfactory in both respects, more realistic simulations and, eventually, an implementation on real robots remain desirable. The next chapter stems from the same idea, robot formation control, which is used to model the arm-hand system of the Manfred robot. As with a robot formation, the complete system has a very large number of degrees of freedom, which makes trajectory planning difficult; adapting the formation-control scheme to the robotic arm-hand system, however, reduces the complexity of trajectory planning. As before, the system is based on Fast Marching Square. In addition, a complete scheme has been built that models the environment, computes possible grasp poses and plans the movements to execute the grasp. All of this has been implemented on the robot Manfred, with grasping tests on real objects. The results show the potential of this control scheme, while leaving room for improvement, mainly in object modelling and in the computation and selection of candidate grasps. The next step is to close the control loop in object grasping. Once a robotic system has performed the movements required to obtain a stable grasp, the final position of the object within the hand is, in most cases, different from the one planned, owing to the accumulation of errors in the perception and environment-modelling systems and in motion planning and execution. A Bayesian system based on a particle filter is therefore proposed that estimates the pose of the object within the hand, taking into account the position of the palm and fingers, the data from tactile sensors and the shape of the object. The system starts from a known initial pose and begins to run after the first contact between the fingers and the object, so that it can detect the movements produced while applying the force needed to stabilise the grasp. The results show the validity of the method; however, it also becomes clear that, using only tactile and position information, some degrees of freedom cannot be determined, so it would be advisable in the future to combine this system with a vision-based one. Finally, two annexes are included that go deeper into the implementation of the Fast Marching algorithm and present the real robotic systems used in the various experiments of the thesis.

    Programa Oficial de Doctorado en Ingeniería Eléctrica, Electrónica y Automática. President: Carlos Balaguer Bernaldo de Quirós; Secretary: Raúl Suárez Feijoo; Committee member: Pedro U. Lim
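    The in-hand pose estimation step lends itself to a compact illustration. The sketch below is a deliberately simplified assumption (planar pose, a circular object, synthetic fingertip contacts), not the thesis' implementation: a particle filter weights candidate object positions by how well they explain the tactile contact points and returns the weighted mean as the estimate.

```python
# Simplified sketch (planar pose, circular object, synthetic contacts) of a
# particle filter for estimating the in-hand object pose from tactile data.
import numpy as np

rng = np.random.default_rng(0)
RADIUS = 0.03                                   # assumed object radius (m)

def contact_likelihood(center, contacts, sigma=0.004):
    """Contacts should lie on the object surface: penalise the deviation of
    each fingertip contact point from a circle of radius RADIUS."""
    residual = np.linalg.norm(contacts - center, axis=1) - RADIUS
    return np.exp(-0.5 * np.sum((residual / sigma) ** 2))

def estimate_in_hand_pose(prior_center, contacts, n_particles=500, spread=0.01):
    # 1. Sample candidate object centres around the planned (prior) pose.
    particles = prior_center + spread * rng.standard_normal((n_particles, 2))
    # 2. Weight each candidate by how well it explains the tactile contacts.
    weights = np.array([contact_likelihood(p, contacts) for p in particles])
    weights /= weights.sum()
    # 3. Resample and return the weighted mean as the pose estimate.
    resampled = particles[rng.choice(n_particles, size=n_particles, p=weights)]
    return resampled.mean(axis=0)

# Synthetic example: the object has actually shifted ~5 mm from the planned pose.
true_center = np.array([0.005, 0.0])
angles = np.array([0.3, 1.8, 3.5])              # three fingertip contact directions
contacts = true_center + RADIUS * np.column_stack([np.cos(angles), np.sin(angles)])
print(estimate_in_hand_pose(np.zeros(2), contacts))   # estimate close to true_center
```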