
    Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics

    Emerging trends in neuroscience provide converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information related to the feasibility, consequence and understanding of potential actions (of oneself or others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control like the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (body schema) as a critical link enabling the seamless continuum between motor control and imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored.
The rationale behind this perspective is articulated in the context of several interdisciplinary studies in motor neuroscience (for example, intracranial depth recordings from the parietal cortex, and fMRI studies highlighting a shared cortical basis for action ‘execution, imagination and understanding’), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how tools are incorporated as extensions of the body schema) and pertinent challenges in building cognitive robots that can seamlessly “act, interact, anticipate and understand” in unstructured natural living spaces.

    Causative role of left aIPS in coding shared goals during human-avatar complementary joint actions

    Successful motor interactions require agents to anticipate what a partner is doing in order to predictively adjust their own movements. Although the neural underpinnings of the ability to predict others' action goals have been well explored during passive action observation, no study has yet clarified the critical neural substrates supporting interpersonal coordination during active, non-imitative (complementary) interactions. Here, we combine non-invasive inhibitory brain stimulation (continuous Theta Burst Stimulation) with a novel human-avatar interaction task to investigate a causal role for higher-order motor cortical regions in supporting the ability to predict and adapt to others' actions. We demonstrate that inhibition of the left anterior intraparietal sulcus (aIPS), but not of the ventral premotor cortex, selectively impaired individuals' performance during complementary interactions. Thus, in addition to coding observed and executed action goals, aIPS is crucial in coding 'shared goals', that is, integrating predictions about one's own and others' complementary actions.

    Passive Motion Paradigm: An Alternative to Optimal Control

    In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating the neural control of movement and motor cognition along two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering design techniques for control systems, quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, “softer” approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion or cost-function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution, but also provides the self with information on the feasibility, consequence, understanding and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT.
The paper is therefore at the same time a review of the PMP rationale as a computational theory and a perspective on how to develop it for designing better cognitive architectures.
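
The core PMP relaxation is compact enough to sketch. The following is a minimal illustration under stated assumptions (a planar 3-link arm and a single linear force field; not the authors' implementation): the goal induces an attractor force field at the end effector, and the Jacobian transpose maps it to joint velocities that "animate" the body schema, with no kinematic inversion or cost function anywhere.

```python
import numpy as np

def pmp_step(q, x_goal, lengths, K=1.0, gamma=0.1):
    """One relaxation step of the Passive Motion Paradigm for a planar arm.

    The goal induces a virtual force field at the end effector; the Jacobian
    transpose maps it to joint velocities that 'animate' the body schema, so
    redundancy is resolved with no kinematic inversion and no cost function.
    """
    angles = np.cumsum(q)                                # absolute link angles
    x = np.array([np.sum(lengths * np.cos(angles)),
                  np.sum(lengths * np.sin(angles))])     # forward kinematics
    J = np.zeros((2, len(q)))                            # Jacobian of the chain
    for i in range(len(q)):
        J[0, i] = -np.sum(lengths[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(lengths[i:] * np.cos(angles[i:]))
    F = K * (x_goal - x)                 # attractor force field induced by the goal
    return q + gamma * (J.T @ F), x      # passive 'animation' of the body schema

# relax a redundant 3-link arm toward a reachable goal
q = np.array([0.3, 0.2, 0.1])
lengths = np.array([1.0, 1.0, 1.0])
goal = np.array([1.5, 1.5])
for _ in range(3000):
    q, x = pmp_step(q, goal, lengths)
```

Because the relaxation is gradient-like in end-effector space, the redundant joints settle wherever the field takes them: the "DoFs problem" is dissolved rather than solved by explicit inversion.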

    Computational Methods for Cognitive and Cooperative Robotics

    In recent decades, design methods in control engineering have made substantial progress in the areas of robotics and computer animation. Nowadays these methods incorporate the newest developments in machine learning and artificial intelligence, but the flexible and online-adaptive combination of motor behaviors remains challenging for human-like animation and for humanoid robotics. In this context, biologically motivated methods for the analysis and re-synthesis of human motor programs provide new insights into, and models for, anticipatory motion synthesis. This thesis presents the author’s achievements in the areas of cognitive and developmental robotics, cooperative and humanoid robotics, and intelligent and machine learning methods in computer graphics. The first part of the thesis, the chapter “Goal-directed Imitation for Robots”, considers imitation learning in cognitive and developmental robotics. The work presented here details the author’s progress in the development of hierarchical motion recognition and planning inspired by recent discoveries about the functions of mirror-neuron cortical circuits in primates. The overall architecture is capable of ‘learning for imitation’ and ‘learning by imitation’. The complete system includes a low-level, real-time capable path-planning subsystem for obstacle avoidance during arm reaching. The learning-based path-planning subsystem is universal for all types of anthropomorphic robot arms and is capable of knowledge transfer at the level of individual motor acts. Next, the problems of learning and synthesis of motor synergies, the spatial and spatio-temporal combinations of motor features in sequential multi-action behavior, and the problems of task-related action transitions are considered in the second part of the thesis, “Kinematic Motion Synthesis for Computer Graphics and Robotics”.
In this part, a new approach to modeling complex full-body human actions by mixtures of time-shift invariant motor primitives is presented. The online-capable full-body motion generation architecture, based on dynamic movement primitives driving the time-shift invariant motor synergies, was implemented as an online-reactive adaptive motion synthesis for computer graphics and robotics applications. The last chapter of the thesis, entitled “Contraction Theory and Self-organized Scenarios in Computer Graphics and Robotics”, is dedicated to optimal control strategies in multi-agent scenarios of large crowds of agents expressing highly nonlinear behaviors. This last part presents new mathematical tools for the stability analysis and synthesis of cooperative multi-agent scenarios.
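
The dynamic-movement-primitive machinery mentioned above can be sketched minimally. The 1-D discrete DMP below uses illustrative parameters (gain values, basis placement) and is not the thesis implementation: a phase variable decays from 1 toward 0 and gates a learned forcing term, while the underlying critically damped spring-damper guarantees convergence to the goal.

```python
import numpy as np

def dmp_rollout(y0, goal, weights, dt=0.01, T=1.5, alpha=25.0, beta=6.25, ax=3.0):
    """Roll out a 1-D discrete dynamic movement primitive (DMP).

    `weights` would normally be fitted to a demonstrated synergy; the
    spring-damper ('transformation system') converges to the goal for any
    weights, because the forcing term vanishes with the phase variable x.
    """
    n = len(weights)
    centers = np.exp(-ax * np.linspace(0, 1, n))   # basis centers in phase space
    widths = n**2 / centers                        # heuristic basis widths
    y, dy, x = float(y0), 0.0, 1.0
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)          # Gaussian bases
        f = x * (goal - y0) * (psi @ weights) / (psi.sum() + 1e-10)
        ddy = alpha * (beta * (goal - y) - dy) + f          # transformation system
        dy += ddy * dt
        y += dy * dt
        x += -ax * x * dt                                   # canonical (phase) system
        traj.append(y)
    return np.array(traj)

# with zero weights the primitive reduces to a smooth reach toward the goal
traj = dmp_rollout(0.0, 1.0, np.zeros(10))
```

Time-shifting or rescaling the phase system leaves the spatial shape of the primitive intact, which is what makes such primitives convenient building blocks for synergy mixtures.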

    Dynamic Visuomotor Transformation Involved with Remote Flying of a Plane Utilizes the ‘Mirror Neuron’ System

    Brain regions involved in processing dynamic visuomotor representational transformation are investigated using fMRI. The perceptual-motor task involved flying (or observing) a plane through a simulated Red Bull Air Race course in first-person and third-person chase perspective. The third-person perspective is akin to remote operation of a vehicle. The human ability to remotely operate vehicles likely has its roots in neural processes related to imitation, in which visuomotor transformation is necessary to interpret action goals in an egocentric manner suitable for execution. In this experiment, for the third-person perspective the visuomotor transformation changes dynamically in accordance with the orientation of the plane. It was predicted that third-person remote flying, over first-person, would utilize brain regions composing the ‘Mirror Neuron’ system, which is thought to be intimately involved in imitation for both execution and observation tasks. Consistent with this prediction, differential brain activity was present for third-person over first-person perspectives for both execution and observation tasks in left ventral premotor cortex, right dorsal premotor cortex, and inferior parietal lobule bilaterally (the Mirror Neuron System) (behaviorally: 1st > 3rd). These regions additionally showed greater activity for flying (execution) than for watching (observation) conditions. Even though visual and motor aspects of the tasks were controlled for, differential activity was also found in brain regions involved in tool use, motion perception, and body perspective, including left cerebellum, temporo-occipital regions, lateral occipital cortex, medial temporal region, and the extrastriate body area. This experiment successfully demonstrates that a complex perceptual-motor real-world task can be utilized to investigate visuomotor processing.
This approach (Aviation Cerebral Experimental Sciences, ACES), focusing on direct application to lab and field, contrasts with standard methodology, in which tasks and conditions are reduced to their simplest forms, remote from daily-life experience.

    Modeling the neural correlates of imitation from a neuropsychological perspective

    Imitation is a fundamental mechanism by which humans learn and understand the actions of others. This thesis addresses the low-level neural mechanisms underlying the imitation of meaningless gestures, using tools from computational neuroscience. We investigate how the human brain perceives these gestures and translates them into appropriate motor commands. In addition, we take a relatively unexplored neuropsychological perspective, which looks at imitation following a brain lesion. The analysis of how imitation breaks down in apraxia, a complex disorder of voluntary movement, enables us to reverse-engineer brain function through the identification of those building blocks that are preserved. To better understand the phenomenon of apraxia, we develop a neurocomputational model of imitation that proposes potential neuroanatomical correlates, such as the flow of information across the two brain hemispheres. The model accounts for the pattern of errors observed in apraxic patients with disconnected brain hemispheres. To validate the predictions of our model, we further analyze the experimental errors and uncover a goal dissociation, where a goal is defined as the spatial relation between two body parts. The experimental observations suggest that the imitation deficit in apraxia arises from incorrect coordination between the reproductions of multiple goals. A prediction of this hypothesis was validated on three apraxic patients. The collected body of kinematic and neuropsychological data allowed us to refine our neurocomputational model of imitation and to propose a biologically plausible mathematical model for the execution stage of imitation. The model controls movement by following nonlinear dynamics, and precisely reproduces both the spatial and temporal aspects of unconstrained, natural three-dimensional reaching movements. Importantly, the model is stable and robust against external perturbations.
Overall, our computational models and neuropsychological experiments contribute to a better understanding of how the brain performs the imitation of meaningless gestures: by first decomposing the gesture into imitation goals, and then reproducing these goals through the association of different sensory modalities.
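
The stability property claimed for the execution model can be illustrated with a toy point-attractor system (a critically damped spring toward the target; a sketch of the general idea, not the thesis model): a velocity perturbation injected mid-reach is absorbed by the dynamics, and the movement still terminates at the target.

```python
import numpy as np

def reach(target, y0, dt=0.005, T=2.0, k=16.0, perturb_at=0.5, push=None):
    """Integrate a critically damped 3-D point attractor toward `target`.

    If `push` is given, it is added to the velocity once at t = perturb_at,
    emulating an external perturbation during the reach.
    """
    y, v = np.array(y0, float), np.zeros(3)
    for step in range(int(T / dt)):
        if push is not None and abs(step * dt - perturb_at) < dt / 2:
            v += push                                   # external perturbation
        a = k * (target - y) - 2 * np.sqrt(k) * v       # critically damped attractor
        v += a * dt
        y += v * dt
    return y

target = np.array([0.3, 0.2, 0.5])
end_unperturbed = reach(target, [0.0, 0.0, 0.0])
end_perturbed = reach(target, [0.0, 0.0, 0.0], push=np.array([0.5, -0.5, 0.2]))
```

Both runs end at the target; only the transient path differs, which is the behavioral signature of a stable attractor-based controller.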

    A Posture Sequence Learning System for an Anthropomorphic Robotic Hand

    The paper presents a cognitive architecture for posture learning in an anthropomorphic robotic hand. Our approach aims to allow the robotic system to perform complex perceptual operations, to interact with a human user, and to integrate its perceptions through a cognitive representation of the scene and the observed actions. The anthropomorphic robotic hand imitates the gestures acquired by the vision system in order to learn meaningful movements, to build its knowledge using different conceptual spaces, and to perform complex interactions with the human operator.

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and user-friendly way to transfer knowledge from humans to robots. This dissertation addresses this topic and its application to an unexplored field, namely the learning of force-based manipulation tasks. In this kind of scenario, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system be able to deal with force perceptions. The first issue this thesis tackles is extracting the input information that is relevant for learning the task at hand, also known as the “what to imitate?” problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals; in other words, the importance of each perception is assessed through its correlation with the robot movements. A mutual information analysis is used for selecting the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task.
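
The perception-selection idea can be sketched with a histogram-based mutual-information estimate (an illustrative stand-in; the dissertation's exact estimator and sensory signals are not reproduced here): channels whose mutual information with the robot's motion is highest are kept, and uninformative ones are discarded.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats for two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                              # joint distribution
    px = pxy.sum(axis=1, keepdims=True)           # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
motion = rng.normal(size=2000)                    # robot output (e.g. a velocity)
force = motion + 0.1 * rng.normal(size=2000)      # informative force channel
noise = rng.normal(size=2000)                     # irrelevant channel
scores = {"force": mutual_information(force, motion),
          "noise": mutual_information(noise, motion)}
selected = max(scores, key=scores.get)
```

Unlike linear correlation, mutual information also captures nonlinear input-output dependencies, which matters when robot actions are nonlinear functions of the sensed forces.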
Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance sequential information, uncertainty, constraints, etc. This is the next problem addressed in this thesis. Here, a probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is therefore able to robustly encode and reproduce different manipulation tasks. Next, the thesis goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information for encoding the data compactly, and that it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffness of the springs composing this system is estimated, allowing the robot to shape its compliance. This approach makes it possible to extend the learning paradigm to fields beyond common trajectory following.
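
The regression half of the framework, Gaussian mixture regression, can be sketched as follows, with two hand-set components standing in for a model learned from demonstrations (all parameters here are illustrative): given a joint Gaussian mixture over (time, force) pairs, the conditional expectation of force given time yields the reproduced profile.

```python
import numpy as np

# Two hand-set Gaussian components over (time, force) pairs, standing in for
# a mixture learned from demonstrations; parameters are purely illustrative.
mu = np.array([[0.25, 1.0], [0.75, 3.0]])             # component means (t, f)
sigma = np.array([[[0.02, 0.005], [0.005, 0.1]],
                  [[0.02, -0.005], [-0.005, 0.1]]])   # component covariances
pi = np.array([0.5, 0.5])                             # mixing weights

def gmr(t):
    """Gaussian mixture regression: E[f | t] under the joint model."""
    # responsibilities of each component for the query time t
    h = np.array([p * np.exp(-0.5 * (t - m[0]) ** 2 / s[0, 0]) / np.sqrt(s[0, 0])
                  for p, m, s in zip(pi, mu, sigma)])
    h /= h.sum()
    # per-component conditional means of f given t
    cond = [m[1] + s[1, 0] / s[0, 0] * (t - m[0]) for m, s in zip(mu, sigma)]
    return float(h @ np.array(cond))

profile = [gmr(t) for t in np.linspace(0, 1, 5)]      # reproduced force profile
```

In the full framework the responsibilities would come from the HMM's state estimate rather than from time alone, which is what lets the model cope with perceptual aliasing and time discrepancies.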
The proposed frameworks are tested in three scenarios, namely (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly, where the experimental results demonstrate the importance of using force perceptions as well as the usefulness and strengths of the methods.

    Neuro-cognitive and social components of dyadic motor interactions revealed by the kinematics of a joint-grasping task

    This thesis describes a PhD project based on the notion that we live our whole lives immersed in an interactive social environment, where we observe and act together with others and where our behavior is influenced by first-sight impressions, social categorizations and stereotypes that automatically and unavoidably arise during interactions. Nevertheless, the bidirectional impact of interpersonal coding on dyadic motor interactions has never been directly investigated. Moreover, the neurocognitive bases of social interaction are still poorly understood. In particular, in everyday dyadic encounters we usually interact with others in non-imitative fashions (Sebanz et al. 2006), challenging the hypothesis of a direct matching between action observation and action execution within one system (the “common coding approach”, Prinz 1997), which is instead supported by neurophysiological data on the so-called “mirror neurons” (Rizzolatti and Sinigaglia 2010), which fire both during action execution and during observation of similar actions performed by others. The suggestion is made that what characterizes joint action is the presence of a common goal (i.e., the “shared” goal, Butterfill 2012), which organizes individuals’ behaviour and channels simulative processes. During her PhD, Lucia Sacheli developed a novel interactive scenario able to investigate face-to-face dyadic interactions within a naturalistic and yet controlled experimental environment, with the aim of building a more coherent model of the role of simulative mechanisms during social interaction and of the role of socio-emotional variables in modulating these processes. This scenario required pairs of participants to reciprocally coordinate their reach-to-grasp movements and perform on-line mutual adjustments in time and space in order to fulfill a common (motor) goal. So far, she has demonstrated by means of kinematic data analysis that simulation of the partner’s movement is task-dependent (Sacheli et al. 2013) and modulated by the interpersonal relationship linking co-agents (Sacheli et al. 2012) and by social stereotypes such as ethnic biases (Sacheli et al., under review). Moreover, she used the same scenario to investigate the different contributions of the parietal and frontal nodes of the fronto-parietal “mirror” network during joint action by means of Transcranial Magnetic Stimulation combined with analysis of kinematics.
