
    Muscleless motor synergies and actions without movements: From motor neuroscience to cognitive robotics

    Emerging trends in neuroscience provide converging evidence that cortical networks in predominantly motor areas are activated in several contexts related to ‘action’ that do not cause any overt movement. Indeed, for any complex body, human or embodied robot, inhabiting unstructured environments, the dual processes of shaping motor output during action execution and providing the self with information related to the feasibility, consequences, and understanding of potential actions (of oneself or others) must seamlessly alternate during goal-oriented behaviors and social interactions. While prominent approaches like Optimal Control and Active Inference converge on the role of forward models, they diverge on the underlying computational basis. In this context, revisiting older ideas from motor control like the Equilibrium Point Hypothesis and synergy formation, this article offers an alternative perspective emphasizing the functional role of a ‘plastic, configurable’ internal representation of the body (the body schema) as a critical link enabling the seamless continuum between motor control and motor imagery. With the central proposition that both “real and imagined” actions are consequences of an internal simulation process achieved through passive goal-oriented animation of the body schema, the computational/neural basis of muscleless motor synergies (and the ensuing simulated actions without movements) is explored. The rationale behind this perspective is articulated in the context of several interdisciplinary studies in motor neuroscience (for example, intracranial depth recordings from the parietal cortex and fMRI studies highlighting a shared cortical basis for action execution, imagination, and understanding), animal cognition (in particular, tool-use and neuro-rehabilitation experiments revealing how coordinated tools are incorporated as extensions of the body schema), and pertinent challenges toward building cognitive robots that can seamlessly “act, interact, anticipate, and understand” in unstructured natural living spaces.
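
The central claim, that overt execution and covert imagery share a single internal simulation, can be pictured with a short sketch. This is a minimal illustration under our own assumptions, not the paper's implementation: the names (`BodySchema`, `animate`, `send_to_actuators`), the planar two-link arm, and the Jacobian-transpose relaxation are all hypothetical stand-ins. The same forward animation of the body schema runs in both modes; a single gate decides whether the simulated motion drives the actuators or is merely inspected for feasibility.

```python
import numpy as np

def send_to_actuators(q):
    pass  # stand-in stub for a real motor command interface

class BodySchema:
    """Internal model of a planar 2-link arm (illustrative, not the paper's code)."""
    def __init__(self, lengths=(0.3, 0.25)):
        self.l = np.array(lengths)
        self.q = np.zeros(2)  # joint angles of the simulated body

    def forward(self, q):
        x = self.l[0] * np.cos(q[0]) + self.l[1] * np.cos(q[0] + q[1])
        y = self.l[0] * np.sin(q[0]) + self.l[1] * np.sin(q[0] + q[1])
        return np.array([x, y])

    def jacobian(self, q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-self.l[0]*s1 - self.l[1]*s12, -self.l[1]*s12],
                         [ self.l[0]*c1 + self.l[1]*c12,  self.l[1]*c12]])

def animate(schema, goal, overt, steps=200, k=2.0, dt=0.01):
    """Passively animate the body schema toward a goal; actuate only if overt."""
    for _ in range(steps):
        x = schema.forward(schema.q)
        force = k * (goal - x)                    # goal-induced attractor field
        dq = schema.jacobian(schema.q).T @ force  # muscleless 'synergy' in joint space
        schema.q += dt * dq                       # the simulation always runs...
        if overt:
            send_to_actuators(schema.q)           # ...motor output only when overt
    return np.linalg.norm(goal - schema.forward(schema.q))  # residual error

# Covert run ('motor imagery'): no actuation, but the residual error acts as a
# feasibility cue for the potential action.
residual = animate(BodySchema(), goal=np.array([0.4, 0.2]), overt=False)
```

A covert run leaves the robot still yet returns the residual error, a proxy for the feasibility information the abstract attributes to actions without movements.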

    Fast Damage Recovery in Robotics with the T-Resilience Algorithm

    Damage recovery is critical for autonomous robots that need to operate for a long time without assistance. Most current methods are complex and costly because they require anticipating each potential damage in order to have a contingency plan ready. As an alternative, we introduce T-Resilience, a new algorithm that allows robots to quickly and autonomously discover compensatory behaviors in unanticipated situations. The algorithm equips the robot with a self-model and discovers new behaviors by learning to avoid those that perform differently in the self-model and in reality. It thus does not identify the damaged parts but implicitly searches for efficient behaviors that do not use them. We evaluate T-Resilience on a hexapod robot that needs to adapt to leg removal, broken legs, and motor failures; we compare it to stochastic local search, policy gradient, and the self-modeling algorithm proposed by Bongard et al. The behavior of the robot is assessed on board using an RGB-D sensor and a SLAM algorithm. Using only 25 tests on the robot and an overall running time of 20 minutes, T-Resilience consistently leads to substantially better results than the other approaches.
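
A minimal sketch of the core loop, with hypothetical names (`evaluate_in_self_model`, `evaluate_on_robot`, the nearest-neighbor transferability estimate) standing in for the paper's actual machinery: candidate behaviors are scored in the self-model, a transferability estimate learned from the few real-robot tests predicts how well each simulated score carries over to the damaged robot, and the search favors behaviors that are both good in simulation and predicted to transfer.

```python
def distance(a, b):
    """Squared distance between two behavior parameter vectors (stand-in metric)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def t_resilience(behaviors, evaluate_in_self_model, evaluate_on_robot,
                 n_robot_tests=25):
    """Simplified sketch of transferability-based search; scores assumed in [0, 1]."""
    sim_scores = {b: evaluate_in_self_model(b) for b in behaviors}
    tested = {}                                    # behavior -> real-world performance
    for _ in range(min(n_robot_tests, len(behaviors))):
        def transferability(b):
            # Crude local estimate: trust the self-model where it matched reality
            # on the most similar behavior tested so far.
            if not tested:
                return 1.0
            nearest = min(tested, key=lambda t: distance(b, t))
            return 1.0 - abs(sim_scores[nearest] - tested[nearest])
        candidates = [b for b in behaviors if b not in tested]
        # Favor behaviors that look good in the self-model AND should transfer.
        best = max(candidates, key=lambda b: sim_scores[b] * transferability(b))
        tested[best] = evaluate_on_robot(best)     # one of the few costly real tests
    return max(tested, key=tested.get)             # best behavior found on the robot

# Usage sketch (hypothetical evaluators): behaviors as gait parameter tuples.
# best_gait = t_resilience(gaits, simulate_distance, measure_real_distance)
```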

    Motion representation with spiking neural networks for grasping and manipulation

    Nature has used millions of years of evolution to produce adaptive physical systems with efficient control strategies. In contrast to conventional robotics, a human does not simply plan and execute a movement; rather, a combination of several control loops works together to move the arm and grasp an object with the hand. Research on humanoid and biologically inspired robots is producing complex kinematic structures and intricate actuator and sensor systems. These systems are difficult to control and program, and the classical methods of robotics cannot always exploit their strengths optimally. Neuroscientific research has made great progress in understanding the various brain regions and their corresponding functions. Nevertheless, most models are based on large-scale simulations that concentrate on reproducing connectivity and statistical neural activity. This leaves a gap in applying different paradigms to validate brain mechanisms and learning principles and to develop functional models for controlling robots. One promising paradigm is event-based computation with spiking neural networks (SNNs). SNNs focus on the biological aspects of neurons and replicate their mode of operation. They are designed for spike-based communication and enable the investigation of brain mechanisms for learning via neural plasticity. Spike-based communication exploits highly parallel hardware optimizations through neuromorphic chips, which allow low energy consumption and fast local operations. This thesis presents several SNNs for performing motion control for manipulation and grasping tasks with a robot arm and an anthropomorphic hand. They are based on biologically inspired functional models of the human brain. A motor primitive is mapped onto the robot kinematics in a parametric way, using an activation parameter and a mapping function. The topology of the SNN mirrors the kinematic structure of the robot. The robot is controlled via the Joint Position Interface. To model complex movements and behaviors, the primitives are arranged in several layers of a hierarchy. This allows primitives to be combined and parameterized, and simple primitives to be reused for different movements. There are different activation mechanisms for the parameter that drives a motor primitive: voluntary, rhythmic, and reflex-like. In addition, new motor primitives can be learned either online or offline. A movement can either be modeled as a function or learned by imitating human execution. The SNNs can be integrated into other control systems or combined with other SNNs. Computing the inverse kinematics or validating configurations for planning is not required, since the motor-primitive space contains only feasible movements and no invalid configurations.
The following scenarios were considered for the evaluation: pointing at different targets, following a trajectory, executing rhythmic or repetitive movements, executing reflexes, and grasping simple objects. In addition, the arm and hand models are combined and extended to model multi-legged locomotion as a further use case of the motor-primitive control architecture. As applications for an arm (3 DoFs), the generation of pointing movements and perception-driven reaching for targets were modeled. To generate pointing movements, a base primitive that points at the center of a plane was combined offline with four corrective primitives that produce a new trajectory. For perception-driven reaching of a target, three primitives are combined online using a target signal. As applications for a five-finger hand (9 DoFs), individual finger activations and soft grasping with compliant control were modeled. Grasping movements are modeled with motor primitives in a hierarchy, where finger primitives represent the synergies between joints and hand primitives represent the different affordances for coordinating the fingers. For each finger, two reflexes are added: one to activate or stop the movement on contact, and one to activate the compliant control. This approach offers enormous flexibility, since motor primitives can be reused, parameterized, and combined in different ways. New primitives can be defined or learned. An important aspect of this work is that, in contrast to deep learning and end-to-end learning methods, no extensive datasets are needed to learn new movements. By using motor primitives, the same modeling approach can be applied to different robots by redefining the mapping of the primitives onto the robot kinematics. The experiments show that motor primitives can simplify motor control for manipulation, grasping, and locomotion. SNNs for robotics applications are still a matter of debate: there is no state-of-the-art learning algorithm, there is no framework comparable to those for deep learning, and parameterizing SNNs is an art. Nevertheless, robotics applications such as manipulation and grasping can provide benchmarks and realistic scenarios for validating neuroscientific models. Moreover, robotics can exploit the possibilities of event-based computation with SNNs and neuromorphic hardware. A physical replica of a biological system, implemented entirely with SNNs and evaluated on real robots, can provide new insights into how humans perform motor control and sensory processing and how these can be applied in robotics. Model-free motion controllers inspired by the mechanisms of the human brain can improve robot programming by making control more adaptive and flexible.
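
The parametric mapping described above can be sketched as follows. The class names, the single activation parameter per primitive, and the weighted joint-space mapping are our illustrative assumptions, not the thesis's actual SNN implementation (which runs on spiking neurons rather than plain Python):

```python
import numpy as np

class MotorPrimitive:
    """One primitive: an activation in [0, 1] mapped to a joint-space contribution."""
    def __init__(self, joint_weights):
        self.w = np.asarray(joint_weights)   # mapping function onto robot kinematics

    def output(self, activation):
        return activation * self.w           # scaled joint contribution

class PrimitiveLayer:
    """One layer of the hierarchy: combines primitives into a joint command."""
    def __init__(self, primitives):
        self.primitives = primitives

    def command(self, activations):
        return sum(p.output(a) for p, a in zip(self.primitives, activations))

# Pointing example in the spirit of the 3-DoF arm evaluation: a base primitive
# plus a corrective primitive, combined through their activations.
base = MotorPrimitive([0.5, 0.3, 0.1])        # points at the plane's center
correct_up = MotorPrimitive([0.0, 0.2, 0.1])  # corrective primitive (hypothetical values)
arm = PrimitiveLayer([base, correct_up])
q_target = arm.command([1.0, 0.4])            # joint positions for the position interface
```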

    Collection and conceptualization of plan-based robot activity experiences for long-term skill improvement

    Robot learning is a prominent research direction in intelligent robotics. Robotics involves dealing with the integration of multiple technologies, such as sensing, planning, acting, and learning. In robot learning, the long-term goal is to develop robots that learn to perform tasks and continuously improve their knowledge and skills through observation and exploration of the environment and interaction with users. While significant research has been performed in the area of learning motor behavior primitives, the topic of learning high-level representations of activities, and of classes of activities that decompose into sequences of actions, has not been sufficiently addressed. Learning at the task level is key to increasing the robots’ autonomy and flexibility. High-level task knowledge is essential for intelligent robotics, since it makes robot programs less dependent on the platform and eases knowledge exchange between robots with different kinematics. The goal of this thesis is to contribute to the development of cognitive robotic capabilities, including supervised experience acquisition through human-robot interaction, high-level task learning from the acquired experiences, and task planning using the acquired task knowledge. A framework containing the required cognitive functions for learning and reproduction of high-level aspects of experiences is proposed. In particular, we propose and formalize the notion of Experience-Based Planning Domains (EBPDs) for long-term learning and planning. A human-robot interaction interface is used to provide the robot with step-by-step instructions on how to perform tasks. Approaches to recording plan-based robot activity experiences, including relevant perceptions of the environment and actions taken by the robot, are presented. A conceptualization methodology is presented for acquiring task knowledge in the form of activity schemata from experiences. The conceptualization approach combines different techniques, including deductive generalization, different forms of abstraction, and feature extraction, and covers loop detection, scope inference, and goal inference. Problem solving in EBPDs is achieved using a two-layer problem solver comprising an abstract planner, which derives an abstract solution for a given task problem by applying a learned activity schema, and a concrete planner, which refines the abstract solution towards a concrete solution. The architecture and the learning and planning methods are applied and evaluated in several real and simulated world scenarios. Finally, the developed learning methods are compared, and conditions where each of them has better applicability are discussed.
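
The two-layer problem solver can be sketched roughly as below. The class and function names, the schema representation, and the table-lookup "planners" are hypothetical simplifications of the thesis's EBPD machinery, where both layers are real planners rather than lookups:

```python
from dataclasses import dataclass, field

@dataclass
class ActivitySchema:
    """Learned abstract recipe for a class of tasks (simplified stand-in)."""
    name: str
    abstract_steps: list  # each step is a subgoal label, e.g. 'holding(cup)'

@dataclass
class EBPD:
    """Experience-Based Planning Domain: schemata plus a concrete action library."""
    schemata: dict = field(default_factory=dict)     # task class -> ActivitySchema
    refinements: dict = field(default_factory=dict)  # subgoal -> concrete actions

def solve(task_class, ebpd):
    """Two-layer solving: abstract plan from a learned schema, then refinement."""
    schema = ebpd.schemata[task_class]               # abstract planner (a lookup here)
    concrete_plan = []
    for subgoal in schema.abstract_steps:
        concrete_plan += ebpd.refinements[subgoal]   # concrete planner (a table here)
    return concrete_plan

# Toy usage: a 'serve_coffee' schema refined into primitive robot actions.
domain = EBPD(
    schemata={'serve_coffee': ActivitySchema('serve_coffee',
                                             ['holding(cup)', 'delivered(cup)'])},
    refinements={'holding(cup)': ['move_to(cup)', 'grasp(cup)'],
                 'delivered(cup)': ['move_to(table)', 'place(cup)']},
)
print(solve('serve_coffee', domain))
```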

    Embodied Gesture Processing: Motor-Based Integration of Perception and Action in Social Artificial Agents

    A close coupling of perception and action processes is assumed to play an important role in basic capabilities of social interaction, such as guiding attention and observation of others’ behavior, coordinating the form and functions of behavior, or grounding the understanding of others’ behavior in one’s own experiences. In the attempt to endow artificial embodied agents with similar abilities, we present a probabilistic model for the integration of perception and generation of hand-arm gestures via a hierarchy of shared motor representations, allowing for combined bottom-up and top-down processing. Results from human-agent interactions are reported, demonstrating the model’s performance in learning, observation, imitation, and generation of gestures.
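
One way to picture such a shared hierarchy is the toy model below. The structure (a gesture-level prior over motor-segment likelihoods, used in both directions) follows the abstract's description, while all names, levels, and probabilities are our invention for illustration:

```python
import numpy as np

# Hypothetical two-level hierarchy: gestures at the top, motor segments below.
gestures = ['wave', 'point']
segments = ['raise_arm', 'swing_hand', 'extend_finger']

prior = np.array([0.5, 0.5])            # P(gesture)
likelihood = np.array([                 # P(segment | gesture), rows = gestures
    [0.5, 0.45, 0.05],                  # 'wave' mostly raises and swings
    [0.5, 0.05, 0.45],                  # 'point' mostly raises and extends
])

def recognize(observed_segment):
    """Bottom-up: update the belief over gestures from an observed motor segment."""
    j = segments.index(observed_segment)
    posterior = prior * likelihood[:, j]
    return posterior / posterior.sum()

def generate(gesture):
    """Top-down: sample a motor segment from the same shared representation."""
    i = gestures.index(gesture)
    return np.random.choice(segments, p=likelihood[i])

print(recognize('swing_hand'))   # belief shifts toward 'wave'
print(generate('point'))         # e.g. 'raise_arm' or 'extend_finger'
```

The point of the toy is that recognition and generation read the same table, which is the sense in which the motor representation is "shared" between perception and action.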

    Passive Motion Paradigm: An Alternative to Optimal Control

    In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating the neural control of movement and motor cognition in two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering design techniques for control systems, quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain the desired behavior (and predictions). We propose an alternative, “softer” approach, the passive motion paradigm (PMP), which we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion or cost function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution but also provides the self with information on the feasibility, consequences, understanding, and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at the same time a review of the PMP rationale as a computational theory and a perspective on how to develop it for designing better cognitive architectures.
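
The attractor dynamics can be written compactly. The following is a standard statement of the PMP relaxation loop as we read it from the abstract, offered as a sketch rather than the paper's exact formulation (symbols: x the end-effector position, q the joint angles, J the Jacobian, K a virtual stiffness, A a joint-space admittance):

```latex
\begin{aligned}
F       &= K\,(x_{\text{goal}} - x)  && \text{force field induced by the goal} \\
\tau    &= J^{T}(q)\,F               && \text{mapping to joint torques (no inversion of } J\text{)} \\
\dot{q} &= A\,\tau                   && \text{passive relaxation of the body schema} \\
\dot{x} &= J(q)\,\dot{q}             && \text{resulting end-effector motion}
\end{aligned}
```

Because the loop only ever uses the transpose of J, redundancy is resolved by the relaxation dynamics themselves rather than by an explicit pseudo-inverse or cost minimization, which is the sense in which the DoFs problem is solved "at runtime."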

    Unified Behavior Framework for Reactive Robot Control

    Behavior-based systems form the basis of autonomous control for many robots. In this article, we demonstrate that a single software framework can be used to represent many existing behavior-based approaches. The unified behavior framework presented here incorporates the critical ideas and concepts of existing reactive controllers. Additionally, the modular design of the behavior framework: (1) simplifies development and testing; (2) promotes the reuse of code; (3) supports designs that scale easily into large hierarchies while restricting code complexity; and (4) allows behavior-based system developers the freedom to use the behavior system they feel will function best. When a hybrid or three-layer control architecture includes the unified behavior framework, a common interface is shared by all behaviors, leaving the higher-order planning and sequencing elements free to interchange behaviors during execution to achieve high-level goals and plans. The framework's ability to compose structures from independent elements encourages experimentation and reuse while isolating the scope of troubleshooting to the behavior composition. The ability to use elemental components to build and evaluate behavior structures is demonstrated using the Robocode simulation environment. Additionally, the ability of a reactive controller to change its active behavior during execution is shown in a goal-seeking robot implementation.
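
The common behavior interface could look roughly like this. The interface, class names, and controller are our invention to illustrate hot-swapping behaviors behind one contract, not the article's actual API:

```python
from abc import ABC, abstractmethod

class Behavior(ABC):
    """Common interface shared by all behaviors (hypothetical sketch)."""
    @abstractmethod
    def act(self, sensors: dict) -> dict:
        """Map the current sensor snapshot to an actuator command."""

class Wander(Behavior):
    def act(self, sensors):
        return {'left_motor': 0.5, 'right_motor': 0.4}   # drive in a gentle arc

class AvoidObstacle(Behavior):
    def act(self, sensors):
        turn = -0.5 if sensors.get('obstacle_left') else 0.5
        return {'left_motor': turn, 'right_motor': -turn}

class Controller:
    """Higher layer: swaps the active behavior at runtime to pursue its goals."""
    def __init__(self, behavior: Behavior):
        self.behavior = behavior

    def switch(self, behavior: Behavior):
        self.behavior = behavior                 # interchange behaviors mid-execution

    def step(self, sensors):
        return self.behavior.act(sensors)

ctrl = Controller(Wander())
print(ctrl.step({'obstacle_left': False}))
ctrl.switch(AvoidObstacle())                     # planning layer changes the behavior
print(ctrl.step({'obstacle_left': True}))
```

Because every behavior satisfies the same `act` contract, the planning layer never needs behavior-specific code to exchange one controller for another, which is the framework's central claim.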