
    Goal-Directed Reasoning and Cooperation in Robots in Shared Workspaces: an Internal Simulation Based Neural Framework

    From social dining in households to product assembly on manufacturing lines, goal-directed reasoning and cooperation with other agents in shared workspaces is a ubiquitous aspect of our day-to-day activities. Critical for such behaviours is the ability to spontaneously anticipate what is doable by oneself as well as by the interacting partner, based on the evolving environmental context, and to exploit such information to engage in goal-oriented action sequences. In the setting of an industrial task where two robots jointly assemble objects in a shared workspace, we describe a bioinspired neural architecture for goal-directed action planning based on coupled interactions between multiple internal models, primarily of the robot’s body and its peripersonal space. The internal models (of each robot’s body and peripersonal space) are learnt jointly through a process of sensorimotor exploration and then employed in a range of anticipations related to the feasibility and consequences of potential actions of two industrial robots in the context of a joint goal. The ensuing behaviours are demonstrated in a real-world industrial scenario where two robots assemble industrial fuse-boxes from multiple constituent objects (fuses, fuse-stands) scattered randomly in their workspace. In a spatially unstructured and temporally evolving assembly scenario, the robots employ reward-based dynamics to plan and anticipate which objects to act on at what time instances so as to complete as many assemblies as possible. The shared spatial setting fundamentally requires planning collision-free trajectories and avoiding potential collisions between the robots. Furthermore, an interesting scenario where the assembly goal is not realizable by either robot individually, but only if they meaningfully cooperate, is used to demonstrate the interplay between perception, simulation of multiple internal models, and the resulting complementary goal-directed actions of both robots. Finally, the proposed neural framework is benchmarked against a typical engineered solution to evaluate its performance in the assembly task. The framework provides a computational outlook on emerging results from neuroscience related to the learning and use of body schema and peripersonal space for embodied simulation of action and prediction. While the experiments reported here engage the architecture in a specific complex planning task, the internal-model-based framework is domain-agnostic, facilitating portability to several other tasks and platforms.
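    To give a concrete flavour of the reward-based object selection described above, the following is a minimal sketch (plain Python, with hypothetical names and values, not the authors' implementation): each robot filters candidate objects through a stand-in for an internal-model reachability prediction and then greedily claims the unclaimed object with the highest expected reward.

```python
# Minimal illustrative sketch: greedy, reward-driven selection of which object
# each robot acts on next, with a reachability check standing in for the
# internal-model feasibility anticipation described in the abstract.
import numpy as np

def reachable(robot_base, obj_pos, reach_radius=0.8):
    """Stand-in for an internal-model simulation: can the robot reach the object?"""
    return np.linalg.norm(np.asarray(obj_pos) - np.asarray(robot_base)) <= reach_radius

def expected_reward(robot_base, obj_pos, goal_value=1.0, time_cost=0.5):
    """Toy reward: value of completing the sub-assembly minus a distance-based time cost."""
    dist = np.linalg.norm(np.asarray(obj_pos) - np.asarray(robot_base))
    return goal_value - time_cost * dist

def assign_objects(robot_bases, objects):
    """Greedy assignment: each robot picks the feasible, unclaimed object with highest reward."""
    claimed, plan = set(), {}
    for name, base in robot_bases.items():
        candidates = [(expected_reward(base, pos), obj_id)
                      for obj_id, pos in objects.items()
                      if obj_id not in claimed and reachable(base, pos)]
        if candidates:
            _, best = max(candidates)
            plan[name] = best
            claimed.add(best)
    return plan

robots = {"robot_A": (0.0, 0.0), "robot_B": (1.2, 0.0)}
parts = {"fuse_1": (0.3, 0.2), "fuse_2": (1.0, 0.1), "fuse_stand": (0.6, 0.4)}
print(assign_objects(robots, parts))
```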

    Compositional Reactive Synthesis for Multi-Agent Systems

    With the growing complexity of systems and of the guarantees they are required to provide, the need for automated and formal design approaches that can guarantee the safety and correctness of the designed system is becoming more evident. To this end, an ambitious goal in system design and control is to automatically synthesize the system from a high-level specification given in a formal language such as linear temporal logic. The goal of this dissertation is to investigate and develop the necessary tools and methods for the automated synthesis of controllers from high-level specifications for multi-agent systems. We consider systems where a set of controlled agents react to their environment, which includes other uncontrolled, dynamic, and potentially adversarial agents. We are particularly interested in studying how the existing structure in systems can be exploited to achieve more efficient synthesis algorithms through compositional reasoning. We explore three different frameworks for the compositional synthesis of controllers for multi-agent systems. In the first framework, we decompose the global specification into local ones, refine the local specifications until they become realizable, and show that, under certain conditions, the strategies synthesized for the local specifications guarantee satisfaction of the global specification. In the second framework, we show how parametric and reactive controllers can be specified and synthesized, and how they can be automatically composed to enforce a high-level objective. Finally, in the third framework, we focus on a special but practically useful class of multi-agent systems and show how, by taking advantage of the structure in the system and its objective, we can achieve significantly better scalability and solve problems where centralized synthesis is infeasible.
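    As an illustration of the first framework's idea, a simplified compositional rule (not the dissertation's exact side conditions) captures when local strategies suffice: if the global specification decomposes into local parts that each refer only to the corresponding agent's controllable variables and the shared environment, and each local strategy is winning for its part against every environment behavior, then the composition of the local strategies satisfies the global specification.

```latex
% Simplified compositional rule (illustrative only; the precise conditions
% required in the dissertation may differ).
\[
  \varphi \;=\; \bigwedge_{i=1}^{n} \varphi_i,
  \qquad
  \bigl(\forall i:\ \sigma_i \models \varphi_i\bigr)
  \;\Longrightarrow\;
  \sigma_1 \parallel \cdots \parallel \sigma_n \;\models\; \varphi .
\]
```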

    Passive Motion Paradigm: An Alternative to Optimal Control

    In recent years, optimal control theory (OCT) has emerged as the leading approach for investigating the neural control of movement and motor cognition in two complementary research lines: behavioral neuroscience and humanoid robotics. In both cases, there are general problems that need to be addressed, such as the “degrees of freedom (DoFs) problem,” the common core of production, observation, reasoning, and learning of “actions.” OCT, directly derived from engineering design techniques for control systems, quantifies task goals as “cost functions” and uses the sophisticated formal tools of optimal control to obtain desired behavior (and predictions). We propose an alternative, “softer” approach, the passive motion paradigm (PMP), that we believe is closer to the biomechanics and cybernetics of action. The basic idea is that actions (overt as well as covert) are the consequences of an internal simulation process that “animates” the body schema with the attractor dynamics of force fields induced by the goal and task-specific constraints. This internal simulation offers the brain a way to dynamically link motor redundancy with task-oriented constraints “at runtime,” hence solving the “DoFs problem” without explicit kinematic inversion and cost-function computation. We argue that the function of such computational machinery is not restricted to shaping motor output during action execution but also provides the self with information on the feasibility, consequences, understanding, and meaning of “potential actions.” In this sense, taking into account recent developments in neuroscience (motor imagery, the simulation theory of covert actions, the mirror neuron system) and in embodied robotics, PMP offers a novel framework for understanding motor cognition that goes beyond the engineering control paradigm provided by OCT. The paper is therefore at once a review of the PMP rationale as a computational theory and a perspective on how to develop it for designing better cognitive architectures.
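    To make the attractor-dynamics idea concrete, the sketch below shows a minimal version for a planar two-link arm: a virtual elastic force field pulls the end-effector toward the goal, the field is projected into joint space through the Jacobian transpose, and the joint configuration simply relaxes along the induced flow, with no explicit inverse kinematics or cost minimization. Link lengths, gains, and the integration step are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch of the passive-motion-paradigm idea for a planar 2-link arm:
# an elastic force field attracts the end-effector toward the goal, the field
# is mapped to joint space via the Jacobian transpose, and the joints relax
# toward equilibrium. All numerical values are illustrative assumptions.
import numpy as np

L1, L2 = 0.3, 0.25          # link lengths [m]
K = 20.0                    # stiffness of the virtual force field
A = np.diag([1.0, 1.0])     # joint admittance
dt = 0.01

def forward_kinematics(q):
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

q = np.array([0.3, 0.5])            # initial joint configuration [rad]
goal = np.array([0.35, 0.30])       # target end-effector position [m]

for _ in range(2000):
    x = forward_kinematics(q)
    force = K * (goal - x)          # attractor force field in task space
    torque = jacobian(q).T @ force  # project the field into joint space
    q = q + dt * (A @ torque)       # relax the joints along the induced flow

print("final end-effector position:", forward_kinematics(q), "goal:", goal)
```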

    Synthesizing stream control

    For the management of reactive systems, controllers must coordinate time, data streams, and data transformations, all joined by the high-level perspective of their control flow. This control flow is required to drive the system correctly and continuously, which turns its development into a challenge. The process is error-prone, time-consuming, unintuitive, and costly. An attractive alternative is to synthesize the system instead, where the developer only needs to specify the desired behavior. The synthesis engine then automatically takes care of all the technical details. However, while current algorithms for the synthesis of reactive systems are well suited to handle control, they fail on complex data transformations due to the comparably large data space. Thus, to overcome the challenge of explicitly handling the data, we must separate data and control. We introduce Temporal Stream Logic (TSL), a logic that reasons exclusively about the control of the controller, while treating data and functional transformations as interchangeable black boxes. In TSL it is possible to specify control-flow properties independently of the complexity of the handled data. Furthermore, with TSL at hand, a synthesis engine can check for realizability even without a concrete implementation of the data transformations. We present a modular development framework that first uses synthesis to identify the high-level control flow of a program. If successful, the created control flow is then extended with concrete data transformations in order to be compiled into a final executable. Our results also show that current synthesis approaches cannot immediately replace existing manual development workflows. During the development of a reactive system, the developer may still start with incomplete or faulty specifications that need to be refined after subsequent inspection. In the worst case, constraints are contradictory or miss important assumptions, which leads to unrealizable specifications. In both scenarios, the developer needs additional feedback from the synthesis engine to debug errors and ultimately improve the system specification. To this end, we explore two further improvements. On the one hand, we consider output-sensitive synthesis metrics, which allow synthesizing simple and well-structured solutions that help the developer understand and verify the underlying behavior quickly. On the other hand, we consider an extension with delay, whose requirement is a frequent reason for unrealizability. With both methods at hand, we resolve the aforementioned problems and thereby help the developer effectively create a safe and correct reactive system during the development phase.
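    The following toy sketch (plain Python, not the TSL toolchain; all names are hypothetical) illustrates the separation the abstract describes: the synthesized control-flow skeleton only decides when a memory cell is updated, while the concrete predicate and transformation are supplied later as interchangeable black boxes.

```python
# Illustrative sketch of separating control from data: the skeleton decides
# *when* to update a cell; the data transformations are pluggable black boxes.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ControlSkeleton:
    """A control-flow skeleton: at each step, either apply 'update' to the
    cell or keep its value, depending on the black-box predicate 'trigger'."""
    trigger: Callable[[Any, Any], bool]   # p(input, cell) -> bool
    update: Callable[[Any, Any], Any]     # f(input, cell) -> new cell value

    def run(self, inputs, cell):
        trace = []
        for i in inputs:
            cell = self.update(i, cell) if self.trigger(i, cell) else cell
            trace.append(cell)
        return trace

# Later, concrete data transformations are plugged in to obtain an executable:
counter = ControlSkeleton(trigger=lambda i, c: i == "tick",
                          update=lambda i, c: c + 1)
print(counter.run(["tick", "noise", "tick", "tick"], cell=0))  # [1, 1, 2, 3]
```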

    Consciousness in Cognitive Architectures. A Principled Analysis of RCS, Soar and ACT-R

    This report analyses the applicability of the principles of consciousness developed in the ASys project to three of the most relevant cognitive architectures. This is done in relation to their applicability to building integrated control systems and by studying their support for general mechanisms of real-time consciousness. To analyse these architectures, the ASys Framework is employed. This is a conceptual framework based on an extension, for cognitive autonomous systems, of the General Systems Theory (GST). General qualitative evaluation criteria for cognitive architectures are established based upon: a) requirements for a cognitive architecture, b) the theoretical framework based on the GST, and c) core design principles for integrated cognitive conscious control systems.

    Spatial Path Planning of Static Robots Using Configuration Space Metrics


    Doctor of Philosophy

    This dissertation explores the design and use of an electromagnetic manipulation system that has been optimized for the dipole-field model. This system can be used for noncontact manipulation of adjacent magnetic tools and combines the field-strength control of current electromagnetic systems with the analytical modeling of permanent-magnet systems. To design such a system, it is first necessary to characterize how the shape of the field source affects the shape of the magnetic field. The magnetic field generated by permanent magnets and electromagnets can be modeled, far from the source, using a multipole expansion. The error associated with the multipole expansion is quantified, and it is shown that, as long as the point of interest is at least 1.5 radii of the smallest sphere that can fully contain the magnetic source, the full expansion will have less than 1% error. If only the dipole term, the first term in the expansion, is used, then the error is minimized for cylindrical shapes with a diameter-to-length ratio of 4/3 and, for rectangular bars, a cube. Applying the multipole expansion to electromagnets, an omnidirectional electromagnet, comprising three orthogonal solenoids and a spherical core, is designed that has minimal dipole-field error and equal strength in all directions. Although this magnet can be constructed with any size core, the optimal design contains a spherical core with a diameter that is 60% of the outer dimension of the magnet. The resulting magnet's ability to dexterously control the field at a point is demonstrated by rotating an endoscopic-pill mockup to drive it through a lumen and by rolling a permanent-magnet ball through several trajectories. Dipole fields also apply forces on adjacent magnetized objects. The ability to control these forces is demonstrated by performing position control on an orientation-constrained magnetic float and finally by steering a permanent magnet, which is aligned with the applied dipole field, around a rose curve.
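    For reference, the dipole-field model mentioned above is the standard magnetostatic point-dipole field; the short sketch below evaluates it at a point outside the source region. This is textbook magnetostatics, not the dissertation's code, and the moment, source radius, and evaluation point are illustrative.

```python
# Standard point-dipole field: B(r) = (mu0 / 4 pi) * (3 (m . r_hat) r_hat - m) / |r|^3
# Moment, source radius, and evaluation point are illustrative values.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def dipole_field(m, r):
    """Magnetic flux density B at displacement r from a point dipole with moment m."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / d**3

# Example: a 10 A*m^2 dipole along z, field evaluated 1.5 source radii away
# (cf. the distance threshold for accurate far-field modeling discussed above).
m = np.array([0.0, 0.0, 10.0])
r = np.array([0.0, 0.0, 1.5 * 0.05])   # 1.5 x an assumed 5 cm source radius
print("B =", dipole_field(m, r), "T")
```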

    Predictive Whole-Body Control of Humanoid Robot Locomotion

    Humanoid robots are machines built with an anthropomorphic shape. Despite decades of research into the subject, it is still challenging to tackle the robot locomotion problem from an algorithmic point of view. For example, these machines cannot achieve a constant forward body movement without exploiting contacts with the environment. The reactive forces resulting from the contacts are subject to strong limitations, complicating the design of control laws. As a consequence, generating humanoid motions requires either fully exploiting the mathematical model of the robot in contact with the environment or resorting to approximations of it. This thesis investigates predictive and optimal control techniques for tackling humanoid robot motion tasks. They generate control input values from the system model and objectives, often expressed as a cost function to minimize. In particular, this thesis tackles several aspects of the humanoid robot locomotion problem in order of increasing complexity. First, we consider the single-step push recovery problem; namely, we aim to maintain an upright posture with a single step after a strong external disturbance. Second, we generate and stabilize walking motions. In addition, we adopt predictive techniques to perform more dynamic motions, like large step-ups. The above-mentioned applications make use of different simplifications or assumptions to keep the corresponding motion tasks tractable. Moreover, they first decide the foot placements and only afterward how to maintain balance. We attempt to remove all these simplifications. We model the robot in contact with the environment explicitly, comparing different methods. In addition, we are able to obtain whole-body walking trajectories automatically by specifying only the desired motion velocity and a moving reference on the ground. We exploit the contacts with the walking surface to achieve these objectives while keeping the robot balanced. Experiments are performed on real and simulated humanoid robots, such as the Atlas and iCub humanoid robots.
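    As a flavour of the predictive techniques involved, here is a generic receding-horizon (MPC) sketch on the linear inverted pendulum, a simplified model commonly used for walking. It is not the thesis' whole-body formulation, and the horizon, weights, time step, and reference are illustrative assumptions.

```python
# Generic receding-horizon (MPC) sketch on the linear inverted pendulum.
# Dynamics: com_acc = omega^2 * (com_pos - zmp). Illustrative parameters only.
import numpy as np
from scipy.optimize import minimize

g, h, dt = 9.81, 0.9, 0.05      # gravity [m/s^2], CoM height [m], time step [s]
omega2 = g / h
N = 20                          # prediction horizon (steps)

def rollout(x0, v0, zmp_seq):
    """Integrate the pendulum forward (semi-implicit Euler) for a given ZMP sequence."""
    xs, x, v = [], x0, v0
    for p in zmp_seq:
        v += omega2 * (x - p) * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

def mpc_step(x0, v0, com_ref):
    """One receding-horizon step: optimize the ZMP sequence, return its first value."""
    def cost(zmp_seq):
        xs = rollout(x0, v0, zmp_seq)
        return np.sum((xs - com_ref) ** 2) + 1e-3 * np.sum(np.diff(zmp_seq) ** 2)
    res = minimize(cost, np.full(N, x0), method="L-BFGS-B")
    return res.x[0]

# Track a CoM reference that advances at 0.3 m/s, starting from standstill.
x, v = 0.0, 0.0
for _ in range(60):
    ref = x + 0.3 * dt * np.arange(1, N + 1)   # receding reference ahead of the CoM
    p = mpc_step(x, v, ref)                    # plan, keep only the first ZMP value
    v += omega2 * (x - p) * dt                 # apply it to the "real" pendulum
    x += v * dt
print("final CoM position [m]:", round(x, 3))
```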