
    Combining Sensors and Multibody Models for Applications in Vehicles, Machines, Robots and Humans

    Get PDF
    The combination of physical sensors and computational models to provide additional information about system states, inputs and/or parameters, in what is known as virtual sensing, is becoming increasingly popular in many sectors, such as the automotive, aeronautics, aerospace, railway, machinery, robotics and human biomechanics sectors. While in many cases control-oriented models, which are generally simple, are the best choice, multibody models, which can be much more detailed, may be better suited to some applications, such as the design stage of a new product.
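The virtual-sensing idea can be made concrete with a small, generic example. The sketch below is not taken from the paper; the mass-spring-damper model, parameter values, and names are illustrative assumptions. It fuses a simple dynamic model with a noisy position sensor in a linear Kalman filter to estimate the unmeasured velocity, which is exactly the kind of quantity a virtual sensor provides.

```python
# Minimal virtual-sensing sketch (illustrative only): a linear Kalman filter
# combines a mass-spring-damper model with a noisy position measurement to
# estimate the unmeasured velocity state.
import numpy as np

dt, m, k, c = 0.01, 1.0, 20.0, 0.5           # time step and model parameters (assumed)
A = np.array([[1.0, dt],
              [-k / m * dt, 1.0 - c / m * dt]])   # discretised state transition
H = np.array([[1.0, 0.0]])                   # only position is measured
Q = 1e-4 * np.eye(2)                         # process noise covariance (assumed)
R = np.array([[1e-2]])                       # measurement noise covariance (assumed)

x_est = np.zeros(2)                          # estimated [position, velocity]
P = np.eye(2)                                # estimate covariance

def virtual_sensor_step(z):
    """Update the state estimate with one noisy position measurement z."""
    global x_est, P
    # Predict with the model
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # Correct with the physical sensor
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_est = x_pred + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P_pred
    return x_est                             # position and *estimated* velocity

# Example: feed simulated noisy position readings of a decaying oscillation.
for t in np.arange(0.0, 1.0, dt):
    z = np.cos(5.0 * t) * np.exp(-0.25 * t) + np.random.normal(0.0, 0.1)
    pos, vel = virtual_sensor_step(z)
```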

    Gaze control modelling and robotic implementation

    Get PDF
    Although we have the impression that we can process the entire visual field in a single fixation, in reality we would be unable to fully process the information outside of foveal vision if we were unable to move our eyes. Because of acuity limitations in the retina, eye movements are necessary for processing the details of the visual array. Our ability to discriminate fine detail drops off markedly outside of the fovea, in the parafovea (extending out to about 5 degrees on either side of fixation) and in the periphery (everything beyond the parafovea). While we are reading, searching a visual array for a target, or simply looking at a new scene, our eyes move every 200-350 ms. These eye movements serve to move the fovea (the high-resolution part of the retina encompassing 2 degrees at the centre of the visual field) to an area of interest in order to process it in greater detail. During the actual eye movement (or saccade), vision is suppressed and new information is acquired only during the fixation (the period of time when the eyes remain relatively still). While it is true that we can move our attention independently of where the eyes are fixated, this does not seem to be the case in everyday viewing. The separation between attention and fixation can be attained in very simple tasks; however, in tasks like reading, visual search, and scene perception, covert attention and overt attention (the exact eye location) are tightly linked. Because eye movements are essentially motor movements, it takes time to plan and execute a saccade, and the end-point is pre-selected before the beginning of the movement. There is considerable evidence that the nature of the task influences eye movements: depending on the task, there is considerable variability both in terms of fixation durations and saccade lengths.
It is possible to outline five separate movement systems that put the fovea on a target and keep it there. Each of these movement systems shares the same effector pathway: the three bilateral groups of oculomotor neurons in the brain stem. These five systems include three that keep the fovea on a visual target in the environment and two that stabilize the eye during head movement. Saccadic eye movements shift the fovea rapidly to a visual target in the periphery. Smooth pursuit movements keep the image of a moving target on the fovea. Vergence movements move the eyes in opposite directions so that the image is positioned on both foveae. Vestibulo-ocular movements hold images still on the retina during brief head movements and are driven by signals from the vestibular system. Optokinetic movements hold images still during sustained head rotation and are driven by visual stimuli. All eye movements but vergence movements are conjugate: each eye moves the same amount in the same direction. Vergence movements are disconjugate: the eyes move in different directions and sometimes by different amounts. Finally, there are times when the eye must stay still in the orbit so that it can examine a stationary object. Thus, a sixth system, the fixation system, holds the eye still during intent gaze. This requires active suppression of eye movement. Vision is most accurate when the eyes are still: when we look at an object of interest, a neural system of fixation actively prevents the eyes from moving. The fixation system is not as active when we are doing something that does not require vision, for example mental arithmetic. Our eyes explore the world in a series of active fixations connected by saccades.
The purpose of the saccade is to move the eyes as quickly as possible. Saccades are highly stereotyped; they have a standard waveform with a single smooth increase and decrease of eye velocity. Saccades are extremely fast, occurring within a fraction of a second, at speeds up to 900°/s. Only the distance of the target from the fovea determines the velocity of a saccadic eye movement. We can change the amplitude and direction of our saccades voluntarily, but we cannot change their velocities. Ordinarily there is no time for visual feedback to modify the course of the saccade; corrections to the direction of movement are made in successive saccades. Only fatigue, drugs, or pathological states can slow saccades. Accurate saccades can be made not only to visual targets but also to sounds, tactile stimuli, memories of locations in space, and even verbal commands (“look left”). The smooth pursuit system keeps the image of a moving target on the fovea by calculating how fast the target is moving and moving the eyes accordingly. The system requires a moving stimulus in order to calculate the proper eye velocity; thus, a verbal command or an imagined stimulus cannot produce smooth pursuit. Smooth pursuit movements have a maximum velocity of about 100°/s, much slower than saccades. The saccadic and smooth pursuit systems have very different central control systems. A coherent integration of these different eye movements, together with the other movements, essentially corresponds to a gating-like effect on the brain areas involved: gaze control can be divided between a system that decides which action should be enabled and which should be inhibited, and another that improves the performance of the action while it is executed. It follows that the underlying guiding principle of gaze control is the kind of stimuli presented to the system, which in turn links gaze to the task that is going to be executed. This thesis aims at validating the strong relation between actions and gaze. In the first part, a gaze controller has been studied and implemented on a robotic platform in order to understand the specific features of prediction and learning shown by the biological system. The integration of eye movements raises the problem of which action should be selected when a new stimulus is presented. The action selection problem is solved by the basal ganglia, brain structures that react to the different salience values of the environment. In the second part of this work, the gaze behaviour has been studied during a locomotion task. The final objective is to show how different tasks, such as locomotion, determine the salience values that drive the gaze.
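As a rough, hedged illustration of the saccade/pursuit distinction summarized above (and not the gaze controller developed in the thesis), the toy sketch below switches between a fast, ballistic saccade for large retinal errors and velocity-matching smooth pursuit for small ones; the threshold, gains, and the simplified amplitude-velocity relation are assumptions made only for this example.

```python
# Toy gaze controller (illustrative assumption, not the thesis implementation):
# large retinal errors trigger a ballistic saccade whose peak velocity grows
# with amplitude (a crude "main sequence"); small errors are tracked by
# smooth pursuit whose velocity is capped at about 100 deg/s.
import numpy as np

SACCADE_THRESHOLD_DEG = 2.0      # error beyond which a saccade is triggered (assumed)
PURSUIT_MAX_DEG_S = 100.0        # approximate smooth-pursuit velocity limit
SACCADE_MAX_DEG_S = 900.0        # approximate peak saccadic velocity

def saccade_peak_velocity(amplitude_deg):
    """Crude main-sequence relation: peak velocity saturates with amplitude."""
    return SACCADE_MAX_DEG_S * (1.0 - np.exp(-amplitude_deg / 15.0))

def gaze_step(eye_deg, target_deg, target_vel_deg_s, dt):
    """Advance the eye position by one control step of length dt seconds."""
    error = target_deg - eye_deg
    if abs(error) > SACCADE_THRESHOLD_DEG:
        # Ballistic saccade: move toward the target at the main-sequence velocity,
        # ignoring visual feedback during the movement itself.
        v = np.sign(error) * saccade_peak_velocity(abs(error))
        step = np.clip(v * dt, -abs(error), abs(error))
        return eye_deg + step, "saccade"
    # Smooth pursuit: match target velocity plus a small error-correcting term.
    v = np.clip(target_vel_deg_s + 5.0 * error, -PURSUIT_MAX_DEG_S, PURSUIT_MAX_DEG_S)
    return eye_deg + v * dt, "pursuit"

# Example: track a target moving at 20 deg/s that starts 10 degrees away.
eye, dt = 0.0, 0.005
for i in range(400):
    target = -10.0 + 20.0 * i * dt
    eye, mode = gaze_step(eye, target, 20.0, dt)
```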

    Computational Methods for Cognitive and Cooperative Robotics

    Get PDF
    In recent decades, design methods in control engineering have made substantial progress in the areas of robotics and computer animation, and nowadays these methods incorporate the newest developments in machine learning and artificial intelligence. However, the flexible and online-adaptive combination of motor behaviors remains challenging for human-like animation and for humanoid robotics. In this context, biologically motivated methods for the analysis and re-synthesis of human motor programs provide new insights into, and models for, anticipatory motion synthesis. This thesis presents the author’s achievements in the areas of cognitive and developmental robotics, cooperative and humanoid robotics, and intelligent and machine learning methods in computer graphics. The first part of the thesis, the chapter “Goal-directed Imitation for Robots”, considers imitation learning in cognitive and developmental robotics. The work presented here details the author’s progress in the development of hierarchical motion recognition and planning inspired by recent discoveries of the functions of mirror-neuron cortical circuits in primates. The overall architecture is capable of ‘learning for imitation’ and ‘learning by imitation’. The complete system includes a low-level, real-time-capable path planning subsystem for obstacle avoidance during arm reaching. The learning-based path planning subsystem is universal for all types of anthropomorphic robot arms and is capable of knowledge transfer at the level of individual motor acts. Next, the problems of learning and synthesis of motor synergies, the spatial and spatio-temporal combinations of motor features in sequential multi-action behavior, and the problems of task-related action transitions are considered in the second part of the thesis, “Kinematic Motion Synthesis for Computer Graphics and Robotics”. In this part, a new approach to modeling complex full-body human actions by mixtures of time-shift-invariant motor primitives is presented. The online-capable full-body motion generation architecture based on dynamic movement primitives driving the time-shift-invariant motor synergies was implemented as an online-reactive, adaptive motion synthesis system for computer graphics and robotics applications. The last chapter of the thesis, entitled “Contraction Theory and Self-organized Scenarios in Computer Graphics and Robotics”, is dedicated to optimal control strategies in multi-agent scenarios of large crowds of agents expressing highly nonlinear behaviors. This last part presents new mathematical tools for the stability analysis and synthesis of multi-agent cooperative scenarios.
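For readers unfamiliar with dynamic movement primitives, which the abstract above builds on, the following minimal one-dimensional sketch shows a generic textbook-style DMP rather than the full-body architecture described in the thesis; the gains, basis functions, and the randomly initialized forcing weights are illustrative assumptions standing in for learned values.

```python
# Minimal one-dimensional discrete dynamic movement primitive (illustrative).
# A critically damped point attractor is modulated by a forcing term driven
# by a phase variable that decays from 1 to 0.
import numpy as np

alpha_y, beta_y, alpha_x = 25.0, 25.0 / 4.0, 3.0   # common gain choices (assumed)
n_basis = 10
centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))     # basis centres in phase
widths = 1.0 / np.diff(centers, append=centers[-1] / 2.0) ** 2  # basis widths
weights = np.random.normal(0.0, 10.0, n_basis)      # stand-in for learned weights

def dmp_rollout(y0, goal, tau=1.0, dt=0.01):
    """Integrate the DMP from y0 toward goal over roughly tau seconds."""
    y, yd, x = y0, 0.0, 1.0
    traj = []
    for _ in range(int(tau / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)              # RBF activations
        forcing = (psi @ weights) / (psi.sum() + 1e-10) * x * (goal - y0)
        ydd = (alpha_y * (beta_y * (goal - y) - tau * yd) + forcing) / tau ** 2
        yd += ydd * dt
        y += yd * dt
        x += -alpha_x * x / tau * dt                            # canonical system
        traj.append(y)
    return np.array(traj)

trajectory = dmp_rollout(y0=0.0, goal=1.0)
```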

    Fractals in the Nervous System: conceptual Implications for Theoretical Neuroscience

    Get PDF
    This essay is presented with two principal objectives in mind: first, to document the prevalence of fractals at all levels of the nervous system, giving credence to the notion of their functional relevance; and second, to draw attention to the still unresolved issues of the detailed relationships among power-law scaling, self-similarity, and self-organized criticality. As regards criticality, I will document that it has become a pivotal reference point in neurodynamics. Furthermore, I will emphasize the not yet fully appreciated significance of allometric control processes. For dynamic fractals, I will assemble reasons for attributing to them the capacity to adapt task execution to contextual changes across a range of scales. The final section consists of general reflections on the implications of the reviewed data, and identifies what appear to be issues of fundamental importance for future research in the rapidly evolving topic of this review.
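As a small, hedged illustration of the power-law scaling that the essay revolves around (not an analysis taken from it), the snippet below estimates a scaling exponent from synthetic heavy-tailed data by a log-log least-squares fit; the data and the exponent are invented for the example, and real criticality analyses require more careful statistics.

```python
# Illustrative power-law exponent estimate via a log-log least-squares fit.
# Synthetic "avalanche sizes" are generated with a known exponent so the
# recovered slope can be checked; serious analyses should use maximum-likelihood
# fits and goodness-of-fit tests rather than a simple regression.
import numpy as np

rng = np.random.default_rng(0)
true_exponent = 1.5
# Sample sizes from a Pareto-like distribution: P(s) ~ s^(-true_exponent), s >= 1
u = rng.random(50_000)
sizes = (1.0 - u) ** (-1.0 / (true_exponent - 1.0))    # inverse-CDF sampling

# Build a log-binned density histogram and fit a line in log-log coordinates.
bins = np.logspace(0.0, np.log10(sizes.max()), 40)
counts, edges = np.histogram(sizes, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])              # geometric bin centres
mask = counts > 0
slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
estimated_exponent = -slope                            # should be near true_exponent
```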

    Capturing the Continuous Complexity of Behavior in C. elegans

    Full text link
    Animal behavior is often quantified through subjective, incomplete variables that may mask essential dynamics. Here, we develop a behavioral state space in which the full instantaneous state is smoothly unfolded as a combination of short-time posture dynamics. Our technique is tailored to multivariate observations and extends previous reconstructions through the use of maximal prediction. Applied to high-resolution video recordings of the roundworm C. elegans, we discover a low-dimensional state space dominated by three sets of cyclic trajectories corresponding to the worm’s basic stereotyped motifs: forward, backward, and turning locomotion. In contrast to this broad stereotypy, we find variability in the presence of locally unstable dynamics, and this unpredictability shows signatures of deterministic chaos: a collection of unstable periodic orbits together with a positive maximal Lyapunov exponent. The full Lyapunov spectrum is symmetric, with positive, chaotic exponents driving variability balanced by negative, dissipative exponents driving stereotypy. The symmetry is indicative of damped, driven Hamiltonian dynamics underlying the worm’s movement control. Comment: 26 pages, 14 figures
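The delay embedding and maximal Lyapunov exponent mentioned in the abstract can be illustrated with a generic, hedged sketch; the code below applies a Rosenstein-style nearest-neighbour divergence estimate to the chaotic logistic map rather than to the worm data, and the embedding and fitting parameters are assumptions chosen for the toy example.

```python
# Hedged sketch of a maximal Lyapunov exponent estimate on a delay-embedded
# scalar time series (the logistic map at r = 4, whose exponent is ln 2 per step).
# This is a generic Rosenstein-style method, not the paper's analysis pipeline.
import numpy as np

# Chaotic scalar series from the logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(2000)
x[0] = 0.4
for t in range(len(x) - 1):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

# Delay embedding into vectors [x_t, x_{t+1}, x_{t+2}].
dim, delay = 3, 1
n = len(x) - (dim - 1) * delay
emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

# Nearest neighbour of each point, excluding temporally close points.
min_sep, horizon = 10, 20
neighbors = np.empty(n, dtype=int)
for i in range(n):
    d = np.linalg.norm(emb - emb[i], axis=1)
    lo, hi = max(0, i - min_sep), min(n, i + min_sep + 1)
    d[lo:hi] = np.inf
    neighbors[i] = np.argmin(d)

# Average log-distance between each point and its neighbour after k steps.
avg_log_div = np.zeros(horizon)
for k in range(horizon):
    log_d, count = 0.0, 0
    for i in range(n - horizon):
        j = neighbors[i]
        if j < n - horizon:
            dist = np.linalg.norm(emb[i + k] - emb[j + k])
            if dist > 0.0:
                log_d += np.log(dist)
                count += 1
    avg_log_div[k] = log_d / max(count, 1)

# The slope of the initial, roughly linear part of the divergence curve
# approximates the maximal Lyapunov exponent (on the order of ln 2 ~ 0.69 here).
fit_len = 8
lyap_estimate, _ = np.polyfit(np.arange(fit_len), avg_log_div[:fit_len], 1)
```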

    Motion Planning and Feedback Control of Simulated Robots in Multi-Contact Scenarios

    Get PDF
    This dissertation presents an optimal-control-based architecture for the motion planning and feedback control of simulated robots in multi-contact scenarios. Motion planning and control are fundamental building blocks for creating truly autonomous robots. While enormous progress has been made in these areas for fixed-base manipulators and wheeled robots in recent years, motion planning and control for robots with arms and legs remains an unsolved problem, demonstrating the need for more efficient and robust algorithms. In this context, this dissertation proposes an architecture that addresses two main challenges, namely the efficient planning of contact sequences and whole-body motions for floating-base robots, and their successful execution with feedback control strategies able to cope with environmental uncertainties.
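As a generic, hedged illustration of feedback control around a planned motion (not the architecture proposed in the dissertation), the sketch below stabilizes a planned double-integrator trajectory with a discrete-time LQR gain obtained by Riccati iteration; the dynamics, cost weights, and disturbance are illustrative assumptions.

```python
# Illustrative tracking controller: a discrete-time LQR gain (computed by
# iterating the Riccati recursion to convergence) stabilizes a planned
# double-integrator trajectory against a constant disturbance. This is a
# generic textbook scheme, not the dissertation's controller.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])        # double-integrator dynamics (assumed)
B = np.array([[0.5 * dt ** 2], [dt]])
Q = np.diag([100.0, 1.0])                    # state tracking cost (assumed)
R = np.array([[0.1]])                        # control effort cost (assumed)

# Solve the infinite-horizon discrete Riccati equation by fixed-point iteration.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # feedback gain, shape (1, 2)

# Track a planned constant-velocity reference under a small disturbance force.
x = np.array([0.2, 0.0])                     # start offset from the plan
for i in range(1000):
    t = i * dt
    x_ref = np.array([0.5 * t, 0.5])         # planned position and velocity
    u = -K @ (x - x_ref)                     # feedback on the tracking error
    x = A @ x + (B @ u).ravel() + np.array([0.0, -0.02 * dt])  # disturbance
```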
