
    Biologically-inspired control framework for insect animation.

    Insects such as ants, spiders and cockroaches are common in our world, and virtual representations of them have wide applications in Virtual Reality (VR), video games and films. Compared with the large volume of work on biped animation, the problem of insect animation has been less explored. Insects' small body parts, complex structures and high-speed movements challenge standard motion-synthesis techniques. This thesis addresses this challenge by presenting a framework that efficiently automates the modelling and authoring of insect locomotion. The framework is inspired by two key observations of real insects: a fixed gait pattern and a distributed neural system. At the top level, a Triangle Placement Engine (TPE), modelled on the double-tripod gait pattern of insects, determines the location and orientation of insect foot contacts given various user inputs. At the low level, a Central Pattern Generator (CPG) controller actuates individual joints by mimicking the distributed neural system of insects. A Controller Look-Up Table (CLUT) translates the high-level commands from the TPE into the low-level control parameters of the CPG. In addition, a novel strategy is introduced to determine when legs start to swing. During high-speed movements, the swing mode is triggered when the Centre of Mass (COM) steps outside the Supporting Triangle. However, this simplified mechanism is not sufficient to produce the gait variations seen when insects move at slow speed. The proposed strategy handles the slow-speed case by considering four independent factors: the relative distance to the extreme poses, the stance period, the relative distance to the neighbouring legs, and the load information. This strategy avoids the collisions between legs and the over-stretching of leg joints produced by conventional methods.
The framework developed in this thesis allows sufficient control and fits seamlessly into the existing animation-production pipeline. With this framework, animators can model the motion of a single insect intuitively by specifying the walking path, terrain, speed and other parameters. The success of this framework shows that introducing biological components can synthesise insect animation in a natural and interactive fashion.
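The high-speed swing trigger described above reduces to a point-in-triangle test: a leg begins its swing once the projected COM leaves the Supporting Triangle spanned by the three stance feet. A minimal sketch, with hypothetical function names (this is not the thesis's implementation):

```python
# Hedged sketch of the COM-based swing trigger: fire the swing mode when
# the 2D projection of the Centre of Mass leaves the Supporting Triangle.

def _cross(o, a, b):
    """2D cross product of vectors (o->a) and (o->b)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def com_inside_support_triangle(com, feet):
    """True if the 2D COM projection lies inside (or on the edge of)
    the triangle spanned by the three stance foot contacts `feet`."""
    a, b, c = feet
    d1, d2, d3 = _cross(a, b, com), _cross(b, c, com), _cross(c, a, com)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # all signed areas share a sign

def should_trigger_swing(com, stance_feet):
    # Swing is triggered once the COM steps outside the supporting triangle.
    return not com_inside_support_triangle(com, stance_feet)
```

The same test generalises to any support polygon, but the double-tripod gait guarantees exactly three stance feet, so the triangle case suffices here.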

    Biologically-Plausible Load Feedback from Dynamically Scaled Robotic Model Insect Legs

    Researchers have been studying the mechanisms underlying animal motor control for many years using computational models and biomimetic robots. Since testing some theories in animals can be challenging, this approach enables unique contributions to the field. An example of a system that benefits from this modeling-and-robotics approach is the campaniform sensillum (CS), a kind of sensory organ that detects the loads exerted on an insect's legs. The CS on the leg are found in groups on high-stress areas of the exoskeleton and have a major influence on the adaptation of walking behavior. The challenge in studying these sensors is recording CS output from freely walking insects, which would show what the sensors detect during behavior. To address this difficulty, three dynamically scaled robotic models of the middle leg of the stick insect Carausius morosus (C. morosus) and the fly Drosophila melanogaster (D. melanogaster) were constructed. Two of the robotic legs model C. morosus, scaled to a stick insect at ratios of 15:1 and 25:1. The robotic fly leg is scaled 400:1 to the leg of D. melanogaster. Strain gauges are affixed at locations and orientations analogous to those of the major CS groups. The legs were attached to a linear guide to simulate weight, and they stepped on a treadmill to mimic walking. Using these robotic models, it is possible to shed light on how the nervous system of insects detects load feedback, examine the effect of different tarsus designs on load feedback, and compare the CS measurement capabilities of different insects. Unlike animals, the robotic legs permit any experiment to be conducted while strain data are recorded throughout. I subjected the 15:1 stick leg to a range of stepping conditions, including various static loading, transient loading, and leg slipping. I then processed the strain data through a previously published dynamic computational model of CS discharge.
This demonstrated that the CS can robustly signal increasing forces at the beginning of the stance phase and decreasing forces at the end of the stance phase or when the foot slips. The same model leg can then be expanded further, allowing us to test how different tarsus designs affect load feedback. To isolate various morphological effects, these tarsi were developed with differing degrees of compliance, passive grip, and biomimetic structure. These experiments demonstrated that the tarsus plays a distinct role in loading the leg, because each design had a different effect on the strain. In the final experiment, two morphologically distinct insects with homologous CS groups were compared, using the 400:1 robotic fly middle leg and the 25:1 robotic stick insect middle leg. The measured strains were notably influenced by the leg morphology, stepping kinematics, and sensor locations. Additionally, the sensor locations that one species lacks relative to the other measured strains that were already being captured by the sensors present. These findings contribute to the understanding of load sensing in animal locomotion, the effects of tarsal morphology, and the role of sensory organ morphology in motor control.

    Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations

    To proactively navigate and traverse various terrains, a robot must make active use of visual perception. We investigate the feasibility and performance of using sparse visual observations to achieve perceptual locomotion over a range of common terrains (steps, ramps, gaps, and stairs) in human-centered environments. We formulate a selection of sparse visual inputs suitable for locomotion over the terrains of interest and propose a learning framework that integrates exteroceptive and proprioceptive states. We design state observations and a training curriculum to learn feedback control policies effectively over a range of different terrains. We extensively validate and benchmark the learned policy in various tasks: omnidirectional walking on flat ground, and forward locomotion over various obstacles, showing a high success rate of traversal. Furthermore, we study exteroceptive ablations and evaluate policy generalization by adding various levels of noise and testing on new unseen terrains. We demonstrate the capabilities of autonomous perceptual locomotion achievable using only sparse visual observations from direct depth measurements, which are easily available from a Lidar or RGB-D sensor, showing robust ascent and descent over high stairs of 20 cm height, i.e., 50% of leg length, and robustness against noise and unseen terrains.

    Simulating collective transport of virtual ants

    This paper simulates collective transport, in which a group of ants transports an object cooperatively. Unlike humans, ants coordinate collective transport not through direct communication between group individuals but through indirect information transmitted via mechanical movements of the object. This paper proposes a stochastic probability model of the decision-making procedure of group individuals and trains a neural network via reinforcement learning to represent the force policy. Our method is scalable to different numbers of individuals and is adaptable to user input, including the transport trajectory, object shape, and external interventions. Our method can reproduce the characteristic strategies of ants, such as realigning and repositioning. The simulations show that with the repositioning strategy, the ants can avoid deadlock scenarios during collective transport.
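The indirect coordination described above can be illustrated with a minimal stochastic decision rule: each ant compares its own pulling direction with the force it feels through the object and switches role with a probability that grows with the disagreement. This is an illustrative sketch only, with assumed names and parameters, not the paper's actual model:

```python
# Illustrative sketch (not the paper's model): a stochastic role-switching
# rule driven by the mechanical force felt through the carried object.
import math
import random

def switch_probability(ant_heading, felt_force_dir, k=1.5):
    """Probability of switching role (e.g. repositioning), rising as the
    ant's heading disagrees with the force direction transmitted through
    the object. Angles are in radians; `k` (assumed) sets the sensitivity."""
    # Wrapped angular disagreement in [0, pi]
    diff = abs((ant_heading - felt_force_dir + math.pi) % (2 * math.pi) - math.pi)
    return 1.0 - math.exp(-k * diff / math.pi)

def decide(ant_heading, felt_force_dir, rng=random.random):
    """Sample one decision: keep pulling, or reposition to a better spot."""
    if rng() < switch_probability(ant_heading, felt_force_dir):
        return "reposition"
    return "keep_pulling"
```

An ant pulling exactly along the felt force never repositions (probability 0), while one pulling against it repositions most of the time, which is the qualitative behaviour that lets the group escape deadlocks.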

    Simplifying robotic locomotion by escaping traps via an active tail

    Legged systems offer the ability to negotiate and climb heterogeneous terrains, more so than their wheeled counterparts (Freedberg, 2012). However, in certain complex environments, these systems are susceptible to failure conditions. These scenarios are caused by the interplay between the locomotor's kinematic state and the local terrain configuration, making them challenging to predict and overcome. Such failures can cause catastrophic damage to the system, so methods to avoid these scenarios have been developed. These strategies typically take the form of environmental sensing or passive mechanical elements that adapt to the terrain. Such methods increase the control and mechanical design complexity of the system while often remaining susceptible to imperceptible hazards. In this study, we investigated whether a tail could offload this complexity by acting as a mechanism to generate new terradynamic interactions and mitigate failure via substrate contact. To do so, we developed a quadrupedal C-leg robophysical model (length and width = 27 cm, limb radius = 8 cm) capable of walking over rough terrain, with an attachable actuated tail (length = 17 cm). We investigated three distinct tail strategies: static pose, periodic tapping, and load-triggered (power) tapping, while varying the angle of the tail relative to the body. We challenged the system to traverse a terrain (length = 160 cm, width = 80 cm) of randomized blocks (length and width = 10 cm, height = 0 to 12 cm) whose dimensions were scaled to the robot. Over this terrain, the robot exhibited trapping failures independent of gait pattern. Using the tail, the robot could free itself from trapping with a probability of 0 to 0.5, with the load-driven behaviors performing comparably to low-frequency periodic tapping across all tested tail angles.
Along with this increased likelihood of freeing itself, the robot achieved a longer survival distance over the rough terrain with these tail behaviors. In summary, we present the beginning of a framework that leverages mechanics via tail-ground interactions to offload limb control and design complexity, mitigating failure and improving legged-system performance in heterogeneous environments.
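The load-triggered tapping strategy above can be sketched as a simple threshold rule: when the measured limb load stays high while the body stalls, the tail is commanded to tap the substrate. All names and thresholds here are assumptions for illustration, not the study's controller:

```python
# Hedged sketch (hypothetical names/thresholds): load-triggered tail tapping.
# Trapping is flagged when recent normalized limb loads stay saturated,
# indicating the limb is pushing hard while the body makes no progress.

def load_triggered_tap(load_history, stall_threshold=0.9, window=10):
    """True if the mean of the last `window` load samples (each in [0, 1])
    exceeds `stall_threshold`."""
    if len(load_history) < window:
        return False
    recent = load_history[-window:]
    return sum(recent) / window > stall_threshold

class TailController:
    def __init__(self, tap_angle_deg=45.0):
        self.tap_angle_deg = tap_angle_deg  # tail angle relative to the body
        self.loads = []

    def update(self, limb_load):
        """Feed one normalized load sample; return 'tap' when trapping
        is detected, otherwise 'hold' (static pose)."""
        self.loads.append(limb_load)
        return "tap" if load_triggered_tap(self.loads) else "hold"
```

Periodic tapping would replace the load test with a fixed-frequency timer; the study found the two performed comparably at low tapping frequencies.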

    Motion Control Simulation of a Hexapod Robot

    This thesis addresses hexapod robot motion control. Insect morphology and locomotion patterns inform the design of a robotic model, and motion control is achieved via trajectory planning and bio-inspired principles. Additionally, deep learning and multi-agent reinforcement learning are employed to train the robot's motion control strategy, with leg coordination achieved using a multi-agent deep reinforcement learning framework. The thesis makes the following contributions. First, research on legged robots is synthesized, with a focus on hexapod robot motion control. Insect anatomy analysis informs the hexagonal robot body and three-joint single robotic leg design, which is assembled using SolidWorks. Different gaits are studied and compared, and the robot leg kinematics are derived and experimentally verified, culminating in a tripod gait for motion control. Second, an animal-inspired approach employs a central pattern generator (CPG) control unit based on the Hopf oscillator, facilitating robot motion control in complex environments, such as stable walking and climbing. The robot's motion is quantitatively evaluated in terms of displacement change and body pitch angle. Third, a value-function decomposition algorithm, QPLEX, is applied to hexapod robot motion control. The QPLEX architecture treats each leg as a separate agent with local control modules that are trained using reinforcement learning. QPLEX outperforms decentralized approaches, achieving coordinated rhythmic gaits and increased robustness on uneven terrain. The significance of terrain curriculum learning is assessed, with QPLEX demonstrating superior stability and faster convergence. The foot-end trajectory-planning method enables robot motion control through inverse kinematic solutions but has limited generalization across diverse terrains. The animal-inspired CPG-based method offers a versatile control strategy but remains constrained in core aspects.
In contrast, the multi-agent deep reinforcement learning approach affords adaptable adjustment of the motion strategy, rendering it a superior control policy. These methods can be combined to develop a customized robot motion control policy for specific scenarios.
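The Hopf oscillator underlying the CPG unit above has a well-known form: it converges to a stable limit cycle of radius sqrt(mu) and angular frequency omega, so its output can drive a leg joint with tunable stepping amplitude and frequency. A minimal sketch (not the thesis code; Euler integration and all parameter values are assumptions):

```python
# Illustrative Hopf-oscillator CPG unit. State (x, y) converges to a limit
# cycle of radius sqrt(mu) rotating at angular frequency omega, providing a
# smooth rhythmic signal for joint actuation.
import math

def hopf_step(x, y, mu=1.0, omega=2 * math.pi, dt=1e-3):
    """One explicit-Euler step of the Hopf oscillator dynamics:
    dx/dt = (mu - r^2) x - omega y,  dy/dt = (mu - r^2) y + omega x."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

def simulate(steps=20000, dt=1e-3):
    """Integrate from a small perturbation; the radius approaches sqrt(mu)."""
    x, y = 0.1, 0.0
    for _ in range(steps):
        x, y = hopf_step(x, y, dt=dt)
    return math.hypot(x, y)
```

In a CPG network, one such oscillator per leg is coupled to its neighbours with fixed phase offsets (e.g. pi between the two tripods), which produces the coordinated rhythmic gaits the abstract describes.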

    Motion representation with spiking neural networks for grasping and manipulation

    Nature has relied on millions of years of evolution to produce adaptive physical systems with efficient control strategies. Unlike conventional robotics, a human does not simply plan a movement and execute it; rather, a combination of several control loops works together to move the arm and grasp an object with the hand. Research on humanoid and biologically inspired robots is producing complex kinematic structures and intricate actuator and sensor systems. These systems are difficult to control and program, and classical robotics methods cannot always exploit their strengths optimally. Neuroscience has made great progress in understanding the various brain regions and their corresponding functions. Nevertheless, most models are based on large-scale simulations that focus on reproducing connectivity and statistical neural activity. This leaves a gap in applying different paradigms to validate brain mechanisms and learning principles and to develop functional models for controlling robots. One promising paradigm is event-based computation with Spiking Neural Networks (SNNs). SNNs focus on the biological aspects of neurons and replicate their mode of operation. They are designed for spike-based communication and enable the exploration of the brain's learning mechanisms via neural plasticity. Spike-based communication exploits highly parallelized hardware optimizations on neuromorphic chips, which enable low energy consumption and fast local operations. In this work, various SNNs are presented that perform motion control for manipulation and grasping tasks with a robot arm and an anthropomorphic hand.
They are based on biologically inspired functional models of the human brain. A motor primitive is mapped onto the robot kinematics in a parametric way, using an activation parameter and a mapping function. The topology of the SNN mirrors the kinematic structure of the robot. The robot is controlled via the joint-position interface. To model complex movements and behaviors, the primitives are arranged in different layers of a hierarchy. This allows primitives to be combined and parameterized, and simple primitives to be reused for different movements. There are different activation mechanisms for the parameter controlling a motor primitive: voluntary, rhythmic, and reflex-like. In addition, new motor primitives can be learned either online or offline. A movement can either be modeled as a function or learned by imitating human execution. The SNNs can be integrated into other control systems or combined with other SNNs. Computing the inverse kinematics or validating configurations for planning is not required, since the motor-primitive space contains only feasible movements and no invalid configurations. For the evaluation, the following scenarios were considered: pointing at different targets, following a trajectory, performing rhythmic or repetitive movements, executing reflexes, and grasping simple objects. In addition, the models of the arm and the hand are combined and extended to model multi-legged locomotion as an application case of the motor-primitive control architecture. As applications for an arm (3 DoFs), the generation of pointing movements and perception-driven reaching of targets were modeled.
To generate pointing movements, a base primitive pointing at the centre of a plane was combined offline with four corrective primitives that generate a new trajectory. For perception-driven reaching of a target, three primitives are combined online using a target signal. As applications for a five-finger hand (9 DoFs), individual finger activations and soft grasping with compliant control were modeled. The grasping movements are modeled with motor primitives in a hierarchy, where the finger primitives represent the synergies between the joints and the hand primitives represent the different affordances for coordinating the fingers. For each finger, two reflexes are added: one to activate or stop the movement on contact, and one to activate the compliant control. This approach offers enormous flexibility, since motor primitives can be reused, parameterized, and combined in different ways. New primitives can be defined or learned. An important aspect of this work is that, in contrast to deep learning and end-to-end learning methods, no extensive datasets are needed to learn new movements. Through the use of motor primitives, the same modeling approach can be used for different robots by redefining the mapping of the primitives onto the robot kinematics. The experiments show that motor primitives can simplify motor control for manipulation, grasping, and locomotion. SNNs for robotics applications are still a point of discussion: there is no state-of-the-art learning algorithm, there is no framework comparable to those for deep learning, and the parameterization of SNNs is an art. Nevertheless, robotics applications, such as manipulation and grasping, can provide benchmarks and realistic scenarios for validating neuroscientific models.
Moreover, robotics can exploit the possibilities of event-based computation with SNNs and neuromorphic hardware. The physical replication of a biological system, implemented entirely with SNNs and evaluated on real robots, can provide new insights into how humans perform motor control and sensory processing and how these can be applied in robotics. Model-free motion controllers, inspired by the mechanisms of the human brain, can improve robot programming by making control more adaptive and flexible.
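The parametric combination of a base primitive with corrective primitives, as described for the pointing task, can be sketched as a weighted blend in joint space. All names and values below are hypothetical illustrations, not the thesis's SNN implementation (which encodes this with spiking neurons rather than explicit vectors):

```python
# Hedged sketch (hypothetical names): blending a base motor primitive with
# corrective primitives, each scaled by its activation parameter, to produce
# a new joint-space pose.

def combine_primitives(base, correctives, activations):
    """Blend a base primitive with weighted corrective primitives.

    `base` and each entry of `correctives` are joint-space vectors (lists of
    joint values/offsets); `activations` are per-primitive weights in [0, 1].
    """
    out = list(base)
    for prim, a in zip(correctives, activations):
        for j, dq in enumerate(prim):
            out[j] += a * dq  # corrective offset scaled by its activation
    return out

# Example: a 3-DoF arm, one base pose plus two corrective primitives.
base_pose = [0.0, 0.5, 1.0]
correctives = [[0.2, 0.0, -0.1], [0.0, -0.3, 0.0]]
target_pose = combine_primitives(base_pose, correctives, [1.0, 0.5])
```

Because every blend stays inside the primitive space, the result is always a feasible pose, which is why no inverse-kinematics validation step is needed.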

    Investigating Sensorimotor Control in Locomotion using Robots and Mathematical Models

    Locomotion is a very diverse phenomenon that results from the interactions of a body and its environment and enables a body to move from one position to another. The underlying control principles rely, among other things, on the generation of intrinsic body movements, the adaptation and synchronization of those movements with the environment, and the generation of the reaction forces that induce locomotion. We use mathematical and physical models, namely robots, to investigate how movement patterns emerge in a specific environment and to what extent central and peripheral mechanisms contribute to movement generation. We explore insect walking, undulatory swimming, and bimodal terrestrial and aquatic locomotion. We present findings that explain the prevalence of tripod gaits for fast climbing based on the outcome of an optimization procedure. We also developed new control paradigms based on local sensory pressure feedback for anguilliform swimming, including oscillator-free and decoupled control schemes, and a new design methodology for creating physical models for locomotion investigation based on a salamander-like robot. The presented work includes additional contributions to robotics, specifically a new fast dynamically stable walking gait for hexapedal robots and a decentralized scheme for highly modular control of lamprey-like undulatory swimming robots.