31 research outputs found

    Fast and Continuous Foothold Adaptation for Dynamic Locomotion through CNNs

    Get PDF
    Legged robots can outperform wheeled machines for most navigation tasks across unknown and rough terrains. For such tasks, visual feedback is a fundamental asset to provide robots with terrain awareness. However, robust dynamic locomotion on difficult terrains with real-time performance guarantees remains a challenge. We present here a real-time, dynamic foothold adaptation strategy based on visual feedback. Our method adjusts the landing position of the feet in a fully reactive manner, using only on-board computers and sensors. The correction is computed and executed continuously along the swing-phase trajectory of each leg. To efficiently adapt the landing position, we implement a self-supervised foothold classifier based on a Convolutional Neural Network (CNN). Our method results in computation up to 200 times faster than the full-blown heuristics. Our goal is to react to visual stimuli from the environment, bridging the gap between blind reactive locomotion and purely vision-based planning strategies. We assess the performance of our method on the dynamic quadruped robot HyQ, executing static and dynamic gaits (at speeds up to 0.5 m/s) in both simulated and real scenarios; the benefit of safe foothold adaptation is clearly demonstrated by the overall robot behavior.
    Comment: 9 pages, 11 figures. Accepted to RA-L + ICRA 2019, January 2019.
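
    The abstract above describes a self-supervised CNN that classifies candidate footholds from local terrain data during the swing phase. Below is a minimal, hypothetical sketch of such a classifier in Python/PyTorch; the patch size, candidate grid, and layer sizes are illustrative assumptions, not the authors' actual architecture.

# Minimal sketch (assumption): a small CNN that scores candidate landing cells
# inside a local heightmap patch around the nominal foothold. The 15x15 patch
# and the 9-candidate output are illustrative, not the paper's exact design.
import torch
import torch.nn as nn

class FootholdClassifier(nn.Module):
    def __init__(self, patch_size: int = 15, n_candidates: int = 9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * patch_size * patch_size, 128), nn.ReLU(),
            nn.Linear(128, n_candidates),   # one logit per candidate landing cell
        )

    def forward(self, heightmap_patch: torch.Tensor) -> torch.Tensor:
        # heightmap_patch: (B, 1, patch_size, patch_size) terrain heights
        return self.net(heightmap_patch)    # (B, n_candidates) foothold scores

# Illustrative usage: during the swing phase, the highest-scoring candidate
# would replace the nominal landing position.
model = FootholdClassifier()
patch = torch.randn(1, 1, 15, 15)            # fake heightmap patch
best_candidate = model(patch).argmax(dim=1)  # index of the preferred foothold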

    Bridging Vision and Dynamic Legged Locomotion

    Get PDF
    Legged robots have demonstrated remarkable advances regarding robustness and versatility in the past decades. The questions that need to be addressed in this field increasingly focus on reasoning about the environment and autonomy rather than on locomotion alone. To answer some of these questions, visual information is essential. If a robot has information about the terrain, it can plan and take preventive actions against potential risks. However, building a model of the terrain is often computationally costly, mainly because of the dense nature of visual data. On top of the mapping problem, robots need feasible body trajectories and contact sequences to traverse the terrain safely, which may also require heavy computation. This computational cost has limited the use of visual feedback to contexts that guarantee (quasi-)static stability, or to planning schemes where contact sequences and body trajectories are computed before motion execution begins. In this thesis we propose a set of algorithms that reduces the gap between visual processing and dynamic locomotion. We use machine learning to speed up visual data processing and model predictive control to achieve locomotion robustness. In particular, we devise a novel foothold adaptation strategy that uses a map of the terrain built from on-board vision sensors. This map is sent to a foothold classifier based on a convolutional neural network that allows the robot to adjust the landing position of the feet in a fast and continuous fashion. We then use the convolutional neural network-based classifier to provide safe future contact sequences to a model predictive controller that optimizes target ground reaction forces in order to track a desired center of mass trajectory. We perform simulations and experiments on the hydraulic quadruped robots HyQ and HyQReal. For all experiments, the contact sequences, the foothold adaptations, the control inputs and the map are computed and processed entirely on-board. The various tests show that the robot is able to leverage the visual terrain information to handle complex scenarios in a safe, robust and reliable manner.
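
    The thesis abstract mentions a model predictive controller that optimizes ground reaction forces to track a desired center-of-mass trajectory. The sketch below illustrates only the underlying force-distribution idea in its simplest form; the robot mass, contact layout, and the single-step equal-split solution are illustrative assumptions, not the controller described in the thesis.

# Minimal sketch (assumption): choose ground reaction forces that realize a
# desired centre-of-mass acceleration for the legs currently in contact. A real
# MPC optimizes over a horizon with friction-cone constraints; here a single
# unconstrained, equal-split step illustrates the idea.
import numpy as np

def grf_for_com_tracking(a_des, contact_flags, mass=90.0):
    """Distribute m*(a_des - g) equally over the legs currently in contact."""
    g = np.array([0.0, 0.0, -9.81])                              # gravity
    n_contacts = int(np.sum(contact_flags))
    total_force = mass * (np.asarray(a_des, dtype=float) - g)    # Newton: sum(f) = m*(a - g)
    forces = np.zeros((4, 3))                                    # one 3D force per leg
    if n_contacts > 0:
        forces[np.asarray(contact_flags, dtype=bool)] = total_force / n_contacts
    return forces

# Example: trot with two legs in contact and a small upward CoM acceleration.
print(grf_for_com_tracking(a_des=[0.0, 0.0, 0.2], contact_flags=[1, 0, 0, 1]))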

    SafeSteps: Learning Safer Footstep Planning Policies for Legged Robots via Model-Based Priors

    Full text link
    We present a footstep planning policy for quadrupedal locomotion that is able to directly take a priori safety information into consideration in its decisions. At its core, a learning process analyzes terrain patches, classifying each landing location by its kinematic feasibility, shin collision, and terrain roughness. This information is then encoded into a small vector representation and passed as an additional state to the footstep planning policy, which furthermore proposes only safe footstep locations by applying a masked variant of the Proximal Policy Optimization (PPO) algorithm. The performance of the proposed approach is shown by comparative simulations on an electric quadruped robot walking in different rough terrain scenarios. We show that violations of the above safety conditions are greatly reduced both during training and the subsequent deployment of the policy, resulting in an inherently safer footstep planner. Furthermore, we show how, as a byproduct, fewer reward terms are needed to shape the behavior of the policy, which in turn is able to achieve both better final performance and sample efficiency.
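
    The policy above proposes only safe footsteps by masking the action distribution. A small, generic sketch of logit masking before sampling is given below; the names, shapes, and the specific masking mechanism are illustrative assumptions rather than the paper's implementation.

# Minimal sketch (assumption): unsafe footstep candidates are removed from the
# categorical action distribution by setting their logits to -inf before
# sampling, as in masked variants of PPO.
import torch

def sample_safe_footstep(logits: torch.Tensor, safety_mask: torch.Tensor):
    """logits: (B, n_candidates); safety_mask: (B, n_candidates) with 1 = safe."""
    masked_logits = logits.masked_fill(safety_mask == 0, float("-inf"))
    dist = torch.distributions.Categorical(logits=masked_logits)
    action = dist.sample()                  # only safe candidates can be drawn
    return action, dist.log_prob(action)    # log-prob feeds the PPO update

logits = torch.randn(1, 8)                       # scores for 8 candidate footholds
mask = torch.tensor([[1, 1, 0, 1, 0, 0, 1, 1]])  # 0 = unsafe (collision, roughness, ...)
print(sample_safe_footstep(logits, mask))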

    Online Optimization-based Gait Adaptation of Quadruped Robot Locomotion

    Get PDF
    Quadruped robots have demonstrated extensive capabilities of traversing complex and unstructured environments. Optimization-based techniques have given a significant impulse to research on legged locomotion. Indeed, by designing the cost function and the constraints, we can guarantee the feasibility of a motion and impose high-level locomotion tasks, e.g., tracking of a reference velocity. This allows for a generic planning approach without the need to tailor a specific motion for each terrain, as in the heuristic case. In this context, Model Predictive Control (MPC) can compensate for model inaccuracies and external disturbances, thanks to high-frequency replanning. The main objective of this dissertation is to develop a Nonlinear MPC (NMPC)-based locomotion framework for quadruped robots. The aim is to obtain an algorithm which can be extended to different robots and gaits; in addition, I sought to remove some assumptions generally made in the literature, e.g., a heuristic reference generator and a user-defined gait sequence. The starting point of my work is the definition of the Optimal Control Problem to generate feasible trajectories for the Center of Mass. It is descriptive enough to capture the linear and angular dynamics of the robot as a whole. A simplified model (the Single Rigid Body Dynamics model) is used for the system dynamics, while a novel cost term maximizes leg mobility to improve robustness in the presence of non-flat terrain. In addition, to test the approach on the real robot, I dedicated particular effort to implementing both a heuristic reference generator and an interface for the controller, and to integrating them into the controller framework previously developed by other team members. As a second contribution of my work, I extended the locomotion framework to deal with a trot gait. In particular, I generalized the reference generator to be based on optimization. Exploiting the Linear Inverted Pendulum model, this new module can deal with the underactuation of the trot when only two legs are in contact with the ground, endowing the NMPC with physically informed reference trajectories to track. In addition, the reference velocities are used to correct the heuristic footholds, obtaining contact locations coherent with the motion of the base, even though they are not directly optimized. The model used by the NMPC receives the gait sequence as input; thus, in the last part of my work, I developed an online multi-contact planner and integrated it into the MPC framework. Using a machine learning approach, the planner computes the best feasible option, even in complex environments, in a few milliseconds, by ranking online a set of discrete options for footholds, i.e., which leg to move and where to step. To train the network, I designed a novel function, evaluated offline, which considers the cost value of the NMPC and robustness/stability metrics for each option. These methods have been validated with simulations and experiments over the three years. I tested the NMPC on the Hydraulically actuated Quadruped robot (HyQ) of the IIT’s Dynamic Legged Systems lab, performing omni-directional motions on flat terrain and stepping on a pallet (both static and relocated during the motion) with a crawl gait. Trajectory replanning is performed at high frequency, and visual information of the terrain is included to traverse uneven ground. A Unitree Aliengo quadruped robot is used to execute experiments with the trot gait.
    The optimization-based reference generator allows the robot to reach a fixed goal and recover from external pushes without modifying the structure of the NMPC. Finally, simulations with the Solo robot are performed to validate the neural network-based contact planning. The robot successfully traverses complex scenarios, e.g., stepping stones, with both walk and trot gaits, choosing the footholds online. The achieved results improved the robustness and performance of quadruped locomotion. High-frequency replanning, reaching a fixed goal, recovering after a push, and the automatic selection of footholds could help robots accomplish tasks that are important for humans, for example providing support in a disaster response scenario or inspecting an unknown environment. In the future, the contact planning will be transferred to the real hardware. Possible developments foresee the optimization of the gait timings, i.e., stance and swing durations, and a framework which allows automatic transitions between gaits.
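
    The abstract above mentions an optimization-based reference generator built on the Linear Inverted Pendulum (LIP) model. The sketch below shows only the basic LIP rollout such a generator could build on; the pendulum height, time step, and horizon are illustrative assumptions and are not taken from the dissertation.

# Minimal sketch (assumption): Linear Inverted Pendulum rollout used as a
# physically informed CoM reference, x_ddot = (g/h) * (x - p), where p is the
# current support point. Step time and pendulum height are illustrative values.
import numpy as np

def lip_reference(x0, xd0, p, h=0.45, g=9.81, dt=0.01, steps=50):
    """Integrate the LIP dynamics forward to produce a CoM position reference."""
    omega2 = g / h
    x, xd = float(x0), float(xd0)
    traj = []
    for _ in range(steps):
        xdd = omega2 * (x - p)   # pendulum acceleration about the support point
        xd += xdd * dt
        x += xd * dt
        traj.append(x)
    return np.array(traj)

# A CoM starting 2 cm ahead of the support point drifts further forward, which
# a reference generator would correct through the next foothold choice.
print(lip_reference(x0=0.02, xd0=0.0, p=0.0)[:5])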

    Synaptic motor adaptation: A three-factor learning rule for adaptive robotic control in spiking neural networks

    Full text link
    Legged robots operating in real-world environments must possess the ability to rapidly adapt to unexpected conditions, such as changing terrains and varying payloads. This paper introduces the Synaptic Motor Adaptation (SMA) algorithm, a novel approach to achieving real-time online adaptation in quadruped robots by utilizing neuroscience-derived rules of synaptic plasticity with three-factor learning. To facilitate rapid adaptation, we meta-optimize a three-factor learning rule via gradient descent so that it adapts to uncertainty by approximating an embedding produced from privileged information using only locally accessible on-board sensing data. Our algorithm performs similarly to state-of-the-art motor adaptation algorithms and presents a clear path toward achieving adaptive robotics with neuromorphic hardware.
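
    The abstract refers to a three-factor synaptic plasticity rule. A generic form of such an update, in which a Hebbian eligibility trace is gated by a third, task-level signal, is sketched below; this is not the paper's SMA rule, and all constants and shapes are illustrative assumptions.

# Minimal sketch (assumption): a generic three-factor synaptic update, where an
# eligibility trace of pre/post activity is gated by a third, task-level error
# or neuromodulatory signal.
import numpy as np

def three_factor_update(w, pre, post, third_factor, trace, lr=1e-3, decay=0.9):
    """w: (n_post, n_pre) weights; pre/post: activity vectors; third_factor: scalar."""
    trace = decay * trace + np.outer(post, pre)   # Hebbian eligibility trace
    w = w + lr * third_factor * trace             # third factor gates the update
    return w, trace

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 4)) * 0.1
trace = np.zeros_like(w)
w, trace = three_factor_update(w, pre=rng.random(4), post=rng.random(3),
                               third_factor=0.5, trace=trace)
print(w)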

    Planning Hybrid Driving-Stepping Locomotion for Ground Robots in Challenging Environments

    Get PDF
    Ground robots capable of navigating a wide range of terrains are needed in several domains such as disaster response or planetary exploration. Hybrid driving-stepping locomotion is promising since it combines the complementary strengths of the two locomotion modes. However, suitable platforms require complex kinematic capabilities which need to be considered in corresponding locomotion planning methods. High terrain complexity induces further challenges for the planning problem. We present a search-based hybrid driving-stepping locomotion planning approach for robots which possess a quadrupedal base with legs ending in steerable wheels, allowing for omnidirectional driving and stepping. Driving is preferred on sufficiently flat terrain, while stepping is considered in the vicinity of obstacles. Steps are handled in a hierarchical manner: while only the connection between suitable footholds is considered during planning, the steps in the resulting path are expanded to detailed motion sequences that account for robot stability. To enable precise locomotion in challenging terrain, the planner takes the individual robot footprint into account. The method is evaluated in simulation and in real-world applications with the robots Momaro and Centauro. The results indicate that the planner provides bounded sub-optimal paths in feasible time. However, the required fine resolution and high-dimensional robot representation result in state spaces that become too large for more complex scenarios, exceeding computation time and memory constraints. To make the planner applicable in those scenarios, the method is extended to incorporate three levels of representation. In the vicinity of the robot, the detailed representation is used to obtain reliable paths for the near future. With increasing distance from the robot, the resolution gets coarser and the degrees of freedom of the robot representation decrease. To compensate for this loss of information, those representations are enriched with additional semantics that increase scene understanding. We further present how the most abstract representation can be used to generate an informed heuristic. Evaluation shows that planning is accelerated by multiple orders of magnitude with comparable result quality. However, manually designing the additional representations and tuning the corresponding cost functions requires considerable effort. Therefore, we present a method to support the generation of an abstract representation through a convolutional neural network (CNN). While a low-dimensional, coarse robot representation and a corresponding action set can be easily defined, a CNN is trained on artificially generated data to represent the abstract cost function. Subsequently, the abstract representation can be used to generate a similarly informed heuristic, as described above. The CNN evaluation on multiple data sets indicates that the learned cost function generalizes well to real-world scenes and that the abstraction quality outperforms the manually tuned approach. Applied to hybrid driving-stepping locomotion planning, the heuristic achieves similar performance while design and tuning efforts are minimized. Since a learning-based method turned out to be beneficial in supporting the search-based planner, we finally investigate whether the whole planning problem can be solved by a learning-based approach. Value Iteration Networks (VINs) are known to show good generalizability and goal-directed behavior, while being limited to small state spaces.
    Inspired by the above-described results, we extend VINs to incorporate multiple levels of abstraction to represent larger planning problems with suitable state space sizes. Experiments in 2D grid worlds show that this extension enables VINs to solve significantly larger planning tasks. We further apply the method to omnidirectional driving of the Centauro robot in cluttered environments, which indicates limitations but also emphasizes the future potential of learning-based planning methods.
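
    The last part of the abstract above mentions Value Iteration Networks, which unroll the classic value-iteration recursion over a 2D grid. The sketch below writes a deterministic, min-cost variant of that recursion explicitly with numpy for a 4-connected grid; the grid size, costs, and goal are made-up illustrative values, not data from the thesis.

# Minimal sketch (assumption): explicit value iteration on a 2D grid, the kind
# of recursion a Value Iteration Network approximates with convolutions. Here a
# min-cost shortest-path formulation over a 4-connected grid is used.
import numpy as np

def grid_value_iteration(cost, goal, n_iters=100):
    """cost: (H, W) per-cell step costs; returns cost-to-go values toward `goal`."""
    value = np.full(cost.shape, np.inf)
    value[goal] = 0.0
    for _ in range(n_iters):
        best_neighbor = np.full(cost.shape, np.inf)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            shifted = np.roll(value, shift, axis=axis)
            # invalidate values that wrapped around the grid border
            if axis == 0:
                shifted[0 if shift == 1 else -1, :] = np.inf
            else:
                shifted[:, 0 if shift == 1 else -1] = np.inf
            best_neighbor = np.minimum(best_neighbor, shifted)
        value = np.minimum(value, cost + best_neighbor)
    return value

cost = np.ones((8, 8))    # uniform terrain cost
cost[3, 1:7] = 10.0       # a row of expensive (obstacle-like) cells
print(grid_value_iteration(cost, goal=(7, 7)).round(1))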