
    Deep Visual Perception for Dynamic Walking on Discrete Terrain

    Dynamic bipedal walking on discrete terrain, such as stepping stones, is a challenging problem that requires feedback controllers to enforce safety-critical constraints. To enforce such constraints in real-world experiments, fast and accurate perception for foothold detection and estimation is needed. In this work, a deep visual perception model is designed to accurately estimate the length of the next step, which serves as input to the feedback controller to enable vision-in-the-loop dynamic walking on discrete terrain. In particular, a custom convolutional neural network architecture is designed and trained to predict the step length to the next foothold from a sampled image preview of the upcoming terrain at foot impact. The visual input is provided only at the beginning of each step and is shown to be sufficient for dynamically stepping onto discrete footholds. Through extensive numerical studies, we show that the robot is able to walk autonomously for over 100 steps without failure on discrete terrain with footholds randomly positioned within a step length range of 45-85 centimeters.
    Comment: Presented at Humanoids 201
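    To make the described pipeline concrete, below is a minimal PyTorch sketch of a CNN step-length regressor of this general kind. The layer sizes, input resolution, and names are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the authors' network): a small CNN that regresses
# the next step length from a single terrain image preview.
import torch
import torch.nn as nn

class StepLengthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # scalar step length (meters)

    def forward(self, image):
        # image: (batch, 1, H, W) grayscale/depth preview at foot impact
        return self.head(self.features(image).flatten(1))

# One prediction per step: the controller queries the network once, at
# the beginning of the step, with the sampled terrain preview.
net = StepLengthNet()
preview = torch.randn(1, 1, 64, 64)   # stand-in for a camera sample
step_length = net(preview)            # feeds the feedback controller
```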

    Material Recognition CNNs and Hierarchical Planning for Biped Robot Locomotion on Slippery Terrain

    In this paper we tackle the problem of visually predicting surface friction in environments with diverse surfaces, and of integrating this knowledge into biped robot locomotion planning. The problem is essential for autonomous robot locomotion, since the real world abounds with surfaces of varying friction, from wood to ceramic tiles, grass, or ice, which can cause failures or large energy costs if not accounted for. We propose to estimate friction and its uncertainty from visual estimates of material classes obtained with convolutional neural networks, together with a probability distribution of friction associated with each material. We then robustly integrate the friction predictions into a hierarchical (footstep and full-body) planning method using chance constraints, and optimize the same trajectory costs at both levels of the planner for consistency. Our solution achieves fully autonomous perception and locomotion on slippery terrain that accounts not only for friction and its uncertainty, but also for collision, stability, and trajectory cost. We show promising friction prediction results on real images of outdoor scenes, and planning experiments on a real robot facing surfaces with different friction.
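    As a rough illustration of the friction pipeline, the sketch below combines hypothetical per-material friction statistics with CNN class probabilities and tests a Gaussian chance constraint. All names and numbers are assumptions, not values from the paper.

```python
# Illustrative sketch: CNN material probabilities -> friction mixture
# -> chance constraint. Friction statistics below are made up.
import math

FRICTION = {"wood": (0.50, 0.05), "tile": (0.30, 0.08), "ice": (0.05, 0.02)}

def friction_estimate(class_probs):
    """Mixture mean and std of friction given CNN class probabilities."""
    mu = sum(p * FRICTION[m][0] for m, p in class_probs.items())
    # law of total variance for the mixture
    var = sum(p * (FRICTION[m][1] ** 2 + FRICTION[m][0] ** 2)
              for m, p in class_probs.items()) - mu ** 2
    return mu, math.sqrt(var)

def chance_constraint_ok(required_mu, class_probs, confidence=0.95):
    """P(friction >= required_mu) >= confidence, Gaussian approximation."""
    mu, sigma = friction_estimate(class_probs)
    p_enough = 0.5 * (1.0 + math.erf((mu - required_mu)
                                     / (sigma * math.sqrt(2.0))))
    return p_enough >= confidence

probs = {"wood": 0.7, "tile": 0.25, "ice": 0.05}   # CNN softmax output
print(chance_constraint_ok(0.35, probs))           # False: too risky here
```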

    A Survey of Behavior Learning Applications in Robotics -- State of the Art and Perspectives

    The recent success of machine learning in many domains has been overwhelming, which often leads to false expectations regarding the capabilities of behavior learning in robotics. In this survey, we analyze the current state of machine learning for robotic behaviors. We give a broad overview of behaviors that have been learned and used on real robots. Our focus is on kinematically or sensorially complex robots, including humanoid robots or parts of humanoid robots such as legged robots or robotic arms. We classify the presented behaviors according to various categories and draw conclusions about what can be learned and what should be learned. Furthermore, we give an outlook on problems that are challenging today but might be solved by machine learning in the future, and argue that classical robotics and other approaches from artificial intelligence should be integrated more with machine learning to form complete, autonomous systems.
    Comment: Under review at the International Journal of Robotics Research

    Bridging Vision and Dynamic Legged Locomotion

    Legged robots have demonstrated remarkable advances in robustness and versatility in the past decades. The questions that need to be addressed in this field increasingly focus on reasoning about the environment and autonomy rather than on locomotion alone. To answer some of these questions, visual information is essential. If a robot has information about the terrain, it can plan and take preventive actions against potential risks. However, building a model of the terrain is often computationally costly, mainly because of the dense nature of visual data. On top of the mapping problem, robots need feasible body trajectories and contact sequences to traverse the terrain safely, which may also require heavy computation. This computational cost has limited the use of visual feedback to contexts that guarantee (quasi-)static stability, or to planning schemes where contact sequences and body trajectories are computed before any motion is executed.
    In this thesis we propose a set of algorithms that reduce the gap between visual processing and dynamic locomotion. We use machine learning to speed up visual data processing and model predictive control to achieve locomotion robustness. In particular, we devise a novel foothold adaptation strategy that uses a map of the terrain built from on-board vision sensors. This map is sent to a foothold classifier based on a convolutional neural network that allows the robot to adjust the landing position of the feet in a fast and continuous fashion. We then use the convolutional neural network-based classifier to provide safe future contact sequences to a model predictive controller that optimizes target ground reaction forces in order to track a desired center-of-mass trajectory. We perform simulations and experiments on the hydraulic quadruped robots HyQ and HyQReal. For all experiments, the contact sequences, foothold adaptations, control inputs and map are computed and processed entirely on-board. The various tests show that the robot is able to leverage the visual terrain information to handle complex scenarios in a safe, robust and reliable manner.
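    A minimal sketch of the foothold-adaptation idea follows, assuming a heightmap array and a CNN exposed as a simple patch classifier. The interfaces and search strategy are illustrative assumptions, not the thesis implementation.

```python
# Sketch: adjust each nominal foothold by scoring candidate cells of an
# on-board heightmap with a safety classifier, keeping the closest safe
# cell so adjustments stay small and continuous.
import numpy as np

def adapt_foothold(heightmap, nominal_xy, classify_safe, search_radius=3):
    """heightmap: 2D array of terrain heights around the nominal foothold.
    nominal_xy: (row, col) index of the nominal landing cell.
    classify_safe: CNN wrapper, patch -> True if landing there is safe."""
    r0, c0 = nominal_xy
    best, best_dist = None, float("inf")
    for dr in range(-search_radius, search_radius + 1):
        for dc in range(-search_radius, search_radius + 1):
            r, c = r0 + dr, c0 + dc
            patch = heightmap[max(r - 2, 0):r + 3, max(c - 2, 0):c + 3]
            if classify_safe(patch):
                dist = dr * dr + dc * dc   # prefer small adjustments
                if dist < best_dist:
                    best, best_dist = (r, c), dist
    return best  # None if no safe cell; otherwise the MPC's next contact

hm = np.zeros((20, 20)); hm[8:12, 8:12] = 0.5   # a raised block
safe = lambda patch: patch.std() < 0.1          # stand-in for the CNN
print(adapt_foothold(hm, (10, 10), safe))       # nearest flat-enough cell
```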

    Terrain RL Simulator

    We provide 89 challenging simulation environments that range in difficulty. The difficulty of solving a task is linked not only to the number of dimensions in the action space but also to the size and shape of the distribution of configurations the agent experiences. Therefore, we are releasing a number of simulation environments that include randomly generated terrain. The library also provides simple mechanisms to create new environments with different agent morphologies, and the option to modify the distribution of generated terrain. We believe that using these and other, more complex simulations will help push the field closer to creating human-level intelligence.
    Comment: 10 pages
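    As a toy illustration of parameterized terrain randomization (not the library's actual API), the snippet below draws a 1-D heightfield whose roughness and gap parameters shift the distribution of configurations an agent would experience.

```python
# Toy sketch: sample a random heightfield whose parameters control the
# difficulty distribution of the resulting terrain.
import numpy as np

def random_terrain(length=200, step=0.1, roughness=0.05, gap_prob=0.02,
                   rng=None):
    """Heights along the walking direction; larger roughness/gap_prob
    make harder terrain draws more likely."""
    rng = rng or np.random.default_rng()
    heights = np.cumsum(rng.normal(0.0, roughness, length))  # random walk
    gaps = rng.random(length) < gap_prob
    heights[gaps] = -1.0          # drop cells to create gaps
    return np.arange(length) * step, heights

xs, hs = random_terrain(roughness=0.08)   # a harder-than-default draw
```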

    Navigation by Imitation in a Pedestrian-Rich Environment

    Deep neural networks trained on demonstrations of human actions give robots the ability to drive themselves on the road. However, navigation in a pedestrian-rich environment, such as a campus, is still challenging: one needs to intervene frequently and take control of the robot from the early steps that lead to a mistake. An arduous burden is hence placed on the design of the learning framework and on data acquisition. In this paper, we propose a new learning-from-intervention Dataset Aggregation (DAgger) algorithm to overcome the limitations of applying imitation learning to navigation in pedestrian-rich environments. Our new learning algorithm implements an error-backtrack function that learns effectively from expert interventions. Combining the new learning algorithm with deep convolutional neural networks and a hierarchically-nested policy-selection mechanism, we show that our robot is able to map pixels directly to control commands and navigate successfully in the real world without an explicit model of pedestrian behavior or of the world.
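    A compact sketch of a learning-from-intervention DAgger loop follows, with the error-backtrack function approximated by relabeling a fixed window of pre-intervention states. The policy/env/expert/train interfaces are assumptions for illustration, not the paper's code.

```python
# Sketch: DAgger driven by expert interventions rather than per-step
# expert queries, with backtracking over the states that led to each
# intervention.
def intervention_dagger(policy, env, expert, train, backtrack=20,
                        n_iters=10):
    """Assumed interfaces: policy(state) -> action;
    expert.intervening(state) -> bool; expert.action(state) -> action;
    env.reset()/env.step(action) -> (state, done); train fits policy."""
    dataset = []
    for _ in range(n_iters):
        history = []                      # states visited this rollout
        state, done = env.reset(), False
        while not done:
            if expert.intervening(state):
                # Error backtrack: relabel the window of states leading
                # to the mistake, not just the state where the human
                # grabbed control.
                for s in history[-backtrack:]:
                    dataset.append((s, expert.action(s)))
                action = expert.action(state)
                dataset.append((state, action))
            else:
                action = policy(state)
            history.append(state)
            state, done = env.step(action)
        train(policy, dataset)            # aggregate-and-retrain (DAgger)
    return policy
```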

    Emergence of Locomotion Behaviours in Rich Environments

    The reinforcement learning paradigm allows, in principle, complex behaviours to be learned directly from simple reward signals. In practice, however, it is common to carefully hand-design the reward function to encourage a particular solution, or to derive it from demonstration data. In this paper we explore how a rich environment can help to promote the learning of complex behaviour. Specifically, we train agents in diverse environmental contexts and find that this encourages the emergence of robust behaviours that perform well across a suite of tasks. We demonstrate this principle for locomotion -- behaviours that are known for their sensitivity to the choice of reward. We train several simulated bodies on a diverse set of challenging terrains and obstacles, using a simple reward function based on forward progress. Using a novel scalable variant of policy gradient reinforcement learning, our agents learn to run, jump, crouch and turn as required by the environment, without explicit reward-based guidance. A visual depiction of highlights of the learned behaviour can be viewed at https://youtu.be/hx_bgoTF7bs
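    The key simplification here is the reward itself. A representative forward-progress reward might look like the sketch below; the coefficients and penalty terms are illustrative assumptions, not the paper's exact function.

```python
# Sketch of a simple forward-progress locomotion reward: velocity along
# the track, a small alive bonus, a small control penalty, and a fall
# penalty. No task-specific shaping.
def locomotion_reward(x_before, x_after, dt, action, fallen,
                      ctrl_cost=1e-3, alive_bonus=0.05):
    forward_velocity = (x_after - x_before) / dt
    penalty = ctrl_cost * sum(a * a for a in action)
    return -1.0 if fallen else forward_velocity + alive_bonus - penalty
```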

    ALLSTEPS: Curriculum-driven Learning of Stepping Stone Skills

    Humans are highly adept at walking in environments with foot placement constraints, including stepping-stone scenarios where the footstep locations are fully constrained. Finding good solutions to stepping-stone locomotion is a longstanding and fundamental challenge for animation and robotics. We present fully learned solutions to this difficult problem using reinforcement learning. We demonstrate the importance of a curriculum for efficient learning and evaluate four possible curriculum choices against a non-curriculum baseline. Results are presented for a simulated human character, a realistic bipedal robot simulation, and a monster character, in each case producing robust, plausible motions for challenging stepping-stone sequences and terrains.
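    One plausible curriculum of this general kind is sketched below, under assumptions: the sampling range of step distance and height widens as the policy's success rate improves. This is an illustration, not one of the paper's four specific curricula.

```python
# Sketch: a progress-gated curriculum over stepping-stone parameters.
import random

class StepStoneCurriculum:
    def __init__(self, easy=(0.3, 0.0), hard=(0.9, 0.4)):
        # (step distance m, step height m) at the two extremes
        self.easy, self.hard, self.progress = easy, hard, 0.0

    def sample(self):
        """Sample (distance, height) between easy and current frontier."""
        t = random.uniform(0.0, self.progress)
        return tuple(e + t * (h - e) for e, h in zip(self.easy, self.hard))

    def update(self, success_rate, threshold=0.8, rate=0.05):
        if success_rate > threshold:       # advance the curriculum
            self.progress = min(1.0, self.progress + rate)

cur = StepStoneCurriculum()
cur.update(success_rate=0.9)               # policy is doing well
next_stone = cur.sample()                  # slightly harder stone
```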

    Modelling Locomotor Control: the advantages of mobile gaze

    In 1958, J.J. Gibson put forward proposals on the visual control of locomotion. Research in the last 50 years has clarified the sources of visual and non-visual information that contribute to successful steering, but has yet to determine how this information is optimally combined under conditions of uncertainty. Here, we test the conditions under which a locomotor robot with a mobile camera can steer effectively using simple visual and extra-retinal parameters, to examine how such models cope with the noisy real-world visual and motor estimates that are available to humans. This applied modelling gives insight into both the advantages and the limitations of using active gaze to sample information when steering.
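    A minimal sketch in the spirit of such steering models is shown below, assuming a damped point-attractor control law with Gaussian noise on the visual and extra-retinal estimates. The form and gains are assumptions, not the authors' model.

```python
# Sketch: turn rate driven by the (noisy) visual angle of the steering
# target, damped by a (noisy) extra-retinal estimate of rotation.
import random

def steering_rate(target_angle, heading_rate, k1=2.0, k2=0.5,
                  visual_noise=0.05, extra_retinal_noise=0.02):
    """target_angle: angle of the goal relative to gaze/heading (rad).
    heading_rate: current rotation estimate from non-visual senses."""
    visual = target_angle + random.gauss(0.0, visual_noise)
    proprio = heading_rate + random.gauss(0.0, extra_retinal_noise)
    return k1 * visual - k2 * proprio   # damped point-attractor steering
```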

    Intra-active Boundaries: An investigation into the dynamic interrelationship between the human body and the environment using painterly media

    “To acknowledge ‘I am this body’…is not to lock up awareness within the density of a closed and bounded object, for as we shall see, the boundaries of a living body are open and indeterminate; more like membranes than barriers, they define a surface of metamorphosis and exchange.” (David Abram, 1996, The Spell of the Sensuous, Vintage Books, New York, p. 46)
    The MFA research addresses the problem of how to aesthetically visualize the interconnection between the human body and the environment through a series of experimental paintings and mixed media. Traditional pictorial approaches, grounded in art-world conventions and Cartesian philosophy, tend to portray the human body and the landscape as static spaces disconnected from each other; they focus on the representation of discrete spaces and objects rather than on the experience of the processes at work between them. There is a vast gap between these traditional concepts and the contemporary redefinition of the nexus between the human and non-human environment suggested by the recent philosophical re-conceptualisations of Maurice Merleau-Ponty, Gilles Deleuze, Félix Guattari and Michel Serres. This interconnected space was foreshadowed in the philosophy of Merleau-Ponty, who proposed that the perception of reality emerges through the interaction of the body within the world, rather than as an exchange between static objects in a world of empty space, as formulated by René Descartes. This redefinition portrays the human self and the non-human world as inextricably one. As Deleuze and Guattari state, “there is no such thing as either man or nature now, only a process that produces the one within the other”. More significantly, Serres, in his ecological philosophy, directs us to the materiality of the human and the natural as intra-active, defining each other through their mutual interactions, both occupying the same biospheric terrain, the biological sphere that makes life possible on the planet.
    The experimental paintings at the heart of the MFA research investigate how we can re-experience human and environmental phenomena, such as bodies and landscapes, in a way that questions conventional assumptions of their separateness. To do so, the research first examines the development of landscape art in the West as a reflection of changing human and environmental relations, through the work of selected artists, including JMW Turner and Olafur Eliasson, who have responded to this changing dynamic by exploring the human form as part of an environment implicit with the human body. The research also examines the work of other artists, such as Juul Kraijer and Berlinde de Bruyckere, whose work integrates aspects of the non-human within the human, predicated on an intention to explore the human condition rather than the relationship of the human with the natural world. The research tests the hypothesis, foreshadowed in recent philosophy, that an aesthetic reformulation of the figure/landscape is needed to better understand the human/environmental interrelationship, via a series of experimental paintings where aspects of the body are fused with aspects of the environment to create an integrated spatial concept that makes their interconnection visible.
    Framed within an ecological aesthetic that explores the network of relations between human and environmental processes, the research pursues this interconnection in three ways:
    1. Using common intercellular processes – where the intrinsic properties of the raw materials, and their physical interactions within the painterly techniques, are analogous to the mutual biological processes of interchange between the body and the environment; the forms exist in an ambiguous space inside or outside the body.
    2. Integrating sites of the body with geographic sites – fusing specific organs within the body, such as parts of the eye, heart and lungs, with geophysical sites, such as salt lakes, geological strata and plant communities, such that they can be read as one or the other, or both, simultaneously.
    3. Creating a terrain that fuses the human and the non-human – amalgamating features of the human and non-human to visualize a new form of topography that embodies a reciprocal biospheric space, a new mutual terrain that cannot be interpreted as separate entities.
    These reformulations of processes, forms and imagery use an integrated spatial concept that embodies an immersive mode of experience. It is immersive in the sense that it prompts recognition of reality as an enveloping interaction occurring between the inside and the outside of the body, between the haptic self and the dynamic exterior world. This encourages an aesthetic position where the human self is no longer considered separate from the outside world but rather entangled within it, and vice versa. In this way both are intra-active, defining each other through their interactions.