2 research outputs found

    Design a Fall Recovery Strategy for a Wheel-Legged Quadruped Robot Using Stability Feature Space

    In this paper, we introduce a conceptual analysis for selecting stability features when performing predefined and precise motions on robots. By analyzing the different stable poses, named features, and the possible transitions between them, the introduced concept allows the design of more predictable and suitable motions for particular tasks. As an example of how the concept can be applied, we use it for the fall recovery of the quadruped robot CENTAURO. This robot, which is equipped with a custom hybrid wheel-legged mobility system, has good intrinsic stability, as do other quadrupeds. However, the characteristics of the rough terrains where it might be deployed require complex maneuvers to cope with possible strong disturbances. To prevent, and more importantly recover from, falls, the realignment of postural responses alone is not adequate, and effective recovery procedures must be developed. This paper details how the presented conceptual analysis yields an effective fall recovery routine for CENTAURO based on a state machine. The performance of the proposed approach is evaluated in extensive simulation trials using the dynamic model of the CENTAURO robot, showing good effectiveness in recovering the robot after falls on flat and inclined surfaces.
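    To make the state-machine idea concrete, the sketch below shows how such a recovery routine could be structured in Python. The state names, transition conditions, and the pose-feature attributes are hypothetical placeholders for illustration only; they are not the states or stability features actually used for CENTAURO in the paper.

```python
# Minimal sketch of a state-machine fall recovery routine. All state names,
# transition conditions, and pose attributes are hypothetical, not the paper's design.
from enum import Enum, auto


class RecoveryState(Enum):
    FALLEN = auto()      # robot detected lying on its side or back
    REORIENT = auto()    # roll the trunk into a prone, legs-down pose
    LEGS_UNDER = auto()  # fold the legs underneath the body
    PUSH_UP = auto()     # extend the legs to lift the trunk
    STANDING = auto()    # nominal stable pose reached


def next_state(state, pose):
    """Advance the recovery state machine given a pose estimate.

    `pose` is assumed to expose simple boolean stability features,
    e.g. `pose.trunk_upright`, `pose.legs_folded`, `pose.trunk_lifted`.
    """
    if state is RecoveryState.FALLEN:
        return RecoveryState.REORIENT
    if state is RecoveryState.REORIENT and pose.trunk_upright:
        return RecoveryState.LEGS_UNDER
    if state is RecoveryState.LEGS_UNDER and pose.legs_folded:
        return RecoveryState.PUSH_UP
    if state is RecoveryState.PUSH_UP and pose.trunk_lifted:
        return RecoveryState.STANDING
    return state  # remain in the current state until its exit condition holds
```

    Each state would command a corresponding joint trajectory, with the pose-dependent transitions ensuring the robot only proceeds once the intermediate stable pose has actually been reached.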

    Learning dynamic motor skills for terrestrial locomotion

    The use of Deep Reinforcement Learning (DRL) has received significantly increased attention from researchers in the robotics field following the success of AlphaGo, which demonstrated the superhuman capabilities of deep reinforcement learning algorithms in solving complex tasks by beating professional Go players. Since then, an increasing number of researchers have investigated the potential of using DRL to solve complex, high-dimensional robotic tasks, such as legged locomotion, arm manipulation, and grasping, which are difficult to solve using conventional optimization approaches. Understanding and recreating the various modes of terrestrial locomotion has been of long-standing interest to roboticists. A large variety of applications, such as rescue missions, disaster response, and science expeditions, strongly demand mobility and versatility in legged locomotion to enable task completion. In order to create useful physical robots, it is necessary to design controllers that synthesize the complex locomotion behaviours observed in humans and other animals. In the past, legged locomotion was mainly achieved via analytical engineering approaches. However, conventional analytical approaches have their limitations, as they require relatively large amounts of human effort and knowledge. Machine learning approaches, such as DRL, require less human effort than analytical approaches. The project conducted for this thesis explores the feasibility of using DRL to acquire control policies that are comparable to, or better than, those obtained through analytical approaches while requiring less human effort. In this doctoral thesis, we developed a Multi-Expert Learning Architecture (MELA) that uses DRL to learn multi-skill control policies capable of synthesizing a diverse set of dynamic locomotion behaviours for legged robots. We first proposed a novel DRL framework for the locomotion of humanoid robots. The proposed learning framework is capable of acquiring robust and dynamic motor skills for humanoids, including balancing, walking, standing up, and fall recovery. We subsequently improved upon the learning framework and designed a novel multi-expert learning architecture that is capable of fusing multiple motor skills together in a seamless fashion, and ultimately deployed this framework on a real quadrupedal robot. The successful deployment of learned control policies on a real quadrupedal robot demonstrates the feasibility of using an Artificial Intelligence (AI) based approach for real robot motion control.
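    The following is a minimal numpy sketch of a multi-expert policy in the spirit of MELA: a gating network produces softmax weights from the observation and blends several expert policies into one action. The network sizes, the output-level blending, and the random stand-in weights are illustrative assumptions, not the thesis's actual architecture or training procedure.

```python
# Illustrative multi-expert policy: softmax gating blends expert outputs.
# Dimensions, blending scheme, and weights are hypothetical placeholders.
import numpy as np

OBS_DIM, ACT_DIM, HIDDEN, N_EXPERTS = 48, 12, 64, 4
rng = np.random.default_rng(0)


def mlp_params(in_dim, out_dim):
    """Random two-layer MLP parameters (stand-ins for trained weights)."""
    return (rng.standard_normal((in_dim, HIDDEN)) * 0.1,
            rng.standard_normal((HIDDEN, out_dim)) * 0.1)


def mlp(params, x):
    w1, w2 = params
    return np.tanh(x @ w1) @ w2


experts = [mlp_params(OBS_DIM, ACT_DIM) for _ in range(N_EXPERTS)]
gate = mlp_params(OBS_DIM, N_EXPERTS)


def policy(obs):
    """Fuse expert actions using gating weights computed from the observation."""
    logits = mlp(gate, obs)
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                              # softmax gating weights
    actions = np.stack([mlp(p, obs) for p in experts])    # (N_EXPERTS, ACT_DIM)
    return weights @ actions                              # blended action

action = policy(rng.standard_normal(OBS_DIM))
print(action.shape)  # (12,)
```

    Because the gating weights depend on the observation, the fused controller can shift smoothly between specialized skills (e.g. trotting versus fall recovery) as the robot's state changes, which is the core idea behind fusing multiple motor skills in a seamless fashion.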