162 research outputs found

    A Robot Operating System (ROS) based humanoid robot control

    This thesis presents the adaptation techniques required to enhance the capability of a commercially available robot, the Robotis Bioloid Premium Humanoid Robot (BPHR). A BeagleBone Black (BBB), a multifunctional board, serves as the decision-making and implementing (intelligence-providing) component in this research. The Robot Operating System (ROS) and its libraries, together with Python scripts and their libraries, were developed for and installed on the BBB. This fortified BBB is then transplanted into the structure of the Robotis Bioloid humanoid robot after the latter's original decision-making and implementing component (controller) is removed. The study thus revitalizes the Bioloid humanoid robot, converting it into a humanoid robot whose features can be extended through ROS. This is a first-of-its-kind approach in which ROS serves as the development framework in conjunction with the BBB as the main controller, and Python-based software integrates the robotic functions. A full ROS computation graph is developed, along with a high-level Application Programming Interface (API) usable by software that consumes ROS services. In this revised two-legged humanoid robot, a USB2Dynamixel connector operates the Dynamixel AX-12A actuators through the Wi-Fi interface of the fortified BBB. An accelerometer supports balancing of the robot and periodically reports data to the BBB, and an infrared (IR) sensor detects obstacles. The dynamic model actuates the motors mounted on the robot's legs, producing a swing-stance period of the legs for stable forward movement. The maximum walking speed of the robot is 0.5 feet/second; beyond this limit the robot becomes unstable. The maximum lean angle, governed by feedback from the accelerometer, is 20 degrees; if the robot tilts beyond this threshold, it returns to its standstill position and stops further movement. While the robot moves forward, the IR sensor scans for obstacles, and if one is detected within 35 cm the robot stops. The novelties of this work are the implementation of ROS on top of the BBB (replacing the CM530 controller with the BBB) and the use of feedback from the accelerometer and IR sensor to control the two-legged robot's movement.
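
    The safety behaviour described above (stop on a lean beyond 20 degrees or an obstacle within 35 cm) maps naturally onto a small ROS node. The sketch below, in Python with rospy, is a minimal illustration only: the topic names /tilt_deg and /ir_range_cm and the use of plain Float32 messages are assumptions, not the thesis's actual interfaces.

        # Minimal sketch of the stop logic, assuming hypothetical topics
        # /tilt_deg and /ir_range_cm publishing std_msgs/Float32.
        import rospy
        from std_msgs.msg import Float32
        from geometry_msgs.msg import Twist

        LEAN_LIMIT_DEG = 20.0     # accelerometer tilt threshold from the abstract
        OBSTACLE_LIMIT_CM = 35.0  # IR stop distance from the abstract

        class SafetyStop:
            def __init__(self):
                self.cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
                rospy.Subscriber('/tilt_deg', Float32, self.on_tilt)      # hypothetical topic
                rospy.Subscriber('/ir_range_cm', Float32, self.on_range)  # hypothetical topic

            def stop(self):
                self.cmd_pub.publish(Twist())  # all-zero velocities: standstill

            def on_tilt(self, msg):
                if abs(msg.data) > LEAN_LIMIT_DEG:
                    self.stop()

            def on_range(self, msg):
                if msg.data < OBSTACLE_LIMIT_CM:
                    self.stop()

        if __name__ == '__main__':
            rospy.init_node('safety_stop')
            SafetyStop()
            rospy.spin()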

    Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
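
    Combining motions on different body parts typically amounts to blending per-joint rotations with per-part weights. The sketch below is a simplified illustration under assumed conventions (poses as dicts from joint names to Euler angles, linear blending); production systems usually blend quaternions with slerp instead.

        # Simplified sketch: take upper-body motion from one clip and
        # lower-body motion from another. Joint names and Euler-angle
        # poses are illustrative assumptions.
        UPPER_BODY = {'spine', 'head', 'l_shoulder', 'r_shoulder'}

        def combine(pose_a, pose_b, upper_weight=1.0):
            """Blend pose_b into pose_a on upper-body joints only."""
            out = {}
            for joint, rot_a in pose_a.items():
                rot_b = pose_b[joint]
                w = upper_weight if joint in UPPER_BODY else 0.0
                out[joint] = tuple((1 - w) * a + w * b for a, b in zip(rot_a, rot_b))
            return out

        # e.g. a walking lower body combined with a waving upper body:
        walk = {'spine': (0, 0, 0), 'head': (0, 0, 0), 'l_shoulder': (0, 0, 10),
                'r_shoulder': (0, 0, -10), 'l_hip': (20, 0, 0), 'r_hip': (-20, 0, 0)}
        wave = {'spine': (0, 5, 0), 'head': (0, 10, 0), 'l_shoulder': (0, 0, 10),
                'r_shoulder': (90, 0, 40), 'l_hip': (0, 0, 0), 'r_hip': (0, 0, 0)}
        print(combine(walk, wave))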

    C·ASE: Learning Conditional Adversarial Skill Embeddings for Physics-based Characters

    We present C·ASE, an efficient and effective framework that learns conditional Adversarial Skill Embeddings for physics-based characters. Our physically simulated character can learn a diverse repertoire of skills while providing controllability in the form of direct manipulation of the skills to be performed. C·ASE divides the heterogeneous skill motions into distinct subsets containing homogeneous samples for training a low-level conditional model to learn the conditional behavior distribution. The skill-conditioned imitation learning naturally offers explicit control over the character's skills after training. The training course incorporates focal skill sampling, skeletal residual forces, and element-wise feature masking to, respectively, balance diverse skills of varying complexities, mitigate dynamics mismatch to master agile motions, and capture more general behavior characteristics. Once trained, the conditional model can produce highly diverse and realistic skills, outperforming state-of-the-art models, and can be repurposed in various downstream tasks. In particular, the explicit skill control handle allows a high-level policy or user to direct the character with desired skill specifications, which we demonstrate is advantageous for interactive character animation.
    Comment: SIGGRAPH Asia 2023
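
    Of the three training components, focal skill sampling is the most self-contained idea: skills the policy currently imitates poorly are sampled more often, in the spirit of focal loss. The sketch below only illustrates that idea; the exponent gamma and the per-skill error estimates are assumptions, not the paper's exact formulation.

        # Illustrative sketch of focal-style skill sampling: oversample
        # skill categories with high current imitation error.
        import random

        def focal_skill_probs(skill_errors, gamma=2.0):
            # Weight each skill by error**gamma, then normalize.
            weights = {s: err ** gamma for s, err in skill_errors.items()}
            total = sum(weights.values())
            return {s: w / total for s, w in weights.items()}

        errors = {'walk': 0.1, 'run': 0.3, 'backflip': 0.9}  # hypothetical tracking errors
        probs = focal_skill_probs(errors)
        skill = random.choices(list(probs), weights=list(probs.values()))[0]
        print(probs, '-> sampled:', skill)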

    Learning dynamic motor skills for terrestrial locomotion

    The use of Deep Reinforcement Learning (DRL) has received significantly increased attention from researchers within the robotics field following the success of AlphaGo, which demonstrated the superhuman capability of deep reinforcement learning algorithms to solve complex tasks by beating professional Go players. Since then, an increasing number of researchers have investigated the potential of using DRL to solve complex, high-dimensional robotic tasks, such as legged locomotion, arm manipulation, and grasping, which are difficult to solve using conventional optimization approaches. Understanding and recreating the various modes of terrestrial locomotion has been of long-standing interest to roboticists. A large variety of applications, such as rescue missions, disaster response, and science expeditions, strongly demand mobility and versatility in legged locomotion to enable task completion. In order to create useful physical robots, it is necessary to design controllers that synthesize the complex locomotion behaviours observed in humans and other animals. In the past, legged locomotion was mainly achieved via analytical engineering approaches. However, conventional analytical approaches have their limitations, as they require relatively large amounts of human effort and knowledge. Machine learning approaches, such as DRL, require less human effort. The project conducted for this thesis explores the feasibility of using DRL to acquire control policies comparable to, or better than, those acquired through analytical approaches while requiring less human effort. In this doctoral thesis, we developed a Multi-Expert Learning Architecture (MELA) that uses DRL to learn multi-skill control policies capable of synthesizing a diverse set of dynamic locomotion behaviours for legged robots. We first proposed a novel DRL framework for the locomotion of humanoid robots. The proposed learning framework is capable of acquiring robust and dynamic motor skills for humanoids, including balancing, walking, and standing-up (fall recovery). We subsequently improved upon the learning framework and designed a novel multi-expert learning architecture capable of fusing multiple motor skills in a seamless fashion, and ultimately deployed this framework on a real quadrupedal robot. The successful deployment of learned control policies on a real quadrupedal robot demonstrates the feasibility of using an Artificial Intelligence (AI) based approach for real robot motion control.
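
    The core of a multi-expert architecture is a gating network that blends several expert policies into one action. The sketch below illustrates that structure with randomly initialized linear experts; note that MELA fuses the experts' network parameters rather than their outputs, so blending outputs as done here is a further simplification, and all dimensions are assumptions.

        # Illustrative multi-expert blend: a softmax gate weighs expert
        # policies and the action is their weighted combination.
        import numpy as np

        rng = np.random.default_rng(0)
        OBS_DIM, ACT_DIM, N_EXPERTS = 12, 8, 4   # assumed sizes

        experts = [rng.standard_normal((ACT_DIM, OBS_DIM)) * 0.1 for _ in range(N_EXPERTS)]
        gate_w = rng.standard_normal((N_EXPERTS, OBS_DIM)) * 0.1

        def act(obs):
            logits = gate_w @ obs
            gate = np.exp(logits - logits.max())
            gate /= gate.sum()                       # softmax gating weights
            actions = np.stack([W @ obs for W in experts])
            return gate @ actions                    # blended action

        print(act(rng.standard_normal(OBS_DIM)))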