
    Scaled Autonomy for Networked Humanoids

    Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of the humanoid morphology and the effectiveness of human-robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human-robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and the task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and with planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework. The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark for autonomous agents trained with a human supervisor: the kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in this work have been tested over the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment. Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high-dimensional robots. This represents another step on the path to deploying humanoids in the real world, based on low-dimensional motion abstractions and proven performance in real-world tasks like RoboCup and the DRC.
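    As a minimal sketch of the shared-control idea described in this abstract, the snippet below blends an operator velocity command with an autonomous stabilizing command according to a scalar autonomy level; the latency-based autonomy policy, gains, and command format are illustrative assumptions, not the thesis's actual interface.

```python
import numpy as np

def blend_commands(human_cmd, auto_cmd, autonomy):
    """Blend operator and autonomous velocity commands.

    autonomy = 0.0 -> pure teleoperation, 1.0 -> fully autonomous.
    All quantities here are illustrative placeholders.
    """
    autonomy = np.clip(autonomy, 0.0, 1.0)
    return (1.0 - autonomy) * np.asarray(human_cmd) + autonomy * np.asarray(auto_cmd)

def autonomy_from_latency(latency_s, low=0.1, high=1.0):
    """Assumed policy: give the robot more authority as network latency grows."""
    return np.clip((latency_s - low) / (high - low), 0.0, 1.0)

cmd = blend_commands(human_cmd=[0.3, 0.0, 0.1],   # vx, vy, yaw rate from operator
                     auto_cmd=[0.2, 0.0, 0.0],    # stabilizing walk command
                     autonomy=autonomy_from_latency(0.4))
print(cmd)
```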

    Current sensing feedback for humanoid stability

    For humanoid robots to function in changing environments, they must be able to maintain balance much as human beings do. At present, humanoids recover from pushes by using either the ankles or the hips with a rigid body. This method has been proven to work, but it causes excessive strain on the joints of the robot and does not exploit the capabilities of a humanlike body. The focus of this paper is to enable advanced dynamic balancing through torque classification and balance-improving positional changes. For the robot to balance dynamically, external torques must be determined accurately. The method proposed in this paper uses current sensing feedback at the humanoid's power source to classify external torques. By understanding the current draw of each joint, an external torque can be modeled. Once modeled, the external torque can be nullified with balancing techniques. Current sensing has the advantage of adding detailed feedback while requiring only small adjustments to the robot, and it adds minimal sensors, cost, and weight. The current sensing hardware sits between the power supply and the drive motors, and thus can be implemented without altering the robot. After an external torque has been modeled, the robot adopts balancing positions to reduce the instability. These specialized positions increase the robot's balance while reducing the workload of each joint. The balancing positions exploit the humanlike body of the robot and the torque from each of the leg servos. The best balancing positions were generated with a genetic algorithm and simulated in Webots, which provided an accurate physical model and physics engine. The genetic algorithm reduced the workload of searching the workspace of a robot with ten degrees of freedom below the waist. The current sensing theory was experimentally tested on the TigerBot, a humanoid produced by the Rochester Institute of Technology (RIT). The TigerBot has twenty-three degrees of freedom that fully simulate human motion. The robot stands thirty-one inches tall and weighs close to nine pounds. Each leg has six degrees of freedom, which fully mimics the human leg. The robot was awarded first place in the 2012 IEEE design competition for innovation in New York.
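    A rough sketch of the current-to-torque idea: assuming motor torque is approximately proportional to current, the torque not explained by an expected (e.g., gravity feedforward) model is attributed to an external push. The torque constant, threshold, and expected-torque value below are placeholder assumptions, not the paper's calibrated values.

```python
import numpy as np

K_T = 0.85          # Nm/A, assumed torque constant of a leg servo
THRESHOLD = 0.3     # Nm, assumed detection threshold

def external_torque(current_a, expected_torque_nm):
    """Estimate external torque on one joint: tau_ext = K_t * I - tau_expected."""
    measured = K_T * np.asarray(current_a)
    return measured - np.asarray(expected_torque_nm)

def classify_push(tau_ext):
    """Classify the disturbance direction if it exceeds the threshold."""
    if abs(tau_ext) < THRESHOLD:
        return "no push"
    return "push forward" if tau_ext > 0 else "push backward"

# Example: 1.2 A measured draw against an assumed 0.6 Nm gravity feedforward.
print(classify_push(external_torque(current_a=1.2, expected_torque_nm=0.6)))
```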

    RPBP: Rapid-prototyped remote-brain biped with 3D perception

    This paper presents the design of a novel open-hardware mini-bipedal robot, named Rapid-Prototyped Remote-Brain BiPed (RPBP), developed to provide a low-cost and reliable platform for locomotion and perception research. The robot is made of customized 3D-printed parts (ABS plastic) and electronics, commercial ROBOTIS Dynamixel MX-28 actuators, and RGB-D visual and IMU sensing systems. We show that the robot is able to perform some locomotion/visual-odometry tasks and that it is easy to switch between different foot designs, which also provide a novel Center-of-Pressure (CoP) sensing system, so that it can deal with various types of terrain. Moreover, we describe its control and perception system architecture, as well as our open-source software packages that provide sensing and navigation tools for locomotion and visual odometry on the robot. Finally, we briefly discuss the transferability of some prototype research done on the developed mini-biped to half- or full-size humanoid robots, such as COMAN or WALK-MAN.
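    For illustration, a CoP estimate can be computed as the force-weighted average of sensor positions on the foot sole; the four-corner sensor layout and coordinates below are assumed, not the actual RPBP foot geometry.

```python
import numpy as np

# Assumed layout: four force sensors at the corners of one foot sole,
# positions in the foot frame (metres). Not the actual RPBP geometry.
SENSOR_POS = np.array([[ 0.06,  0.03],
                       [ 0.06, -0.03],
                       [-0.06,  0.03],
                       [-0.06, -0.03]])

def center_of_pressure(forces_n):
    """Force-weighted average of sensor positions: CoP = sum(p_i f_i) / sum(f_i)."""
    f = np.asarray(forces_n, dtype=float)
    total = f.sum()
    if total < 1e-6:          # foot not in contact
        return None
    return (SENSOR_POS * f[:, None]).sum(axis=0) / total

print(center_of_pressure([5.0, 4.0, 2.0, 1.0]))  # CoP shifted toward the front edge
```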

    From walking to running: robust and 3D humanoid gait generation via MPC

    Humanoid robots are platforms that can succeed in tasks conceived for humans. From locomotion in unstructured environments to driving cars or working in industrial plants, these robots have a potential that has yet to be realized in systematic everyday applications. Such a perspective, however, is hindered by the need to solve complex engineering problems from both the hardware and the software points of view. In this thesis, we focus on the software side of the problem, and in particular on locomotion control. The operation of a legged humanoid hinges on its capability to realize reliable locomotion. In many settings, perturbations may undermine balance and make the robot fall. Moreover, the context might require complex and dynamic motions; for instance, the robot may need to run or climb stairs to reach a certain location in the shortest time. We present gait generation schemes based on Model Predictive Control (MPC) that tackle both the problem of robustness and that of three-dimensional dynamic motions. The proposed control schemes adopt the typical paradigm of centroidal MPC for reference motion generation, enforcing dynamic balance through the Zero Moment Point condition, plus a whole-body controller that maps the generated trajectories to joint commands. Each of the described predictive controllers also features a so-called stability constraint, preventing the generation of Center of Mass trajectories that diverge with respect to the Zero Moment Point. Robustness is addressed by modeling the humanoid as a Linear Inverted Pendulum and devising two types of strategies. For persistent perturbations, a way to use a disturbance observer and a technique for constraint tightening (to ensure robust constraint satisfaction) are presented. For impulsive pushes, instead, techniques for footstep and timing adaptation are introduced. The underlying approach is to interpret robustness as an MPC feasibility problem, thus aiming at ensuring that the constrained optimization problem solved at each iteration admits a solution in spite of the perturbations. This perspective makes it possible to devise simple solutions to complex problems, favoring a reliable real-time implementation. For three-dimensional locomotion, on the other hand, the humanoid is modeled as a Variable Height Inverted Pendulum. Based on it, a two-stage MPC is introduced, with particular emphasis on the implementation of the stability constraint. The overall result is a gait generation scheme that allows the robot to traverse relatively complex environments with non-flat terrain, with the additional capability of realizing running gaits. The proposed methods are validated in different settings: from conceptual simulations in Matlab, to validations in the DART dynamics environment, up to experimental tests on the NAO and OP3 platforms.
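    A minimal sketch of the kind of LIP-based MPC with a ZMP constraint and a stability (non-divergence) constraint described above, written here with cvxpy for one horizontal axis; the horizon, discretization, ZMP reference, and support-region width are illustrative assumptions rather than the thesis's actual formulation.

```python
import numpy as np
import cvxpy as cp

# Assumed parameters: gravity, CoM height, time step, horizon length.
g, h, dt, N = 9.81, 0.75, 0.1, 20
omega = np.sqrt(g / h)

x = cp.Variable(N + 1)       # CoM position
xd = cp.Variable(N + 1)      # CoM velocity
z = cp.Variable(N)           # ZMP

x0, xd0 = 0.0, 0.1                          # current CoM state (assumed)
z_ref = np.linspace(0.0, 0.4, N)            # assumed footstep/ZMP reference
zmp_half_width = 0.05                       # assumed support-region half width

cost = cp.sum_squares(z - z_ref) + 1e-3 * cp.sum_squares(xd)
constraints = [x[0] == x0, xd[0] == xd0]
for k in range(N):
    # Euler-discretized LIP dynamics: x_dd = omega^2 * (x - z)
    constraints += [x[k + 1] == x[k] + dt * xd[k],
                    xd[k + 1] == xd[k] + dt * omega**2 * (x[k] - z[k]),
                    cp.abs(z[k] - z_ref[k]) <= zmp_half_width]

# Stability constraint: pin the divergent component of motion (x + xd/omega)
# to the final ZMP reference so the CoM trajectory does not diverge.
constraints += [x[N] + xd[N] / omega == z_ref[-1]]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("first ZMP command:", z.value[0])
```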

    Motion Planning and Control for the Locomotion of Humanoid Robot

    This thesis aims to contribute to the motion planning and control problem for the locomotion of humanoid robots. For motion planning, various methods were proposed at different levels of model dependence. First, a model-free approach was proposed which uses linear regression to estimate the relationship between foot placement and walking velocity. Its data-driven nature makes it robust to modeling errors and external disturbances, and as a generic control philosophy it can be applied to various robots with different gaits. To reduce the risk of collecting experimental data for the model-free method, the classic planning method of model predictive control, based on the simplified linear inverted pendulum model, was explored to optimize the CoM trajectory with predefined foot placements, or to optimize both together subject to the ZMP constraint. Together with an elaborately designed re-planning algorithm and a sparse discretization of the trajectories, it is fast enough to run in real time and robust enough to resist external disturbances. Thereafter, nonlinear models are used for motion planning by iteratively performing forward simulation following the multiple shooting method. A walking pattern is predefined to fix most of the robot's degrees of freedom, leaving only one decision variable per motion plane, the foot placement, which can therefore be solved in milliseconds, fast enough for real-time operation. To track the planned trajectories and prevent the robot from falling over, diverse control strategies were proposed according to the type of joint actuation: a CoM stabilizer was designed for robots with position-controlled joints, while quasi-static Cartesian impedance control and optimization-based full-body torque control were implemented for robots with torque-controlled joints. Various scenarios were set up to demonstrate the feasibility and robustness of the proposed approaches, such as walking on uneven terrain, walking with narrow feet or straight legs, push recovery, and so on.
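    A small sketch of the model-free idea, fitting a linear map from velocity to foot placement with least squares; the feature choice and the synthetic training data are assumptions for illustration, not the thesis's experimental data.

```python
import numpy as np

# Generate synthetic "logged walking data": (vx, vy) samples and the
# corresponding foot placement offsets, with a little noise added.
rng = np.random.default_rng(0)
v = rng.uniform(-0.3, 0.3, size=(200, 2))
true_A, true_b = np.array([[0.35, 0.0], [0.0, 0.30]]), np.array([0.0, 0.09])
p = v @ true_A.T + true_b + 0.005 * rng.standard_normal((200, 2))

# Least-squares fit of p ~ [v, 1] @ W, i.e. a linear map plus bias.
X = np.hstack([v, np.ones((200, 1))])
W, *_ = np.linalg.lstsq(X, p, rcond=None)

def foot_placement(vx, vy):
    """Predict the next foot placement offset for a commanded velocity."""
    return np.array([vx, vy, 1.0]) @ W

print(foot_placement(0.2, 0.0))
```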

    Legged Robots for Object Manipulation: A Review

    Legged robots can have a unique role in manipulating objects in dynamic, human-centric, or otherwise inaccessible environments. Although most legged robotics research to date typically focuses on traversing these challenging environments, many legged platform demonstrations have also included "moving an object" as a way of doing tangible work. Legged robots can be designed to manipulate a particular type of object (e.g., a cardboard box, a soccer ball, or a larger piece of furniture), by themselves or collaboratively. The objective of this review is to collect and learn from these examples, to both organize the work done so far in the community and highlight interesting open avenues for future work. This review categorizes existing works into four main manipulation methods: object interactions without grasping, manipulation with walking legs, dedicated non-locomotive arms, and legged teams. Each method has different design and autonomy features, which are illustrated by available examples in the literature. Based on a few simplifying assumptions, we further provide quantitative comparisons for the range of possible relative sizes of the manipulated object with respect to the robot. Taken together, these examples suggest new directions for research in legged robot manipulation, such as multifunctional limbs, terrain modeling, or learning-based control, to support a number of new deployments in challenging indoor/outdoor scenarios in warehouses/construction sites, preserved natural areas, and especially for home robotics.
    Comment: Preprint of the paper submitted to Frontiers in Mechanical Engineering.