Straight-Leg Walking Through Underconstrained Whole-Body Control
We present an approach for achieving a natural, efficient gait on bipedal
robots using straightened legs and toe-off. Our algorithm avoids complex height
planning by allowing a whole-body controller to determine the straightest
possible leg configuration at run-time. The controller solutions are biased
towards a straight leg configuration by projecting leg joint angle objectives
into the null-space of the other quadratic program motion objectives. To allow
the legs to remain straight throughout the gait, toe-off was utilized to
increase the kinematic reachability of the legs. The toe-off motion is achieved
through underconstraining the foot position, allowing it to emerge naturally.
We applied this approach of under-specifying the motion objectives to the Atlas
humanoid, allowing it to walk over a variety of terrain. We present both
experimental and simulation results and discuss performance limitations and
potential improvements.
Comment: Submitted to 2018 IEEE International Conference on Robotics and
Automation
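The null-space biasing described in the abstract above can be sketched with a generic projector. This is an illustrative reconstruction, not the authors' controller: the Jacobian, joint count, and straightening objective below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nullspace_project(J_primary, dq_secondary):
    """Project a secondary joint-space objective into the null space of a
    primary task Jacobian, so it cannot disturb the primary objectives."""
    J_pinv = np.linalg.pinv(J_primary)
    N = np.eye(J_primary.shape[1]) - J_pinv @ J_primary  # null-space projector
    return N @ dq_secondary

# Illustrative 6-joint leg with a 2-DoF primary motion objective; the
# secondary objective pulls the knee joint toward a straight configuration.
J = rng.standard_normal((2, 6))
dq_straighten = np.zeros(6)
dq_straighten[3] = -0.5          # bias the knee toward zero flexion
dq = nullspace_project(J, dq_straighten)
```

Because `N` annihilates the row space of `J`, the straightening bias leaves the primary task velocity unchanged; the paper's quadratic-program formulation generalizes this idea to weighted motion objectives.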
Splicing of concurrent upper-body motion spaces with locomotion
In this paper, we present a motion splicing technique for generating concurrent upper-body actions that occur simultaneously with the evolution of a lower-body locomotion sequence. Specifically, we show that a layered interpolation motion model generates upper-body poses while assigning a different action to each upper-body part. Hence, in the proposed motion splicing approach, it is possible to increase the number of generated motions as well as the number of desired actions that can be performed by virtual characters. We also propose an iterative motion blending solution, inverse pseudo-blending, to maintain a smooth and natural interaction between the virtual character and the virtual environment; inverse pseudo-blending is a constraint-based motion editing technique that blends the motions enclosed in a tetrahedron by minimising the distances between the end-effector positions of the actual and blended motions. To evaluate the proposed solution, we implemented an example-based application for interactive motion splicing based on specified constraints. Finally, the generated results show that the proposed solution can be beneficially applied to interactive applications where concurrent upper-body actions are desired.
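Inverse pseudo-blending, as summarized above, can be read as solving for blend weights over the four example motions at a tetrahedron's vertices so that the blended end-effector position meets a positional constraint. The sketch below is one plausible least-squares reading with made-up vertex positions, not the paper's exact formulation:

```python
import numpy as np

def blend_weights(vertex_effectors, target):
    """Solve for blend weights w (summing to 1) over four example motions
    whose end-effector positions form a tetrahedron, minimizing the distance
    between the blended end-effector position and a target constraint."""
    P = np.asarray(vertex_effectors, dtype=float).T  # 3x4 end-effector positions
    A = np.vstack([P, np.ones((1, 4))])              # last row enforces sum(w) = 1
    b = np.append(np.asarray(target, dtype=float), 1.0)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# Illustrative tetrahedron of end-effector positions and an in-volume target.
verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
w = blend_weights(verts, [0.2, 0.3, 0.1])
blended = w @ np.asarray(verts, dtype=float)
```

For a target inside a non-degenerate tetrahedron the affine system is solved exactly, so the blended end-effector position coincides with the constraint.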
Momentum Control with Hierarchical Inverse Dynamics on a Torque-Controlled Humanoid
Hierarchical inverse dynamics based on cascades of quadratic programs have
been proposed for the control of legged robots. They have important benefits
but to the best of our knowledge have never been implemented on a torque-controlled
humanoid where model inaccuracies, sensor noise, and real-time
computation requirements can be problematic. Using a reformulation of existing
algorithms, we propose a simplification of the problem that allows us to achieve
real-time control. Momentum-based control is integrated in the task hierarchy
and an LQR design approach is used to compute the desired associated closed-loop
behavior and improve performance. Extensive experiments on various balancing
and tracking tasks show very robust performance in the face of unknown
disturbances, even when the humanoid is standing on one foot. Our results
demonstrate that hierarchical inverse dynamics together with momentum control
can be efficiently used for feedback control under real robot conditions.
Comment: 21 pages, 11 figures, 4 tables; in Autonomous Robots (2015)
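The LQR design step mentioned above can be illustrated on a toy model; the 1-D center-of-mass dynamics, cost weights, and time step below are assumptions for illustration, not the paper's values:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of the Riccati equation."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative 1-D CoM model: state = [position, linear momentum], input = force.
dt, m = 0.01, 30.0
A = np.array([[1.0, dt / m],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])
K = dlqr(A, B, Q=np.diag([100.0, 1.0]), R=np.array([[0.01]]))
```

The resulting gain `K` defines the desired closed-loop momentum behavior; the spectral radius of `A - B K` is strictly below one, i.e., the loop is stable.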
Offline and Online Planning and Control Strategies for the Multi-Contact and Biped Locomotion of Humanoid Robots
In the past decades, research on humanoid robots has made progress toward accomplishing exceptionally dynamic and agile motions. Starting from the DARPA Robotics Challenge in 2015, humanoid platforms have been successfully employed to perform more and more challenging tasks, with the eventual aim of assisting or replacing humans in hazardous and stressful working situations. However, the deployment of these complex machines in realistic domestic and working environments still represents a major challenge for robotics. Such environments are characterized by unstructured and cluttered settings with continuously varying conditions due to the dynamic presence of humans and other mobile entities, which can not only compromise the operation of the robotic system but also pose severe risks both to people and to the robot itself due to unexpected interactions and impacts. The ability to react to these unexpected interactions is therefore a paramount requirement for enabling the robot to adapt its behavior to the needs of the task and the characteristics of the environment. Further, the capability to move in a complex and varying environment is an essential skill for a humanoid robot in the execution of any task. Indeed, human instructions may often require the robot to move and reach a desired location, e.g., for bringing an object or for inspecting a specific place in an infrastructure. In this context, a flexible and autonomous walking behavior is an essential skill, the study of which represents one of the main topics of this Thesis, considering disturbances and infeasibilities coming both from the environment and from the dynamic obstacles that populate realistic scenarios.
Locomotion planning strategies are still an open theme in humanoid and legged robot research and can be classified into sample-based and optimization-based planning algorithms. The first class explores the configuration space, finding a feasible path between the robot's start and goal configurations with different logic depending on the algorithm. These methods suffer from a high computational cost that often makes their online implementation difficult, if not impossible; however, compared to their counterparts, they do not need any simplification of the environment or robot to find a solution, and they are probabilistically complete, meaning that a feasible solution will certainly be found if at least one exists. The goal of this thesis is to merge the two approaches into a coupled offline-online planning framework: an offline global trajectory is generated with a sample-based approach to cope with any kind of cluttered and complex environment, and it is locally refined online during execution, using a faster optimization-based algorithm better suited to online implementation. The offline planner's performance is improved by planning in the robot's contact space instead of the whole-body configuration space, which requires an algorithm that maps between the two state spaces.
The framework proposes a methodology to generate whole-body trajectories for the motion of humanoid and legged robots in realistic and dynamically changing environments.
This thesis focuses on the design and testing of each component of this planning framework, whose validation is carried out on the real robotic platforms CENTAURO and COMAN+ in various loco-manipulation task scenarios.
Methods to improve the coping capacities of whole-body controllers for humanoid robots
Current applications for humanoid robotics require autonomy in an environment specifically
adapted to humans, and safe coexistence with people. Whole-body control is
promising in this sense, having shown to successfully achieve locomotion and manipulation
tasks. However, robustness remains an issue: whole-body controllers can still
hardly cope with unexpected disturbances, with changes in working conditions, or
with performing a variety of tasks, without human intervention. In this thesis, we
explore how whole-body control approaches can be designed to address these issues.
Based on whole-body control, contributions have been developed along three main
axes: joint limit avoidance, automatic parameter tuning, and generalizing whole-body
motions achieved by a controller. We first establish a whole-body torque-controller
for the iCub, based on the stack-of-tasks approach and proposed feedback control
laws in SE(3). From there, we develop a novel, theoretically guaranteed joint limit
avoidance technique for torque-control, through a parametrization of the feasible joint
space. This technique allows the robot to remain compliant, while resisting external
perturbations that push joints closer to their limits, as demonstrated with experiments
in simulation and with the real robot. Then, we focus on the issue of automatically
tuning parameters of the controller, in order to improve its behavior across different
situations. We show that our approach for learning task priorities, combining domain
randomization and carefully selected fitness functions, allows the successful transfer of
results between platforms subjected to different working conditions. Following these
results, we then propose a controller which allows for generic, complex whole-body
motions through real-time teleoperation. This approach is notably verified on the robot
to follow generic movements of the teleoperator while in double support, as well as to
follow the teleoperator's upper-body movements while walking with footsteps adapted
from the teleoperator's footsteps. The approaches proposed in this thesis therefore
improve the capability of whole-body controllers to cope with external disturbances,
different working conditions, and generic whole-body motions.
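One common way to realize "a parametrization of the feasible joint space", sketched here as an assumption rather than the thesis's specific, theoretically guaranteed method, is a smooth change of variables that maps an unconstrained input strictly inside the joint limits:

```python
import numpy as np

def to_feasible(u, q_min, q_max):
    """Map an unconstrained variable u to a joint angle strictly inside
    [q_min, q_max] via a smooth tanh reparametrization. Optimizing or
    controlling in u-space then makes joint-limit violation impossible
    by construction, while keeping the mapping differentiable."""
    mid = 0.5 * (q_min + q_max)
    half = 0.5 * (q_max - q_min)
    return mid + half * np.tanh(u)

# Illustrative knee-like limits: even extreme inputs stay inside the range.
u = np.linspace(-10.0, 10.0, 1001)
q = to_feasible(u, q_min=-1.0, q_max=2.0)
```

Because `tanh` saturates smoothly, torques that push a joint toward its limit are absorbed by the parametrization rather than producing a hard constraint, which is compatible with keeping the robot compliant.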
Dynamic whole-body motion generation under rigid contacts and other unilateral constraints
The most widely used technique for generating whole-body motions on a humanoid robot while accounting for various tasks and constraints is inverse kinematics. Based on the task-function approach, this class of methods enables the coordination of robot movements to execute several tasks in parallel and to account for sensor feedback in real time, thanks to its low computation cost.
To some extent, it also enables us to deal with some of the robot constraints (e.g., joint limits or visibility) and manage the quasi-static balance of the robot. In order to fully use the whole range of possible motions, this paper proposes extending the task-function approach to handle the full dynamics of the robot's multibody system, along with any constraint written as an equality or inequality of the state and control variables. The definition of multiple objectives is made possible by ordering them inside a strict hierarchy. Several models of contact with the environment can be implemented in the framework. We propose a reduced formulation of the multiple rigid planar contact that keeps a low computation cost. The efficiency of this approach is illustrated by presenting several multicontact dynamic motions in simulation and on the real HRP-2 robot.
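The strict hierarchy of task functions mentioned above can be sketched as lexicographic least squares, where each level only consumes the redundancy left by the levels above it. Dimensions and tasks here are illustrative, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def lexicographic_ls(J1, e1, J2, e2):
    """Resolve two task functions in a strict hierarchy: task 1 is satisfied
    exactly (when feasible); task 2 only uses the remaining redundancy."""
    J1_pinv = np.linalg.pinv(J1)
    dq = J1_pinv @ e1                                    # level 1: primary task
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1              # null-space projector
    dq = dq + N1 @ np.linalg.pinv(J2 @ N1) @ (e2 - J2 @ dq)  # level 2
    return dq

# Illustrative 7-DoF system with a 3-DoF primary and 2-DoF secondary task.
J1, e1 = rng.standard_normal((3, 7)), rng.standard_normal(3)
J2, e2 = rng.standard_normal((2, 7)), rng.standard_normal(2)
dq = lexicographic_ls(J1, e1, J2, e2)
```

With enough redundancy both tasks are met exactly; when they conflict, the projection guarantees the lower-priority task can never degrade the higher-priority one.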
Motion Control of the Hybrid Wheeled-Legged Quadruped Robot Centauro
Emerging applications will demand robots to deal with a complex environment, which lacks the structure and predictability of the industrial workspace. Complex scenarios will require robot complexity to increase as well, as compared to classical topologies such as fixed-base manipulators, wheeled mobile platforms, tracked vehicles, and their combinations. Legged robots, such as humanoids and quadrupeds, promise to provide platforms flexible enough to handle real-world scenarios; however, the improved flexibility comes at the cost of considerably higher control complexity. As a trade-off, hybrid wheeled-legged robots have been proposed, mitigating control complexity whenever the ground surface is suitable for driving. Following this idea, a new hybrid robot called Centauro has been developed in the Humanoid and Human Centered Mechatronics lab at Istituto Italiano di Tecnologia (IIT). Centauro is a wheeled-legged quadruped with a humanoid bi-manual upper body. Differently from other platforms of similar concept, Centauro employs customized actuation units, which provide high torque outputs, moderately fast motions, and the possibility to control the exerted torque. Moreover, with more than forty motors moving its limbs, Centauro is a highly redundant platform, with the potential to execute many different tasks at the same time. This thesis deals with the design and development of a software architecture, and a control system, tailored to such a robot; both wheeled and legged locomotion strategies have been studied, as well as prioritized whole-body and interaction controllers that exploit the robot's torque-control capabilities and can handle the system's redundancy. A novel software architecture, made of (i) a real-time robotic middleware and (ii) a framework for online, prioritized Cartesian control, forms the basis of the entire work.
Contact-aware Human Motion Forecasting
In this paper, we tackle the task of scene-aware 3D human motion forecasting,
which consists of predicting future human poses given a 3D scene and a past
human motion. A key challenge of this task is to ensure consistency between the
human and the scene, accounting for human-scene interactions. Previous attempts
to do so model such interactions only implicitly, and thus tend to produce
artifacts such as "ghost motion" because of the lack of explicit constraints
between the local poses and the global motion. Here, by contrast, we propose to
explicitly model the human-scene contacts. To this end, we introduce
distance-based contact maps that capture the contact relationships between
every joint and every 3D scene point at each time instant. We then develop a
two-stage pipeline that first predicts the future contact maps from the past
ones and the scene point cloud, and then forecasts the future human poses by
conditioning them on the predicted contact maps. During training, we explicitly
encourage consistency between the global motion and the local poses via a prior
defined using the contact maps and future poses. Our approach outperforms the
state-of-the-art human motion forecasting and human synthesis methods on both
synthetic and real datasets. Our code is available at
https://github.com/wei-mao-2019/ContAwareMotionPred.
Comment: Accepted to NeurIPS 2022
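A distance-based contact map as described above, pairwise joint-to-scene-point distances turned into contact labels, can be sketched as follows. The joint positions, scene points, and threshold are illustrative assumptions; the paper computes such maps at each time instant of the motion:

```python
import numpy as np

def contact_map(joints, scene_points, threshold=0.1):
    """For every joint and every 3D scene point, compute the pairwise
    Euclidean distance and a thresholded binary contact label."""
    d = np.linalg.norm(joints[:, None, :] - scene_points[None, :, :], axis=-1)
    return (d < threshold).astype(float), d

# Illustrative skeleton (foot near the floor, knee above it) and tiny scene.
joints = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.9]])
scene = np.array([[0.0, 0.0, 0.05],
                  [1.0, 1.0, 1.0]])
contacts, dists = contact_map(joints, scene)
```

Here only the foot-floor pair falls under the threshold, so the map flags a single contact; conditioning pose forecasting on such maps is what ties the global motion to the scene.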