A Benchmarking of DCM Based Architectures for Position and Velocity Controlled Walking of Humanoid Robots
This paper contributes towards the development and comparison of
Divergent-Component-of-Motion (DCM) based control architectures for humanoid
robot locomotion. More precisely, we present and compare several DCM based
implementations of a three layer control architecture. From top to bottom,
these three layers are here called: trajectory optimization, simplified model
control, and whole-body QP control. All layers use the DCM concept to generate
references for the layer below. For the simplified model control layer, we
present and compare both instantaneous and Receding Horizon Control
controllers. For the whole-body QP control layer, we present and compare
controllers for position- and velocity-controlled robots. Experiments are
carried out on the one-meter tall iCub humanoid robot. We show which
implementation of the above control architecture allows the robot to achieve a
walking velocity of 0.41 meters per second.
Comment: Submitted to Humanoids201
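The simplified model control layer above builds on the DCM of the Linear Inverted Pendulum. A minimal sketch of an instantaneous DCM controller is given below; the CoM height, gain, and simulation settings are illustrative assumptions, not the values used on iCub.

```python
import numpy as np

# Illustrative sketch of the Divergent Component of Motion (DCM) used by the
# simplified-model control layer. All parameters are assumptions.

G = 9.81
Z_COM = 0.53                  # assumed CoM height for a one-meter robot [m]
OMEGA = np.sqrt(G / Z_COM)    # LIP natural frequency

def dcm(com_pos, com_vel, omega=OMEGA):
    """DCM definition: xi = x + x_dot / omega."""
    return com_pos + com_vel / omega

def instantaneous_zmp(xi, xi_des, xi_dot_des, k_xi=1.5, omega=OMEGA):
    """Instantaneous DCM controller: pick the ZMP so the DCM error decays.

    From xi_dot = omega * (xi - zmp), imposing
    xi_dot = xi_dot_des - k_xi * (xi - xi_des) yields the ZMP below.
    """
    return xi - (xi_dot_des - k_xi * (xi - xi_des)) / omega

# Closed-loop check: with a constant DCM reference the error should shrink.
dt, xi_t, xi_ref = 0.01, np.array([0.10, 0.0]), np.array([0.0, 0.0])
for _ in range(300):
    zmp = instantaneous_zmp(xi_t, xi_ref, np.zeros(2))
    xi_t = xi_t + dt * OMEGA * (xi_t - zmp)   # unstable DCM dynamics
print(np.linalg.norm(xi_t - xi_ref) < 1e-2)   # True: error has decayed
```

The key property exploited here is that the DCM isolates the unstable part of the LIP dynamics, so stabilizing the first-order DCM equation is enough to stabilize the pendulum.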
Learning-based methods for planning and control of humanoid robots
Humans and robots are increasingly likely to coexist. The anthropomorphic nature of humanoid robots facilitates physical human-robot interaction and makes social human-robot interaction more natural. Moreover, it makes humanoids ideal candidates for many applications related to tasks and environments designed for humans.
No matter the application, a ubiquitous requirement for the humanoid is to possess proper locomotion skills. Despite long-lasting research, humanoid locomotion is still far from being a trivial task. A common approach to humanoid locomotion consists in decomposing its complexity by means of a model-based hierarchical control architecture. To cope with computational constraints, simplified models of the humanoid are employed in some of the architectural layers. At the same time, the redundancy of the humanoid with respect to the locomotion task, as well as the closeness of such a task to human locomotion, suggests a data-driven approach to learn it directly from experience.
This thesis investigates the application of learning-based techniques to planning and control of humanoid locomotion. In particular, both deep reinforcement learning and deep supervised learning are considered to address humanoid locomotion tasks in a crescendo of complexity.
First, we employ deep reinforcement learning to study the spontaneous emergence of balancing and push recovery strategies for the humanoid, which represent essential prerequisites for more complex locomotion tasks.
Then, by making use of motion capture data collected from human subjects, we employ deep supervised learning to shape the robot walking trajectories towards an improved human-likeness.
The proposed approaches are validated on real and simulated humanoid robots, specifically on two versions of the iCub humanoid: iCub v2.7 and iCub v3.
Predictive Whole-Body Control of Humanoid Robot Locomotion
Humanoid robots are machines built with an anthropomorphic shape. Despite decades of research into the subject, it is still challenging to tackle the robot locomotion problem from an algorithmic point of view. For example, these machines cannot achieve a constant forward body movement without exploiting contacts with the environment. The reactive forces resulting from the contacts are subject to strong limitations, complicating the design of control laws. As a consequence, generating humanoid motions requires either fully exploiting the mathematical model of the robot in contact with the environment or resorting to approximations of it.
This thesis investigates predictive and optimal control techniques for tackling humanoid robot motion tasks. These techniques generate control inputs from the system model and from objectives, often expressed as a cost function to minimize.
In particular, this thesis tackles several aspects of the humanoid robot locomotion problem in a crescendo of complexity. First, we consider the single-step push recovery problem, namely, maintaining the upright posture by taking a single step after a strong external disturbance. Second, we generate and stabilize walking motions. Third, we adopt predictive techniques to perform more dynamic motions, such as large step-ups.
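The predictive techniques mentioned above can be illustrated with a minimal receding-horizon controller for the 1-D Linear Inverted Pendulum DCM. The horizon length, weights, and discretization below are illustrative assumptions, not the thesis's formulation, and constraints such as keeping the ZMP inside the support polygon are omitted for brevity.

```python
import numpy as np

# Toy receding-horizon (predictive) control of the 1-D DCM dynamics
# xi_{k+1} = A*xi_k + B*u_k, where u_k is the ZMP. All parameters assumed.

G, Z_COM, DT, N = 9.81, 0.53, 0.05, 40
OMEGA = (G / Z_COM) ** 0.5
A = 1.0 + DT * OMEGA          # forward-Euler discretization (unstable: A > 1)
B = -DT * OMEGA

def horizon_zmp(xi0, xi_ref, w_track=1.0, w_u=1e-3):
    """Solve the unconstrained finite-horizon problem in batch least-squares
    form, then return only the first ZMP (the receding-horizon principle)."""
    # Stack the predictions xi_k = A^k xi0 + sum_j A^(k-1-j) B u_j.
    Phi = np.array([A ** k for k in range(1, N + 1)])
    Gam = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            Gam[k, j] = A ** (k - j) * B
    # min  w_track*||Phi*xi0 + Gam*u - xi_ref||^2 + w_u*||u||^2
    H = w_track * Gam.T @ Gam + w_u * np.eye(N)
    g = w_track * Gam.T @ (xi_ref - Phi * xi0)
    return np.linalg.solve(H, g)[0]

# Closed loop: apply the first ZMP, shift the horizon, repeat.
xi, target = 0.10, 0.0
for _ in range(200):
    xi = A * xi + B * horizon_zmp(xi, target)
print(abs(xi - target) < 1e-2)   # True: the unstable DCM is stabilized
```

Re-solving at every step is what distinguishes this from an open-loop trajectory optimization: feedback enters through the updated initial state, which is how such schemes reject disturbances like pushes.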
The above-mentioned applications make use of different simplifications or assumptions to keep the corresponding motion tasks tractable. Moreover, they consider the foot placements first and only afterward how to maintain balance. We attempt to remove all these simplifications. We model the robot in contact with the environment explicitly, comparing different methods. In addition, we obtain whole-body walking trajectories automatically by specifying only the desired motion velocity and a moving reference on the ground. We exploit the contacts with the walking surface to achieve these objectives while keeping the robot balanced.
Experiments are performed on real and simulated humanoid robots, such as the Atlas and iCub humanoid robots.
Learning to Walk and Fly with Adversarial Motion Priors
Robot multimodal locomotion encompasses the ability to transition between
walking and flying, representing a significant challenge in robotics. This work
presents an approach that enables automatic smooth transitions between legged
and aerial locomotion. Leveraging the concept of Adversarial Motion Priors, our
method allows the robot to imitate motion datasets and accomplish the desired
task without the need for complex reward functions. The robot learns walking
patterns from human-like gaits and aerial locomotion patterns from motions
obtained using trajectory optimization. Through this process, the robot adapts
the locomotion scheme based on environmental feedback using reinforcement
learning, with the spontaneous emergence of mode-switching behavior. The
results highlight the potential for achieving multimodal locomotion in aerial
humanoid robotics through automatic control of walking and flying modes, paving
the way for applications in diverse domains such as search and rescue,
surveillance, and exploration missions. This research contributes to advancing
the capabilities of aerial humanoid robots in terms of versatile locomotion in
various environments.
Comment: 6 pages, 8 figures, submitted to ICRA 202
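The Adversarial Motion Priors idea can be sketched as follows: a discriminator scores state transitions against a reference motion dataset, and its output becomes a style reward that replaces hand-crafted imitation terms. The linear least-squares discriminator and the toy datasets below are illustrative assumptions; the published formulation trains a neural discriminator adversarially.

```python
import numpy as np

# Toy sketch of an AMP-style reward. "expert" stands in for reference
# transitions (e.g. mocap or trajectory-optimization data), "policy" for
# transitions generated by the current policy. Shapes and values are assumed.

rng = np.random.default_rng(0)
expert = rng.normal(0.0, 0.1, size=(256, 4))
policy = rng.normal(0.5, 0.3, size=(256, 4))

# Least-squares "discriminator": regress +1 on expert data, -1 on policy data.
X = np.vstack([expert, policy])
y = np.hstack([np.ones(len(expert)), -np.ones(len(policy))])
Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias feature
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)

def style_reward(transition):
    """Map the discriminator score d to a bounded style reward:
    r = max(0, 1 - 0.25 * (d - 1)^2)."""
    d = np.append(transition, 1.0) @ w
    return max(0.0, 1.0 - 0.25 * (d - 1.0) ** 2)

# Expert-like transitions should earn a higher style reward on average.
r_exp = np.mean([style_reward(t) for t in expert])
r_pol = np.mean([style_reward(t) for t in policy])
print(r_exp > r_pol)   # True
```

In training, this style reward is typically added to a sparse task reward, so the policy can discover when to walk and when to fly while staying close to the reference motion distributions.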
Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control
As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains such as elderly care, collaborative manufacturing, and collaborative manipulation are in pressing demand, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will consist of a blended workforce with a symbiotic relationship between human and robot partners working collaboratively. This thesis attempts to address some of the research challenges in enabling human-robot collaboration. In particular, the holistic perception of a human partner, continuously communicating their intentions and needs to the robot in real time, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. On the other hand, leveraging assistance from a human partner can lead to improved human-robot collaboration. In this direction, we attempt to methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.
Advanced Mobile Robotics: Volume 3
Mobile robotics is a challenging field with great potential. It covers disciplines including electrical engineering, mechanical engineering, computer science, cognitive science, and social science. It is essential to the design of automated robots, in combination with artificial intelligence, vision, and sensor technologies. Mobile robots are widely used for surveillance, guidance, transportation, and entertainment tasks, as well as medical applications. This Special Issue concentrates on recent developments concerning mobile robots and the research surrounding them, to advance the study of the fundamental problems of such robots. Various multidisciplinary approaches and integrative contributions, including navigation, learning and adaptation, networked systems, biologically inspired robots, and cognitive methods, are welcome in this Special Issue, from both a research and an application perspective.