Legged Robots for Object Manipulation: A Review
Legged robots can have a unique role in manipulating objects in dynamic,
human-centric, or otherwise inaccessible environments. Although legged
robotics research to date has focused mainly on traversing these challenging
environments, many legged platform demonstrations have also included "moving an
object" as a way of doing tangible work. Legged robots can be designed to
manipulate a particular type of object (e.g., a cardboard box, a soccer ball,
or a larger piece of furniture), by themselves or collaboratively. The
objective of this review is to collect and learn from these examples, to both
organize the work done so far in the community and highlight interesting open
avenues for future work. This review categorizes existing works into four main
manipulation methods: object interactions without grasping, manipulation with
walking legs, dedicated non-locomotive arms, and legged teams. Each method has
different design and autonomy features, which are illustrated by available
examples in the literature. Based on a few simplifying assumptions, we further
provide quantitative comparisons for the range of possible relative sizes of
the manipulated object with respect to the robot. Taken together, these
examples suggest new directions for research in legged robot manipulation, such
as multifunctional limbs, terrain modeling, or learning-based control, to
support a number of new deployments in challenging indoor/outdoor scenarios in
warehouses/construction sites, preserved natural areas, and especially for home
robotics.
Comment: Preprint of the paper submitted to Frontiers in Mechanical
Engineering.
Hierarchical Reinforcement Learning for Precise Soccer Shooting Skills using a Quadrupedal Robot
We address the problem of enabling quadrupedal robots to perform precise
shooting skills in the real world using reinforcement learning. Developing
algorithms to enable a legged robot to shoot a soccer ball to a given target is
a challenging problem that combines robot motion control and planning into one
task. To solve this problem, we need to account for the dynamics limitations
and motion stability of a dynamic legged robot during control. Moreover, we
need motion planning to shoot a hard-to-model deformable ball, rolling on
ground with uncertain friction, to a desired location. In this
paper, we propose a hierarchical framework that leverages deep reinforcement
learning to train (a) a robust motion control policy that can track arbitrary
motions and (b) a planning policy to decide the desired kicking motion to shoot
a soccer ball to a target. We deploy the proposed framework on an A1
quadrupedal robot and enable it to accurately shoot the ball to random targets
in the real world.
Comment: Accepted to 2022 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS 2022).
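The two-level structure described in this abstract (a planning policy that chooses a kicking motion for a target, and a motion control policy that tracks it) can be sketched as follows. This is a minimal illustrative sketch: both levels would be neural networks trained with deep RL, and every function name, mapping, and constant below is an assumption, not the paper's method.

```python
import math

def planning_policy(target_xy):
    """Map a shooting target (x, y) to desired kick parameters (assumed mapping)."""
    distance = math.hypot(*target_xy)
    heading = math.atan2(target_xy[1], target_xy[0])
    # Assumed monotone mapping from target distance to foot swing speed.
    swing_speed = min(3.0, 0.8 * distance)
    return {"heading": heading, "swing_speed": swing_speed}

def motion_control_policy(kick_params, joint_state):
    """Stand-in for the low-level tracking policy: scale a nominal swing profile."""
    return [kick_params["swing_speed"] * s for s in joint_state]

# The planner runs once per kick; the tracking policy runs at control rate.
params = planning_policy((2.0, 0.0))
commands = motion_control_policy(params, [0.1, -0.2, 0.3])
```

The key interface choice is that the planner outputs a compact motion parameterization rather than joint commands, so the tracking policy can be trained once on arbitrary motions and reused.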
Words into Action: Learning Diverse Humanoid Robot Behaviors using Language Guided Iterative Motion Refinement
Humanoid robots are well suited for human habitats due to their morphological
similarity, but developing controllers for them is a challenging task that
involves multiple sub-problems, such as control, planning and perception. In
this paper, we introduce a method to simplify controller design by enabling
users to train and fine-tune robot control policies using natural language
commands. We first learn a neural network policy that generates behaviors given
a natural language command, such as "walk forward", by combining Large Language
Models (LLMs), motion retargeting, and motion imitation. Based on the
synthesized motion, we iteratively fine-tune by updating the text prompt and
querying LLMs to find the best checkpoint associated with the closest motion in
history. We validate our approach using a simulated Digit humanoid robot and
demonstrate learning of diverse motions, such as walking, hopping, and kicking,
without the burden of complex reward engineering. In addition, we show that our
iterative refinement enables us to learn 3x faster than a naive
formulation that learns from scratch.
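The iterative refinement loop described above (update the text prompt, query the LLM, and keep the checkpoint whose synthesized motion is closest to the desired one) can be sketched roughly as below. The LLM call, the motion-distance metric, and the per-round training are all placeholder assumptions standing in for the paper's components.

```python
def query_llm_for_prompt(prompt, feedback):
    # Placeholder for an LLM call that rewrites the prompt using feedback.
    return f"{prompt} ({feedback})"

def motion_distance(checkpoint):
    # Placeholder for the distance between synthesized and desired motion;
    # here we pretend later checkpoints track the motion better.
    return 1.0 / (1 + checkpoint)

def refine(prompt, rounds=3):
    history = []  # (distance, checkpoint, prompt) for every round
    for ckpt in range(rounds):
        # Train/evaluate a policy checkpoint against the current prompt.
        history.append((motion_distance(ckpt), ckpt, prompt))
        # Ask the LLM for a refined prompt before the next round.
        prompt = query_llm_for_prompt(prompt, f"round {ckpt}: reduce error")
    # Return the checkpoint in history with the closest motion.
    return min(history)

best_dist, best_ckpt, best_prompt = refine("walk forward")
```

Keeping the whole history and selecting the best checkpoint afterwards is what lets the loop recover if a refined prompt makes the motion worse.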
Online Visual Robot Tracking and Identification using Deep LSTM Networks
Collaborative robots working on a common task are necessary for many
applications. One of the challenges for achieving collaboration in a team of
robots is mutual tracking and identification. We present a novel pipeline for
online vision-based detection, tracking and identification of robots with a
known and identical appearance. Our method runs in real-time on the limited
hardware of the observer robot. Unlike previous works addressing robot tracking
and identification, we use a data-driven approach based on recurrent neural
networks to learn relations between sequential inputs and outputs. We formulate
the data association problem as multiple classification problems. A deep LSTM
network was trained on a simulated dataset and fine-tuned on a small set of real
data. Experiments on two challenging datasets, one synthetic and one real,
which include long-term occlusions, show promising results.
Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS), Vancouver, Canada, 2017. IROS RoboCup Best Paper Award.
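The abstract's framing, data association posed as one classification problem per detection over sequential inputs, can be illustrated as follows. The recurrent scorer here is a hand-rolled stand-in (an exponential moving average of position similarity over a track's history), not the paper's trained deep LSTM; all names and numbers are illustrative assumptions.

```python
def recurrent_score(track_history, detection, alpha=0.5):
    """Stand-in for an LSTM: smoothed similarity of a detection to a track."""
    score = 0.0
    for past in track_history:
        sim = 1.0 / (1.0 + abs(past - detection))  # closeness in 1-D position
        score = alpha * score + (1 - alpha) * sim  # recurrent accumulation
    return score

def associate(tracks, detections):
    """Classify each detection into the known identity with the highest score."""
    assignment = {}
    for i, det in enumerate(detections):
        scores = {rid: recurrent_score(hist, det) for rid, hist in tracks.items()}
        assignment[i] = max(scores, key=scores.get)
    return assignment

# Two identical-looking robots distinguished only by their motion history.
tracks = {"robot_a": [1.0, 1.1], "robot_b": [5.0, 5.2]}
assignment = associate(tracks, [1.05, 5.1])
```

Treating each detection as its own classification problem avoids a combinatorial joint assignment, which is what makes the approach feasible on the observer robot's limited hardware.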
Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
We investigate whether Deep Reinforcement Learning (Deep RL) is able to
synthesize sophisticated and safe movement skills for a low-cost, miniature
humanoid robot that can be composed into complex behavioral strategies in
dynamic environments. We used Deep RL to train a humanoid robot with 20
actuated joints to play a simplified one-versus-one (1v1) soccer game. We first
trained individual skills in isolation and then composed those skills
end-to-end in a self-play setting. The resulting policy exhibits robust and
dynamic movement skills such as rapid fall recovery, walking, turning, kicking
and more; and transitions between them in a smooth, stable, and efficient
manner - well beyond what is intuitively expected from the robot. The agents
also developed a basic strategic understanding of the game, and learned, for
instance, to anticipate ball movements and to block opponent shots. The full
range of behaviors emerged from a small set of simple rewards. Our agents were
trained in simulation and transferred to real robots zero-shot. We found that a
combination of sufficiently high-frequency control, targeted dynamics
randomization, and perturbations during training in simulation enabled
good-quality transfer, despite significant unmodeled effects and variations
across robot instances. Although the robots are inherently fragile, minor
hardware modifications together with basic regularization of the behavior
during training led the robots to learn safe and effective movements while
still performing in a dynamic and agile way. Indeed, even though the agents
were optimized for scoring, in experiments they walked 156% faster, took 63%
less time to get up, and kicked 24% faster than a scripted baseline, while
efficiently combining the skills to achieve the longer term objectives.
Examples of the emergent behaviors and full 1v1 matches are available on the
supplementary website.
Comment: Project website: https://sites.google.com/view/op3-socce
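The targeted dynamics randomization that the abstract credits for zero-shot transfer can be sketched as resampling physics parameters at every training episode, so the policy never overfits one simulator instance. The parameter names and ranges below are illustrative assumptions for a small humanoid, not the paper's values.

```python
import random

# Assumed multiplicative scales on nominal values (latency in seconds).
RANDOMIZATION_RANGES = {
    "joint_friction": (0.8, 1.2),
    "link_mass": (0.9, 1.1),
    "motor_strength": (0.85, 1.15),
    "control_latency_s": (0.0, 0.02),
}

def sample_dynamics(rng):
    """Draw one randomized physics configuration for a training episode."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

rng = random.Random(0)
episode_params = sample_dynamics(rng)  # applied to the simulator each episode
```

Keeping the ranges targeted (centered on measured nominal values) rather than maximally wide is what the abstract suggests preserved good-quality transfer without making the task unlearnable.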
Development of a Locomotion and Balancing Strategy for Humanoid Robots
The locomotion ability and high mobility are the most distinctive features of humanoid robots. Due to the non-linear dynamics of walking, developing and controlling the locomotion of humanoid robots is a challenging task. In this thesis, we study and develop a walking engine for the humanoid robot NAO, the official robotic platform used in the RoboCup SPL. Aldebaran Robotics, the manufacturer of NAO, provides a walking module with notable disadvantages: it is a black box that does not allow control of the gait, and the robot walks with bent knees. The latter makes the gait unnatural and energy-inefficient and exerts large torques on the knee joints. Thus, creating a walking engine that produces a natural, high-quality gait is essential for humanoid robots in general and is a factor in succeeding in the RoboCup competition.
Humanoid robots are required to walk fast to be practical for various everyday tasks. However, their complex structure makes them prone to falling during fast locomotion. At the same time, the robots are expected to work in constantly changing environments alongside humans and other robots, which increases the chance of collisions. Several human-inspired recovery strategies have been studied and adapted to humanoid robots to cope with unexpected and unavoidable perturbations. These strategies include the hip, ankle, and stepping strategies; however, the use of the arms as a recovery strategy has not received as much attention. The arms can be employed in different motions for fall prevention; in particular, the arm rotation strategy can control the angular momentum of the body and help regain balance.

In this master's thesis, I develop a detailed study of the ways in which the arms can enhance the balance recovery of the NAO humanoid robot, both while stationary and during locomotion. I model the robot as a linear inverted pendulum plus a flywheel to account for the angular momentum change at the CoM, and I consider the role of the arms in changing the body's moment of inertia, which helps prevent the robot from falling or decreases the impact of a fall. I propose a control algorithm that integrates the arm rotation strategy with the on-board sensors of the NAO, and I present a simple method to control the amount of recovery obtained from rotating the arms. I also discuss the limitations of the strategy and how it can have a negative impact if misused.

I present simulations evaluating the approach against various disturbance sources; the results show that it keeps the NAO stable under various perturbations. Finally, I adapt the arm rotation strategy to stabilize the ball kick, a common cause of falling in the humanoid soccer RoboCup competitions.
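The linear-inverted-pendulum-plus-flywheel model mentioned in this abstract can be sketched with the standard CoM dynamics x_ddot = (g/z_c)*x - tau/(m*z_c), where tau is the torque the flywheel (the rotating arms) exerts about the CoM. Rotating the arms against the fall injects angular momentum and slows the CoM divergence. The parameter values below are illustrative assumptions at NAO-like scale, not measured ones, and this is not the thesis's controller.

```python
G = 9.81      # gravity, m/s^2
Z_C = 0.26    # assumed constant CoM height, m
MASS = 5.0    # assumed robot mass, kg

def simulate(x0, v0, tau, dt=0.001, steps=300):
    """Euler-integrate the LIP-plus-flywheel CoM dynamics with constant torque tau."""
    x, v = x0, v0
    for _ in range(steps):
        a = (G / Z_C) * x - tau / (MASS * Z_C)  # pendulum term minus flywheel term
        v += a * dt
        x += v * dt
    return x

# Starting 2 cm off-center: without arm torque the CoM diverges;
# a modest stabilizing torque pulls it back toward the support point.
x_passive = simulate(0.02, 0.0, tau=0.0)
x_arms = simulate(0.02, 0.0, tau=1.5)
```

The flywheel term shows the limitation the thesis discusses: the arms can only supply torque over a bounded angle, so the strategy buys recovery time rather than unlimited stabilization, and a mistimed torque of the wrong sign would accelerate the fall.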