252 research outputs found
Learning Complex Motor Skills for Legged Robot Fall Recovery
Falling is inevitable for legged robots in challenging real-world scenarios, where environments are unstructured and situations are unpredictable, such as uneven terrain in the wild. Hence, to recover from falls and achieve all-terrain traversability, it is essential for intelligent robots to possess the complex motor skills required to resume operation. To go beyond the limitations of handcrafted control, we investigated a deep reinforcement learning approach to learning generalized feedback-control policies for fall recovery that are robust to external disturbances. We proposed a design guideline for selecting key states for initialization, including a comparison against random state initialization. The proposed learning-based pipeline is applicable to different robot models and their corner cases, including both small- and large-size bipeds and quadrupeds. Further, we show that the learned fall recovery policies are hardware-feasible and can be implemented on real robots.
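The key-state initialization idea described above can be sketched minimally: episodes start either from a small curated set of representative fall postures or from uniformly random states. All state vectors, names, and joint limits below are invented for illustration and are not taken from the paper or any specific robot.

```python
import random

# Hypothetical key fall states (joint-angle vectors) along a representative
# fall trajectory: on-side, on-back, and prone postures. Purely illustrative.
KEY_STATES = {
    "on_side": [0.8, -1.2, 0.3, 0.0],
    "on_back": [0.0, -1.5, 1.5, 0.2],
    "prone":   [0.1,  1.4, -1.4, 0.0],
}

def sample_initial_state(mode="key", joint_limits=(-1.6, 1.6)):
    """Pick an episode start state: curated key states vs. random states."""
    if mode == "key":
        return list(random.choice(list(KEY_STATES.values())))
    lo, hi = joint_limits
    return [random.uniform(lo, hi) for _ in range(4)]
```

The trade-off this exposes is the one the paper studies: random initialization covers the state space broadly, while key-state initialization concentrates training on postures the robot actually ends up in after a fall.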
Hybrid Architectures for Object Pose and Velocity Tracking at the Intersection of Kalman Filtering and Machine Learning
The study of object perception algorithms is fundamental for the development of robotic platforms capable of planning and executing actions involving objects with high precision, reliability and safety. Indeed, this topic has been vastly explored in both the robotics and computer vision research communities using diverse techniques, ranging from classical Bayesian filtering to more modern Machine Learning techniques, and complementary sensing modalities such as vision and touch. Recently, the ever-growing availability of tools for synthetic data generation has substantially increased the adoption of Deep Learning for both 2D tasks, such as object detection and segmentation, and 6D tasks, such as object pose estimation and tracking.
The proposed methods exhibit interesting performance on computer vision benchmarks and robotic tasks, e.g. using object pose estimation for grasp planning purposes. Nonetheless, they generally do not consider useful information connected with the physics of the object motion or the peculiarities and requirements of robotic systems. Examples include the need to provide well-behaved output signals for robot motion control and the possibility of integrating modelling priors on the object's motion together with algorithmic priors. These help exploit the temporal correlation of object poses, handle pose uncertainties and mitigate the effect of outliers. Most of these concepts are considered in classical approaches, e.g. from the Bayesian and Kalman filtering literature, which however are not as powerful as Deep Learning in handling visual data. As a consequence, the development of hybrid architectures that combine the best features from both worlds is particularly appealing in a robotic setting.
Motivated by these considerations, in this Thesis I aimed to devise hybrid architectures for object perception, focusing on the task of object pose and velocity tracking. The proposed architectures use Kalman filtering supported by state-of-the-art Deep Neural Networks to track the 6D pose and velocity of objects from images. The devised solutions exhibit state-of-the-art performance and increased modularity, and do not require training to implement the actual tracking behaviors. Furthermore, they can track even fast object motions despite the possibly non-negligible inference times of the adopted neural networks. Also, by relying on data-driven Kalman filtering, I explored a paradigm that enables tracking the state of systems that cannot easily be modeled analytically. Specifically, I used this approach to learn the measurement model of soft 3D tactile sensors and to address the problem of tracking the sliding motion of hand-held objects.
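The hybrid pattern described above, a Kalman filter whose measurements come from a neural pose estimator, can be sketched for a single pose coordinate with a constant-velocity model. This is a generic textbook filter, not the thesis's actual architecture; the network is stood in for by a scalar measurement z, and all noise values are illustrative.

```python
# Minimal constant-velocity Kalman filter over one pose coordinate.
# A deep network would supply the measurement z (one component of a 6D
# pose estimate); matrices are kept 2x2 so no linear-algebra library is
# needed. q and r are illustrative process/measurement noise values.

def kf_step(x, P, z, dt=0.1, q=1e-3, r=1e-2):
    """One predict+update cycle. x = [pos, vel], P = 2x2 covariance."""
    # Predict with the constant-velocity model: pos += vel * dt.
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with the (network-provided) position measurement z; H = [1, 0].
    S = Pp[0][0] + r                      # innovation covariance
    K = [Pp[0][0] / S, Pp[1][0] / S]      # Kalman gain
    y = z - xp[0]                         # innovation
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn
```

Even this toy version shows why the filter helps: the output trajectory is smooth and carries a velocity estimate, whereas raw per-frame network outputs would be temporally uncorrelated.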
Scaled Autonomy for Networked Humanoids
Humanoid robots have been developed with the intention of aiding in environments designed for humans. As such, the control of humanoid morphology and effectiveness of human robot interaction form the two principal research issues for deploying these robots in the real world. In this thesis work, the issue of humanoid control is coupled with human robot interaction under the framework of scaled autonomy, where the human and robot exchange levels of control depending on the environment and task at hand. This scaled autonomy is approached with control algorithms for reactive stabilization of human commands and planned trajectories that encode semantically meaningful motion preferences in a sequential convex optimization framework.
The control and planning algorithms have been extensively tested in the field for robustness and system verification. The RoboCup competition provides a benchmark competition for autonomous agents that are trained with a human supervisor. The kid-sized and adult-sized humanoid robots coordinate over a noisy network in a known environment with adversarial opponents, and the software and routines in this work allowed for five consecutive championships. Furthermore, the motion planning and user interfaces developed in the work have been tested in the noisy network of the DARPA Robotics Challenge (DRC) Trials and Finals in an unknown environment.
Overall, the ability to extend simplified locomotion models to aid in semi-autonomous manipulation allows untrained humans to operate complex, high dimensional robots. This represents another step in the path to deploying humanoids in the real world, based on the low dimensional motion abstractions and proven performance in real world tasks like RoboCup and the DRC
Model-Based Environmental Visual Perception for Humanoid Robots
The visual perception of a robot should answer two fundamental questions: What? and Where? To answer these questions properly and efficiently, it is essential to establish a bidirectional coupling between the external stimuli and the internal representations. This coupling links the physical world with the inner abstraction models through sensor transformation, recognition, matching and optimization algorithms. The objective of this PhD is to establish this sensor-model coupling.
Learning kinematic structure correspondences using multi-order similarities
We present a novel framework for finding the kinematic structure correspondences between two articulated objects in videos via hypergraph matching. In contrast to appearance- and graph-alignment-based matching methods, which have been applied to pairs of similar static images, the proposed method finds correspondences between two dynamic kinematic structures of heterogeneous objects in videos. Thus our method allows matching the structure of objects which have similar topologies or motions, or a combination of the two. Our main contributions are summarised as follows: (i) casting the kinematic structure correspondence problem as a hypergraph matching problem by incorporating multi-order similarities with normalising weights; (ii) introducing a structural topology similarity measure by aggregating topology-constrained subgraph isomorphisms; (iii) measuring kinematic correlations between pairwise nodes; and (iv) proposing a combinatorial local motion similarity measure using geodesic distance on the Riemannian manifold. We demonstrate the robustness and accuracy of our method through a number of experiments on synthetic and real data, showing that it outperforms various recent and state-of-the-art methods. Our method is not limited to a specific application or sensor, and can be used as a building block in applications such as action recognition, human motion retargeting to robots, and articulated object manipulation.
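The idea of blending similarity terms of different orders with normalising weights can be sketched as a simple weighted combination. This linear blend is only an illustration; the paper's actual formulation combines the orders inside a hypergraph matching objective, and the term names below are invented.

```python
def combined_similarity(sims, weights):
    """Blend multi-order similarity terms with normalised weights.

    sims: dict of similarity scores in [0, 1], e.g.
      {"motion": ..., "kinematic": ..., "topology": ...}
    weights: non-negative weights over the same keys.
    """
    total = sum(weights.values())
    if total == 0:
        raise ValueError("weights must not all be zero")
    # Normalise the weights so the combined score stays in [0, 1].
    return sum(weights[k] / total * sims[k] for k in sims)
```

Normalising the weights keeps terms measured on different scales (topology counts, motion distances) from dominating the matching score.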
Obstacle avoidance based-visual navigation for micro aerial vehicles
This paper describes an obstacle avoidance system for low-cost Unmanned Aerial Vehicles (UAVs) using vision as the principal source of information through the monocular onboard camera. For detecting obstacles, the proposed system compares the image obtained in real time from the UAV with a database of obstacles that must be avoided. In our proposal, we include the feature point detector Speeded Up Robust Features (SURF) for fast obstacle detection and a control law to avoid them. Furthermore, our research includes a path recovery algorithm. Our method is attractive for compact MAVs in which other sensors cannot be implemented. The system was tested in real time on a Micro Aerial Vehicle (MAV) to detect and avoid obstacles in an unknown controlled environment, and we compared our approach with related works.
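The database-matching step described above can be sketched as descriptor matching with a ratio test and a match-count threshold. SURF descriptors from the onboard camera would fill the vectors here; toy 2-D vectors with Euclidean distance stand in for them, and the threshold values are illustrative, not from the paper.

```python
def ratio_test_matches(query, database, ratio=0.75):
    """Count query descriptors that pass Lowe's ratio test against a database.

    query/database: lists of descriptor vectors (lists of floats).
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    good = 0
    for q in query:
        ds = sorted(dist(q, d) for d in database)
        # Accept only if the best match is clearly better than the second.
        if len(ds) >= 2 and ds[0] < ratio * ds[1]:
            good += 1
    return good

def obstacle_detected(query, database, min_matches=10):
    """Declare an obstacle when enough descriptors match the database."""
    return ratio_test_matches(query, database) >= min_matches
```

The ratio test discards ambiguous matches, so the count of surviving matches is a usable confidence signal for triggering the avoidance control law.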
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
RLOC: Terrain-Aware Legged Locomotion using Reinforcement Learning and Optimal Control
We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy trained in simulation over a wide range of procedurally generated terrains. When run online, the system tracks the generated footstep plans using a model-based controller. We evaluate the robustness of our method over a wide variety of complex terrains. It exhibits behaviors which prioritize stability over aggressive locomotion. Additionally, we introduce two ancillary RL policies for corrective whole-body motion tracking and recovery control. These policies account for changes in physical parameters and external perturbations. We train and evaluate our framework on a complex quadrupedal system, ANYmal version B, and demonstrate transferability to a larger and heavier robot, ANYmal C, without requiring retraining.
Comment: 19 pages, 15 figures, 6 tables, 1 algorithm, submitted to T-RO; under review
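The layered structure described in the abstract, an RL footstep planner feeding a model-based tracking controller, can be sketched as the composition below. Both functions are stand-in stubs with invented names and gains; the real system runs these components at different rates on hardware.

```python
def footstep_policy(state, vel_cmd):
    """Stub RL policy: propose one footstep in the commanded direction."""
    return [(state["pos"][0] + 0.2 * vel_cmd[0],
             state["pos"][1] + 0.2 * vel_cmd[1])]

def model_based_tracker(state, footsteps):
    """Stub model-based controller: track the next planned footstep."""
    return {"step_target": footsteps[0]}

def control_tick(state, vel_cmd):
    """One control cycle: plan footsteps, then track them."""
    plan = footstep_policy(state, vel_cmd)
    return model_based_tracker(state, plan)
```

The split mirrors the paper's design choice: the learned component handles terrain-aware planning, while a model-based layer guarantees well-behaved tracking.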
Analysis and synthesis of bipedal humanoid movement : a physical simulation approach
Advances in graphics and robotics have increased the importance of tools for synthesizing humanoid movements to control animated characters and physical robots. There is also an increasing need for analyzing human movements for clinical diagnosis and rehabilitation. Existing tools can be expensive, inefficient, or difficult to use. Using simulated physics and motion capture to develop an interactive virtual reality environment, we capture natural human movements in response to controlled stimuli. This research then applies insights into the mathematics underlying physics simulation to adapt the physics solver to support many important tasks involved in analyzing and synthesizing humanoid movement. These tasks include fitting an articulated physical model to motion capture data, modifying the model pose to achieve a desired configuration (inverse kinematics), inferring internal torques consistent with changing pose data (inverse dynamics), and transferring a movement from one model to another (retargeting). The result is a powerful and intuitive process for analyzing and synthesizing movement in a single unified framework.
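The inverse-kinematics task listed above can be illustrated on the simplest possible articulated model: a planar 2-link arm whose joint angles are fit to a target by gradient descent on the squared end-effector error. Link lengths, step size, and iteration count are illustrative; the thesis works with full humanoid models inside a physics solver.

```python
import math

L1, L2 = 1.0, 1.0  # illustrative link lengths

def end_effector(t1, t2):
    """Forward kinematics of a planar 2-link arm."""
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return x, y

def solve_ik(target, t1=0.3, t2=0.3, lr=0.1, iters=2000):
    """Fit joint angles to a reachable target via gradient descent."""
    for _ in range(iters):
        x, y = end_effector(t1, t2)
        ex, ey = x - target[0], y - target[1]
        # Analytic Jacobian of the forward kinematics.
        j11 = -L1 * math.sin(t1) - L2 * math.sin(t1 + t2)
        j12 = -L2 * math.sin(t1 + t2)
        j21 = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
        j22 = L2 * math.cos(t1 + t2)
        # Gradient of 0.5 * ||e||^2 with respect to (t1, t2) is J^T e.
        t1 -= lr * (j11 * ex + j21 * ey)
        t2 -= lr * (j12 * ex + j22 * ey)
    return t1, t2
```

The same Jacobian machinery, scaled up to tree-structured humanoid models, underlies the pose-fitting, inverse-dynamics, and retargeting tasks the abstract enumerates.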