Body randomization reduces the sim-to-real gap for compliant quadruped locomotion
Designing controllers for compliant, underactuated robots is challenging and usually requires a learning procedure. Learning robotic control in simulated environments can speed up the process while lowering the risk of physical damage. Since perfect simulations are infeasible, several techniques are used to improve transfer to the real world. Here, we investigate the impact of randomizing body parameters during learning of CPG controllers in simulation. The controllers are evaluated on our physical quadruped robot. We find that body randomization in simulation increases the chances of finding gaits that function well on the real robot.
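The body-randomization idea above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the function names, the ±10% uniform spread, and the `evaluate` callback are all assumptions.

```python
import random

def randomize_body(nominal, spread=0.1):
    """Perturb each nominal body parameter (mass, link length, ...)
    by a uniform relative amount in [-spread, +spread] per episode."""
    return {name: value * (1.0 + random.uniform(-spread, spread))
            for name, value in nominal.items()}

def score_with_randomization(nominal, evaluate, episodes=10):
    """Average a controller's score over randomized bodies; a gait that
    scores well across many bodies is more likely to transfer to the
    real robot, whose parameters never match the simulator exactly."""
    scores = [evaluate(randomize_body(nominal)) for _ in range(episodes)]
    return sum(scores) / len(scores)
```

In a CPG-learning loop, `evaluate` would run one gait rollout on the perturbed body and return, e.g., the forward distance traveled.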
Robot Impedance Control and Passivity Analysis with Inner Torque and Velocity Feedback Loops
Impedance control is a well-established technique to control interaction
forces in robotics. However, real implementations of impedance control with an
inner loop may suffer from several limitations. Although common practice in
designing nested control systems is to maximize the bandwidth of the inner loop
to improve tracking performance, it may not be the most suitable approach when
a certain range of impedance parameters has to be rendered. In particular, it
turns out that the viable range of stable stiffness and damping values can be
strongly affected by the bandwidth of the inner control loops (e.g. a torque
loop) as well as by the filtering and sampling frequency. This paper provides
an extensive analysis of how these aspects influence the stability region of
impedance parameters as well as the passivity of the system. This will be
supported by both simulations and experimental data. Moreover, a methodology
for designing joint impedance controllers based on an inner torque loop and a
positive velocity feedback loop will be presented. The goal of the velocity
feedback is to increase (given the constraints to preserve stability) the
bandwidth of the torque loop without the need for a complex controller.

Comment: 14 pages, in Control Theory and Technology (2016)
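The nested structure described above, an outer impedance law closed around an inner torque loop of finite bandwidth, can be illustrated with a toy 1-DOF simulation. All numbers here are illustrative assumptions; the paper's analysis, not this sketch, establishes the actual stability regions.

```python
def impedance_sim(K, D, torque_bw, dt=1e-3, steps=5000):
    """Toy 1-DOF joint: an outer impedance law generates a desired
    torque, and the inner torque loop is modeled as a first-order lag
    with bandwidth torque_bw [rad/s]. Returns |position error| after
    `steps` samples for a 0.1 rad set-point."""
    inertia = 0.02          # link inertia [kg m^2] (illustrative)
    q_des = 0.1             # position set-point [rad]
    q = qd = tau = 0.0
    for _ in range(steps):
        tau_des = K * (q_des - q) - D * qd       # outer impedance law
        tau += torque_bw * (tau_des - tau) * dt  # inner torque-loop lag
        qd += (tau / inertia) * dt               # joint dynamics (Euler)
        q += qd * dt
    return abs(q_des - q)
```

With these illustrative values, lowering `torque_bw` far enough eventually destabilizes the loop for the same (K, D) pair, which is the qualitative effect the abstract describes: the viable stiffness/damping range shrinks with the inner-loop bandwidth.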
On discrete control of nonlinear systems with applications to robotics
Much progress has been reported in the areas of modeling and control of nonlinear dynamic systems in a continuous-time framework. From an implementation point of view, however, it is essential to study these nonlinear systems directly in a discrete setting that is amenable to interfacing with digital computers. But developing discrete models and discrete controllers for a nonlinear system such as a robot is a nontrivial task. A robot is also inherently a variable-inertia dynamic system, which involves additional complications. Not only must the computer-oriented models of these systems satisfy the usual requirements for such models, but they must also be compatible with the inherent capabilities of computers and must preserve fundamental physical characteristics of continuous-time systems such as the conservation of energy and/or momentum. Preliminary issues regarding discrete systems in general, and discrete models of a typical industrial robot developed with full consideration of the principle of conservation of energy, are presented. Some research on the pertinent tactile information processing is reviewed. Finally, system control methods, and how to integrate these issues to complete the task of discrete control of a robot manipulator, are also reviewed.
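The abstract's point about preserving conservation laws under discretization can be seen on a toy pendulum (not the paper's robot model): plain explicit Euler pumps energy into the system, while a semi-implicit (symplectic) Euler step keeps the energy bounded. Both integrators below are standard textbook schemes, shown here only to illustrate why the choice of discretization matters.

```python
import math

def pendulum_step_explicit(theta, omega, dt, g_over_l=9.81):
    """Explicit Euler step: energy drifts upward over time."""
    return theta + omega * dt, omega - g_over_l * math.sin(theta) * dt

def pendulum_step_symplectic(theta, omega, dt, g_over_l=9.81):
    """Semi-implicit (symplectic) Euler step: update the velocity first,
    then the position with the new velocity; energy stays bounded."""
    omega = omega - g_over_l * math.sin(theta) * dt
    return theta + omega * dt, omega

def energy(theta, omega, g_over_l=9.81):
    """Total mechanical energy (per unit m*l^2) of the pendulum."""
    return 0.5 * omega**2 + g_over_l * (1.0 - math.cos(theta))
```

Starting both schemes from the same state and step size, the explicit trajectory's energy grows steadily while the symplectic one oscillates close to its initial value.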
Reinforcement Learning for UAV Attitude Control
Autopilot systems are typically composed of an "inner loop" providing
stability and control, while an "outer loop" is responsible for mission-level
objectives, e.g. way-point navigation. Autopilot systems for UAVs are
predominantly implemented using Proportional-Integral-Derivative (PID) control
systems, which have demonstrated exceptional performance in stable
environments. However, more sophisticated control is required to operate in
unpredictable and harsh environments. Intelligent flight control is an
active area of research addressing the limitations of PID control, most recently
through the use of reinforcement learning (RL), which has had success in other
applications such as robotics. However, previous work has focused primarily on
using RL at the mission-level controller. In this work, we investigate the
performance and accuracy of the inner control loop providing attitude control
when using intelligent flight control systems trained with the state-of-the-art
RL algorithms Deep Deterministic Policy Gradient (DDPG), Trust Region Policy
Optimization (TRPO), and Proximal Policy Optimization (PPO). To investigate
these unknowns, we first developed an open-source high-fidelity simulation
environment to train a flight controller for attitude control of a quadrotor
through RL. We then use our environment to compare their performance to that of
a PID controller to identify whether using RL is appropriate in high-precision,
time-critical flight control.

Comment: 13 pages, 9 figures
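For reference, the PID baseline that the abstract compares against has the classic textbook form below. The gains and any plant model used with it are illustrative assumptions, not values from the paper.

```python
class PID:
    """Discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One control step toward the set-point (e.g. a body rate)."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In an attitude-control loop, one such controller per axis would track the commanded roll, pitch, and yaw rates; an RL policy replaces exactly this block in the inner loop.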
Nonlinear Discrete Observer for Flexibility Compensation of Industrial Robots
This paper demonstrates solutions for digital observer implementation in industrial applications. A nonlinear high-gain discrete observer is proposed to compensate for the tracking error due to the flexibility of robot manipulators. The proposed discrete observer is obtained by Euler approximate discretization of the continuous observer. A series of experimental validations has been carried out on a 6-DOF industrial manipulator during a Friction Stir Welding process. The results showed good performance of the discrete observer, and the observer-based compensation succeeded in correcting the positioning error in a real-time implementation.

ANR COROUSS
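The construction named above, a continuous high-gain observer discretized by the Euler approximation, can be sketched for a generic second-order model with measured position. The gains `h1`, `h2` and the small parameter `eps` below are illustrative, not the paper's tuning.

```python
def high_gain_observer_step(xh1, xh2, y, dt, eps=0.05, h1=2.0, h2=1.0):
    """One Euler step of a high-gain observer for x1' = x2, x2' ~ 0,
    with measured output y = x1. Smaller eps gives faster convergence
    of the estimates (at the cost of amplifying measurement noise)."""
    e = y - xh1                                  # output injection error
    xh1_next = xh1 + dt * (xh2 + (h1 / eps) * e)
    xh2_next = xh2 + dt * ((h2 / eps**2) * e)
    return xh1_next, xh2_next
```

Fed with joint position measurements, the second state estimate recovers the unmeasured velocity, which a flexibility-compensation scheme can then use; choosing `eps` trades convergence speed against noise sensitivity, the usual high-gain trade-off.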