Feedback MPC for Torque-Controlled Legged Robots
The computational power of mobile robots is currently insufficient to achieve
torque level whole-body Model Predictive Control (MPC) at the update rates
required for complex dynamic systems such as legged robots. This problem is
commonly circumvented by using a fast tracking controller to compensate for
model errors between updates. In this work, we show that the feedback policy
from a Differential Dynamic Programming (DDP) based MPC algorithm is a viable
alternative to bridge the gap between the low MPC update rate and the actuation
command rate. We propose to augment the DDP approach with a relaxed barrier
function to address inequality constraints arising from the friction cone. A
frequency-dependent cost function is used to reduce the sensitivity to
high-frequency model errors and actuator bandwidth limits. We demonstrate that
our approach can find stable locomotion policies for the torque-controlled
quadruped, ANYmal, both in simulation and on hardware.
Comment: Paper accepted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019)
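The relaxed barrier mentioned in the abstract can be illustrated with a minimal sketch: a log barrier on a constraint z >= 0 that is replaced below a threshold by a quadratic extension, so the penalty and its derivatives stay finite even under constraint violation. (This is an illustrative implementation, not the paper's; the threshold `delta` and weight `mu` are assumed tuning parameters.)

```python
import math

def relaxed_log_barrier(z, delta=0.1, mu=1.0):
    """Relaxed logarithmic barrier for an inequality constraint z >= 0.

    Behaves like the standard log barrier -mu*ln(z) in the interior
    (z > delta), but switches to a quadratic extension below the
    relaxation threshold delta, so the penalty remains finite even when
    the constraint is violated (z <= 0). The two pieces match in value,
    slope, and curvature at z = delta.
    """
    if z > delta:
        return -mu * math.log(z)
    # Quadratic extension: finite and twice differentiable at z = delta.
    return mu * (0.5 * (((z - 2.0 * delta) / delta) ** 2 - 1.0)
                 - math.log(delta))
```

Because the penalty is defined and smooth everywhere, a DDP/iLQR backward pass can take first and second derivatives of the running cost without special-casing infeasible iterates.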
Dynamic Active Constraints for Surgical Robots using Vector Field Inequalities
Robotic assistance allows surgeons to perform dexterous and tremor-free
procedures, but robotic aid is still underrepresented in procedures with
constrained workspaces, such as deep brain neurosurgery and endonasal surgery.
In these procedures, surgeons have restricted vision to areas near the surgical
tooltips, which increases the risk of unexpected collisions between the shafts
of the instruments and their surroundings. In this work, our
vector-field-inequalities method is extended to provide dynamic
active-constraints to any number of robots and moving objects sharing the same
workspace. The method is evaluated with experiments and simulations in which
robot tools have to avoid collisions autonomously and in real-time, in a
constrained endonasal surgical environment. Simulations show that with our
method the combined trajectory error of the two robotic systems is optimized.
Experiments using a real robotic system show that the method can autonomously
prevent collisions between the moving robots themselves and between the robots
and the environment. Moreover, the framework is also successfully verified
under teleoperation with tool-tissue interactions.
Comment: Accepted in T-RO 2019; 19 pages
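The core idea of a vector-field inequality can be sketched in one dimension: a signed distance d(t) to a restricted zone is kept non-negative by bounding its rate of change, d_dot >= -eta*d, so the permitted approach speed shrinks to zero as the tool reaches the boundary. (A toy scalar illustration for intuition only; the gain `eta` and the Euler integration are assumptions, not the paper's multi-robot formulation.)

```python
def vfi_filter(d, d_dot_desired, eta):
    """Clamp a desired approach rate so that d_dot >= -eta * d.

    d             : current signed distance to the restricted zone (>= 0)
    d_dot_desired : rate requested by the task (negative = approaching)
    eta           : positive gain; larger values allow faster approach
    """
    return max(d_dot_desired, -eta * d)

# Tiny simulation: a tool commanded straight at an obstacle slows down
# exponentially near the boundary and never crosses it.
d, dt, eta = 1.0, 0.01, 2.0
for _ in range(2000):
    d += vfi_filter(d, d_dot_desired=-1.0, eta=eta) * dt
```

In the full method the same bound becomes a linear inequality on the joint velocities, J_d(q) q_dot >= -eta*d, which a quadratic-program controller can enforce for many distances simultaneously.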
Dynamic whole-body motion generation under rigid contacts and other unilateral constraints
The most widely used technique for generating whole-body motions on a humanoid robot that accounts for various tasks and constraints is inverse kinematics. Based on the task-function approach, this class of methods enables the coordination of robot movements to execute several tasks in parallel and to account for sensor feedback in real time, thanks to its low computation cost.
To some extent, it also enables us to deal with some of the robot's constraints (e.g., joint limits or visibility) and to manage its quasi-static balance. In order to exploit the full range of possible motions, this paper proposes extending the task-function approach to handle the full multibody dynamics of the robot along with any constraint written as an equality or inequality on the state and control variables. The definition of multiple objectives is made possible by ordering them inside a strict hierarchy. Several models of contact with the environment can be implemented in the framework. We propose a reduced formulation of multiple rigid planar contacts that keeps the computation cost low. The efficiency of this approach is illustrated by presenting several multicontact dynamic motions in simulation and on the real HRP-2 robot.
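The strict hierarchy of objectives described above can be illustrated at the velocity level with nullspace projection: a secondary task is resolved only within the nullspace of the primary task's Jacobian, so it can never perturb the primary task. (A kinematic toy sketch for intuition; the paper works at the dynamic level with inequality constraints, which this example does not capture.)

```python
import numpy as np

def prioritized_velocities(J1, v1, J2, v2):
    """Joint velocities achieving task 1 exactly (when feasible) and
    task 2 as well as possible inside task 1's nullspace.

    J1, J2 : task Jacobians (tasks map joint velocities to task velocities)
    v1, v2 : desired task velocities, in decreasing order of priority
    """
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # nullspace projector of task 1
    dq1 = J1_pinv @ v1                        # exact primary-task solution
    # Secondary task resolved in the remaining degrees of freedom only.
    dq2 = np.linalg.pinv(J2 @ N1) @ (v2 - J2 @ dq1)
    return dq1 + N1 @ dq2
```

Since J1 @ N1 = 0, the secondary correction N1 @ dq2 is invisible to task 1, which is exactly the strict-hierarchy property the paper generalizes to the dynamic case.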
Towards Visual Ego-motion Learning in Robots
Many model-based Visual Odometry (VO) algorithms have been proposed in the
past decade, often restricted to the type of camera optics, or the underlying
motion manifold observed. We envision robots to be able to learn and perform
these tasks, in a minimally supervised setting, as they gain more experience.
To this end, we propose a fully trainable solution to visual ego-motion
estimation for varied camera optics. We propose a visual ego-motion learning
architecture that maps observed optical flow vectors to an ego-motion density
estimate via a Mixture Density Network (MDN). By modeling the architecture as a
Conditional Variational Autoencoder (C-VAE), our model is able to provide
introspective reasoning and prediction for ego-motion induced scene-flow.
Additionally, our proposed model is especially amenable to bootstrapped
ego-motion learning in robots where the supervision in ego-motion estimation
for a particular camera sensor can be obtained from standard navigation-based
sensor fusion strategies (GPS/INS and wheel-odometry fusion). Through
experiments, we show the utility of our proposed approach in enabling the
concept of self-supervised learning for visual ego-motion estimation in
autonomous robots.
Comment: Conference paper; submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2017, Vancouver, CA; 8 pages, 8 figures, 2 tables
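Predicting an ego-motion density rather than a point estimate, as the MDN in the abstract does, amounts to having the network output the parameters of a Gaussian mixture and training on its negative log-likelihood. A minimal 1-D sketch of that loss (the shapes and parametrization here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def mdn_nll(logits, mu, log_sigma, y):
    """Negative log-likelihood of scalar targets under a Gaussian mixture
    whose parameters are raw network outputs.

    logits    : (N, K) unnormalized mixture weights
    mu        : (N, K) component means
    log_sigma : (N, K) log standard deviations (keeps sigma > 0)
    y         : (N,)   targets (e.g. one ego-motion component)
    """
    # Log-softmax over mixture weights, kept in log space for stability.
    log_pi = logits - np.logaddexp.reduce(logits, axis=1, keepdims=True)
    sigma = np.exp(log_sigma)
    log_gauss = (-0.5 * ((y[:, None] - mu) / sigma) ** 2
                 - log_sigma - 0.5 * np.log(2.0 * np.pi))
    # Log-sum-exp over components gives the log mixture density per sample.
    return -np.mean(np.logaddexp.reduce(log_pi + log_gauss, axis=1))
```

With K = 1 this reduces to an ordinary Gaussian regression loss; with K > 1 the network can represent the multimodal ego-motion ambiguities the paper exploits for introspective prediction.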