Learning the dynamics of articulated tracked vehicles
In this work, we present a Bayesian non-parametric approach to modeling the motion control of articulated tracked vehicles (ATVs). The motion control model is based on a Dirichlet Process-Gaussian Process (DP-GP) mixture model, which provides a flexible representation of patterns of control manoeuvres along trajectories of different lengths and discretizations. The model also estimates the number of patterns sufficient for modeling the dynamics of the ATV.
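A minimal sketch of the DP-GP idea, assuming one-dimensional control signals observed on per-trajectory time grids: each pattern is a Gaussian process, and trajectories are seated at patterns with Chinese-restaurant-process weights scaled by GP predictive evidence. The squared-exponential kernel, hyperparameters, and single greedy assignment sweep are illustrative choices, not the paper's implementation.

```python
# Sketch: DP-GP mixture clustering of control manoeuvres (toy, assumed setup).
import numpy as np

def rbf(t1, t2, ell=0.5, sf=1.0):
    """Squared-exponential kernel between two sets of time stamps."""
    d = t1[:, None] - t2[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_loglik(t, u, noise=0.1):
    """GP marginal log-likelihood of controls u observed at times t."""
    K = rbf(t, t) + noise**2 * np.eye(len(t))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, u))
    return (-0.5 * u @ a - np.log(np.diag(L)).sum()
            - 0.5 * len(t) * np.log(2 * np.pi))

def crp_assign(trajs, alpha=1.0):
    """One greedy CRP sweep: seat each trajectory at an existing pattern
    (GP) or open a new one, weighted by GP predictive evidence."""
    clusters, z = [], []
    for i, (t, u) in enumerate(trajs):
        scores = []
        for c in clusters:
            t0 = np.concatenate([trajs[j][0] for j in c])
            u0 = np.concatenate([trajs[j][1] for j in c])
            tc, uc = np.concatenate([t0, t]), np.concatenate([u0, u])
            # predictive evidence of this trajectory given the cluster
            scores.append(np.log(len(c)) + gp_loglik(tc, uc) - gp_loglik(t0, u0))
        scores.append(np.log(alpha) + gp_loglik(t, u))  # new pattern
        k = int(np.argmax(scores))  # MAP seat (a full sampler would draw here)
        clusters.append([i]) if k == len(clusters) else clusters[k].append(i)
        z.append(k)
    return z, len(clusters)

# toy data: two manoeuvre patterns with different lengths/discretizations
rng = np.random.default_rng(0)
trajs = []
for _ in range(6):
    n = rng.integers(15, 30)
    t = np.sort(rng.uniform(0, 2, n))
    trajs.append((t, np.sin(3 * t) + 0.05 * rng.standard_normal(n)))
for _ in range(6):
    n = rng.integers(15, 30)
    t = np.sort(rng.uniform(0, 2, n))
    trajs.append((t, -0.8 * t + 0.05 * rng.standard_normal(n)))
z, K = crp_assign(trajs)
print("assignments:", z, "patterns found:", K)
```

On this toy data the sweep typically recovers the two generating patterns without being told their number, which is the property the abstract highlights.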
Keep Rollin' - Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots
We show dynamic locomotion strategies for wheeled quadrupedal robots, which
combine the advantages of both walking and driving. The developed optimization
framework tightly integrates the additional degrees of freedom introduced by
the wheels. Our approach relies on a zero-moment point based motion
optimization which continuously updates reference trajectories. The reference
motions are tracked by a hierarchical whole-body controller which computes
optimal generalized accelerations and contact forces by solving a sequence of
prioritized tasks including the nonholonomic rolling constraints. Our approach
has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled, including the non-steerable wheels attached to its legs. We conducted experiments on flat and inclined terrain as well as over steps, showing that integrating the wheels into the motion control and planning framework results in intuitive motion trajectories that enable more robust and dynamic
locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4 m/s and an 83% reduction in the cost of transport, we demonstrate the superiority of wheeled-legged robots over their legged counterparts.
Comment: IEEE Robotics and Automation Letters
Robust Whole-Body Motion Control of Legged Robots
We introduce a robust control architecture for the whole-body motion control
of torque controlled robots with arms and legs. The method is based on the
robust control of contact forces in order to track a planned Center of Mass
trajectory. Its appeal lies in the ability to guarantee robust stability and
performance despite rigid body model mismatch, actuator dynamics, delays,
contact surface stiffness, and unobserved ground profiles. Furthermore, we
introduce a task space decomposition approach which removes the coupling effects between the contact force controller and the other non-contact controllers.
Finally, we verify our controller on a quadruped robot and compare its performance to a standard inverse dynamics approach on hardware.
Comment: 8 pages
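As a rough illustration of the force-tracking idea, the sketch below maps a PD-plus-feedforward CoM law to stance-foot forces through the pseudo-inverse of the grasp map. The mass, gains, and foot positions are made-up values, and the least-norm distribution stands in for the paper's robust contact force controller.

```python
# Sketch: CoM tracking turned into a contact force distribution problem.
import numpy as np

def skew(p):
    """Cross-product matrix, so skew(p) @ f == np.cross(p, f)."""
    return np.array([[0, -p[2], p[1]],
                     [p[2], 0, -p[0]],
                     [-p[1], p[0], 0]])

def desired_wrench(m, c, cd, c_ref, cd_ref, cdd_ref, kp=80.0, kd=12.0):
    """PD + feedforward on the planned CoM trajectory, gravity compensated."""
    g = np.array([0.0, 0.0, -9.81])
    f = m * (cdd_ref + kp * (c_ref - c) + kd * (cd_ref - cd) - g)
    return np.concatenate([f, np.zeros(3)])   # [net force; net moment about CoM]

def distribute(w, feet, c):
    """Least-norm foot forces whose net wrench about the CoM equals w."""
    G = np.vstack([np.hstack([np.eye(3) for _ in feet]),
                   np.hstack([skew(p - c) for p in feet])])  # 6 x 3k grasp map
    return (np.linalg.pinv(G) @ w).reshape(len(feet), 3)

m = 30.0                                            # robot mass [kg], assumed
c, cd = np.array([0.0, 0.0, 0.42]), np.zeros(3)     # current CoM state
c_ref, cd_ref, cdd_ref = np.array([0.02, 0.0, 0.45]), np.zeros(3), np.zeros(3)
feet = [np.array([ 0.3,  0.2, 0.0]), np.array([ 0.3, -0.2, 0.0]),
        np.array([-0.3,  0.2, 0.0]), np.array([-0.3, -0.2, 0.0])]
F = distribute(desired_wrench(m, c, cd, c_ref, cd_ref, cdd_ref), feet, c)
print("per-foot forces [N]:\n", np.round(F, 1))
```

A robust version of this step would additionally enforce friction cones and account for the model mismatch and actuator dynamics the abstract mentions; the pseudo-inverse here ignores both.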
Real-Time Salient Closed Boundary Tracking via Line Segments Perceptual Grouping
This paper presents a novel real-time method for tracking salient closed
boundaries from video image sequences. This method operates on a set of
straight line segments that are produced by line detection. The tracking scheme
is coherently integrated into a perceptual grouping framework in which the
visual tracking problem is tackled by identifying a subset of these line
segments and connecting them sequentially to form a closed boundary with the
largest saliency and a certain similarity to the previous one. Specifically, we
define a new tracking criterion which combines a grouping cost and an area
similarity constraint. The proposed criterion makes the resulting boundary
tracking more robust to local minima. To achieve real-time tracking
performance, we use Delaunay Triangulation to build a graph model with the
detected line segments and then reduce the tracking problem to finding the
optimal cycle in this graph. This is solved by our newly proposed closed
boundary candidates searching algorithm called "Bidirectional Shortest Path
(BDSP)". The efficiency and robustness of the proposed method are tested on
real video sequences as well as during a robot arm pouring experiment.
Comment: 7 pages, 8 figures; submitted to the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017), submission ID 103
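A small sketch of the graph construction and cycle search, assuming segment midpoints as graph nodes: a Delaunay triangulation supplies candidate connections, and the best closed boundary through a seed edge is recovered as that edge plus the shortest path between its endpoints with the edge itself banned. The Euclidean gap cost stands in for the paper's grouping cost, and plain Dijkstra stands in for the BDSP algorithm.

```python
# Sketch: Delaunay graph over segment midpoints + min-cost cycle via a seed edge.
import heapq
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
mids = rng.uniform(0, 10, (20, 2))        # midpoints of detected line segments (toy)

# build the adjacency from Delaunay triangles, weighted by Euclidean gap
tri = Delaunay(mids)
adj = {i: {} for i in range(len(mids))}
for simplex in tri.simplices:
    for a in range(3):
        u, v = simplex[a], simplex[(a + 1) % 3]
        w = float(np.linalg.norm(mids[u] - mids[v]))
        adj[u][v] = w
        adj[v][u] = w

def dijkstra(adj, src, dst, banned):
    """Shortest path src -> dst that never uses the banned undirected edge."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue
        for v, w in adj[u].items():
            if {u, v} == banned:
                continue
            if d + w < dist.get(v, np.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# best cycle through the seed edge (u, v) = the edge plus the shortest
# u-v path that does not reuse it
u, v = 0, next(iter(adj[0]))
path, cost = dijkstra(adj, u, v, banned={u, v})
print("cycle:", path + [u], "cost:", round(cost + adj[u][v], 3))
```

Searching over candidate seed edges and adding the paper's saliency and area-similarity terms to the edge costs would turn this skeleton into the tracking criterion the abstract describes.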
Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects
In this paper we introduce Co-Fusion, a dense SLAM system that takes a live
stream of RGB-D images as input and segments the scene into different objects
(using either motion or semantic cues) while simultaneously tracking and
reconstructing their 3D shape in real time. We use a multiple model fitting
approach where each object can move independently from the background and still
be effectively tracked and its shape fused over time using only the information
from pixels associated with that object label. Previous attempts to deal with
dynamic scenes have typically considered moving regions as outliers, and
consequently do not model their shape or track their motion over time. In
contrast, we enable the robot to maintain 3D models for each of the segmented
objects and to improve them over time through fusion. As a result, our system
can enable a robot to maintain a scene description at the object level, which has the potential to allow interactions with its working environment, even in the case of dynamic scenes.
Comment: International Conference on Robotics and Automation (ICRA) 2017, http://visual.cs.ucl.ac.uk/pubs/cofusion, https://github.com/martinruenz/co-fusion
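The label-gated fusion step can be sketched as below, where each segmented object keeps its own model updated only from pixels carrying its label. A per-object weighted-average depth map is a deliberately simplified stand-in for Co-Fusion's full surfel-based 3D models and tracking; shapes and labels are toy values.

```python
# Sketch: per-object fusion gated by segmentation labels (toy 2.5-D version).
import numpy as np

H, W = 120, 160
models = {}            # label -> (fused depth map, per-pixel confidence weight)

def fuse(depth, labels, models):
    """Fuse one frame: running weighted-average depth per object label
    (the background is just another label with its own model)."""
    for lab in np.unique(labels):
        d, w = models.get(lab, (np.zeros((H, W)), np.zeros((H, W))))
        m = (labels == lab) & (depth > 0)   # only valid pixels of this object
        d[m] = (d[m] * w[m] + depth[m]) / (w[m] + 1.0)
        w[m] += 1.0
        models[lab] = (d, w)

rng = np.random.default_rng(3)
for _ in range(5):                          # simulated RGB-D stream
    depth = 2.0 + 0.01 * rng.standard_normal((H, W))
    labels = np.zeros((H, W), dtype=int)    # background everywhere ...
    labels[40:80, 60:100] = 1               # ... except one segmented object
    fuse(depth, labels, models)
print({lab: float(w.max()) for lab, (d, w) in models.items()})
```

Because each model only ever sees its own pixels, a moving object refines its reconstruction over time instead of being discarded as an outlier, which is the contrast the abstract draws with prior dense SLAM systems.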