Two solutions to the adaptive visual servoing problem
Markerless visual servoing on unknown objects for humanoid robot platforms
To reach precisely for an object with a humanoid robot, it is of central
importance to have good knowledge of both the end-effector pose and the
object's pose and shape. In this work we propose a framework for markerless
visual servoing on unknown objects, which is divided into four main parts:
I) a least-squares minimization problem is formulated to find the volume of
the object graspable by the robot's hand using its stereo vision; II) a
recursive Bayesian filtering technique, based on Sequential Monte Carlo (SMC)
filtering, estimates the 6D pose (position and orientation) of the robot's
end-effector without the use of markers; III) a nonlinear constrained
optimization problem is formulated to compute the desired graspable pose
about the object; IV) an image-based visual servo control commands the
robot's end-effector toward the desired pose. We demonstrate the
effectiveness and robustness of our approach with extensive experiments on
the iCub humanoid robot platform, achieving real-time computation, smooth
trajectories and sub-pixel precision.
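Part IV refers to image-based visual servoing. As a hedged sketch of the classical point-feature IBVS law v = -λ L⁺ e (a textbook formulation, not this paper's actual controller; all function names are ours):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a point feature at
    normalized image coordinates (x, y) with depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity command v = -lam * L^+ * e for point-based IBVS.

    features, desired: (N, 2) arrays of current/desired image points;
    depths: (N,) estimated depths. Returns a 6-vector (v, omega)."""
    e = (features - desired).reshape(-1)          # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e
```

At the goal the feature error is zero, so the commanded velocity vanishes; away from it, the pseudo-inverse drives an exponential decrease of the error in the image plane.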
Technical report on Optimization-Based Bearing-Only Visual Homing with Applications to a 2-D Unicycle Model
We consider the problem of bearing-based visual homing: Given a mobile robot
which can measure bearing directions with respect to known landmarks, the goal
is to guide the robot toward a desired "home" location. We propose a control
law based on the gradient field of a Lyapunov function, and give sufficient
conditions for global convergence. We show that the well-known Average Landmark
Vector method (for which no convergence proof was known) can be obtained as a
particular case of our framework. We then derive a sliding-mode control law
for a unicycle model which follows this gradient field. Neither controller
depends on range information. Finally, we also show how our framework can be
used to characterize the sensitivity of a home location with respect to noise
in the specified bearings. This is an extended version of the conference
paper [1].
Comment: This is an extended version of R. Tron and K. Daniilidis, "An
optimization approach to bearing-only visual homing with applications to a
2-D unicycle model," in IEEE International Conference on Robotics and
Automation, 2014, containing additional proofs.
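For context, the Average Landmark Vector idea mentioned above can be sketched in a toy 2-D simulation (an illustration of the general principle only, not the authors' Lyapunov analysis or sliding-mode unicycle law; the landmark positions, gain, and step size are invented):

```python
import numpy as np

def bearings(p, landmarks):
    """Unit bearing vectors from position p to each landmark (range-free
    control uses only these directions, never the distances)."""
    d = landmarks - p
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def alv_control(p, home_bearing_sum, landmarks):
    """ALV-style homing: move along the difference between the sum of
    current bearings and the sum of bearings recorded at home."""
    return bearings(p, landmarks).sum(axis=0) - home_bearing_sum

# Toy simulation: three non-collinear landmarks, gradient-style descent.
landmarks = np.array([[0.0, 5.0], [5.0, 0.0], [-4.0, -3.0]])
home = np.array([0.5, 0.5])
home_bearing_sum = bearings(home, landmarks).sum(axis=0)

p = np.array([3.0, 3.0])                 # starting pose
for _ in range(3000):
    p = p + 0.02 * alv_control(p, home_bearing_sum, landmarks)
```

The control is the negative gradient of a convex function (a sum of distances to the landmarks plus a linear term) whose stationary point is the home location, which is what makes a convergence analysis possible.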
Learning visual docking for non-holonomic autonomous vehicles
This paper presents a new method of learning visual docking skills for non-holonomic vehicles by direct interaction with the environment. The method is based on a reinforcement learning algorithm, which speeds up Q-learning by applying memory-based sweeping and enforcing the "adjoining property", a filtering mechanism that only allows transitions between states that satisfy a fixed distance. The method overcomes some limitations of reinforcement learning techniques when they are employed in applications with continuous non-linear systems, such as car-like vehicles. In particular, a good approximation to the optimal
behaviour is obtained by a small look-up table. The algorithm is tested within an image-based visual servoing framework on a docking task. The training time was less than 1 hour on the real vehicle. In experiments, we show the satisfactory performance of the algorithm.
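The tabular flavour of the approach (a small look-up table updated by Q-learning) can be illustrated on a toy 1-D docking chain. This sketch omits the paper's memory-based sweeping and adjoining-property filtering; the state discretization, rewards, and constants are all invented for illustration:

```python
import random
import numpy as np

# Toy 1-D docking task: states are discretized distances to the dock
# (state 0 = docked); actions move the vehicle toward (-1) or away (+1).
N_STATES, ACTIONS = 10, (-1, +1)
Q = np.zeros((N_STATES, len(ACTIONS)))   # the small look-up table

random.seed(0)
alpha, gamma, eps = 0.5, 0.9, 0.1        # learning rate, discount, exploration
for episode in range(500):
    s = N_STATES - 1                     # start far from the dock
    while s != 0:
        # epsilon-greedy action selection over the table
        a = random.randrange(2) if random.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == 0 else -0.01    # reward only on reaching the dock
        # standard one-step Q-learning update
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
```

After training, the greedy policy read from the table moves toward the dock from every state; on the real problem the cited filtering mechanisms keep such a table small despite the continuous vehicle dynamics.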