Planar image based visual servoing as a navigation problem
We describe a hybrid planar image-based servo algorithm which, for a simplified planar convex rigid body, converges to a static goal for all initial conditions within the workspace of the camera. This is achieved by using the sequential composition of a palette of continuous image-based controllers. Each sub-controller, based on a specified set of collinear feature points, is shown to converge for all initial configurations in which the feature points are visible. Furthermore, the controller guarantees that the body will maintain a visible orientation, i.e. the feature points will always be in view of the camera. This is achieved by introducing a change of coordinates from SE(2) to an image plane measurement of three points, and imposing a navigation function in that coordinate system. Our intuition suggests that appropriately generalized versions of these ideas may be extended to SE(3).
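As a rough illustration of the navigation-function idea (not the paper's actual construction), the sketch below runs a gradient flow on a potential over image-plane feature coordinates: an attraction term pulls the features toward their goal locations, while a barrier term diverges at the image boundary, so the minimizing flow keeps the features in view. The gains, the potential's exact form, and the toy feature layout are all assumptions.

```python
import numpy as np

IMG_HALF = 1.0  # half-width of the (normalized) image plane

def potential(s, s_goal):
    """Goal attraction divided by a barrier that vanishes at the image edge."""
    attract = np.sum((s - s_goal) ** 2)
    barrier = np.prod(IMG_HALF ** 2 - s ** 2)  # -> 0 as a feature nears the edge
    return attract / barrier

def grad(s, s_goal, eps=1e-6):
    """Central-difference gradient of the potential."""
    g = np.zeros_like(s)
    for i in range(s.size):
        e = np.zeros_like(s)
        e[i] = eps
        g[i] = (potential(s + e, s_goal) - potential(s - e, s_goal)) / (2 * eps)
    return g

# Image-plane measurements of the feature points (toy values).
s = np.array([-0.5, 0.0, 0.4, 0.3])       # current
s_goal = np.array([-0.3, 0.1, 0.5, 0.0])  # goal

for _ in range(2000):
    s = s - 0.01 * grad(s, s_goal)  # gradient flow on the potential
```

After the flow, the features sit near their goal locations and never left the image, which is the qualitative behavior a navigation function is designed to certify.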
Visual Servoing from Deep Neural Networks
We present a deep neural network-based method to perform high-precision,
robust and real-time 6 DOF visual servoing. The paper describes how to create a
dataset simulating various perturbations (occlusions and lighting conditions)
from a single real-world image of the scene. A convolutional neural network is
fine-tuned using this dataset to estimate the relative pose between two images
of the same scene. The output of the network is then employed in a visual
servoing control scheme. The method converges robustly even in difficult
real-world settings with strong lighting variations and occlusions. A
positioning error of less than one millimeter is obtained in experiments with a
6 DOF robot.
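The control scheme the abstract refers to can be sketched as a classical pose-based servo loop. Here the CNN is replaced by a stub that returns the 6-DOF relative-pose error directly, since the point is only how the network output feeds the v = -λe law; the gain, time step, and variable names are illustrative assumptions.

```python
import numpy as np

LAMBDA = 0.5  # control gain (illustrative)

def estimate_pose(current_pose, desired_pose):
    """Stand-in for the CNN: returns the 6-vector relative-pose error
    (translation + axis-angle) that the network would regress from images."""
    return current_pose - desired_pose  # small-angle simplification

desired = np.zeros(6)  # goal pose expressed as (t, theta*u)
pose = np.array([0.10, -0.05, 0.30, 0.02, -0.01, 0.05])

for _ in range(50):
    e = estimate_pose(pose, desired)  # what the network provides at runtime
    v = -LAMBDA * e                   # classical law: v = -lambda * e
    pose = pose + 0.1 * v             # integrate over one control period
```

Because the error contracts by a constant factor each period, the pose converges exponentially to the goal, which is the behavior the learned estimator must preserve in the real loop.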
Technical report on Optimization-Based Bearing-Only Visual Homing with Applications to a 2-D Unicycle Model
We consider the problem of bearing-based visual homing: Given a mobile robot
which can measure bearing directions with respect to known landmarks, the goal
is to guide the robot toward a desired "home" location. We propose a control
law based on the gradient field of a Lyapunov function, and give sufficient
conditions for global convergence. We show that the well-known Average Landmark
Vector method (for which no convergence proof was known) can be obtained as a
particular case of our framework. We then derive a sliding mode control law for
a unicycle model which follows this gradient field. Both controllers do not
depend on range information. Finally, we also show how our framework can be
used to characterize the sensitivity of a home location with respect to noise
in the specified bearings. This is an extended version, containing additional
proofs, of the conference paper [1]: R. Tron and K. Daniilidis, "An
optimization approach to bearing-only visual homing with applications to a
2-D unicycle model," in IEEE International Conference on Robotics and
Automation, 2014.
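The Average Landmark Vector special case admits a short sketch: with bearings measured in a common frame, the difference between the current and home average bearing vectors points toward home, so following it needs no range information. The landmark layout, step size, and home position below are illustrative assumptions, not values from the report.

```python
import numpy as np

# Illustrative landmark layout and home position.
landmarks = np.array([[5.0, 1.0], [-2.0, 4.0], [1.0, -3.0], [-4.0, -2.0]])
home = np.array([0.0, 0.0])

def bearings(p):
    """Unit bearing vectors from position p toward each landmark."""
    d = landmarks - p
    return d / np.linalg.norm(d, axis=1, keepdims=True)

alv_home = bearings(home).mean(axis=0)  # recorded once, at the home location

p = np.array([2.0, 2.5])  # current robot position
for _ in range(2000):
    u = bearings(p).mean(axis=0) - alv_home  # ALV control direction
    p = p + 0.1 * u                          # range-free motion update
```

This update is a gradient descent on a convex potential (a sum of distances to the landmarks, tilted by the home gradient), which is one way to see why a Lyapunov-based analysis can cover it as a particular case.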
Exploring Convolutional Networks for End-to-End Visual Servoing
Present image-based visual servoing approaches rely on extracting
hand-crafted visual features from an image. Choosing the right set of features is
important as it directly affects the performance of any approach. Motivated by
recent breakthroughs in performance of data driven methods on recognition and
localization tasks, we aim to learn visual feature representations suitable for
servoing tasks in unstructured and unknown environments. In this paper, we
present an end-to-end learning based approach for visual servoing in diverse
scenes where the knowledge of camera parameters and scene geometry is not
available a priori. This is achieved by training a convolutional neural network
over color images with synchronised camera poses. Through experiments performed
in simulation and on a quadrotor, we demonstrate the efficacy and robustness of
our approach for a wide range of camera poses in both indoor as well as outdoor
environments. (IEEE ICRA)
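One concrete ingredient of such a pipeline, forming the supervision target from synchronized camera poses, is plain pose arithmetic: the label for an image pair is the relative transform between the two camera frames. The 4x4 homogeneous representation, the yaw-only rotation, and all names below are simplifying assumptions for illustration.

```python
import numpy as np

def pose(x, y, z, yaw):
    """4x4 homogeneous camera pose with a yaw-only rotation (simplification)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [x, y, z]
    return T

def relative_pose(T_current, T_goal):
    """Training label for an image pair: current camera pose in the goal frame."""
    return np.linalg.inv(T_goal) @ T_current

# Two synchronized camera poses (toy values) -> one supervised example.
T_a = pose(1.0, 2.0, 0.5, 0.3)
T_b = pose(1.5, 1.0, 0.5, -0.2)
label = relative_pose(T_a, T_b)
```

By construction, composing the goal pose with the label recovers the current pose, which is the consistency property any such label generator must satisfy.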
Efficient and secure real-time mobile robots cooperation using visual servoing
This paper deals with the challenging problem of navigating a fleet of mobile robots in formation. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the multiple robots. To construct our system, we develop the interaction matrix, which relates image moments to the robot velocities, and we estimate the depth between each robot and the target object. This is done without any communication between the robots, which prevents the errors of each robot from influencing the whole fleet. For successful visual servoing, we propose a robust mechanism for safe robot navigation that exploits an accident-reporting system built on a Raspberry Pi 3. In addition, in case of a problem, an accident-detection and reporting testbed sends an accident notification in the form of a specific message. Experimental results on nonholonomic mobile robots with on-board real-time cameras show the effectiveness of the proposed method.
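As a toy illustration of servoing on image moments (a simplification of the interaction-matrix formulation the abstract describes), the sketch below extracts centroid-and-area features from a binary blob and issues the classical proportional velocity command; the depth estimation and the full moment interaction matrix are not modeled, and all values are assumptions.

```python
import numpy as np

def blob_moments(img):
    """Centroid and area from the raw image moments (m10/m00, m01/m00, m00)."""
    ys, xs = np.nonzero(img)
    m00 = float(xs.size)
    return np.array([xs.mean(), ys.mean(), m00])

img = np.zeros((60, 60))
img[20:30, 15:25] = 1.0                  # a 10x10 target blob
s = blob_moments(img)                    # current features (x_g, y_g, area)
s_star = np.array([30.0, 30.0, 100.0])   # desired features
LAM = 0.5                                # control gain (illustrative)
v = -LAM * (s - s_star)                  # simplified velocity command (vx, vy, vz)
```

With the areas already matched, the command is pure translation toward the desired centroid; a depth mismatch would instead show up in the area term and drive the third velocity component.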