408 research outputs found
Generic decoupled image-based visual servoing for cameras obeying the unified projection model
In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows the translational motion to be controlled independently of the rotational motion. Finally, the proposed scheme is validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
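Under the unified projection model, image points are lifted onto the unit sphere, and features built from these spherical projections are invariant to camera rotation. A minimal numpy sketch of this idea (the object points and rotation below are illustrative, not taken from the paper): pairwise inner products of spherical projections are unchanged by a pure rotation of the camera.

```python
import numpy as np

def sphere_proj(X):
    # Project 3D points (N, 3) in the camera frame onto the unit sphere
    return X / np.linalg.norm(X, axis=1, keepdims=True)

# hypothetical object points in the camera frame
P = np.array([[0.2, 0.1, 1.0],
              [-0.1, 0.3, 1.2],
              [0.0, -0.2, 0.9]])

# pure camera rotation: 30 degrees about the optical axis
th = np.deg2rad(30)
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

s  = sphere_proj(P)        # spherical projections before rotation
s2 = sphere_proj(P @ R.T)  # spherical projections after rotating the camera

# pairwise inner products (hence angles and surfaces on the sphere)
# are preserved under rotation
print(np.allclose(s @ s.T, s2 @ s2.T))  # True
```

This is why sphere-based features can decouple translation from rotation: any feature built from these rotation-invariant quantities does not react to rotational motion at all.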
Image based visual servoing using algebraic curves applied to shape alignment
Visual servoing schemes generally employ various image features (points, lines, moments, etc.) in their control formulation. This paper presents a novel method for using boundary information in visual servoing. Object boundaries are modeled by algebraic equations and decomposed as a unique sum of products of lines. We propose that these lines can be used to extract useful features for visual servoing purposes; in this paper, the intersections of these lines are used as point features. Simulations are performed with a 6-DOF Puma 560 robot using the Matlab Robotics Toolbox for the alignment of a free-form object. Experiments are also realized with a 2-DOF SCARA direct-drive robot. Both simulation and experimental results are quite promising and show the potential of our new method.
Efficient and secure real-time mobile robots cooperation using visual servoing
This paper deals with the challenging problem of navigating a fleet of mobile robots in formation. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the multiple robots. To construct our system, we develop the interaction matrix that relates the image moments to the robot velocities, and we estimate the depth between each robot and the targeted object. This is done without any communication between the robots, which prevents each robot's errors from influencing the whole fleet. For successful visual servoing, we propose a robust mechanism to execute the robots' navigation safely, exploiting a robot accident reporting system based on a Raspberry Pi 3. In addition, in case of a problem, a robot accident detection and reporting testbed is used to send an accident notification in the form of a specific message. Experimental results are presented using nonholonomic mobile robots with on-board real-time cameras to show the effectiveness of the proposed method.
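The interaction matrix mentioned above links the time variation of the image features to the robot velocity, s_dot = L v; the classical image-based law then commands v = -lambda * pinv(L) @ (s - s*). A minimal numpy sketch of this standard law (the matrix and feature values below are illustrative, not the paper's moment-based features):

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classical image-based visual servoing law: v = -lambda * L^+ (s - s*)."""
    e = s - s_star
    return -lam * np.linalg.pinv(L) @ e

# toy example: 4 feature errors driving a 6-DOF velocity screw
L = np.random.default_rng(0).standard_normal((4, 6))  # illustrative interaction matrix
s = np.array([0.1, -0.2, 0.05, 0.3])                  # current features
s_star = np.zeros(4)                                  # desired features

v = ibvs_velocity(L, s, s_star)
print(v.shape)  # (6,)
```

With a full-row-rank interaction matrix, one integration step s + L v moves the features strictly closer to s*, which is the exponential error decay that such schemes aim for.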
Modelling the Xbox 360 Kinect for visual servo control applications
A research report submitted to the faculty of Engineering and the built environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering.
Johannesburg, August 2016.
There has been much interest in using the Microsoft Xbox 360 Kinect
cameras for visual servo control applications. It is a relatively cheap
device with expected shortcomings. This work contributes to the practical
considerations of using the Kinect for visual servo control applications.
A comprehensive characterisation of the Kinect is synthesised
from existing literature and results from a nonlinear calibration procedure.
The Kinect reduces computational overhead in image-processing stages such as pose estimation or depth estimation. It is limited by its practical depth range of 0.8 m to 3.5 m and a quadratic depth resolution that degrades from 1.8 mm to 35 mm over that range. Since the Kinect uses an infra-red (IR) projector, a Class 1 laser, it should not be used outdoors, due to IR saturation, and objects with non-IR-friendly surfaces should be avoided, due to IR refraction, absorption, or specular reflection. Problems of task stability caused by invalid depth measurements in Kinect depth maps and by the practical depth-range limitations can be reduced by preprocessing the depth maps and by activating classical visual servoing techniques when Kinect-based approaches are near task failure.
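The two quoted endpoints are consistent with a simple quadratic model of the depth quantisation step, res(z) ≈ k·z²: fitting them gives k ≈ 2.81 mm/m² at 0.8 m and k ≈ 2.86 mm/m² at 3.5 m. A small sketch using the averaged coefficient; the fit itself is our assumption, not a formula from the report:

```python
def kinect_depth_resolution_mm(z, k=2.835):
    """Approximate Kinect depth quantisation step at range z.

    z is in metres, the result in millimetres. The quadratic form
    res(z) = k * z**2 and the coefficient k are fitted here from the
    two endpoints quoted in the report (1.8 mm at 0.8 m, 35 mm at
    3.5 m); they are an assumption, not values stated by the report.
    """
    return k * z ** 2

# resolution across the practical depth range
for z in (0.8, 2.0, 3.5):
    print(f"{z} m -> {kinect_depth_resolution_mm(z):.1f} mm")
```

Such a model is useful when deciding, at a given working distance, whether Kinect depth is precise enough for the servoing task or a classical fallback should take over.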
Pose-Based Tactile Servoing: Controlled Soft Touch using Deep Learning
This article describes a new way of controlling robots using soft tactile
sensors: pose-based tactile servo (PBTS) control. The basic idea is to embed a
tactile perception model for estimating the sensor pose within a servo control
loop that is applied to local object features such as edges and surfaces. PBTS
control is implemented with a soft curved optical tactile sensor (the BRL
TacTip) using a convolutional neural network trained to be insensitive to
shear. In consequence, robust and accurate controlled motion over various
complex 3D objects is attained. First, we review tactile servoing and its
relation to visual servoing, before formalising PBTS control. Then, we assess
tactile servoing over a range of regular and irregular objects. Finally, we
reflect on the relation to visual servo control and discuss how controlled soft
touch gives a route towards human-like dexterity in robots.
Comment: A summary video is available at https://youtu.be/12-DJeRcfn0. *NL and JL contributed equally to this work.
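At its core, the servo loop described above repeatedly estimates the sensor pose from a tactile image and commands a motion that reduces the error to a reference pose. A heavily simplified sketch with a proportional controller; the pose parameterisation, gain, and the noise-free stand-in for the CNN pose estimator are all our assumptions, not the paper's implementation:

```python
import numpy as np

def pbts_step(pose_est, pose_ref, gain=0.5):
    """One proportional servo step: command a motion reducing the pose error."""
    return gain * (pose_ref - pose_est)

# stand-in for the CNN's pose estimate: here we track the true pose directly
pose = np.array([5.0, -3.0, 10.0])    # e.g. (x mm, y mm, yaw deg) w.r.t. the surface
pose_ref = np.zeros(3)                # desired contact pose

for _ in range(20):
    delta = pbts_step(pose, pose_ref)
    pose = pose + delta               # apply the commanded motion

print(pose)  # pose error shrinks geometrically towards zero
```

In the actual system the pose estimate comes from a shear-insensitive convolutional network applied to tactile images, and the motion is executed by the robot arm carrying the sensor; the loop structure, however, is this simple.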