Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach
We present a robot eye-hand coordination learning method that can directly learn visual task specifications by watching human demonstrations. The task specification is represented as a task function, which is learned via inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedback in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our method can learn directly from raw videos, removing the need for hand-engineered task specification, and it provides task interpretability by directly approximating the task function. Moreover, because it builds on a traditional UVS controller, the training process is efficient and the learned policy is independent of any particular robot platform. Experiments show that, for a task with a given number of degrees of freedom (DOF), our method can adapt to task and environment variations in target positions, backgrounds, illumination, and occlusions without prior retraining.

Comment: Accepted in ICRA 201
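As background for the UVS controller mentioned above: uncalibrated visual servoing typically estimates the image Jacobian online from observed motion, for example with a rank-1 Broyden update, rather than from camera calibration. A minimal sketch of that generic idea (illustrative only, not this paper's implementation; the function names and the synthetic linear "camera" in the usage below are assumptions):

```python
import numpy as np

def broyden_update(J, du, de, alpha=1.0):
    """Rank-1 Broyden update of the estimated image Jacobian.

    J  : current estimate (m x n)
    du : last joint increment (n,)
    de : observed change in image features (m,)
    """
    denom = du @ du
    if denom < 1e-12:          # no motion -> no information, keep estimate
        return J
    return J + alpha * np.outer(de - J @ du, du) / denom

def uvs_step(J, e, gain=0.5):
    """One uncalibrated visual-servoing step: a joint increment that
    drives the feature error e toward zero via the pseudo-inverse."""
    return -gain * np.linalg.pinv(J) @ e
```

In simulation one can servo a synthetic linear feature map toward a target starting from an identity Jacobian guess; the Broyden update refines the estimate as the error shrinks.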
A Novel Uncalibrated Visual Servoing Controller Based on Model-Free Adaptive Control Method with Neural Network
With the continuous expansion of application scenarios for robotic arms, nonspecialists increasingly come into contact with them. However, in robotic-arm visual servoing, traditional Position-Based Visual Servoing (PBVS) requires extensive calibration work, which is difficult for nonspecialists to cope with. Uncalibrated Image-Based Visual Servoing (UIBVS) frees users from this tedious calibration. This work applies a model-free adaptive control (MFAC) method in which the controller parameters are updated in real time, improving the suppression of changes in the system and the environment. An artificial neural network is used in the design of both the controller and the estimator of the hand-eye relationship; in the MFAC method, the network is updated using knowledge of the system's input and output information. Inspired by the "predictive model" and "receding horizon" of Model Predictive Control (MPC), and introducing similar structures into our algorithm, we realize uncalibrated visual servoing for both stationary targets and moving trajectories. Simulated experiments with a robotic manipulator validate the proposed algorithm.

Comment: 16 pages, 8 figures
A comparative study on the performance of neural networks in visual guidance and feedback applications
Vision-based systems increase the flexibility of industrial automation applications by providing non-contact sensory information for processing and feedback. Artificial neural networks (ANNs) support such systems through prediction, helping to overcome nonlinear computational spaces: they map multiple possible outcomes, or regions of uncertainty posed by the system components, toward solution spaces. Trained networks impart a certain level of intelligence to robotic systems. This paper discusses two applications of machine vision. A 3-degrees-of-freedom (DOF) robotic assembly provides accurate cutting of soft materials with visual guidance using pixel elimination. A 6-DOF robot combines visual guidance from a supervisory camera with visual feedback from an attached camera; using a switching approach in the control strategy, pick-and-place applications are carried out. With the inclusion of ANNs to make the strategies intelligent, both systems performed better with regard to computational time and convergence. The networks make use of image features extracted from the scene for the different applications. Simulation and experimental results validate the proposed schemes and show the effectiveness of ANNs in machine vision applications.
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor controllers and multi-sensor controllers that combine several sensors.
Visual Servoing
The goal of this book is to introduce current vision applications by leading researchers worldwide and to offer knowledge that can also be applied widely in other fields. The book collects the main current studies on machine vision and makes a persuasive case for its applications. The contents demonstrate how machine vision theory is realized in different fields. Beginners will find it easy to understand developments in visual servoing, while engineers, professors and researchers can study the chapters and then apply the methods elsewhere.