A Novel Uncalibrated Visual Servoing Controller Based on Model-Free Adaptive Control Method with Neural Network
Nowadays, as the application scenarios of robotic arms continue to expand, nonspecialists increasingly come into contact with them. However, in robotic-arm visual servoing, traditional Position-Based Visual Servoing (PBVS) requires extensive calibration work, which is challenging for nonspecialists to cope with. Uncalibrated Image-Based Visual Servoing (UIBVS) frees people from this tedious calibration. This work applies a model-free adaptive control (MFAC) method, in which the controller parameters are updated in real time, giving better suppression of changes in the system and environment. A neural network is applied in the design of both the controller and the estimator of the hand-eye relationship; the network is updated with knowledge of the system's input and output information in the MFAC framework. Inspired by the "predictive model" and "receding horizon" of Model Predictive Control (MPC), and introducing similar structures into our algorithm, we realize uncalibrated visual servoing for both stationary targets and moving trajectories. Simulated experiments with a robotic manipulator are carried out to validate the proposed algorithm.
Comment: 16 pages, 8 figures
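The abstract does not spell out the MFAC update itself. For orientation, the classical baseline that such uncalibrated schemes build on — an image-Jacobian estimate refined online by a rank-1 (Broyden) correction and used in a damped least-squares control step — can be sketched as follows (function names, gains, and the damping value are illustrative assumptions, not the paper's controller):

```python
import numpy as np

def broyden_update(J, dq, de, eps=1e-8):
    """Rank-1 (Broyden) correction of the estimated image Jacobian J,
    given the commanded joint increment dq and the observed feature change de."""
    return J + np.outer(de - J @ dq, dq) / (float(dq @ dq) + eps)

def servo_step(J, e, gain=0.5, damping=1e-3):
    """Damped least-squares joint increment that drives the feature error e to zero."""
    JT = J.T
    return -gain * JT @ np.linalg.solve(J @ JT + damping * np.eye(J.shape[0]), e)
```

In a simulation loop, each iteration commands `servo_step`, observes the resulting feature change, and feeds both back into `broyden_update`, so no camera or kinematic calibration is ever required.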
Sim2Real View Invariant Visual Servoing by Recurrent Control
Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
https://fsadeghi.github.io/Sim2RealViewInvariantServo
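The paper's controller is a deep recurrent network, which cannot be reproduced from the abstract alone. The core idea — using a memory of past movements to identify how actions move the image from an unknown viewpoint — can, however, be illustrated with a toy linear analogue (the viewpoint is modeled as an unknown linear map; all names, gains, and the probing scheme are invented here, not the authors' method):

```python
import numpy as np

def fit_view_map(actions, displacements, reg=1e-6):
    """Least-squares fit of the unknown map M with displacement ≈ M @ action."""
    A = np.asarray(actions)        # (n, d) commanded actions
    D = np.asarray(displacements)  # (n, d) observed feature-error changes
    return np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ D).T

def memory_servo(observe_error, apply_action, probe=0.05, steps=30, gain=0.6):
    """Probe with small exploratory actions, fit how actions move the image
    under the unknown viewpoint, then act against the error with that map."""
    d = observe_error().shape[0]
    acts, disps = [], []
    for i in range(d):                       # one probe along each action axis
        a = probe * np.eye(d)[i]
        e0 = observe_error()
        apply_action(a)
        acts.append(a)
        disps.append(observe_error() - e0)
    M = fit_view_map(acts, disps)
    for _ in range(steps):                   # servo using the identified map
        apply_action(-gain * np.linalg.solve(M, observe_error()))
    return np.linalg.norm(observe_error())
```

Here the probing phase plays the role the recurrent memory plays in the paper: the controller only becomes accurate after it has observed how its own actions move the image.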
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor controllers and multi-sensor controllers which combine several sensors.
Uncalibrated Dynamic Mechanical System Controller
An apparatus and method for enabling an uncalibrated, model-independent controller for a mechanical system, using a dynamic quasi-Newton algorithm which incorporates velocity components of any moving system parameter(s), is provided. In the preferred embodiment, tracking of a moving target by a robot having multiple degrees of freedom is achieved using uncalibrated, model-independent visual servo control. Model-independent visual servo control is defined as using visual feedback to control a robot's servomotors without a precisely calibrated kinematic robot model or camera model. A processor updates a Jacobian and a controller provides control signals such that the robot's end effector is directed to a desired location relative to a target on a workpiece.
Georgia Tech Research Corporation
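The patent describes the dynamic quasi-Newton update only at a high level. Its key idea — discounting the portion of the observed feature change caused by target motion before applying the rank-1 Jacobian correction, then feeding the target velocity forward in the control law — can be roughly sketched as follows (variable names, gains, and the discrete-time form are my own assumptions, not the patent's claims):

```python
import numpy as np

def dynamic_broyden(J, dq, de, edot, dt, eps=1e-8):
    """Dynamic quasi-Newton (rank-1) Jacobian update: the part of the observed
    feature-error change explained by target motion (edot * dt) is removed
    before the Broyden-style correction."""
    resid = de - J @ dq - edot * dt
    return J + np.outer(resid, dq) / (float(dq @ dq) + eps)

def tracking_step(J, e, edot, gain=0.8):
    """Joint increment: cancel the current error and feed the target velocity forward."""
    return -np.linalg.pinv(J) @ (gain * e + edot)
```

Without the `edot * dt` term this reduces to a static Broyden update, which mistakes target motion for estimation error; including it is what allows tracking of a moving target.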
Alignment control using visual servoing and mobilenet single-shot multi-box detection (SSD): a review
The concept of visual feedback is highly critical for the robotic technologies that rely on it. Robot systems that depend on pre-programmed trajectories and paths tend to be unresponsive when the environment changes or an expected object is absent. This review paper aims to provide a comprehensive study of recent applications of visual servoing and deep neural networks (DNNs). PBVS and MobileNet-SSD were the algorithms chosen for alignment control of the film-handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework: feature extraction and description, visual servoing, and MobileNet-SSD. Likewise, the latest applications of visual servoing and DNNs are summarized, including a comparison of MobileNet-SSD with other sophisticated models. Previous studies show that visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing and the deep network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description, and the DNNs used to construct a robust state estimator. Visual servoing and MobileNet-SSD are therefore parameterized concepts that require careful optimization to achieve a specific purpose with distinct tools.
Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach
We present a robot eye-hand coordination learning method that can directly learn a visual task specification by watching human demonstrations. The task specification is represented as a task function, which is learned using inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedback in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our proposed method can learn directly from raw videos, which removes the need for hand-engineered task specification. It also provides task interpretability by directly approximating the task function. Moreover, benefiting from the use of a traditional UVS controller, our training process is efficient and the learned policy is independent of any particular robot platform. Various experiments show that, for a given DOF task, our method can adapt to task/environment variations in target positions, backgrounds, illumination, and occlusions without retraining.
Comment: Accepted in ICRA 201
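The pairing described above — a learned task function serving as the error signal of a classical UVS controller — can be illustrated with a toy execution loop. Everything below (the Broyden-style Jacobian refinement, names, and gains) is a generic placeholder, not the authors' implementation; the point is only that `task_fn` is pluggable, so a learned function can replace a hand-engineered feature error:

```python
import numpy as np

def uvs_execute(task_fn, observe, apply_dq, J, gain=0.5, steps=100, tol=1e-3):
    """Execution-phase UVS loop: the (possibly learned) task function supplies
    the error signal; J is the hand-eye Jacobian estimate, refined online by
    rank-1 (Broyden) corrections after each commanded increment."""
    for _ in range(steps):
        e = task_fn(observe())
        if np.linalg.norm(e) < tol:
            break
        dq = -gain * np.linalg.pinv(J) @ e
        apply_dq(dq)
        e_new = task_fn(observe())
        J += np.outer(e_new - e - J @ dq, dq) / (float(dq @ dq) + 1e-8)
    return J
```

Because the loop never consults a robot or camera model, swapping robots only changes `apply_dq` and `observe`, which is consistent with the platform independence claimed in the abstract.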
Adaptive Finite-Time Model Estimation and Control for Manipulator Visual Servoing using Sliding Mode Control and Neural Networks
Image-based visual servoing without a system model is challenging, since it is hard to obtain an accurate estimate of the hand-eye relationship from visual measurement alone. Yet the accuracy of the estimated hand-eye relationship, expressed in local linear form as a Jacobian matrix, is important to the whole system's performance. In this article, we propose a finite-time controller together with a Jacobian matrix estimator that combines online and offline estimation. The local linear formulation is derived first. Then a combination of online and offline methods is used to improve the estimation of the highly coupled, nonlinear hand-eye relationship using data collected via a depth camera. A neural network (NN) is pre-trained to give a reasonable initial estimate of the Jacobian matrix, and an online updating method then refines the offline-trained NN for a more accurate estimate. Moreover, a sliding mode control algorithm is introduced to realize a finite-time controller. Compared with previous methods, our algorithm converges faster. The proposed estimator delivers accurate initial estimates and strong tracking of the time-varying Jacobian matrix compared with other data-driven estimators. The proposed scheme combines the neural network with a finite-time control effect, which yields faster convergence than exponentially convergent schemes. Another main feature of our algorithm is that the state signals in the system are proven to be semi-globally practically finite-time stable. Several experiments are carried out to validate the proposed algorithm's performance.
Comment: 24 pages, 10 figures
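The finite-time behaviour claimed above typically comes from a fractional-power (sliding-mode style) reaching law rather than a purely linear one. A generic discrete-time sketch of such a control step, assuming the Jacobian estimate is already supplied by an estimator like the one described, is (gains, names, and the specific reaching law are illustrative, not the article's controller):

```python
import numpy as np

def sig(x, alpha):
    """Element-wise fractional-power signum: |x|**alpha * sign(x)."""
    return np.sign(x) * np.abs(x) ** alpha

def finite_time_step(J, e, k1=0.6, k2=0.3, alpha=0.5):
    """Joint-velocity command: the linear term dominates far from the goal,
    while the fractional-power term keeps the convergence rate high near zero
    error, giving (practical) finite-time rather than exponential convergence."""
    return -np.linalg.pinv(J) @ (k1 * e + k2 * sig(e, alpha))
```

With `alpha = 1` this reduces to an ordinary exponentially convergent proportional law; `alpha < 1` is what accelerates the final approach, at the cost of small chattering around zero in discrete time.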