Image based visual servoing using algebraic curves applied to shape alignment
Visual servoing schemes generally employ various image features (points, lines, moments, etc.) in their control formulation. This paper presents a novel method for using boundary information in visual servoing. Object boundaries are modeled by algebraic equations and decomposed as a unique sum of products of lines. We propose that these lines can be used to extract useful features for visual servoing purposes. In this paper, the intersections of these lines are used as point features in visual servoing. Simulations are performed with a 6 DOF Puma 560 robot using the Matlab Robotics Toolbox for the alignment of a free-form object. Experiments are also carried out with a 2 DOF SCARA direct-drive robot. Both simulation and experimental results are quite promising and show the potential of our new method.
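The abstract above uses intersections of the decomposed boundary lines as point features. As an illustration of that step only (a minimal sketch, not the paper's decomposition algorithm), two image lines written in homogeneous form (a, b, c), meaning a·x + b·y + c = 0, intersect at the cross product of their coefficient vectors:

```python
import numpy as np

def line_intersection(l1, l2):
    """Intersect two image lines given in homogeneous form (a, b, c),
    i.e. a*x + b*y + c = 0. Returns the Euclidean intersection point."""
    p = np.cross(l1, l2)          # homogeneous intersection point
    if abs(p[2]) < 1e-12:
        raise ValueError("lines are parallel")
    return p[:2] / p[2]

# Hypothetical lines obtained from a decomposed boundary curve:
l1 = np.array([1.0, 0.0, -2.0])   # the line x = 2
l2 = np.array([0.0, 1.0, -3.0])   # the line y = 3
print(line_intersection(l1, l2))  # -> [2. 3.]
```

Each such intersection point can then be fed to a standard point-feature visual servoing controller.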
Positioning and trajectory following tasks in microsystems using model free visual servoing
In this paper, we explore model-free visual servoing algorithms by experimentally evaluating their performance on various tasks performed on a microassembly workstation developed in our lab. Model-free, or so-called uncalibrated, visual servoing requires neither calibration of the system (microscope-camera-micromanipulator) nor a model of the observed scene, and it is robust to parameter changes and disturbances. We tested its performance in point-to-point positioning and various trajectory-following tasks. Experimental results validate the utility of model-free visual servoing in microassembly tasks.
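Model-free schemes like the one above estimate the image Jacobian online rather than deriving it from calibration. One common choice for such online estimation (an illustrative sketch, not necessarily the authors' exact estimator) is a rank-one Broyden update driven by the observed feature change:

```python
import numpy as np

def broyden_update(J, ds, dq, alpha=1.0):
    """Rank-one Broyden update of an estimated image Jacobian J.
    ds: observed change in image features; dq: applied actuator step."""
    denom = dq @ dq
    if denom < 1e-12:          # ignore negligible motions
        return J
    return J + alpha * np.outer(ds - J @ dq, dq) / denom

# Toy example with a hypothetical 2x2 "true" Jacobian: start from a
# rough guess and refine it from small exploratory motions.
J_true = np.array([[1.0, 0.2], [0.0, 1.0]])
J_est = np.eye(2)
rng = np.random.default_rng(0)
for _ in range(50):
    dq = rng.normal(size=2) * 0.01
    ds = J_true @ dq           # feature change the camera would observe
    J_est = broyden_update(J_est, ds, dq)
print(np.round(J_est, 2))      # converges toward J_true
```

Because the estimate is refreshed at every step, the controller adapts when the optics or the manipulator configuration change, which is the robustness property the abstract highlights.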
Exploring Convolutional Networks for End-to-End Visual Servoing
Present image-based visual servoing approaches rely on extracting hand-crafted visual features from an image. Choosing the right set of features is important, as it directly affects the performance of any approach. Motivated by recent breakthroughs in the performance of data-driven methods on recognition and localization tasks, we aim to learn visual feature representations suitable for servoing tasks in unstructured and unknown environments. In this paper, we present an end-to-end learning-based approach for visual servoing in diverse scenes where knowledge of camera parameters and scene geometry is not available a priori. This is achieved by training a convolutional neural network over color images with synchronised camera poses. Through experiments performed in simulation and on a quadrotor, we demonstrate the efficacy and robustness of our approach for a wide range of camera poses in both indoor and outdoor environments.
Comment: IEEE ICRA 201
Sim2Real View Invariant Visual Servoing by Recurrent Control
Humans are remarkably proficient at controlling their limbs and tools from a
wide range of viewpoints and angles, even in the presence of optical
distortions. In robotics, this ability is referred to as visual servoing:
moving a tool or end-point to a desired location using primarily visual
feedback. In this paper, we study how viewpoint-invariant visual servoing
skills can be learned automatically in a robotic manipulation scenario. To this
end, we train a deep recurrent controller that can automatically determine
which actions move the end-point of a robotic arm to a desired object. The
problem that must be solved by this controller is fundamentally ambiguous:
under severe variation in viewpoint, it may be impossible to determine the
actions in a single feedforward operation. Instead, our visual servoing system
must use its memory of past movements to understand how the actions affect the
robot motion from the current viewpoint, correcting mistakes and gradually
moving closer to the target. This ability is in stark contrast to most visual
servoing methods, which either assume known dynamics or require a calibration
phase. We show how we can learn this recurrent controller using simulated data
and a reinforcement learning objective. We then describe how the resulting
model can be transferred to a real-world robot by disentangling perception from
control and only adapting the visual layers. The adapted model can servo to
previously unseen objects from novel viewpoints on a real-world Kuka IIWA
robotic arm. For supplementary videos, see:
https://fsadeghi.github.io/Sim2RealViewInvariantServo
Model-based vs. model-free visual servoing: A performance evaluation in microsystems
In this paper, model-based and model-free image-based visual servoing (VS) approaches are implemented on a microassembly workstation, and their regulation and tracking performances are evaluated. Precise image-based VS relies on computation of the image Jacobian. In model-based visual servoing, the image Jacobian is computed by calibrating the optical system. A precisely calibrated model-based VS promises better positioning and tracking performance than the model-free approach. In the model-free approach, however, optical system calibration is not required, because the Jacobian is estimated dynamically; the approach thus has the advantage of adapting to different operating modes.
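In the model-based case described above, the image Jacobian is available in closed form. For a normalized image point (x, y) at depth Z, the classic interaction matrix relates the feature velocity to the 6 DOF camera twist, and the control law v = -λ L⁺ e drives the feature error e to zero. A minimal sketch (standard IBVS textbook form, with hypothetical feature values, not the paper's microscope setup):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Classic interaction matrix of a normalized image point (x, y)
    at depth Z, mapping the 6 DOF camera twist to feature velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_control(features, desired, depths, lam=0.5):
    """One IBVS step: v = -lambda * pinv(L) @ e, stacking one
    2x6 interaction-matrix block per point feature."""
    e = (features - desired).ravel()
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -lam * np.linalg.pinv(L) @ e

# Toy usage: four point features should shrink toward the image center,
# so the commanded twist backs the camera away (negative z translation).
s = np.array([[0.1, 0.1], [-0.1, 0.1], [-0.1, -0.1], [0.1, -0.1]])
s_star = s * 0.5
v = ibvs_control(s, s_star, depths=[1.0] * 4)
print(np.round(v, 3))   # 6-vector camera twist (vx, vy, vz, wx, wy, wz)
```

The model-free variant discussed in the abstract replaces the analytic L (which needs the calibrated depths and intrinsics) with a dynamically estimated one.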
Alignment control using visual servoing and MobileNet single-shot multi-box detection (SSD): a review
Visual servoing is highly critical for robotic technologies that rely on visual feedback. In this context, robot systems tend to be unresponsive because they rely on pre-programmed trajectories and paths, and thus cannot react when the environment changes or an object is absent. This review paper aims to provide a comprehensive survey of recent applications of visual servoing and deep neural networks (DNNs). PBVS and MobileNet-SSD were the algorithms chosen for alignment control of the film-handler mechanism of a portable x-ray system. The paper also discusses the theoretical framework: feature extraction and description, visual servoing, and MobileNet-SSD. Likewise, the latest applications of visual servoing and DNNs are summarized, including a comparison of MobileNet-SSD with other sophisticated models. As previous studies have shown, visual servoing and MobileNet-SSD provide reliable tools and models for manipulating robotic systems, including where occlusion is present. Furthermore, effective alignment control relies significantly on the reliability of the visual servoing and the deep neural network, which is shaped by parameters such as the type of visual servoing, the feature extraction and description method, and the DNNs used to construct a robust state estimator. Therefore, visual servoing and MobileNet-SSD are parameterized concepts that require further optimization to achieve a specific purpose with distinct tools.
Aerial-Ground collaborative sensing: Third-Person view for teleoperation
Rapid deployment and operation are key requirements in time-critical applications such as Search and Rescue (SaR). Efficiently teleoperated ground robots can support first responders in such situations. However, first-person-view teleoperation is sub-optimal in difficult terrain, while a third-person perspective can drastically increase teleoperation performance. Here, we propose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide a third-person perspective to ground robots. While our approach is based on local visual servoing, it further leverages the global localization of several ground robots to seamlessly transfer between these ground robots in GPS-denied environments. One MAV can thereby support multiple ground robots on demand. Furthermore, our system enables different visual detection regimes, enhanced operability, and return-home functionality. We evaluate our system in real-world SaR scenarios.
Comment: Accepted for publication in 2018 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR)
Visual Servoing from Deep Neural Networks
We present a deep neural network-based method to perform high-precision, robust, and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned on this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot.
Comment: fixed authors list
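Once a network estimates the relative pose between the current and desired views, as in the abstract above, a simple pose-based servoing law can consume it. The sketch below shows one common variant (exponential decay of the pose error; the input values are hypothetical, standing in for the CNN's output):

```python
import numpy as np

def pbvs_control(t, theta_u, lam=0.5):
    """Simple pose-based servo law: drive the estimated pose error
    exponentially to zero. t: relative translation; theta_u: axis-angle
    rotation error (both would come from the pose-estimation network)."""
    v = -lam * np.asarray(t, dtype=float)        # translational velocity
    w = -lam * np.asarray(theta_u, dtype=float)  # rotational velocity
    return np.concatenate([v, w])

# Hypothetical network output: 5 cm offset along x, 0.1 rad about z.
twist = pbvs_control(t=[0.05, 0.0, 0.0], theta_u=[0.0, 0.0, 0.1])
print(twist)   # 6-vector camera twist that reduces both errors
```

Re-estimating the pose at every frame closes the loop, which is what lets such schemes tolerate the lighting changes and occlusions the abstract mentions.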
Visual servoing of nonholonomic cart
This paper presents a visual feedback control scheme for a nonholonomic cart without dead-reckoning capabilities. A camera is mounted on the cart and observes cues attached to the environment. The dynamics of the cart are transformed into a coordinate system in the image plane, and an image-based controller that linearizes the dynamics is proposed. Since the positions of the cues in the image plane are controlled directly, the possibility of missing the cues is reduced considerably. Simulations are carried out to evaluate the validity of the proposed scheme. Experiments on a radio-controlled car with a CCD camera are also reported.
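A standard way to linearize unicycle-type dynamics, related in spirit to the controller above (shown here in Cartesian coordinates for simplicity, not in the paper's image-plane formulation), is to control a look-ahead point a fixed distance ahead of the axle; its dynamics become linear and decoupled:

```python
import numpy as np

def lookahead_control(pose, target, d=0.1, k=1.0):
    """Feedback linearization for a unicycle (x, y, theta) by steering a
    point a distance d ahead of the axle; returns (v, omega)."""
    x, y, th = pose
    p = np.array([x + d * np.cos(th), y + d * np.sin(th)])
    # p_dot = G(th) @ [v, omega], with G invertible whenever d != 0:
    G = np.array([[np.cos(th), -d * np.sin(th)],
                  [np.sin(th),  d * np.cos(th)]])
    return np.linalg.solve(G, -k * (p - target))  # enforce p_dot = -k (p - target)

# Simulate convergence of the look-ahead point to a goal position.
pose = np.array([0.0, 0.0, 0.0])
goal = np.array([1.0, 1.0])
dt = 0.01
for _ in range(2000):
    v, w = lookahead_control(pose, goal)
    pose += dt * np.array([v * np.cos(pose[2]), v * np.sin(pose[2]), w])
p_final = pose[:2] + 0.1 * np.array([np.cos(pose[2]), np.sin(pose[2])])
print(np.round(p_final, 3))   # look-ahead point ends near the goal
```

The paper's controller performs the analogous linearization directly on cue positions in the image plane, which is why the cues are unlikely to leave the field of view.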