Image based visual servoing using bitangent points applied to planar shape alignment
We present visual servoing strategies based on bitangents for aligning planar shapes. To acquire bitangents, we use the convex hull of a curve. Bitangent points are employed in the construction of a feature vector used for visual control. Experimental results obtained on a 7-DOF Mitsubishi PA10 robot verify the proposed method.
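As a hedged illustration of the convex-hull route to bitangents (a minimal sketch of the general idea, not the authors' implementation; all names are ours): hull edges whose endpoints are non-consecutive samples of the ordered boundary bridge a concavity, and the two contact points of such an edge lie on a line tangent to the shape at both points, i.e. a bitangent.

```python
import numpy as np
from scipy.spatial import ConvexHull

def bitangent_points(curve):
    """Find candidate bitangent point pairs of a closed planar curve.

    `curve` is an (n, 2) array of ordered boundary samples. Hull edges
    whose endpoints are not consecutive samples bridge a concavity;
    their two contact points lie on a bitangent line of the shape.
    """
    curve = np.asarray(curve)
    hull = ConvexHull(curve)
    n = len(curve)
    v = hull.vertices            # indices into `curve`, in hull order
    pairs = []
    for a, b in zip(v, np.roll(v, -1)):
        # consecutive curve samples form an ordinary hull edge: skip
        if (b - a) % n > 1 and (a - b) % n > 1:
            pairs.append((curve[a], curve[b]))
    return pairs
```

For a square boundary with a single dent, this yields exactly one pair: the two hull vertices that span the concavity.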
Visual servoing of aerial manipulators
The final publication is available at link.springer.com. This chapter describes the classical techniques for controlling an aerial manipulator by means of visual information and presents an uncalibrated image-based visual servo method to drive the aerial vehicle. The proposed technique has the advantage that it makes only mild assumptions about the principal point and skew values of the camera, and it does not require prior knowledge of the focal length, in contrast to traditional image-based approaches. Peer reviewed. Postprint (author's final draft).
Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach
We present a robot eye-hand coordination learning method that can directly learn visual task specifications by watching human demonstrations. A task specification is represented as a task function, which is learned using inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedback in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our proposed method can learn directly from raw videos, which removes the need for hand-engineered task specification. It also provides task interpretability by directly approximating the task function. Moreover, by building on a traditional UVS controller, our training process is efficient and the learned policy is independent of any particular robot platform. Various experiments show that, for a given DOF task, our method can adapt to task/environment variations in target positions, backgrounds, illumination, and occlusions without retraining. Comment: Accepted in ICRA 201
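The execution-phase loop described above can be sketched as a proportional controller that drives the learned task function toward zero (a minimal sketch under standard UVS assumptions; `task_fn`, `J_hat`, and the gain are illustrative, not the paper's implementation):

```python
import numpy as np

def uvs_step(task_fn, state, q, J_hat, gain=0.5):
    """One proportional UVS step driven by a learned task function.

    `task_fn` maps an observation/state to an error vector that is
    zero at task completion; `J_hat` is the current estimate of the
    Jacobian relating joint motion to changes in that error.
    """
    e = task_fn(state)                       # continuous feedback signal
    dq = -gain * np.linalg.pinv(J_hat) @ e   # drive the error to zero
    return q + dq
```

With an accurate Jacobian estimate, each step contracts the task error by the gain factor, so the configuration converges to a zero of the task function.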
Uncalibrated image-based visual servoing
This paper develops a new method for uncalibrated image-based visual servoing. In contrast to traditional image-based visual servoing, the proposed solution does not require a known camera focal length for the computation of the image Jacobian. Instead, the focal length is estimated at run time from observation of the tracked target. The technique is shown to outperform classical visual servoing schemes in situations with noisy calibration parameters and under unexpected changes in the camera zoom. The method's performance is demonstrated both in simulation experiments and in a ROS implementation of a quadrotor servoing task. The developed solution is tightly integrated with ROS and is made available as part of the IRI ROS stack.
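A common way to refine an image Jacobian estimate online, without camera calibration, is a Broyden rank-one update from observed motion. The sketch below illustrates that general idea; it is an assumption-laden stand-in, not necessarily the estimator used in this paper:

```python
import numpy as np

def broyden_update(J_hat, dq, ds, alpha=0.1):
    """Rank-one Broyden correction of an estimated image Jacobian.

    After a joint motion `dq` that produced feature change `ds`,
    nudge `J_hat` so it better explains the observed pair (dq, ds).
    `alpha` in (0, 1] controls how much of the correction is applied.
    """
    denom = dq @ dq
    if denom < 1e-12:              # no motion: nothing to learn
        return J_hat
    residual = ds - J_hat @ dq     # prediction error on this step
    return J_hat + alpha * np.outer(residual, dq) / denom
```

Repeated updates along sufficiently varied motion directions drive the estimate toward the true Jacobian, since each update removes the prediction error along the explored direction.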
Bridging Low-level Geometry to High-level Concepts in Visual Servoing of Robot Manipulation Task Using Event Knowledge Graphs and Vision-Language Models
In this paper, we propose a framework for building knowledgeable robot control in the scope of smart human-robot interaction, by empowering a basic uncalibrated visual servoing controller with contextual knowledge through the joint use of event knowledge graphs (EKGs) and large-scale pretrained vision-language models (VLMs). The framework is twofold: first, we interpret low-level image geometry as high-level concepts, allowing us to prompt VLMs and to select geometric point and line features for motor-control skills; second, we create an event knowledge graph (EKG) to conceptualize a robot manipulation task of interest, where the main body of the EKG is characterized by an executable behavior tree and the leaves by semantic concepts relevant to the manipulation context. We demonstrate, in an uncalibrated environment with real robot trials, that our method reduces reliance on human annotation during task interfacing, allows the robot to perform activities of daily living more easily by treating low-level geometry-based motor-control skills as high-level concepts, and is beneficial for building cognitive reasoning into smart robot applications.
Uncalibrated visual servo for unmanned aerial manipulation
This paper addresses the problem of autonomously servoing an unmanned redundant aerial manipulator using computer vision. The overactuation of the system is exploited by means of a hierarchical control law, which makes it possible to prioritize several tasks during flight. We propose a safety-related primary task to avoid possible collisions. As a secondary task, we present an uncalibrated image-based visual servo strategy to drive the arm end-effector to a desired position and orientation by using a camera attached to it. In contrast to previous visual servo approaches, a known value of the camera focal length is not strictly required. To further improve flight behavior, we hierarchically add one task that reduces dynamic effects by vertically aligning the arm center of gravity with the multirotor gravitational vector, and another that keeps the arm close to a desired high-manipulability configuration while avoiding arm joint limits. The performance of the hierarchical control law, with and without activation of each of the tasks, is shown in simulations and in real experiments, confirming the viability of such a prioritized control scheme for aerial manipulation.
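The hierarchical control law can be illustrated with the classical null-space task-priority recursion, where each lower-priority task acts only in the null space of the tasks above it (a hedged sketch under the usual kinematic-control assumptions; function and variable names are ours):

```python
import numpy as np

def prioritized_velocity(tasks):
    """Task-priority resolution for a redundant manipulator.

    `tasks` is a list of (J, xdot) pairs ordered by priority; each
    lower-priority task is projected into the null space of all tasks
    above it, so it cannot disturb them.
    """
    n = tasks[0][0].shape[1]
    dq = np.zeros(n)
    N = np.eye(n)                  # accumulated null-space projector
    for J, xdot in tasks:
        JN = J @ N                 # task Jacobian restricted to the null space
        dq = dq + np.linalg.pinv(JN) @ (xdot - J @ dq)
        N = N - np.linalg.pinv(JN) @ JN
    return dq
```

When a secondary task conflicts entirely with the primary one, its restricted Jacobian vanishes and it contributes nothing, which is exactly the prioritization behavior the abstract describes.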