Distance-based and Orientation-based Visual Servoing from Three Points
This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations. These invariants are built using the Cartesian distances between the spherical projections of the three points. The second part of the set corresponds to the angle-axis representation of a rotation matrix measured from the image of two points. In a theoretical comparison with the classical perspective coordinates of points, the new set does not present more singularities. In addition, using the new set inside its nonsingular domain, a classical control law is proven to be optimal for pure rotational motions. The theoretical results and the robustness of the new control scheme to point-range errors are validated through simulations and experiments on a 6-DOF robot arm.
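The rotation-invariant part of the feature set described above can be illustrated with a minimal sketch: each 3D point is projected onto the unit sphere centered at the camera, and the Cartesian distances between these spherical projections are unchanged by any camera rotation. The point coordinates and rotation below are illustrative values, not data from the paper.

```python
import numpy as np

def spherical_projection(P):
    """Project a 3D point onto the unit sphere centered at the camera."""
    return P / np.linalg.norm(P)

def rotation_invariant_features(points):
    """Cartesian distances between the spherical projections of three points."""
    s = [spherical_projection(P) for P in points]
    pairs = [(0, 1), (0, 2), (1, 2)]
    return np.array([np.linalg.norm(s[i] - s[j]) for i, j in pairs])

# A pure camera rotation moves every point as P -> R @ P in the camera
# frame; it rotates each spherical projection but preserves their
# mutual distances, so the three features stay constant.
points = [np.array([0.1, 0.2, 1.0]),
          np.array([-0.3, 0.1, 1.5]),
          np.array([0.2, -0.1, 0.8])]
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
f_before = rotation_invariant_features(points)
f_after = rotation_invariant_features([R @ P for P in points])
assert np.allclose(f_before, f_after)
```

Because these three features are constant under rotation, they can be paired with the angle-axis features to obtain the decoupled 6-DOF control described in the abstract.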
Active sensor placement for complete scene reconstruction and exploration
This paper deals with the 3D structure estimation and exploration of a scene using active vision. We have used the structure-from-controlled-motion approach to obtain a precise and robust estimation of the 3D structure of geometrical primitives. Since this approach requires gazing successively at the considered primitives, we have developed perceptual strategies able to perform a succession of robust estimations without any assumption on the number or the localization of the different objects. An exploration process centered on current visual features and on the structure of the previously studied primitives is presented. This leads to a gaze-planning strategy that mainly uses a representation of known and unknown areas as a basis for selecting viewpoints. The proposed strategy ensures the completeness of the reconstruction.
Multi-sensor data fusion in sensor-based control: application to multi-camera visual servoing
A low-level sensor fusion scheme is presented for the positioning of a multi-sensor robot. This non-hierarchical framework can be used for robot arms or other velocity-controlled robots, and is part of the task-function approach. A stability analysis is presented for the general case, then several control laws illustrate the versatility of the framework. This approach is applied to the multi-camera eye-in-hand/eye-to-hand configuration in visual servoing. Experimental results demonstrate the feasibility and the effectiveness of the proposed control laws. Mono-camera and multi-camera schemes are compared, showing that the proposed sensor fusion scheme improves the behavior of a robot arm.
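The low-level fusion idea described above can be sketched as follows: rather than combining high-level decisions, every sensor's feature error and interaction matrix are stacked into one task function, and a single classical control law is applied to the stacked system. The two-camera setup and numeric values below are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

def fused_control(errors, interaction_matrices, lam=0.5):
    """Low-level fusion: stack every sensor's error vector and interaction
    matrix, then apply the classical task-function control law
    v = -lam * pinv(L) @ e for a 6-DOF velocity-controlled robot."""
    e = np.concatenate(errors)
    L = np.vstack(interaction_matrices)
    return -lam * np.linalg.pinv(L) @ e

# Two hypothetical cameras, each contributing a 2D feature error and a
# 2x6 interaction matrix (illustrative values, not a calibrated model).
errors = [np.array([0.02, -0.01]), np.array([0.03, 0.02])]
L_list = [np.hstack([np.eye(2), np.zeros((2, 4))]),
          np.hstack([np.zeros((2, 4)), np.eye(2)])]
v = fused_control(errors, L_list)  # 6-vector velocity command
```

The pseudoinverse of the stacked interaction matrix weights all sensors in a single least-squares sense, which is what makes the scheme non-hierarchical: no sensor's command overrides another's.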
Avoiding joint limits with a low-level fusion scheme
Joint-limit avoidance is a crucial issue in sensor-based control. In this paper we propose an avoidance strategy based on low-level data fusion. The joint positions of a robot arm are considered as features that are continuously added to the control scheme when they approach the joint limits, and removed when the position is safe. We present an optimal tuning of the avoidance scheme, ensuring the main task is disturbed as little as possible. We propose additional strategies to solve the particular cases of an unsafe desired position and local minima. The control scheme is applied to the avoidance of joint limits while performing visual servoing. Both simulation and experimental results illustrate the validity of our approach.
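The continuous addition and removal of joint-position features described above requires a smooth activation weight per joint: zero inside a safe interval and growing as the joint nears a limit. A minimal sketch of such a weight is shown below; the piecewise-linear shape and the `rho` activation-zone parameter are illustrative assumptions, not the paper's exact tuning.

```python
def limit_weight(q, q_min, q_max, rho=0.1):
    """Activation weight for a joint-position feature.

    Returns 0 inside the safe interval and rises continuously to 1
    at a joint limit. rho is the fraction of the joint range treated
    as the activation zone (an assumed, illustrative parameter).
    """
    span = q_max - q_min
    lo = q_min + rho * span  # lower activation threshold
    hi = q_max - rho * span  # upper activation threshold
    if q < lo:
        return (lo - q) / (rho * span)
    if q > hi:
        return (q - hi) / (rho * span)
    return 0.0

# The weight stays at zero over most of the range and activates near a limit.
q_min, q_max = -1.0, 1.0
safe = limit_weight(0.0, q_min, q_max)    # center of range -> 0.0
near = limit_weight(0.95, q_min, q_max)   # near the upper limit -> positive
```

Because the weight is continuous in `q`, adding the weighted joint feature to the fused control scheme introduces no discontinuity in the velocity command, which is what keeps the main visual-servoing task minimally disturbed.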
Visual detection and 3D model-based tracking for landing on aircraft carrier
A challenging task of airborne operations remains landing on the carrier deck, which limits the carrier's operational efficiency in rough seas. In this paper, a method for visual detection and tracking of the carrier is described. With the help of the aircraft sensors, the carrier is first detected in the image using a warped patch of a reference image. This provides an initialization to a real-time 3D model-based tracker estimating the camera pose during the sequence. The method is demonstrated and evaluated using a simulator with high-fidelity visualization and on real video.
ViSP for visual servoing: a generic software platform with a wide class of robot control skills
Special issue on Software Packages for Vision-Based Control of Motion, P. Oh, D. Burschka (Eds.). ViSP (Visual Servoing Platform), a fully functional modular architecture that allows fast development of visual servoing applications, is described. The platform takes the form of a library which can be divided into three main modules: control processes, canonical vision-based tasks that contain the most classical linkages, and real-time tracking. The ViSP software environment features independence with respect to the hardware, simplicity, extendibility, and portability. ViSP also features a large library of elementary tasks with various visual features that can be combined together, an image processing library that allows the tracking of visual cues at video rate, a simulator, an interface with various classical framegrabbers, a virtual 6-DOF robot that allows the simulation of visual servoing experiments, etc. The platform is implemented in C++ under Linux.