An autonomous active vision system for complete and accurate 3D scene reconstruction
We propose in this paper an active vision approach for performing the 3D reconstruction of static scenes. The perception-action cycles are handled at various levels: from the definition of perception strategies for scene exploration down to the automatic generation of camera motions using visual servoing. To perform the reconstruction, we use a structure-from-controlled-motion method which allows an optimal estimation of the parameters of geometrical primitives. As this method is based on particular camera motions, perceptual strategies able to appropriately chain a succession of such individual primitive reconstructions are proposed in order to recover the complete spatial structure of the scene. Two algorithms are proposed to ensure the exploration of the scene. The former is an incremental reconstruction algorithm based on a prediction/verification scheme managed using decision theory and Bayes nets; it allows the visual system to obtain a high-level description of the observed part of the scene. The latter, based on the computation of new viewpoints, ensures the complete reconstruction of the scene. Experiments carried out on a robotic cell have demonstrated the validity of our approach.
Visual servoing with respect to complex objects
This paper presents new advances in the field of visual servoing. More precisely, we consider the case where complex objects are observed by a camera. In the first part, planar objects of unknown shape are considered, using image moments as input of the image-based control law. In the second part, a pose estimation and tracking algorithm is described to deal with real objects whose 3D model is known. For each case, experimental results obtained with an eye-in-hand system are presented.
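As a hedged illustration of the kind of features the first part relies on (not the paper's implementation), the raw and centered image moments of a binary blob can be computed as follows; area and centroid are the simplest moment-based features usable in a control law:

```python
# Illustrative sketch: raw and centered image moments of a binary blob,
# the kind of quantities used as inputs to a moment-based control law.

def raw_moment(pixels, p, q):
    """m_pq = sum over blob pixels of x^p * y^q."""
    return sum((x ** p) * (y ** q) for x, y in pixels)

def centered_moment(pixels, p, q):
    """mu_pq: moments taken about the blob centroid (invariant to translation)."""
    m00 = raw_moment(pixels, 0, 0)
    xg = raw_moment(pixels, 1, 0) / m00   # centroid x
    yg = raw_moment(pixels, 0, 1) / m00   # centroid y
    return sum(((x - xg) ** p) * ((y - yg) ** q) for x, y in pixels)

# A 2x2 square blob with corners at (10, 10) and (11, 11):
blob = [(10, 10), (10, 11), (11, 10), (11, 11)]
area = raw_moment(blob, 0, 0)          # m00 = 4 (pixel count)
cx = raw_moment(blob, 1, 0) / area     # centroid x = 10.5
```

Low-order moments such as the area m00 and the centroid (m10/m00, m01/m00) give directly interpretable features, which is one reason moments are attractive for servoing on objects of unknown shape.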
Active vision reconstruction strategies: a Bayesian network approach
This article addresses the problem of reconstructing polyhedral scenes in an active vision context. At the core of the reconstruction process, we use a method that constrains the camera motions so as to obtain a particularly accurate estimation of the parameters representing the 3D position of the segments. On top of this continuous localization process, reconstruction and scene-exploration strategies must be defined. The exploration step described in this article incrementally reconstructs the set of primitives that appear in the camera's field of view. It relies on a hypothesis prediction/verification approach managed with Bayesian networks. This approach yields a higher-level representation of the objects under consideration while handling local occlusion problems. The methods we have developed were implemented on the robotic vision cell at Irisa.
A Bayes nets-based prediction/verification scheme for active visual reconstruction
We propose in this paper an active vision approach for performing the 3-D reconstruction of polyhedral scenes. To perform the reconstruction, we use a structure-from-controlled-motion method which allows a robust estimation of primitive parameters. As this method is based on particular camera motions, perceptual strategies able to appropriately chain a succession of such individual primitive reconstructions are proposed in order to recover the complete spatial structure of complex scenes. Two algorithms are proposed to ensure the exploration of the scene. The former is a simple incremental reconstruction algorithm. The latter is based on a prediction/verification scheme managed using decision theory and Bayes nets; it allows the visual system to obtain a more complete, high-level description of the scene. Experiments carried out on a robotic cell have demonstrated the validity of our approach.
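The prediction/verification idea can be reduced, for illustration, to a single Bayes update (a deliberately minimal sketch, not the authors' full network): a predicted primitive is a hypothesis H whose belief is revised when a verification view does or does not confirm it.

```python
# Hedged illustration: one Bayesian update of a predicted-primitive
# hypothesis H given an image observation. The likelihoods are
# assumed values, not figures from the paper.

def bayes_update(prior, p_obs_given_h, p_obs_given_not_h):
    """Posterior P(H | obs) from Bayes' rule."""
    num = p_obs_given_h * prior
    den = num + p_obs_given_not_h * (1.0 - prior)
    return num / den

# A predicted segment starts with prior belief 0.5; the verification
# view detects a matching edge, an event far more likely if the
# prediction is correct (0.9) than if it is not (0.2).
posterior = bayes_update(0.5, 0.9, 0.2)   # rises to about 0.82
```

Chaining such updates over many predicted primitives, with the dependencies between them, is what the Bayes-net machinery of the paper manages.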
ViSP for visual servoing: a generic software platform with a wide class of robot control skills
Special issue on Software Packages for Vision-Based Control of Motion, P. Oh, D. Burschka (Eds.). ViSP (Visual Servoing Platform), a fully functional modular architecture that allows fast development of visual servoing applications, is described. The platform takes the form of a library which can be divided into three main modules: control processes, canonical vision-based tasks that contain the most classical linkages, and real-time tracking. The ViSP software environment features hardware independence, simplicity, extensibility, and portability. ViSP also provides a large library of elementary tasks with various visual features that can be combined together, an image processing library that allows the tracking of visual cues at video rate, a simulator, interfaces with various classical framegrabbers, a virtual 6-DOF robot that allows the simulation of visual servoing experiments, etc. The platform is implemented in C++ under Linux.
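The control loop that such a platform automates can be sketched in a few lines (illustrative only, not the ViSP API): the classical image-based law drives the feature error e = s - s* to zero with v = -lambda * L+ e, here with the interaction matrix approximated by the identity.

```python
# Minimal visual-servoing loop sketch (identity interaction matrix
# assumed, so velocity acts directly on the features).

def servo_step(s, s_star, lam=0.5):
    """One control iteration: velocity from current/desired features."""
    return [-lam * (si - sdi) for si, sdi in zip(s, s_star)]

s = [0.3, -0.2]          # current image-plane feature coordinates
s_star = [0.0, 0.0]      # desired feature coordinates
for _ in range(20):      # integrate s_dot = v
    v = servo_step(s, s_star)
    s = [si + vi for si, vi in zip(s, v)]
# the error decays exponentially toward zero
```

In a real system the interaction matrix links feature velocities to the six camera velocities, and a library such as ViSP supplies it for each feature type together with the tracking that measures s at video rate.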
Complex articulated object tracking
In this paper, new results are presented for tracking complex multi-body objects. The theoretical framework is based on robotics techniques and uses an a priori model of the object, including a general mechanical-link description. A new kinematic-set formulation takes into account that articulated degrees of freedom are directly observable from the camera, so their estimation does not need to pass through a kinematic chain back to the root. This makes the tracking techniques efficient and precise, leading to real-time performance and accurate measurements. The system is locally based upon an accurate modeling of a distance criterion. A general method is given for defining any type of mechanical link, and experimental results show prismatic, rotational, and helical links. A statistical M-estimation technique is applied to improve robustness. A monocular camera system was used as a real-time sensor to verify the theory.
Positioning a coarse-calibrated camera with respect to an unknown object by 2D 1/2 visual servoing
In this paper, we propose a new vision-based robot control approach halfway between classical position-based and image-based visual servoing, which avoids their respective disadvantages. The homography between some planar feature points extracted from two images (corresponding to the current and desired camera poses) is computed at each iteration. An approximate partial pose, where the translational term is known only up to a scale factor, is then deduced, from which a closed-loop control law controlling the six camera d.o.f. can be designed. Contrary to position-based visual servoing, our scheme does not need any geometric 3D model of the object. Furthermore, and contrary to image-based visual servoing, our approach ensures the convergence of the control law in the whole task space.
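The structure of the hybrid (2 1/2 D) error vector can be sketched as follows. This is an illustration with hypothetical helper names, assuming the homography has already been decomposed upstream into the rotation (theta, u) and the depth ratio Z/Z*: translation is handled by an extended image point (x, y, log(Z/Z*)) and rotation by the theta-u parameterization.

```python
# Illustrative sketch of the 2 1/2 D error vector. Inputs (x, y) are
# the current image point, z_ratio is Z/Z* from the homography, and
# (theta, u) is the axis-angle rotation between current and desired
# poses. Desired point taken at the image origin for simplicity.
import math

def hybrid_error(x, y, z_ratio, theta, u):
    """e = (x, y, log(Z/Z*), theta*u): 3 translational + 3 rotational terms."""
    return [x, y, math.log(z_ratio)] + [theta * ui for ui in u]

e = hybrid_error(0.1, -0.05, 1.2, 0.3, [0.0, 0.0, 1.0])
# At the desired pose, Z/Z* = 1 and theta = 0, so every term vanishes.
```

Splitting the error this way is what decouples translation from rotation and yields the convergence properties claimed above: only the translational part depends on the unknown scale.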
Improvements in robust 2D visual servoing
A fundamental step towards broadening the use of real-world image-based visual servoing is to deal with the important issues of reliability and robustness. To address them, a closed-loop control law is proposed that simultaneously accomplishes a visual servoing task and is robust to a general class of image processing errors. This is achieved by applying the widely accepted statistical techniques of robust M-estimation. Furthermore, improvements have been made to the weight computation process: memory and initialization. Indeed, when the error between the current and desired visual features is large, which occurs when large robot displacements are required, the M-estimator may fail to detect outliers. To address this point, the method we propose to initialize the confidence in each feature is based on the LMedS estimator. Experimental results demonstrate visual servoing tasks that resist severe outlier contamination.
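The weighting idea can be sketched as follows (a hedged approximation, not the paper's exact scheme): Tukey biweight M-estimator weights, with the scale initialized robustly from the median of absolute residuals in the spirit of LMedS, so that gross outliers receive near-zero weight even before the estimator has converged.

```python
# Sketch: Tukey biweight weights with a median-based robust scale.
# The tuning constant c = 4.6851 is the standard 95%-efficiency value
# for Gaussian inliers.
import statistics

def tukey_weights(residuals, c=4.6851):
    med = statistics.median(residuals)
    # 1.4826 * MAD approximates the inlier standard deviation.
    scale = 1.4826 * statistics.median(abs(r - med) for r in residuals)
    weights = []
    for r in residuals:
        u = (r - med) / (c * scale)
        # Inside the cutoff: smooth down-weighting; outside: weight 0.
        weights.append((1 - u * u) ** 2 if abs(u) < 1 else 0.0)
    return weights

# Nine well-matched features and one gross outlier:
res = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0, 0.12, -0.05, 0.08, 5.0]
w = tukey_weights(res)
# The outlier's weight drops to zero; close-in residuals keep weight near 1.
```

In a servoing loop these weights multiply each feature's row of the interaction matrix, so unreliable measurements stop influencing the computed camera velocity.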