
    Vision-based Manipulation of Deformable and Rigid Objects Using Subspace Projections of 2D Contours

    This paper proposes a unified vision-based manipulation framework using image contours of deformable/rigid objects. Instead of using human-defined cues, the robot automatically learns the features from processed vision data. Our method simultaneously generates, from the same data, both the visual features and the interaction matrix that relates them to the robot control inputs. Extraction of the feature vector and control commands is done online and adaptively, with little data required for initialization. The method allows the robot to manipulate an object without knowing whether it is rigid or deformable. To validate our approach, we conduct numerical simulations and experiments with both deformable and rigid objects.
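    The pipeline the abstract describes, learning a low-dimensional feature subspace from raw contour data and, from the same data, a linear interaction matrix relating those features to the control inputs, can be sketched in a few lines of numpy. This is a minimal illustration in the spirit of the abstract (PCA subspace plus a least-squares fit), not the authors' exact algorithm; all function and variable names are hypothetical.

```python
import numpy as np

def learn_features_and_interaction(contours, controls, k=2):
    """Learn a k-dim feature subspace from flattened 2D contour samples
    (rows of `contours`, shape (T, D)) and a linear interaction matrix
    relating feature increments to the controls applied between frames
    (`controls`, shape (T-1, m)).  A sketch, not the paper's method."""
    X = np.asarray(contours, float)
    U = np.asarray(controls, float)
    mu = X.mean(axis=0)
    # Feature subspace via PCA (SVD of the centred data matrix)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:k]                              # (k, D) projection rows
    S = (X - mu) @ P.T                      # (T, k) feature trajectory
    dS = np.diff(S, axis=0)                 # feature increments
    # Least-squares fit of M with dS ≈ U @ M
    M, *_ = np.linalg.lstsq(U, dS, rcond=None)
    return mu, P, M.T                       # interaction matrix J: (k, m)

def servo_command(mu, P, J, contour, s_star, gain=0.5):
    """One control step: drive the learned features toward a target."""
    s = P @ (np.asarray(contour, float) - mu)
    return -gain * np.linalg.pinv(J) @ (s - s_star)
```

    Because the interaction matrix is refit from recent data, the same loop applies whether the contour belongs to a rigid or a deformable object, which is the point the abstract emphasises.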

    High-Speed Vision and Force Feedback for Motion-Controlled Industrial Manipulators

    Over the last decades, both force sensors and cameras have emerged as useful sensors for different applications in robotics. This thesis considers a number of dynamic visual tracking and control problems, as well as the integration of these techniques with contact force control. Different topics ranging from basic theory to system implementation and applications are treated. A new interface developed for external sensor control is presented, designed by making non-intrusive extensions to a standard industrial robot control system. The structure of these extensions is presented, the system properties are modeled and experimentally verified, and results from force-controlled stub grinding and deburring experiments are reported. A novel system for force-controlled drilling using a standard industrial robot is also demonstrated. The solution is based on the use of force feedback to control the contact forces and to suppress the sliding motions of the pressure foot that would otherwise occur during the drilling phase. Basic methods for feature-based tracking and servoing are presented, together with an extension for constrained motion estimation based on a dual quaternion pose parametrization. A method for multi-camera real-time rigid body tracking under time constraints is also presented, based on an optimal selection of the measured features. The developed tracking methods are used as the basis for two different approaches to vision/force control, which are illustrated in experiments. Intensity-based techniques for tracking and vision-based control are also developed. A dynamic visual tracking technique based directly on the image intensity measurements is presented, together with new stability-based methods suitable for dynamic tracking and feedback problems. The stability-based methods outperform the previous methods in many situations, as shown in simulations and experiments.

    Une approche contour pour la commande basée image sur objets de forme complexe (A contour-based approach to image-based visual servoing on objects of complex shape)

    We describe a method to achieve robotic positioning tasks by image-based visual servoing when the observed object has a complex and unknown shape. We first focus on the computation of an analytical expression of the interaction matrix based on a polar description of the image contour of the object. Experimental results are presented to validate the proposed algorithm. In particular, the robustness of the control law is tested with respect to a coarsely calibrated system, to an approximation of the depth of the object, and to partial occlusion.
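    A polar description of an image contour expresses the contour as a radius-versus-angle signature about its centroid. The snippet below is a minimal sketch of computing such a signature from sampled contour points; it only illustrates the representation, not the paper's analytical interaction-matrix derivation, and the binning scheme and normalisation are assumptions.

```python
import numpy as np

def polar_signature(contour, n_bins=32):
    """Polar description r(theta) of a closed 2D contour about its
    centroid: average radius in each angular bin, scale-normalised."""
    pts = np.asarray(contour, float)           # (N, 2) contour points
    d = pts - pts.mean(axis=0)                 # centre on the centroid
    theta = np.arctan2(d[:, 1], d[:, 0])       # angle of each point
    r = np.hypot(d[:, 0], d[:, 1])             # radius of each point
    bins = np.floor((theta + np.pi) / (2 * np.pi) * n_bins).astype(int)
    bins = np.clip(bins, 0, n_bins - 1)
    sig = np.zeros(n_bins)
    for b in range(n_bins):
        m = bins == b
        sig[b] = r[m].mean() if m.any() else 0.0
    return sig / max(sig.max(), 1e-9)          # normalise out scale
```

    Such a signature is defined for any closed contour, which is what makes a contour-based formulation attractive for objects of complex, unknown shape.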

    Visual Feedback Without Geometric Features Against Occlusion: A Walsh Basis

    Date of Online Publication: 09 January 2018.
    For visual feedback without geometric features, this brief proposes applying a basis built from the Walsh functions in order to reduce the off-line experimental cost. Depending on the resolution, the feedback is implementable and achieves closed-loop stability of dynamical systems as long as input-output linearity on the matrix space holds. Remarkably, part of the occlusion effects is rejected outright, and the remaining part is attenuated. The validity is confirmed by experimental feedback for nonplanar sloshing.
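    Walsh functions take only the values +1 and -1 and, for lengths that are powers of two, appear as the rows of a Sylvester-construction Hadamard matrix (in natural rather than sequency order). As a hedged illustration of the resolution-dependent idea, the sketch below projects a sampled signal onto its first k Walsh coefficients; the brief's actual feedback design is not reproduced here.

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n-by-n Hadamard matrix (n must be
    a power of two); its rows are Walsh functions in natural order."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def walsh_project(signal, k):
    """Approximate a length-2^p sampled signal by its first k Walsh
    coefficients, i.e. a k-term (resolution-k) reconstruction."""
    n = len(signal)
    H = hadamard(n)
    coeffs = H @ np.asarray(signal, float) / n   # Walsh-Hadamard transform
    coeffs[k:] = 0.0                             # truncate the resolution
    return H.T @ coeffs                          # reconstruct from k terms
```

    Because each basis function is a fixed ±1 pattern, the projection needs no geometric feature extraction, which matches the brief's motivation of reducing off-line experimental cost.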

    Autonomous Target Recognition and Localization for Manipulator Sampling Tasks

    Future exploration missions will require autonomous robotic operations to minimize overhead on human operators. Autonomous manipulation in unknown environments requires target identification and tracking from initial discovery through grasp and stow sequences. Even with a supervisor in the loop, automating target identification and localization processes significantly lowers operator workload and data throughput requirements. This thesis introduces the Autonomous Vision Application for Target Acquisition and Ranging (AVATAR), a software system capable of recognizing appropriate targets and determining their locations for manipulator retrieval tasks. AVATAR utilizes an RGB color filter to segment possible sampling or tracking targets, applies geometry-based matching constraints, and performs stereo triangulation to determine the absolute 3-D target position. Neutral buoyancy and 1-G tests verify AVATAR capabilities over a diverse matrix of targets and visual environments as well as camera and manipulator configurations. AVATAR repeatably and reliably recognizes targets and provides real-time position data sufficiently accurate for autonomous sampling.
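    The final step of the pipeline, stereo triangulation, recovers 3-D position from the pixel disparity between a rectified camera pair. The sketch below is the textbook pinhole/disparity relation, shown only to make the step concrete; parameter names are hypothetical and AVATAR's actual implementation is not described at this level in the abstract.

```python
def stereo_triangulate(uv_left, uv_right, f, baseline, cx, cy):
    """Triangulate a 3-D point from a rectified stereo pair.
    f: focal length in pixels; baseline: camera separation in metres;
    (cx, cy): principal point.  Depth follows Z = f * B / disparity."""
    xl, yl = uv_left
    xr, _ = uv_right
    d = xl - xr                        # horizontal disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point not triangulable")
    Z = f * baseline / d               # depth from disparity
    X = (xl - cx) * Z / f              # back-project to camera frame
    Y = (yl - cy) * Z / f
    return X, Y, Z
```

    The accuracy of this step degrades with distance (depth error grows roughly with Z squared for a fixed disparity error), which is why the abstract's validation over varied camera configurations matters.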

    Visual Perception for Manipulation and Imitation in Humanoid Robots

    This thesis deals with visual perception for manipulation and imitation in humanoid robots. In particular, real-time methods for object recognition and pose estimation, as well as for markerless human motion capture, have been developed. The only sensor used was a small-baseline stereo camera system (approximately human eye distance). An extensive experimental evaluation has been performed on simulated as well as real image data from real-world scenarios using the humanoid robot ARMAR-III.

    Linear 3D object pose estimation with dense sample images

    In parameter estimation from images by linear regression, the determination of the regression coefficients has very many degrees of freedom because the dimension of the image vector is very high. In this paper, we propose a sequential algorithm for computing the regression coefficients, which makes the calculation feasible at reasonable computational cost even for dense training samples. We then apply this method to the pose estimation of 3-D objects and, from experiments with the COIL-20 image library, discuss the limits of linear 3-D pose estimation by linear regression.
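    A standard way to compute regression coefficients sequentially, one sample at a time and without holding the full dense sample set in memory, is recursive least squares. The sketch below shows that general technique as an illustration of the sequential idea; it is not claimed to be the paper's exact update rule, and the class and parameter names are hypothetical.

```python
import numpy as np

class SequentialRegressor:
    """Recursive least-squares fit of a linear map x -> y, updated one
    sample at a time (a generic sketch of sequential coefficient
    computation, not the paper's specific algorithm)."""
    def __init__(self, dim, ridge=1e3):
        self.w = np.zeros(dim)           # regression coefficients
        self.P = np.eye(dim) * ridge     # inverse information matrix
                                         # (large value = weak prior)
    def update(self, x, y):
        x = np.asarray(x, float)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)       # Kalman-style gain vector
        self.w += gain * (y - x @ self.w)
        self.P -= np.outer(gain, Px)     # rank-one downdate of P

    def predict(self, x):
        return np.asarray(x, float) @ self.w
```

    Each update costs O(dim^2) instead of the O(dim^3) of refitting from scratch, which is what makes dense sample sets tractable when the image-vector dimension is high.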

    Efficient tracking of 3D objects from appearance

    In this article, we propose an efficient algorithm for tracking 3D objects in image sequences. A 3D object is represented by a collection of reference images. The originality of the method is that it does not use high-level primitives (points of interest) to follow the movement of the object in the image, but rather the difference between the gray-level vectors of the tracked reference pattern and of the current pattern sampled in a region of interest. The tracking problem then reduces to estimating the parameters that represent the possible movements of the object in the image, by means of interaction matrices learned during an off-line training stage, one for each reference view. The first matrix relates the intensity variations of the current 2D pattern to its fronto-parallel movement (movement parallel to the image plane). The appearance of the pattern representing the tracked object is not modified by such a movement, although its position, orientation, and size can change. The second matrix relates the appearance variations of the currently tracked pattern to a change of attitude between the object and the camera (a modification of the angular values in rolling and pitching). We show that the on-line use of these interaction matrices, to correct the predicted position of the object in the image and to estimate the appearance variations of the tracked pattern, has a very low computational cost (one matrix-vector product), allowing a real-time implementation. Moreover, we also show how the problem of occlusions can be managed, using an adaptive thresholding method.
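    The off-line/on-line split described above can be sketched concretely: apply known small displacements to the reference pattern, record the gray-level difference vectors, and solve for an interaction matrix A with delta_p ≈ A @ delta_i; tracking then costs one matrix-vector product per frame. This is a minimal synthetic illustration of that scheme, with hypothetical names, not the authors' implementation.

```python
import numpy as np

def learn_interaction_matrix(sample_pattern, perturbations):
    """Off-line stage (sketch): `sample_pattern(p)` returns the
    gray-level vector of the pattern displaced by parameters p.
    Stack difference vectors for known perturbations and solve the
    least-squares problem perturbations ≈ D @ A.T for A."""
    n_params = perturbations.shape[1]
    ref = sample_pattern(np.zeros(n_params))      # reference gray levels
    D = np.array([sample_pattern(p) - ref for p in perturbations])
    X, *_ = np.linalg.lstsq(D, perturbations, rcond=None)
    return ref, X.T                               # A: (n_params, n_pixels)

def track_correction(ref, A, current_pattern):
    """On-line step: estimate the displacement from one
    matrix-vector product on the gray-level difference."""
    return A @ (np.asarray(current_pattern, float) - ref)
```

    The on-line cost is exactly the "matrix multiplied by a vector" the abstract highlights; all the expensive least-squares work happens once per reference view, off-line.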

    Visual Homing in Dynamic Indoor Environments

    Institute of Perception, Action and Behaviour. This dissertation concerns robotic navigation in dynamic indoor environments using image-based visual homing. Image-based visual homing infers the direction to a goal location S from the navigator's current location C using the similarity between panoramic images I_S and I_C captured at those locations. There are several ways to compute this similarity. One of the contributions of this dissertation is to identify a robust image similarity measure, mutual image information, for use in dynamic indoor environments. We crafted novel methods to speed up the computation of mutual image information on both parallel and serial processors and demonstrated that these time-savers had little negative effect on homing success. Image-based visual homing requires a homing agent to move so as to optimise the mutual image information signal. As the mutual information signal is corrupted by sensor noise, we turned to the stochastic optimisation literature for appropriate optimisation algorithms. We tested a number of these algorithms in both simulated and real dynamic laboratory environments and found that gradient descent (with gradients computed by one-sided differences) works best.
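    Mutual image information, the similarity measure the dissertation identifies as robust, can be computed straightforwardly from the joint intensity histogram of two equally sized images. The sketch below shows that standard construction; it illustrates the measure itself, not the dissertation's accelerated parallel/serial implementations.

```python
import numpy as np

def mutual_image_information(img_a, img_b, bins=32):
    """Mutual information (in bits) between two equally sized
    gray-scale images, estimated from their joint intensity
    histogram: I(A;B) = sum p(a,b) log2( p(a,b) / (p(a) p(b)) )."""
    a = np.asarray(img_a).ravel()
    b = np.asarray(img_b).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()             # joint distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of A
    py = pxy.sum(axis=0, keepdims=True)   # marginal of B
    nz = pxy > 0                          # avoid log(0) terms
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

    Unlike pixel-wise difference measures, this score depends only on the statistical relationship between intensities, which is what lends it robustness when parts of a dynamic scene change between the goal and current images.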