259 research outputs found

    Generic decoupled image-based visual servoing for cameras obeying the unified projection model

    In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motions, which allows the translational motion to be controlled independently from the rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
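    As a minimal illustration (not the paper's controller), the first step of the unified projection model maps a 3D point onto the unit sphere; the rotation-invariant surface features are then built from such sphere points. The function name below is hypothetical:

```python
import math

def sphere_projection(P):
    """Project a 3D point onto the unit sphere by normalizing it
    (the first step of the unified projection model)."""
    norm = math.sqrt(sum(c * c for c in P))
    return tuple(c / norm for c in P)

p = sphere_projection((1.0, 2.0, 2.0))  # ||(1, 2, 2)|| = 3
```

    The resulting unit vector lies on the virtual sphere regardless of the point's depth, which is what makes sphere-based features usable with both perspective and fisheye cameras.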

    PWM and PFM for visual servoing in fully decoupled approaches

    In this paper, novel visual servoing techniques based on Pulse Width Modulation (PWM) and Pulse Frequency Modulation (PFM) are presented. In order to apply these pulse modulations, a fully decoupled position-based visual servoing approach (i.e., with a block-diagonal interaction matrix) is considered, controlling translational and rotational camera motions independently. These techniques, working at high frequency, could be considered to address the sensor latency problem inherent in visual servoing systems. The expected appearance of ripple due to the concentration of the control action in pulses is quantified and analyzed in a simulated scenario. This high-frequency ripple does not affect the system performance since it is filtered by the manipulator dynamics. On the contrary, it can be seen as a dither signal that minimizes the impact of friction and overcomes backlash.
    This work was supported in part by the Spanish Government under Grant BES-2010-038486 and Project DPI2013-42302-R.
    Muñoz Benavent, P.; Solanes Galbis, J. E.; Gracia Calandin, L. I.; Tornero Montserrat, J. (2015). PWM and PFM for visual servoing in fully decoupled approaches. Robotics and Autonomous Systems. 65(1):57-64. doi:10.1016/j.robot.2014.11.011
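    The PWM idea can be sketched as follows: a continuous control command is replaced by a pulse train whose duty cycle encodes the command magnitude, so the train matches the command on average. This is an illustrative sketch, not the authors' controller; `pwm` and its parameters are hypothetical:

```python
def pwm(u, u_max, steps=10):
    """Pulse-width modulation of a scalar command u: emit u_max (with the
    sign of u) for a fraction |u|/u_max of the period, zero for the rest."""
    duty = min(abs(u) / u_max, 1.0)
    n_on = round(duty * steps)            # number of "on" samples in the period
    level = u_max if u >= 0 else -u_max
    return [level if i < n_on else 0.0 for i in range(steps)]

pulses = pwm(0.3, 1.0)  # 30% duty cycle: three full-amplitude samples, then zeros
```

    The concentration of the control action in full-amplitude pulses is exactly what produces the high-frequency ripple the paper quantifies, and what lets the pulses act as a dither signal against friction.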

    Weighted moments of SIFT-features for decoupled visual servoing in 6DOF

    Mobile manipulation in service robotic applications requires the alignment of the end-effector with recognized objects of unknown pose. Image-based visual servoing provides a means of model-free manipulation of objects relying solely on 2D image information. This contribution proposes a visual servoing scheme that utilizes the pixel coordinates, scale, and orientation of the Scale Invariant Feature Transform (SIFT). The control is based on dynamic weighted moments aggregated from a mutable set of redundant SIFT feature correspondences between the current and the reference view. The key idea of our approach is to weight the point features in a way that eliminates, or at least minimizes, the undesired couplings in order to establish a one-to-one correspondence between feature and camera motion. The scheme achieves complete decoupling in 4DOF; in 6DOF the visual features are largely decoupled except for some minor residual coupling between the horizontal translation and the two rotational degrees of freedom. The decoupling results in a visual control in which the convergence in image and task space along a particular degree of motion is not affected by the remaining feature errors. Several simulations in virtual reality and experiments on a robotic arm equipped with a monocular eye-in-hand camera demonstrate that the approach is robust, efficient, and reliable for 4DOF as well as 6DOF positioning.
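    As an illustration of the underlying idea (not the paper's exact moment features), a weighted first-order moment of a point set is a weighted centroid, and the weights determine how strongly each feature correspondence influences the aggregate; `weighted_centroid` is a hypothetical helper:

```python
def weighted_centroid(points, weights):
    """First-order weighted moment of a 2D point set: the weighted centroid.
    Down-weighting a point reduces its influence on the aggregate feature."""
    W = sum(weights)
    x = sum(w * p[0] for p, w in zip(points, weights)) / W
    y = sum(w * p[1] for p, w in zip(points, weights)) / W
    return (x, y)

c = weighted_centroid([(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)], [1.0, 1.0, 2.0])
```

    Because the moment is an average over a mutable set of correspondences, individual SIFT matches can appear or disappear between frames without redefining the control feature.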

    Exploring Convolutional Networks for End-to-End Visual Servoing

    Present image-based visual servoing approaches rely on extracting hand-crafted visual features from an image. Choosing the right set of features is important, as it directly affects the performance of any approach. Motivated by recent breakthroughs in the performance of data-driven methods on recognition and localization tasks, we aim to learn visual feature representations suitable for servoing tasks in unstructured and unknown environments. In this paper, we present an end-to-end learning-based approach for visual servoing in diverse scenes where knowledge of camera parameters and scene geometry is not available a priori. This is achieved by training a convolutional neural network over color images with synchronised camera poses. Through experiments performed in simulation and on a quadrotor, we demonstrate the efficacy and robustness of our approach for a wide range of camera poses in both indoor and outdoor environments.
    Comment: IEEE ICRA 201

    Distance-based and Orientation-based Visual Servoing from Three Points

    This paper is concerned with the use of a spherical-projection model for visual servoing from three points. We propose a new set of six features to control a 6-degree-of-freedom (DOF) robotic system with good decoupling properties. The first part of the set consists of three invariants to camera rotations. These invariants are built using the Cartesian distances between the spherical projections of the three points. The second part of the set corresponds to the angle-axis representation of a rotation matrix measured from the image of two points. In a theoretical comparison with the classical perspective coordinates of points, the new set does not present more singularities. In addition, using the new set inside its nonsingular domain, a classical control law is proven to be optimal for pure rotational motions. The theoretical results and the robustness of the new control scheme to errors in the point ranges are validated through simulations and experiments on a 6-DOF robot arm.
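    The rotational invariance of the first three features can be checked numerically: rotating the camera rotates all spherical projections by the same rotation, which preserves the Cartesian distances between them. A minimal sketch with hypothetical helper names:

```python
import math
from itertools import combinations

def sphere_proj(P):
    """Project a 3D point onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in P))
    return tuple(c / n for c in P)

def dist(a, b):
    """Cartesian distance between two sphere points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def rot_z(P, theta):
    """Rotate a point about the camera z-axis (a pure camera rotation)."""
    c, s = math.cos(theta), math.sin(theta)
    x, y, z = P
    return (c * x - s * y, s * x + c * y, z)

points = [(1.0, 0.0, 4.0), (0.0, 2.0, 5.0), (1.0, 1.0, 3.0)]
d_before = [dist(sphere_proj(a), sphere_proj(b)) for a, b in combinations(points, 2)]
d_after = [dist(sphere_proj(rot_z(a, 0.7)), sphere_proj(rot_z(b, 0.7)))
           for a, b in combinations(points, 2)]
# d_before and d_after agree pairwise: the three distances are rotation-invariant
```

    Translations, in contrast, change these distances, which is what lets the set decouple translational from rotational control.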

    Time-optimal large view visual servoing with dynamic sets of SIFT

    This paper presents a novel approach to large-view visual servoing in the context of object manipulation. In many scenarios the features extracted in the reference pose are only perceivable across a limited region of the workspace. The limited visibility of features necessitates the introduction of additional intermediate reference views of the object and requires path planning in view space. In our scheme, visual control is based on decoupled moments of SIFT features, which are generic in the sense that the control operates on a dynamic set of feature correspondences rather than a static set of geometric features. The additional flexibility of dynamic feature sets enables flexible path planning in image space and online selection of optimal reference views during servoing to the goal view. The time to convergence to the goal view is estimated by a neural network based on the residual feature error and the quality of the SIFT feature distribution. The transition among reference views occurs on the basis of this estimated cost, which is evaluated online from the current set of visible features. The dynamic switching scheme achieves robust and nearly time-optimal convergence of the visual control across the entire task space. The effectiveness and robustness of the scheme are confirmed in an experimental evaluation in a virtual reality simulation and on a real robot arm with an eye-in-hand configuration.

    Robot Visual Servoing Using Discontinuous Control

    This work presents different proposals to deal with common problems in robot visual servoing based on the application of discontinuous control methods. The feasibility and effectiveness of the proposed approaches are substantiated by simulation results and real experiments using a 6R industrial manipulator. The main contributions are:
    - Geometric invariance using sliding mode control (Chapter 3): the higher-order invariance defined here is used by the proposed approaches to tackle problems in visual servoing. Proofs of the invariance condition are presented.
    - Fulfillment of constraints in visual servoing (Chapter 4): the proposal uses sliding mode methods to satisfy mechanical and visual constraints in visual servoing, while a secondary task is considered to properly track the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the allowed space for the constraints.
    - Robust automatic tool change for industrial robots using visual servoing (Chapter 4): visual servoing and the proposed method for constraint fulfillment are applied to an automated solution for tool changing in industrial robots. The robustness of the proposed method is due to the control law of the visual servoing, which uses the information acquired by the vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritized level to satisfy the aforementioned constraints. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints.
    - Sliding mode controller for reference tracking (Chapter 5): an approach based on sliding mode control is proposed for reference tracking in robot visual servoing using industrial robot manipulators. The novelty of the proposal is the introduction of a sliding mode controller that uses a high-order discontinuous control signal, i.e., joint accelerations or joint jerks, in order to obtain smoother behavior and ensure the stability of the robot system, which is demonstrated with a theoretical proof.
    - PWM and PFM for visual servoing in fully decoupled approaches (Chapter 6): discontinuous control based on pulse width and pulse frequency modulation is proposed for fully decoupled position-based visual servoing approaches, in order to obtain the same convergence time for camera translation and rotation.
    Moreover, other results obtained in visual servoing applications are also described.
    Muñoz Benavent, P. (2017). Robot Visual Servoing Using Discontinuous Control [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90430
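    The sliding-mode idea recurring throughout these chapters can be sketched in one dimension (an illustrative toy, not any of the thesis's actual controllers): define a sliding surface s from the tracking error and apply a discontinuous control proportional to sign(s):

```python
def smc_step(e, de, lam=1.0, k=2.0):
    """One step of a basic sliding-mode law.

    s = de + lam * e defines the sliding surface; the control switches
    discontinuously with the sign of s, driving the state onto s = 0,
    where the error then decays exponentially. lam and k are hypothetical
    tuning gains."""
    s = de + lam * e
    return -k * (1 if s > 0 else -1 if s < 0 else 0)
```

    The discontinuity is what gives sliding mode its robustness to matched disturbances, and also why the thesis moves the switching to higher derivatives (accelerations or jerks) to obtain smoother joint-level behavior.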

    Rotation Free Active Vision

    Incremental Structure from Motion (SfM) algorithms require, in general, precise knowledge of the camera's linear and angular velocities in the camera frame for estimating the 3D structure of the scene. Since an accurate measurement of the camera's own motion may be a non-trivial task in several robotics applications (for instance, when the camera is onboard a UAV), we propose in this paper an active SfM scheme fully independent of the camera angular velocity. This is achieved by considering, as visual features, some rotational invariants obtained from the projection of the perceived 3D points onto a virtual unit sphere (unified camera model). This feature set is then exploited to design a rotation-free active SfM algorithm able to optimize online the direction of the camera linear velocity to improve the convergence of the structure estimation task. As a case study, we apply our framework to the depth estimation of a set of 3D points and discuss several simulation and experimental results illustrating the approach.

    Position and motion estimation for visual robot control with planar targets

    This paper addresses two problems in visually controlled robots. The first consists of positioning the end-effector of a robot manipulator on a plane of interest using a monocular vision system. The problem amounts to estimating the transformation between the coordinates of an image point and its three-dimensional location, supposing that only the camera intrinsic parameters are known. The second problem consists of positioning the robot end-effector with respect to an object of interest free to move on a plane, and amounts to estimating the camera displacement in a stereo vision system in the presence of motion constraints. For these problems, some solutions are proposed through dedicated optimizations based on decoupling the effects of rotation and translation and on an a priori imposition of the degrees of freedom of the system. These solutions are illustrated via simulations and experiments.
    The 7th Asian Control Conference (ASCC 2009), Hong Kong, China, 27-29 August 2009. In Proceedings of the Asian Control Conference, 2009, p. 372-37
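    The first problem reduces to intersecting the viewing ray of an image point with the plane of interest. A minimal sketch under the assumption that the plane is already known in the camera frame (the function and parameter names are hypothetical, and the paper itself estimates rather than assumes this transformation):

```python
def ray_plane_point(u, v, K, n, d):
    """Back-project pixel (u, v) to the 3D point where its viewing ray
    meets the plane n . X = d, expressed in the camera frame.
    K = (fx, fy, cx, cy) are the camera intrinsics."""
    fx, fy, cx, cy = K
    # Normalized viewing ray through the pixel (pinhole model).
    ray = ((u - cx) / fx, (v - cy) / fy, 1.0)
    # Scale the ray so that it satisfies the plane equation.
    t = d / (n[0] * ray[0] + n[1] * ray[1] + n[2] * ray[2])
    return tuple(t * r for r in ray)

K = (100.0, 100.0, 50.0, 50.0)            # hypothetical intrinsics
P = ray_plane_point(50.0, 50.0, K, (0.0, 0.0, 1.0), 2.0)  # plane z = 2
```

    The principal point (50, 50) back-projects along the optical axis, so the intersection lies at depth 2 on that axis.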

    Photometric visual servoing for omnidirectional cameras

    2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach had been tackled with images from perspective cameras. We propose, in this paper, to extend the technique to central cameras. This generalization makes it possible to apply this kind of method to catadioptric cameras and wide-field-of-view cameras. Several experiments have been successfully carried out with a fisheye camera to control a 6-degree-of-freedom (DOF) robot, and with a catadioptric camera for a mobile robot navigation task.
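    The photometric error driving such schemes is computed directly from pixel intensities rather than from tracked geometric features; a minimal sum-of-squared-differences sketch over two intensity grids (hypothetical helper name):

```python
def photometric_error(I, I_star):
    """Sum of squared intensity differences between the current image I
    and the desired image I_star, both given as 2D lists of intensities.
    The whole image is the feature: no extraction, tracking, or matching."""
    return sum((a - b) ** 2
               for row_a, row_b in zip(I, I_star)
               for a, b in zip(row_a, row_b))

err = photometric_error([[1, 2], [3, 4]], [[1, 2], [3, 5]])  # one pixel differs by 1
```

    Servoing then consists of moving the camera to drive this error to zero; the contribution of the paper is making the error and its interaction model valid for central (omnidirectional) projections, not just perspective ones.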