14 research outputs found

    Photometric moments: New promising candidates for visual servoing

    In this paper, we propose a new type of visual feature for visual servoing: photometric moments. These global features do not require any segmentation, matching or tracking steps. The interaction matrix is derived analytically in closed form for these features. Experiments carried out with photometric moments validate our modelling and the control scheme: the features perform well for large camera displacements and are endowed with a large convergence domain. From the properties exhibited, photometric moments hold promise as better candidates for IBVS than existing geometric and pure-luminance features.
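    Purely as an illustration (not the authors' implementation), a minimal sketch of how such intensity-weighted moments could be computed and fed to the classical IBVS velocity law; the interaction matrix L is assumed to be supplied, and all names are illustrative:

```python
import numpy as np

def photometric_moment(img, p, q):
    """Intensity-weighted image moment m_pq = sum_{x,y} x^p * y^q * I(x, y).
    Every pixel contributes through its grey level, so no segmentation,
    matching or tracking step is needed."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return float(np.sum((xs ** p) * (ys ** q) * img))

def ibvs_velocity(s, s_star, L, gain=0.5):
    """Classical IBVS law v = -gain * pinv(L) @ (s - s*), where L is the
    interaction matrix relating feature variations to the 6-DOF camera
    velocity (derived analytically in the paper; assumed given here)."""
    return -gain * np.linalg.pinv(L) @ (s - s_star)
```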

    Visual Servoing using the Sum of Conditional Variance

    In this paper we propose a new way to achieve direct visual servoing. The novelty is the use of the sum of conditional variance as the objective of the positioning task. This measure, previously used successfully for visual tracking, has been shown to be invariant to non-linear illumination variations and inexpensive to compute. Compared to other direct visual servoing approaches, it is a good compromise between purely photometric techniques, which are computationally inexpensive but not robust to illumination variations, and mutual-information-based approaches, which are more expensive to compute but more robust to scene variations. The resulting direct visual servoing task is easy and fast to compute and robust to non-linear illumination variations. This paper describes a visual servoing task based on the sum of conditional variance, performed using a Levenberg-Marquardt optimization process. The results are demonstrated through experimental validations and compared to both photometric-based and entropy-based techniques.
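    As a hedged sketch of the ingredients named above (not the paper's code): the sum of conditional variance between a current and a reference image, and a generic damped Gauss-Newton (Levenberg-Marquardt) update; 8-bit greyscale images and an externally supplied Jacobian are assumed:

```python
import numpy as np

def sum_of_conditional_variance(cur, ref, n_bins=32):
    """SCV between the current image `cur` and the reference image `ref`
    (8-bit greyscale assumed). Reference intensities are quantized into
    bins; within each bin the squared deviation of the current intensities
    from their conditional mean is accumulated, which makes the measure
    robust to non-linear illumination changes of the current image."""
    cur = cur.astype(float).ravel()
    bins = np.minimum(ref.ravel().astype(int) * n_bins // 256, n_bins - 1)
    cost = 0.0
    for j in range(n_bins):
        sel = cur[bins == j]
        if sel.size:
            cost += np.sum((sel - sel.mean()) ** 2)
    return cost

def levenberg_marquardt_step(J, e, mu=1e-3):
    """One damped Gauss-Newton (Levenberg-Marquardt) update of the camera
    velocity/pose parameters, given the error vector e and its Jacobian J
    with respect to those parameters (both assumed available)."""
    H = J.T @ J
    return -np.linalg.solve(H + mu * np.eye(H.shape[0]), J.T @ e)
```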

    Efficient and secure real-time mobile robots cooperation using visual servoing

    This paper deals with the challenging problem of navigating a fleet of mobile robots in formation. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the robots. To construct our system, we develop the interaction matrix that relates the image moments to the robot velocities, and we estimate the depth between each robot and the targeted object. This is done without any communication between the robots, which eliminates the influence of each robot's errors on the whole formation. For successful visual servoing, we propose a mechanism to execute the robots' navigation safely, exploiting a robot accident reporting system based on a Raspberry Pi 3. In case of a problem, a robot accident detection and reporting testbed sends an accident notification in the form of a specific message. Experimental results using nonholonomic mobile robots with on-board real-time cameras show the effectiveness of the proposed method.

    Intelligent Sliding Surface Design Methods Applied to an IBVS System for Robot Manipulators

    The controller of an image-based visual servoing (IBVS) system is based on the design of a kinematic velocity controller that guarantees exponentially decreasing feature errors. In effect, this controller uses the sliding surface approach of classical Sliding Mode Control (SMC). In SMC, the system dynamics are taken into consideration, and the sliding surface is designed according to the physical limitations and the desired convergence time. Different design methods have been proposed in the literature using adaptive gains, time-varying terms, nonlinear functions, and intelligent methods such as fuzzy logic (FL) and genetic algorithms (GA). In this study, five different sliding surface designs based on analytical and intelligent methods are modified and applied to an IBVS system in order to extend these designs to visually guided robot manipulators. The design methods were selected for their convenience and applicability to such manipulator systems. To show their performance, an IBVS system with a six-DOF manipulator is simulated using MATLAB Simulink, the Robotics Toolbox, the Machine Vision Toolbox, and the Fuzzy Logic Toolbox. A comparison of the design methods in terms of convergence time, error cost function, defined parameters, and motion characteristics is given.
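    For orientation only, a generic textbook-style sketch of a linear sliding surface and a saturated reaching law applied to the image-feature error (none of the paper's five specific designs; all names are illustrative):

```python
import numpy as np

def sliding_surface(e, e_dot, lam=1.0):
    """Conventional linear sliding surface sigma = e_dot + lam * e on the
    image-feature error e; the adaptive, time-varying, nonlinear, fuzzy and
    GA-based designs compared in the paper essentially reshape this surface
    or tune lam online."""
    return e_dot + lam * e

def smc_camera_velocity(L_pinv, sigma, k=0.2, boundary=1e-2):
    """Reaching-law control v = -k * L^+ * sat(sigma / boundary); a
    saturation (boundary layer) replaces the discontinuous sign() term to
    limit chattering. L_pinv is the pseudo-inverse of the interaction
    matrix, assumed given."""
    return -k * L_pinv @ np.clip(sigma / boundary, -1.0, 1.0)
```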

    Motion planning in observations space with learned diffeomorphism models

    We consider the problem of planning motions in observations space, based on learned models of the dynamics that associate to each action a diffeomorphism of the observations domain. For an arbitrary set of diffeomorphisms, this problem must be formulated as a generic search problem, and we adapt established algorithms from the graph search family. In this scenario, node expansion is very costly, as each node in the graph is associated with an uncertain diffeomorphism and the corresponding predicted observations. We describe several refinements that improve performance: better image similarities to use as heuristics; a method to reduce the number of expanded nodes by preliminarily identifying redundant plans; and a method to pre-compute composite actions that make the search efficient in all directions.
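    As a rough illustration only (without any of the paper's specific improvements), a greedy best-first skeleton of the kind of search described, where learned per-action diffeomorphisms predict the next observation and an image similarity serves as heuristic; `apply_diffeo` and `similarity` are assumed, illustrative callables:

```python
import heapq

def plan_in_observation_space(y0, y_goal, actions, apply_diffeo, similarity,
                              goal_threshold=0.99, max_nodes=10_000):
    """Greedy best-first search over predicted observations.
    `apply_diffeo(y, a)` warps observation y with the learned diffeomorphism
    of action a (the costly node expansion); `similarity(y, y_goal)` in
    [0, 1] is the image similarity used as heuristic. Returns the action
    sequence of the first node considered close enough to the goal, or None."""
    frontier = [(-similarity(y0, y_goal), 0, y0, [])]
    pushed = 1
    while frontier and pushed < max_nodes:
        neg_sim, _, y, plan = heapq.heappop(frontier)
        if -neg_sim >= goal_threshold:
            return plan
        for a in actions:
            y_pred = apply_diffeo(y, a)          # expensive prediction step
            heapq.heappush(frontier,
                           (-similarity(y_pred, y_goal), pushed, y_pred, plan + [a]))
            pushed += 1
    return None
```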

    A bio-plausible design for visual attitude stabilization

    We consider the problem of attitude stabilization using exclusively visual sensory input, and we look for a solution that satisfies the constraints of a "bio-plausible" computation. We obtain a PD controller which is a bilinear form of the goal image and the current and delayed visual input. Moreover, this controller can be learned using classic neural network algorithms. The structure of the resulting computation, derived from general principles by imposing a bilinear computation, bears a striking resemblance to existing models of visual information processing in insects (Reichardt correlators and lobula plate tangential cells). We validate the algorithms using faithful simulations of the fruit fly visual input.
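    A loose, hypothetical sketch (single control axis, illustrative names, not the authors' learned controller) of a law that is bilinear in the goal image and the current/delayed visual input, as described:

```python
import numpy as np

def bilinear_pd_torque(y, y_prev, y_goal, P, D, dt):
    """Hypothetical single-axis sketch of the described structure:
    u = y_goal^T P y + y_goal^T D (y - y_prev) / dt,
    i.e. a bilinear form pairing the goal image with the current image
    (proportional term) and with a finite difference of the delayed visual
    input (derivative term). The weight matrices P and D would be the
    quantities learned by the neural network."""
    proportional = y_goal @ P @ y
    derivative = y_goal @ D @ ((y - y_prev) / dt)
    return proportional + derivative
```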

    Learning the Shape of Image Moments for Optimal 3D Structure Estimation

    The selection of a suitable set of visual features for optimal performance of closed-loop visual control or Structure from Motion (SfM) schemes is still an open problem in the visual servoing community. For instance, when considering integral region-based features such as image moments, only heuristic, partial, or local results are currently available to guide the selection of an appropriate moment set. The goal of this paper is to propose a novel learning strategy able to automatically optimize online the shape of a given class of image moments as a function of the observed scene, in order to improve the SfM performance in estimating the scene structure. As a case study, the problem of recovering the (unknown) 3D parameters of a planar scene from measured moments and known camera motion is considered. The reported simulation results fully confirm the soundness of the approach and its superior performance over more established solutions in increasing the information gain during the estimation task.
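    For context only, and not the paper's learning strategy: when the structure estimation can be written as a measurement that is linear in the unknown plane parameters, a standard recursive least-squares update of those parameters looks like the following sketch (all names and the linear-in-parameters assumption are illustrative):

```python
import numpy as np

def rls_plane_update(chi, P, phi, y, forgetting=0.98):
    """One recursive-least-squares step for the plane parameters chi,
    assuming a measurement model y = phi @ chi that is linear in the
    parameters, where the regressor phi is built from the measured moments
    and the known camera velocity (assumed given). A generic estimator
    sketch, not the paper's method."""
    gain = P @ phi / (forgetting + phi @ P @ phi)
    chi = chi + gain * (y - phi @ chi)
    P = (P - np.outer(gain, phi @ P)) / forgetting
    return chi, P
```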

    Methods for visual servoing of robotic systems: A state of the art survey

    This paper surveys the methods used for visual servoing of robotic systems, with the main focus on differential-drive mobile robots. The three main areas of research are Direct Visual Servoing, stereo vision systems, and artificial intelligence in visual servoing. The standard methods, Image-Based Visual Servoing (IBVS) and Position-Based Visual Servoing (PBVS), are analyzed and compared with the newer Direct Visual Servoing (DVS) method. DVS methods achieve better accuracy than IBVS and PBVS but have a smaller convergence domain; because of their high accuracy, they are suitable for integration into hybrid visual servoing systems. Furthermore, the use of stereo (two-camera) systems for visual servoing is comprehensively analyzed: compared to alternative methods, a stereo system provides more accurate depth estimation of the relevant image objects, which is critical for many visual servoing tasks. The use of artificial intelligence (AI) for visual servoing has also gained popularity over the years. AI techniques give visual servoing controllers the ability to learn from predefined examples or empirical knowledge, which considerably broadens their domain of application and is crucial for deploying robotic systems in real-world dynamic manufacturing environments. Finally, the integration of visual odometry with a visual servoing controller is discussed as a way to build a more robust and reliable positioning system.