6 research outputs found

    Visual Servoing using the Sum of Conditional Variance

    Get PDF
    In this paper we propose a new way to achieve direct visual servoing. The novelty is the use of the sum of conditional variance (SCV) as the similarity measure driving the optimization of a positioning task. This measure, which has previously been used successfully in visual tracking, is invariant to non-linear illumination variations and inexpensive to compute. Compared with other direct approaches to visual servoing, it offers a good compromise between purely photometric techniques, which are computationally cheap but not robust to illumination variations, and mutual-information approaches, which are more expensive to compute but more robust to scene variations. The result is a direct visual servoing task that is fast to compute and robust to non-linear illumination variations. This paper describes a visual servoing task based on the sum of conditional variance, carried out with a Levenberg-Marquardt optimization process. The results are demonstrated through experimental validation and compared to both photometric-based and entropy-based techniques.
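The SCV measure described above can be illustrated with a minimal numpy sketch (not the authors' implementation): intensities are quantized into bins, the expected current intensity is computed conditioned on each reference-image bin, and the SCV is the sum of squared deviations of the current image from that conditional expectation. The bin count and quantization scheme here are illustrative assumptions.

```python
import numpy as np

def sum_of_conditional_variance(I, I_star, bins=32):
    """Sum of conditional variance (SCV) between a current image I and a
    reference image I_star. Illustrative sketch: intensities are quantized
    into `bins` levels; for each reference bin j, the expected current
    intensity E[I | I* in bin j] is computed, and the SCV sums the squared
    deviations of I from that expectation."""
    # Quantize the reference image into discrete intensity bins.
    bj = np.clip((I_star.astype(np.float64) / 256.0 * bins).astype(int),
                 0, bins - 1)

    # Expected current intensity conditioned on each reference bin.
    expected = np.zeros(bins)
    for j in range(bins):
        mask = (bj == j)
        if mask.any():
            expected[j] = I[mask].mean()

    # "Predicted" image: each pixel replaced by its conditional expectation.
    I_hat = expected[bj]
    return float(np.sum((I.astype(np.float64) - I_hat) ** 2))
```

Because the prediction only depends on the mapping between reference bins and current intensities, any deterministic (even non-linear) intensity change between the two images yields an SCV near zero, which is the robustness property the paper exploits.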

    Efficient and secure real-time mobile robots cooperation using visual servoing

    Get PDF
    This paper deals with the challenging problem of navigating a fleet of mobile robots in formation. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the robots. To build the system, we derive the interaction matrix that relates image moments to the robots' velocities, and we estimate the depth between each robot and the target object. This is done without any communication between the robots, which prevents the errors of one robot from affecting the whole fleet. For successful visual servoing, we propose a mechanism for safe robot navigation that exploits a robot accident reporting system built on a Raspberry Pi 3. In case of a problem, an accident detection and reporting testbed sends an accident notification in the form of a specific message. Experimental results obtained with nonholonomic mobile robots carrying on-board real-time cameras show the effectiveness of the proposed method.
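The velocity control described above follows the classical visual-servoing law, in which the feature error is mapped to robot velocities through the pseudo-inverse of the interaction matrix. A minimal sketch, with an illustrative gain and a generic interaction matrix standing in for the paper's moment-based one:

```python
import numpy as np

def servo_velocity(L, error, lam=0.5):
    """Classical visual-servoing control law  v = -lambda * L^+ * e.

    L     : interaction matrix relating feature velocities to robot
            velocities (here generic; the paper builds it from image moments)
    error : current feature error e = s - s*
    lam   : positive gain (illustrative value)
    """
    # Moore-Penrose pseudo-inverse handles non-square interaction matrices.
    return -lam * np.linalg.pinv(L) @ error
```

The same law applies per robot in the fleet, which is what lets each robot servo on its own image measurements without inter-robot communication.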

    Methods for visual servoing of robotic systems: A state of the art survey

    Get PDF
    This paper surveys the methods used for visual servoing of robotic systems, with a primary focus on differential-drive mobile robots. The three main areas of research are Direct Visual Servoing, stereo vision systems, and artificial intelligence in visual servoing. The standard methods, Image-Based Visual Servoing (IBVS) and Position-Based Visual Servoing (PBVS), are analyzed and compared with the newer method named Direct Visual Servoing (DVS). DVS methods achieve better accuracy than IBVS and PBVS, but have a smaller convergence domain. 
Because of their high accuracy, DVS methods are suitable for integration into hybrid visual-servoing systems. Furthermore, the use of stereo (two-camera) systems for visual servoing is comprehensively analyzed: compared with alternative methods, a stereo system provides more accurate depth estimation of image features, which is critical for many visual servoing tasks. The use of artificial intelligence (AI) in visual servoing has also gained popularity over the years. AI techniques give visual servoing controllers the ability to learn from predefined examples or empirical knowledge, which considerably broadens their range of application and is crucial for deploying robotic systems in real-world dynamic manufacturing environments. Finally, we analyze the use of visual odometry in combination with a visual servoing controller to create a more robust and reliable positioning system.

    Visual Tracking in Robotic Minimally Invasive Surgery

    Get PDF
    Intra-operative imaging and robotics are among the technologies driving better and more effective minimally invasive surgical procedures. To advance surgical practice and capabilities further, one of the key requirements for computationally enhanced interventions is knowing how instruments and tissues move during the operation. While endoscopic video captures motion, the complex appearance and dynamic effects of surgical scenes are challenging for computer vision algorithms to handle robustly. Tackling both tissue and instrument motion estimation, this thesis proposes a combined non-rigid surface deformation estimation method to track tissue surfaces robustly, even under poor illumination. For instrument tracking, a keypoint-based 2D tracker relying on the Generalized Hough Transform is developed to initialize a 3D tracker, so that surgical instruments can be tracked robustly through long sequences containing complex motions. To handle appearance changes and occlusion, a patch-based adaptive weighting framework with segmentation and scale tracking is developed. It takes a tracking-by-detection approach, and a segmentation model is used to assign weights to template patches in order to suppress background information. The performance of the method is thoroughly evaluated, showing that without any offline training the tracker works well even in complex environments. Finally, the thesis proposes a novel 2D articulated instrument pose estimation framework comprising a detection-regression fully convolutional network and a multiple-instrument parsing component. The framework achieves compelling performance and exhibits interesting properties, including transfer between different instrument types and between ex vivo and in vivo data. In summary, the thesis advances the state of the art in visual tracking for surgical applications, for both tissue and instrument motion estimation. 
It contributes to developing the technological capability of full surgical scene understanding from endoscopic video.
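The Generalized-Hough-Transform step used to initialize the instrument tracker can be sketched as keypoint voting: each keypoint matched against a stored model casts a vote for the object's center using its learned offset, and the accumulator peak gives the detection. This is a generic illustration, not the thesis's code, and it assumes orientation and scale have already been normalized.

```python
import numpy as np

def ght_vote(matches, offsets, image_shape):
    """Generalized-Hough-style voting for an object center.

    matches : list of matched keypoint positions (x, y) in the image
    offsets : list of stored (dx, dy) offsets from keypoint to object center
    Returns the (row, col) of the accumulator peak, i.e. the most-voted center.
    """
    acc = np.zeros(image_shape, dtype=int)
    for (x, y), (dx, dy) in zip(matches, offsets):
        cx, cy = x + dx, y + dy               # vote for the implied center
        if 0 <= cy < image_shape[0] and 0 <= cx < image_shape[1]:
            acc[cy, cx] += 1
    return np.unravel_index(np.argmax(acc), acc.shape)
```

Voting makes the localization robust to partial occlusion, since a subset of consistent keypoints is enough to produce a clear peak.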