24 research outputs found

    Gluing-free assembly of an advanced 3D structure using visual servoing

    The paper deals with the robotic assembly of five parts by their U-grooves to achieve stable 3D MEMS without any soldering. The parts and their grooves measure 400 µm × 400 µm × 100 µm ± 1.5 µm and 100 µm × 100 µm × 100 µm ± 1.5 µm respectively, leading to an assembly clearance ranging from -3 µm to +3 µm. Two visual servoing approaches are used simultaneously: 2D visual servoing for gripping and releasing the parts, and 3D visual servoing for displacing them. Experimental results are presented and analyzed.

    [DEMO] Tracking Texture-less, Shiny Objects with Descriptor Fields

    Our demo presents the method we published at CVPR this year for tracking specular and poorly textured objects, and lets visitors experiment with it and with their own patterns. Our approach only requires a standard monocular camera (no depth sensor is needed) and can be easily integrated within existing systems to improve their robustness and accuracy. Code is publicly available.

    Visual servo control Part I: basic approaches

    This article is the first of a two-part series on the topic of visual servo control—using computer vision data in the servo loop to control the motion of a robot. In the present article, we describe the basic techniques that are by now well established in the field. We first give a general overview of the formulation of the visual servo control problem. We then describe the two archetypal visual servo control schemes: image-based and position-based visual servo control. Finally, we discuss performance and stability issues that pertain to these two schemes, motivating the second article in the series, in which we consider advanced techniques.
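    To make the image-based scheme concrete: the classical IBVS law is v = -lambda * L^+ (s - s*), where s stacks the measured image features, s* holds their desired values, and L is the interaction matrix. The sketch below is our own minimal illustration, not code from the article: a single image point with the camera restricted to x/y translation at a known depth Z, for which L reduces to -(1/Z)*I and the feature error decays exponentially.

```cpp
#include <cstdio>

// Minimal IBVS sketch (illustrative): one image point, camera restricted
// to x/y translation at a known constant depth Z. The interaction matrix
// reduces to L = -(1/Z) * I2, so the classical law
// v = -lambda * L^(-1) * (s - s*) becomes v = lambda * Z * (s - s*).
int main() {
    const double Z = 0.5;         // assumed point depth [m]
    const double lambda = 0.8;    // control gain [1/s]
    const double dt = 0.02;       // integration step [s]

    double s[2]  = {0.20, -0.15}; // current point, normalized image coords
    double sd[2] = {0.00,  0.00}; // desired point

    for (int k = 0; k < 300; ++k) {
        double e[2] = {s[0] - sd[0], s[1] - sd[1]};           // visual error
        double v[2] = {lambda * Z * e[0], lambda * Z * e[1]}; // camera velocity
        // Feature kinematics under pure x/y translation: sdot = -(1/Z) * v
        s[0] += dt * (-v[0] / Z);
        s[1] += dt * (-v[1] / Z);
        if (k % 100 == 0)
            std::printf("k=%3d  e=(%+.4f, %+.4f)\n", k, e[0], e[1]);
    }
    std::printf("final feature error: (%+.5f, %+.5f)\n", s[0], s[1]);
    return 0;
}
```

    With all six degrees of freedom and several features, L is instead a stacked 2k × 6 matrix and its pseudo-inverse replaces this closed-form inverse.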

    A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios

    Images from high dynamic range (HDR) scenes must be captured with minimum loss of information. For this purpose it is necessary to take full advantage of the quantization levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors meet this demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an adaptive Proportional-Integral-Derivative (PID) controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good-quality images of HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with the sensor response adjusted in fewer than eight frames when working in real-time video mode. At least 67% of the scene entropy can be retained with this method.
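    The abstract does not detail the controller itself, so the following is a hypothetical minimal sketch of the idea: a discrete PID loop drives the exposure time until the fraction of saturated pixels reaches a small target. The toy sensor model, the gains, and the 2% target are our assumptions; the paper's adaptive gain scheduling and entropy-based algorithm are omitted here.

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative sketch (not the paper's implementation): a PID loop that
// adjusts sensor exposure time so that the fraction of saturated pixels
// converges to a small target, one update per frame.
struct Pid {
    double kp, ki, kd;
    double integral = 0.0, prev_err = 0.0;
    double step(double err, double dt) {
        integral += err * dt;
        double deriv = (err - prev_err) / dt;
        prev_err = err;
        return kp * err + ki * integral + kd * deriv;
    }
};

// Toy sensor model (assumption): saturated fraction grows with exposure.
double measure_saturation(double exposure_ms) {
    return std::clamp((exposure_ms - 4.0) / 20.0, 0.0, 1.0);
}

int main() {
    Pid pid{ -500.0, -50.0, 0.0 };  // negative gains: more saturation -> shorter exposure
    double exposure = 16.0;          // initial exposure time [ms]
    const double target = 0.02;      // desired saturated-pixel fraction (2%)
    const double dt = 1.0 / 30.0;    // frame period [s]

    for (int frame = 0; frame < 10; ++frame) {
        double sat = measure_saturation(exposure);
        double err = sat - target;                   // positive -> overexposed
        exposure += pid.step(err, dt) * dt;
        exposure = std::clamp(exposure, 0.1, 40.0);  // sensor limits
        std::printf("frame %d: saturation=%.3f exposure=%.2f ms\n",
                    frame, sat, exposure);
    }
    return 0;
}
```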

    Dense RGB-D mapping of large scale environments for real-time localisation and autonomous navigation

    This paper presents a method and apparatus for building dense 3D visual maps of large-scale environments for real-time localisation and autonomous navigation. We propose a spherical egocentric representation of the environment that is able to reproduce photo-realistic omnidirectional views of captured environments. This representation is composed of a graph of locally accurate augmented spherical panoramas from which varying viewpoints can be generated through novel view synthesis. The spheres are related by a graph of 6-d.o.f. poses estimated through multi-view spherical registration. It is shown that this representation can be used to accurately localise a vehicle navigating within the spherical graph using only a monocular camera. To perform this task, an efficient direct image registration technique is employed. This approach directly exploits the advantages of the spherical representation by minimising a photometric error between a current image and a reference sphere. Autonomous navigation results are shown in challenging urban environments containing pedestrians and other vehicles.
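    As a hedged illustration of the direct registration step, the sketch below reduces the problem from the paper's 6-d.o.f. spherical setting to a single 1-D translation parameter t, estimated by Gauss-Newton minimisation of the photometric error e(x) = I(x + t) - I_ref(x). The synthetic signals and the true shift are our own test data.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Linear interpolation with border clamping.
double sample(const std::vector<double>& img, double x) {
    if (x < 0) x = 0;
    if (x > img.size() - 1.0) x = img.size() - 1.0;
    int i = (int)x;
    double a = x - i;
    int j = (i + 1 < (int)img.size()) ? i + 1 : i;
    return (1 - a) * img[i] + a * img[j];
}

int main() {
    const int N = 256;
    std::vector<double> ref(N), cur(N);
    const double true_shift = 3.7;       // ground truth for this toy problem
    for (int x = 0; x < N; ++x) {
        ref[x] = std::sin(0.08 * x) + 0.5 * std::sin(0.021 * x);
        cur[x] = std::sin(0.08 * (x - true_shift))
               + 0.5 * std::sin(0.021 * (x - true_shift));
    }
    double t = 0.0;                       // initial estimate of the shift
    for (int it = 0; it < 20; ++it) {
        double JtJ = 0.0, Jte = 0.0;
        for (int x = 1; x < N - 1; ++x) {
            double e = sample(cur, x + t) - ref[x];  // photometric error
            // Image gradient of the warped signal (central difference).
            double g = 0.5 * (sample(cur, x + t + 1) - sample(cur, x + t - 1));
            JtJ += g * g;
            Jte += g * e;
        }
        double delta = -Jte / JtJ;        // Gauss-Newton step
        t += delta;
        if (std::fabs(delta) < 1e-6) break;
    }
    std::printf("estimated shift: %.4f (true %.1f)\n", t, true_shift);
    return 0;
}
```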

    Sampling-Based MPC for Constrained Vision-Based Control

    Visual servoing control schemes, such as image-based (IBVS), pose-based (PBVS), or hybrid-based (HBVS), have been extensively developed over the last decades, making their use possible in a large number of applications. It is well known that the main problems to be handled concern local minima and singularities, visibility constraints, joint limits, etc. Recently, the Model Predictive Path Integral (MPPI) control algorithm has been developed for autonomous robot navigation tasks. In this paper, we propose an MPPI-VS framework for the control of a 6-DoF robot with 2D-point, 3D-point, and pose-based visual servoing techniques. We performed extensive simulations under various operating conditions to show the potential advantages of the proposed control framework compared to the classical schemes. The effectiveness and robustness of the framework, as well as its capability to cope easily with system constraints, are shown.
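    For readers unfamiliar with MPPI, the sketch below shows its core loop on a deliberately simple stand-in problem (a 1-D double integrator rather than the paper's 6-DoF visual servoing): sample perturbed control sequences, roll out the dynamics, weight each rollout by exp(-cost/lambda), and update the nominal controls with the weighted average of the perturbations. The horizon, noise level, and cost weights are arbitrary choices for illustration.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int K = 256, H = 30;                   // rollouts, horizon
    const double dt = 0.05, lambda = 1.0, sigma = 2.0;
    const double target = 1.0;                   // desired position

    std::mt19937 rng(42);
    std::normal_distribution<double> noise(0.0, sigma);

    std::vector<double> u(H, 0.0);               // nominal control sequence
    double pos = 0.0, vel = 0.0;                 // current state

    for (int step = 0; step < 80; ++step) {
        std::vector<std::vector<double>> eps(K, std::vector<double>(H));
        std::vector<double> cost(K, 0.0);
        for (int k = 0; k < K; ++k) {            // sample and roll out
            double p = pos, v = vel;
            for (int h = 0; h < H; ++h) {
                eps[k][h] = noise(rng);
                double a = u[h] + eps[k][h];     // perturbed control
                v += a * dt; p += v * dt;        // double-integrator dynamics
                cost[k] += 10.0 * (p - target) * (p - target)
                         + 0.1 * v * v + 0.01 * a * a;
            }
        }
        double minc = cost[0];
        for (int k = 1; k < K; ++k) minc = std::min(minc, cost[k]);
        // Importance-sampling weights and control update.
        std::vector<double> w(K);
        double wsum = 0.0;
        for (int k = 0; k < K; ++k) {
            w[k] = std::exp(-(cost[k] - minc) / lambda);
            wsum += w[k];
        }
        for (int h = 0; h < H; ++h) {
            double du = 0.0;
            for (int k = 0; k < K; ++k) du += w[k] * eps[k][h];
            u[h] += du / wsum;
        }
        // Apply the first control and shift the sequence (receding horizon).
        vel += u[0] * dt; pos += vel * dt;
        for (int h = 0; h + 1 < H; ++h) u[h] = u[h + 1];
        u[H - 1] = 0.0;
        if (step % 20 == 0)
            std::printf("step %2d: pos=%.3f vel=%.3f\n", step, pos, vel);
    }
    std::printf("final: pos=%.3f vel=%.3f (target %.1f)\n", pos, vel, target);
    return 0;
}
```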

    Robust 3D Tracking with Descriptor Fields

    We introduce a method that can register challenging images from specular and poorly textured 3D environments on which previous approaches fail. We assume that a small set of reference images of the environment and a partial 3D model are available. Like previous approaches, we register the input images by aligning them with one of the reference images using the 3D information. However, these approaches typically rely on the pixel intensities for the alignment, which is prone to fail in the presence of specularities or in the absence of texture. Our main contribution is a novel, efficient local descriptor that we use to describe each image location. We show that we can rely on this descriptor in place of the intensities to significantly improve alignment robustness at a minor increase in computational cost, and we analyze the reasons behind the success of our descriptor.
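    The abstract leaves the descriptor unspecified; in the published Descriptor Fields formulation, each image location is described by channels built from the positive and negative parts of smoothed image derivatives. The 1-D sketch below is our own simplification of that idea, showing why comparing such channels is far less sensitive to a smooth specular highlight than comparing raw intensities.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Gaussian blur with border clamping (truncated at 3 sigma).
std::vector<double> gaussian_blur(const std::vector<double>& f, double sigma) {
    int r = (int)std::ceil(3 * sigma), n = (int)f.size();
    std::vector<double> k(2 * r + 1), out(n, 0.0);
    double sum = 0.0;
    for (int i = -r; i <= r; ++i) {
        k[i + r] = std::exp(-0.5 * i * i / (sigma * sigma));
        sum += k[i + r];
    }
    for (int i = 0; i < n; ++i) {
        double acc = 0.0;
        for (int j = -r; j <= r; ++j)
            acc += k[j + r] * f[std::clamp(i + j, 0, n - 1)];
        out[i] = acc / sum;
    }
    return out;
}

// Two-channel descriptor field of a 1-D signal: blurred positive and
// negative parts of the gradient.
void descriptor_field(const std::vector<double>& img, double sigma,
                      std::vector<double>& pos, std::vector<double>& neg) {
    int n = (int)img.size();
    pos.assign(n, 0.0); neg.assign(n, 0.0);
    for (int x = 1; x + 1 < n; ++x) {
        double g = 0.5 * (img[x + 1] - img[x - 1]);  // central gradient
        pos[x] = std::max(g, 0.0);                   // positive part
        neg[x] = std::max(-g, 0.0);                  // negative part
    }
    pos = gaussian_blur(pos, sigma);
    neg = gaussian_blur(neg, sigma);
}

int main() {
    // A signal and a copy corrupted by a smooth "specular" highlight.
    const int n = 200;
    std::vector<double> a(n), b(n);
    for (int x = 0; x < n; ++x) {
        a[x] = std::sin(0.1 * x);
        b[x] = a[x] + 2.0 * std::exp(-0.5 * (x - 100.0) * (x - 100.0) / 64.0);
    }
    std::vector<double> pa, na, pb, nb;
    descriptor_field(a, 2.0, pa, na);
    descriptor_field(b, 2.0, pb, nb);
    double ssd_int = 0.0, ssd_desc = 0.0;
    for (int x = 0; x < n; ++x) {
        ssd_int  += (a[x] - b[x]) * (a[x] - b[x]);
        ssd_desc += (pa[x] - pb[x]) * (pa[x] - pb[x])
                  + (na[x] - nb[x]) * (na[x] - nb[x]);
    }
    std::printf("SSD on intensities : %.2f\n", ssd_int);
    std::printf("SSD on descriptors : %.2f\n", ssd_desc);
    return 0;
}
```

    The highlight dominates the intensity difference but barely perturbs the descriptor channels, which is the property the alignment exploits.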

    Comparing algorithms for diffeomorphic registration: Stationary LDDMM and Diffeomorphic Demons

    The stationary parameterization of diffeomorphisms is being increasingly used in computational anatomy. In certain applications it provides results similar to the non-stationary parameterization while alleviating the computational cost. With this characterization of diffeomorphisms, two different registration algorithms have recently been proposed: stationary LDDMM and diffeomorphic Demons. To our knowledge, their theoretical and practical differences have not yet been analyzed. In this article we provide a comparison between both algorithms in a common framework. To this end, we have studied the differences in the elements of both registration scenarios. We have analyzed the sensitivity of the smoothness of the final transformations to the regularization parameters and compared the performance of the registration results. Moreover, we have studied the potential of both algorithms for computing operations essential to further statistical analysis. We have found that both methods have comparable performance in terms of image matching, although the transformations are qualitatively different in some cases. Diffeomorphic Demons shows a slight advantage in terms of computational time. However, unlike stationary LDDMM, it does not provide the tangent-space vector field needed to compute statistics or exact inverse transformations.
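    As a rough illustration of the Demons side of the comparison, our 1-D sketch below uses the classical additive update with the normalized demons force and Gaussian regularization of the displacement field; the diffeomorphic variants compared in the paper instead compose updates through the exponential of a velocity field. The test images and parameters are our own.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Linear interpolation with border clamping.
double sample(const std::vector<double>& f, double x) {
    x = std::clamp(x, 0.0, (double)f.size() - 1);
    int i = (int)x; double a = x - i;
    int j = std::min(i + 1, (int)f.size() - 1);
    return (1 - a) * f[i] + a * f[j];
}

// Gaussian smoothing (truncated at 3 sigma, clamped borders).
std::vector<double> blur(const std::vector<double>& f, double sigma) {
    int r = (int)std::ceil(3 * sigma), n = (int)f.size();
    std::vector<double> k(2 * r + 1), out(n, 0.0);
    double sum = 0.0;
    for (int i = -r; i <= r; ++i) {
        k[i + r] = std::exp(-0.5 * i * i / (sigma * sigma));
        sum += k[i + r];
    }
    for (int i = 0; i < n; ++i) {
        double acc = 0.0;
        for (int j = -r; j <= r; ++j)
            acc += k[j + r] * f[std::clamp(i + j, 0, n - 1)];
        out[i] = acc / sum;
    }
    return out;
}

double ssd(const std::vector<double>& F, const std::vector<double>& M,
           const std::vector<double>& s) {
    double e = 0.0;
    for (size_t x = 0; x < F.size(); ++x) {
        double d = sample(M, x + s[x]) - F[x];
        e += d * d;
    }
    return e;
}

int main() {
    const int N = 200;
    std::vector<double> F(N), M(N), s(N, 0.0);  // fixed, moving, displacement
    for (int x = 0; x < N; ++x) {
        F[x] = std::exp(-0.5 * (x - 100.0) * (x - 100.0) / 225.0); // bump at 100
        M[x] = std::exp(-0.5 * (x - 92.0) * (x - 92.0) / 225.0);   // bump at 92
    }
    std::printf("SSD before: %.4f\n", ssd(F, M, s));
    for (int it = 0; it < 200; ++it) {
        std::vector<double> u(N, 0.0);
        for (int x = 1; x + 1 < N; ++x) {
            double diff = sample(M, x + s[x]) - F[x];  // intensity mismatch
            // Gradient of the warped moving image (one standard choice
            // in the Demons framework).
            double g = 0.5 * (sample(M, x + 1 + s[x]) - sample(M, x - 1 + s[x]));
            double den = g * g + diff * diff;          // demons normalization
            if (den > 1e-12) u[x] = -diff * g / den;   // bounded force, |u| <= 0.5
        }
        for (int x = 0; x < N; ++x) s[x] += u[x];      // additive update
        s = blur(s, 1.0);                               // Gaussian regularization
    }
    std::printf("SSD after : %.4f\n", ssd(F, M, s));
    std::printf("displacement at x=100: %.2f (true shift: -8)\n", s[100]);
    return 0;
}
```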

    ViSP for visual servoing: a generic software platform with a wide class of robot control skills

    Special issue on Software Packages for Vision-Based Control of Motion, P. Oh and D. Burschka (Eds.). ViSP (Visual Servoing Platform), a fully functional modular architecture that allows fast development of visual servoing applications, is described. The platform takes the form of a library that can be divided into three main modules: control processes, canonical vision-based tasks that contain the most classical linkages, and real-time tracking. The ViSP software environment features hardware independence, simplicity, extensibility, and portability. ViSP also features a large library of elementary tasks with various visual features that can be combined, an image processing library that allows the tracking of visual cues at video rate, a simulator, an interface with various classical framegrabbers, a virtual 6-DOF robot that allows the simulation of visual servoing experiments, and more. The platform is implemented in C++ under Linux.
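    A typical task setup, following ViSP's classical IBVS tutorials (our sketch assumes ViSP 3.x include paths; the feature values are arbitrary examples, not from the paper), looks like this:

```cpp
#include <iostream>
#include <visp3/core/vpColVector.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main() {
    vpServo task;
    task.setServo(vpServo::EYEINHAND_CAMERA);        // eye-in-hand configuration
    task.setInteractionMatrixType(vpServo::CURRENT); // L computed at current state
    task.setLambda(0.5);                             // control gain

    // Four point features: current (x, y, Z) and desired values in
    // normalized image coordinates (arbitrary example values).
    vpFeaturePoint p[4], pd[4];
    double cur[4][3] = {{-0.15, -0.10, 1.2}, {0.15, -0.10, 1.2},
                        { 0.15,  0.10, 1.2}, {-0.15, 0.10, 1.2}};
    double des[4][3] = {{-0.10, -0.10, 1.0}, {0.10, -0.10, 1.0},
                        { 0.10,  0.10, 1.0}, {-0.10, 0.10, 1.0}};
    for (int i = 0; i < 4; ++i) {
        p[i].buildFrom(cur[i][0], cur[i][1], cur[i][2]);
        pd[i].buildFrom(des[i][0], des[i][1], des[i][2]);
        task.addFeature(p[i], pd[i]);
    }

    // One step of the control loop.
    vpColVector v = task.computeControlLaw();        // 6-dim camera velocity
    std::cout << "camera velocity: " << v.t() << std::endl;
    return 0;
}
```

    In a real application, the current features p[i] would be refreshed from the tracking module at every iteration before recomputing the control law.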