13 research outputs found

    Visual Servoing Based on Image Motion

    The general aim of visual servoing is to control the motion of a robot so that visual features acquired by a camera become superimposed with a desired visual pattern. Visual servoing based on geometric features such as image point coordinates is now well established. Nevertheless, this approach has the drawback that it usually needs visual marks on the observed object to retrieve the geometric features. The idea developed in this paper is to use motion in the image as the input of the control scheme, since it can be estimated without any a priori knowledge of the observed scene; thus, more realistic scenes or objects can be considered. Two different methods are presented. In the first method, geometric features are retrieved by integration of motion, which allows the use of classical control laws. This method is applied to a six-degree-of-freedom positioning task. The authors show that, in such a case, an affine model of 2-D motion is insufficient to ensure convergence and that a quadratic model is needed. In the second method, the principle is to try to obtain a desired 2-D motion field in the image sequence. In usual image-based visual servoing, variations of visual features are linearly linked to the camera velocity; here the corresponding relation is more complex, and the authors describe how it can be used. This approach is illustrated with two tasks: positioning a camera parallel to a plane and trajectory following.
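    As a concrete illustration of the affine-versus-quadratic distinction above, here is a minimal sketch (not the authors' code) of fitting both 2-D parametric motion models to sparse flow measurements by linear least squares; the 8-parameter quadratic form with shared second-order terms is one common choice and is assumed here.

```python
# Fit an affine (6-parameter) or quadratic (8-parameter) 2-D motion model
# to sparse flow samples by linear least squares. Parameterization is an
# assumption, not taken from the paper.
import numpy as np

def fit_motion_model(pts, flows, quadratic=True):
    """pts: (N,2) image coordinates; flows: (N,2) measured 2-D motion.
    Returns the model parameters minimizing the least-squares residual."""
    x, y = pts[:, 0], pts[:, 1]
    one, zero = np.ones_like(x), np.zeros_like(x)
    # Affine part: u = a0 + a1*x + a2*y,  v = a3 + a4*x + a5*y
    rows_u = [one, x, y, zero, zero, zero]
    rows_v = [zero, zero, zero, one, x, y]
    if quadratic:
        # Quadratic terms shared by u and v (rigid planar scene):
        # u += q1*x^2 + q2*x*y,  v += q1*x*y + q2*y^2
        rows_u += [x * x, x * y]
        rows_v += [x * y, y * y]
    A = np.vstack([np.column_stack(rows_u), np.column_stack(rows_v)])
    b = np.concatenate([flows[:, 0], flows[:, 1]])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```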

    A direct visual servoing scheme for automatic nanopositioning.

    This paper demonstrates an accurate nanopositioning scheme based on a direct visual servoing process. The technique uses only the pure image signal (photometric information) to design the visual servoing control law: in contrast with traditional visual servoing approaches that use geometric visual features (points, lines, ...), the visual feature used in the control law is pixel intensity. The proposed approach has been tested in terms of accuracy and robustness under several experimental conditions. The results demonstrate good behavior of the control law and very good positioning accuracy: 89 nm, 14 nm, and 0.001 degrees along the x, y, and rotation axes of a positioning platform, respectively.
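    The pixel-intensity feature described above leads to a control law of the form v = -lambda * L_I^+ (I - I*), where each pixel contributes one row to the interaction matrix. The sketch below illustrates this photometric scheme under simplifying assumptions (known constant depth Z, normalized image coordinates); names and the gain value are illustrative, not the paper's implementation.

```python
# One iteration of a photometric visual-servoing control law: the error is
# the raw intensity difference I - I*, and each pixel's interaction-matrix
# row is -grad(I)^T * L_xy, with L_xy the classic point interaction matrix.
import numpy as np

def photometric_control(I, I_star, xs, ys, Ix, Iy, Z=0.5, lam=0.1):
    """Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz)."""
    e = (I - I_star).ravel()                      # per-pixel intensity error
    rows = []
    for x, y, gx, gy in zip(xs.ravel(), ys.ravel(),
                            Ix.ravel(), Iy.ravel()):
        # Interaction matrix of an image point (normalized coordinates)
        Lx = np.array([[-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
                       [0, -1/Z, y/Z, 1 + y*y, -x*y, -x]])
        rows.append(-np.array([gx, gy]) @ Lx)     # one row per pixel
    L = np.vstack(rows)
    return -lam * np.linalg.pinv(L) @ e           # v = -lambda * L^+ e
```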

    Using robust estimation for visual servoing based on dynamic vision

    The aim of this article is to achieve accurate visual servoing tasks when the shape of the observed object as well as the final image are unknown. More precisely, we want to control the orientation of the tangent plane at a certain point on the object corresponding to the center of a region of interest, and to move this point to the principal point to fulfill a fixation task. To do that, we perform a 3D reconstruction phase during the servoing, based on the measurement of the 2D displacement in the region of interest and on the measurement of the camera velocity. Since the 2D displacement depends on the scene, we introduce a unified motion model to deal with planar as well as with non-planar objects. Unfortunately, this model is only an approximation; thus, we propose to use robust estimation techniques and a 3D reconstruction based on a discrete approach. Experimental results compare both approaches.
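    A minimal sketch of the kind of robust estimation such a scheme can rely on: iteratively reweighted least squares (IRLS) with a Tukey biweight M-estimator, which down-weights measurements where the approximate motion model does not hold. The design matrix A and observations b stand for any linear motion model; all names are illustrative.

```python
# Robust linear regression via IRLS with a Tukey biweight M-estimator.
import numpy as np

def tukey_weights(r, c=4.6851):
    """Per-residual weights; outliers beyond the cutoff get zero weight."""
    s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # MAD scale
    u = r / (c * s)
    w = (1 - u**2)**2
    w[np.abs(u) > 1] = 0.0
    return w

def irls(A, b, iters=10):
    """Solve A x ~ b robustly; A: (N,p) model matrix, b: (N,) measurements."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares init
    for _ in range(iters):
        w = tukey_weights(b - A @ x)
        Aw = A * w[:, None]                       # weighted rows: W A
        # Weighted normal equations (A^T W A) x = A^T W b
        x = np.linalg.solve(A.T @ Aw + 1e-9 * np.eye(A.shape[1]), Aw.T @ b)
    return x
```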

    ViSP for visual servoing: a generic software platform with a wide class of robot control skills

    Special issue on Software Packages for Vision-Based Control of Motion, P. Oh, D. Burschka (Eds.). ViSP (Visual Servoing Platform), a fully functional modular architecture that allows fast development of visual servoing applications, is described. The platform takes the form of a library that can be divided into three main modules: control processes, canonical vision-based tasks that contain the most classical linkages, and real-time tracking. The ViSP software environment features hardware independence, simplicity, extensibility, and portability. ViSP also provides a large library of elementary tasks with various visual features that can be combined, an image processing library that allows the tracking of visual cues at video rate, a simulator, interfaces with various classical framegrabbers, a virtual 6-DOF robot that allows the simulation of visual servoing experiments, etc. The platform is implemented in C++ under Linux.
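    ViSP's actual API is C++; purely as a schematic, the Python sketch below mirrors the structure of a ViSP-style "task" (current and desired features, a stacked interaction matrix, and the control law v = -lambda * L^+ (s - s*)). Everything here is illustrative and is not ViSP's interface.

```python
# Schematic of a visual-servoing "task" object: features are registered as
# (current, desired, interaction-matrix) triples, then a single control law
# is computed over the stacked system.
import numpy as np

class ServoTask:
    def __init__(self, lam=0.5):
        self.lam = lam
        self.features = []            # list of (s, s_star, L) triples

    def add_feature(self, s, s_star, L):
        self.features.append((np.asarray(s), np.asarray(s_star), np.asarray(L)))

    def compute_control_law(self):
        e = np.concatenate([s - sd for s, sd, _ in self.features])
        L = np.vstack([L for _, _, L in self.features])
        return -self.lam * np.linalg.pinv(L) @ e   # 6-DOF camera velocity
```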

    Reactive Motions In A Fully Autonomous CRS Catalyst 5 Robotic Arm Based On RGBD Data

    This study proposes a method to estimate velocity from motion blur in a single image frame along the x and y axes of the camera coordinate system and to intercept a moving object with a robotic arm. It is shown that velocity estimation from a single image frame improves the system's performance: the majority of previous studies in this area require at least two image frames to measure the target's velocity, and they mostly employ specialized equipment able to generate high torques and accelerations. The setup consists of a 5-degree-of-freedom robotic arm and a Kinect camera. The RGBD (red, green, blue, and depth) camera provides the RGB and depth information used to detect the position of the target. As the object moves within a single image frame, the image contains motion blur. To recognize and differentiate the object from the blurred area, the image intensity profiles are studied, and the method determines the blur parameters, namely the length of the object and the length of the partial blur, from the changes in the intensity profile. Based on the motion blur, the velocities along the x and y camera coordinate axes are estimated. However, as the depth frame cannot record motion blur, the velocity along the z axis in the camera coordinate frame is initially unknown. The position and velocity vectors are transformed into the world coordinate frame, and the prospective position of the object after a predefined time interval is predicted. To intercept, the end-effector of the robotic arm must reach this predicted position within the same time interval; the robot's joint angles and accelerations are determined through inverse kinematics, and the robotic arm then starts its motion. Once the second depth frame is obtained, the object's velocity along the z axis can be calculated as well; the predicted position of the object is then recalculated and the motion of the manipulator is modified. The proposed method is compared with existing methods that need at least two image frames to estimate the velocity of the target; it is shown that under identical kinematic conditions, the functionality of the system is improved by times for our setup. In addition, the experiment is repeated for times and the velocity data is recorded. According to the experimental results, there are two major limitations in our system and setup. The system cannot determine the velocity along z in the camera coordinate system from the initial image frame; consequently, if the object travels faster along this axis, the task becomes more susceptible to failure. In addition, our manipulator is unspecialized equipment not designed for producing high torques and accelerations, which makes the task more challenging. The main cause of error in the experiments was the operator: the object must pass through the working volume of the robot and still be inside it after the predefined time interval, and the operator may throw the object into the designated working volume but have it leave earlier than the specified time interval.
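    A minimal sketch of the velocity-from-blur idea under stated assumptions: the blur length in pixels recovered from the intensity profile, divided by the exposure time, gives image-plane speed, which a pinhole model maps to metric speed at the measured depth; the z velocity starts at zero, as in the abstract. Names, the exposure time, and the focal length are illustrative.

```python
# Velocity estimation from motion blur in one frame, plus a constant-velocity
# prediction of the interception point.
import numpy as np

def velocity_from_blur(blur_px, depth_m, f_px, exposure_s):
    """Blur lengths (pixels along x, y) -> metric velocity in the camera frame."""
    blur_m = np.asarray(blur_px, dtype=float) * depth_m / f_px  # pinhole back-projection
    return blur_m / exposure_s                                  # m/s along x and y

def predict_position(p0, v, dt):
    """Constant-velocity prediction of the object position after dt seconds."""
    return np.asarray(p0, dtype=float) + np.asarray(v, dtype=float) * dt

# Hypothetical usage: z velocity is unknown from the first frame, so it is
# set to zero until the second depth frame arrives.
v_xy = velocity_from_blur([18, 6], depth_m=1.2, f_px=525.0, exposure_s=0.033)
p_hit = predict_position([0.3, 0.1, 1.2], [v_xy[0], v_xy[1], 0.0], dt=0.8)
```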

    Nonlinear Visual Mapping Model for 3-D Visual Tracking With Uncalibrated Eye-in-Hand Robotic System


    Visual Servoing

    This book chapter deals with visual servoing, i.e. vision-based control.

    Visual servo control on a humanoid robot

    This thesis deals with the control of a humanoid robot based on visual servoing. It seeks to confer a degree of autonomy to the robot in the achievement of tasks such as reaching a desired position, and tracking and/or grasping an object. The autonomy of humanoid robots is considered crucial for the success of the numerous services that robots of this kind can render, given their ability to combine dexterity and mobility in structured, unstructured, or even hazardous environments. To achieve this objective, a humanoid robot is fully modeled and the control of its locomotion, conditioned by postural balance and gait stability, is studied. The presented approach is formulated to account for all the joints of the biped robot. As a way to conform the reference commands from visual servoing to the discrete locomotion mode of the robot, this study exploits a reactive omnidirectional walking pattern generator and a visual task Jacobian redefined with respect to a floating base on the humanoid robot, instead of the stance foot. The redundancy problem stemming from the high number of degrees of freedom, coupled with the omnidirectional mobility of the robot, is handled within the task-priority framework, thus allowing configuration-dependent sub-objectives such as improving reachability and manipulability and avoiding joint limits to be achieved. Beyond a kinematic formulation of visual servoing, this thesis explores a dynamic visual approach and proposes two new visual servoing laws. Lyapunov theory is used first to prove the stability and convergence of the visual closed loop, and then to derive a robust adaptive controller for the combined robot-vision dynamics, thus yielding an ultimately uniformly bounded solution. Finally, all proposed schemes are validated in simulation and experimentally on the humanoid robot NAO.
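    A minimal sketch of the task-priority redundancy resolution named above: the secondary task (e.g. joint-limit avoidance) is projected into the null space of the primary visual task so it cannot disturb it. The Jacobians and task velocities are illustrative placeholders, not the thesis's controllers.

```python
# Classic two-level task-priority resolution:
#   qdot = J1^+ xdot1 + (J2 N1)^+ (xdot2 - J2 J1^+ xdot1), projected by N1.
import numpy as np

def task_priority_qdot(J1, xdot1, J2, xdot2):
    """J1, xdot1: primary (visual) task; J2, xdot2: secondary task."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1      # null-space projector of task 1
    qdot1 = J1_pinv @ xdot1                      # exact primary-task solution
    qdot2 = np.linalg.pinv(J2 @ N1) @ (xdot2 - J2 @ qdot1)
    return qdot1 + N1 @ qdot2                    # secondary task in the null space
```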

    Visual Control of Robot Manipulators (Controlo visual de robôs manipuladores)

    Doctoral thesis in Mechanical Engineering presented to the Instituto Superior Técnico, Universidade Técnica de Lisboa. This thesis addresses the visual control of robot manipulators, i.e. visual servoing. The state of the art on the subject is presented, together with the computer vision tools needed for its implementation. Six contributions to visual servoing are presented, namely the development of an experimental apparatus, two dynamic visual servoing controllers, the application of fuzzy filters to kinematic visual servoing, the fuzzy modeling of the robot-camera system, and fuzzy control of the system based on the inverse model. The experimental apparatus has three components: a planar robotic manipulator with two degrees of freedom, a 50 Hz vision system, and the software developed to control and interconnect the two previous components. The apparatus allowed the real-time experimental validation of the controllers proposed in this thesis.
Dynamic visual servoing drives the robot joint actuators directly, in contrast to kinematic visual servoing, which generates the joint velocities needed to drive the robot by means of an inner velocity control loop. The first contribution to dynamic visual servoing is an image-based control law developed specifically for the robot of the experimental apparatus in the eye-in-hand configuration. The second contribution is a position-based control law for the eye-in-hand configuration, applicable to robots with more than two degrees of freedom. Asymptotic stability is demonstrated for both controllers. The application of fuzzy logic to image-based kinematic visual servoing yielded three contributions. With the application of fuzzy filters to path planning and to regulation control, the overall performance of visual servoing is improved: the robot joint velocities decrease at the initial control steps, and their oscillatory behavior is attenuated when the vision sample time is high. The inverse model of the robot-camera system is obtained by means of fuzzy modeling, and a practical methodology for obtaining the model is presented. The fuzzy inverse model is used directly as the controller of the robot-camera system, delivering the joint velocities needed to drive the robot to the desired position. A fuzzy compensator is also used to compensate for possible mismatches between the obtained model and the robot-camera system. Funded by the Fundação para a Ciência e a Tecnologia.
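    A minimal sketch contrasting the two control levels discussed above, with illustrative placeholder models: kinematic visual servoing outputs joint velocities for an inner velocity loop, while a dynamic scheme outputs joint torques through the robot dynamics (here a computed-torque-style law, not necessarily the thesis's controllers).

```python
# Kinematic vs. dynamic visual servoing, schematically. e: feature error,
# L: interaction matrix, J: robot Jacobian, M: inertia matrix, h: Coriolis,
# centrifugal and gravity terms. Gains are illustrative.
import numpy as np

def kinematic_vs(e, L, J, lam=0.5):
    """Joint-velocity command for an inner loop: qdot = -lambda * (L J)^+ e."""
    return -lam * np.linalg.pinv(L @ J) @ e

def dynamic_vs(e, edot, L, J, M, h, Kp=20.0, Kd=5.0):
    """Torque command: image-space PD mapped through the robot dynamics,
    tau = M qddot_ref + h, with qddot_ref = (L J)^+ (-Kp e - Kd edot)."""
    qddot_ref = np.linalg.pinv(L @ J) @ (-Kp * e - Kd * edot)
    return M @ qddot_ref + h
```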

    Range Flow: New Algorithm Design and Quantitative and Qualitative Analysis

    Optical flow computation is one of the oldest and most active research fields in computer vision and image processing. It encompasses the following areas: motion estimation, video compression, object detection and tracking, image dominant plane extraction, movement detection, robot navigation, visual odometry, traffic analysis, and vehicle tracking. Optical flow methods calculate the motion between two image frames. In 2D images, optical flow specifies how far each pixel moves between adjacent frames; in 3D images, it specifies how much each voxel moves between adjacent volumes in the dataset. Since 1980, several algorithms have successfully estimated 2D and 3D optical flow. Notably, scene flow and range flow are special cases of 3D optical flow. Scene flow is the 3D optical flow of pixels on a moving surface; it uses disparity and disparity-gradient maps computed from a stereo sequence, together with the 2D optical flow of the left and right images in the stereo sequence, to compute 3D motion. Range flow is similar to scene flow, but is calculated from depth map sequences or range datasets. There is clear overlap between the algorithms that compute scene flow and range flow; therefore, we propose new insights that can help range flow algorithms advance to the next stage. We enhance range flow algorithms to allow large displacements using a hierarchical framework with a warping technique, and we apply robust statistical formulations to generate robust and dense flow, overcoming motion discontinuities and reducing outliers. Overall, this thesis focuses on the estimation of 2D optical flow and 3D range flow using several algorithms. In addition, we studied depth data obtained from different sensors and cameras. These cameras provided RGB-D data that allowed us to compute 3D range flow in two ways: using depth data only, or by combining intensity with depth data to improve the flow. We implemented the well-known local Lucas-Kanade (LK) [1] and global Horn-Schunck (HS) [2] algorithms and recast them in the proposed framework to estimate 2D and 3D range flow [3]. Furthermore, the combined local-global (CLG) method proposed by Bruhn et al. [4,5], as well as the method of Brox et al. [6], is implemented to estimate 2D optical flow and 3D range flow. We tested and evaluated these implemented approaches both qualitatively and quantitatively under two different motions (translation and divergence) using several real datasets acquired with Kinect V2, ZED, and iPhone X (front and rear) cameras. We found that the CLG and Brox methods gave the best results on our Kinect V2, ZED, and iPhone X front-camera sequences.
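    Since the Horn-Schunck algorithm [2] is one of the methods implemented, a minimal NumPy/SciPy sketch of its classic Jacobi iteration is given below; derivative estimation is simplified and all parameter values are illustrative.

```python
# Horn-Schunck optical flow: Jacobi iterations of the Euler-Lagrange
# equations, with neighborhood flow averages and regularization weight alpha.
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=10.0, iters=100):
    """I1, I2: consecutive grayscale frames; returns flow fields (u, v)."""
    I1, I2 = I1.astype(float), I2.astype(float)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    Ix = convolve(I1, kx) + convolve(I2, kx)      # spatial derivatives
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2 - I1, np.ones((2, 2)) * 0.25)  # temporal derivative
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbor average
    u, v = np.zeros_like(I1), np.zeros_like(I1)
    for _ in range(iters):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```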