
    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive changes in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor controllers and multi-sensor controllers that combine several sensors.

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    The use of robots in industrial automation has increased considerably, and there is growing demand for dexterous, intelligent robots that can work in unstructured environments. Visual servoing meets this need by integrating vision sensors into robotic systems. Despite significant progress, several challenges still prevent visual servoing from being fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties degrade its control performance. The projection of the 3D scene onto the camera's 2D image plane is one source of uncertainty; another lies in the parameters of the camera and the robot manipulator. The camera's limited field of view (FOV) also affects control performance. There are two main types of visual servoing: position-based and image-based. This project develops a series of new image-based visual servoing (IBVS) methods that address these nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller in which an adaptive law handles the uncertainties of a monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains; this increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image-feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where image features leave the camera's FOV.
    Combining the switch controller with the feature reconstruction algorithm not only improves the response speed and tracking performance of IBVS but also ensures that servoing succeeds when features are lost. Next, to deal with external disturbances and uncertainties in feature depth, a third control method combines proportional-derivative (PD) control with sliding-mode control (SMC) on a 6-DOF manipulator: the properly tuned PD controller ensures fast tracking, while the SMC handles the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth, semi-offline trajectory-planning method is developed to perform IBVS tasks on a 6-DOF robotic manipulator. The camera's velocity screw is parameterized using time-based profiles, and the profile parameters are determined so that the resulting trajectory takes the robot to its desired pose, by minimizing the error between the initial and desired features. The orientation planning is decoupled from the position planning, which yields a convex optimization problem and therefore a faster, more efficient algorithm. A merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated in tests on a 6-DOF Denso robot in an eye-in-hand configuration.
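    As background for abstracts like the one above, the classical IBVS law drives the camera with a velocity proportional to the pseudo-inverse of the stacked interaction matrix applied to the feature error. The following is a generic numerical sketch of that baseline (not any of the thesis's proposed controllers), assuming normalized point features with known, constant depth:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of one normalized image point
    at (x, y) with depth Z, for a camera twist (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, Z, lam=0.5):
    """Classical IBVS law: v = -lam * pinv(L) @ (s - s_star)."""
    L = np.vstack([interaction_matrix(s[2 * i], s[2 * i + 1], Z[i])
                   for i in range(len(Z))])
    return -lam * np.linalg.pinv(L) @ (s - s_star), L

# four point features forming a square in the desired (normalized) image
s_star = np.array([-0.1, -0.1, 0.1, -0.1, 0.1, 0.1, -0.1, 0.1])
s = s_star + 0.05 * np.array([1.0, -1.0, 0.5, 1.0, -1.0, 0.5, 1.0, 1.0])
Z = np.ones(4)                       # illustrative constant depths
err0 = np.linalg.norm(s - s_star)

dt = 0.05
for _ in range(50):
    v, L = ibvs_velocity(s, s_star, Z)
    s = s + dt * (L @ v)             # first-order feature kinematics: ds = L v dt
err_final = np.linalg.norm(s - s_star)
```

    Since the simulated feature motion uses the same interaction matrix as the controller, the feature error is non-increasing at every step; the switch controllers above refine precisely this baseline to handle camera uncertainty and FOV limits.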

    Adaptive visual servoing in uncalibrated environments.

    Wang Hesheng. Thesis (M.Phil.), Chinese University of Hong Kong, 2004. Includes bibliographical references (leaves 70-73); abstracts in English and Chinese. Contents: Chapter 1, Introduction (Visual Servoing: Position-based Visual Servoing, Image-based Visual Servoing, Camera Configurations; Problem Definitions; Related Work; Contribution of This Work; Organization of This Thesis); Chapter 2, System Modeling (The Coordinate Frames; The System Kinematics; The System Dynamics; The Camera Model: Eye-in-hand System, Eye-and-hand System); Chapter 3, Adaptive Image-based Visual Servoing (Controller Design; Estimation of the Parameters; Stability Analysis); Chapter 4, Simulation; Chapter 5, Experiments; Chapter 6, Conclusions and Future Work; Appendix; Bibliography.

    Adaptive Neuro-Filtering Based Visual Servo Control of a Robotic Manipulator

    This paper focuses on solutions for flexibly regulating a robot by vision. A new visual servoing technique based on Kalman filtering (KF) combined with a neural network (NN) is developed which does not require any calibration parameters of the robotic system. First, the statistics of the system noise and observation noise are given as Gaussian white-noise sequences, and the nonlinear mapping between the robot's vision and motor spaces is identified online using the standard Kalman recursive equations. In real robotic workshops, accurate statistical knowledge of the noise is not easy to obtain, so an adaptive neuro-filtering approach based on the KF is also studied for online estimation of the mapping. The Kalman recursive equations are augmented with a feedforward NN, in which the neural estimator dynamically adjusts its weights to minimize the estimation error of the vision-motor mapping without knowledge of the noise variances. Finally, the proposed visual servoing based on adaptive neuro-filtering is successfully implemented for robotic pose regulation, and the experimental results demonstrate its validity and practicality on a six-degree-of-freedom (DOF) robotic system whose hand-eye system is uncalibrated.
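    The KF-based identification step described above can be illustrated in isolation. In the hypothetical sketch below, the unknown vision-motor mapping is reduced to a constant linear Jacobian whose entries form the filter state, and each observed feature increment supplies a linear measurement; the paper's NN-adaptive extension for unknown noise variances is omitted, and all dimensions and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
J_true = rng.normal(size=(2, 3))       # "true" vision-motor Jacobian (hidden)
x = np.zeros(6)                        # KF state: row-major vec of the estimate
P = 10.0 * np.eye(6)                   # initial state covariance
Q = 1e-6 * np.eye(6)                   # process noise (mapping drifts slowly)
R = 1e-4 * np.eye(2)                   # observation noise covariance

err0 = np.linalg.norm(J_true)          # error of the zero initial estimate

for _ in range(200):
    dq = 0.01 * rng.normal(size=3)                 # small joint increment
    ds = J_true @ dq + 0.01 * rng.normal(size=2)   # noisy feature increment
    H = np.kron(np.eye(2), dq)                     # ds = H @ vec(J)
    # standard Kalman recursion (predict + update)
    P = P + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (ds - H @ x)
    P = (np.eye(6) - K @ H) @ P

err_final = np.linalg.norm(x.reshape(2, 3) - J_true)
```

    Each joint increment excites the mapping in a different direction, so the covariance P contracts and the estimate approaches the hidden Jacobian without any camera or hand-eye calibration.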

    Adaptive 3D Visual Servoing of a Scara Robot Manipulator with Unknown Dynamic and Vision System Parameters

    In the present work, we develop an adaptive dynamic controller based on monocular vision for the tracking of objects with a three-degree-of-freedom (DOF) SCARA robot manipulator. The main characteristic of the proposed control scheme is that it treats the robot dynamics, the depth of the moving object, and the mounting of the fixed camera as unknown. The design is based on an adaptive kinematic visual servo controller whose objective is to track moving objects even with uncertainties in the parameters of the camera and its mounting. It also includes a dynamic controller, in cascade with the former, whose objective is to compensate for the dynamics of the manipulator by generating the final control actions for the robot even with uncertainties in the parameters of its dynamic model. Using Lyapunov theory, we analyze the stability properties of the two proposed adaptive controllers, and the performance of the complete control scheme is shown through simulations.
    Authors: Jorge Antonio Sarapura, Flavio Roberti, and Ricardo Oscar Carelli Albarracín (Instituto de Automática, Universidad Nacional de San Juan / CONICET, Argentina).
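    The cascade structure described in this abstract can be sketched on a toy system. The sketch below is a much-simplified, non-adaptive stand-in: a 1-DOF manipulator with fully known dynamics, a trivial identity camera model, an outer kinematic visual law generating a reference velocity, and an inner loop compensating the dynamics; the paper's adaptive laws and Lyapunov design are omitted, and all gains are illustrative:

```python
# Toy cascade: outer kinematic visual servo loop + inner dynamic loop.
m, b = 1.0, 0.5            # illustrative 1-DOF inertia and damping
lam, kp = 2.0, 20.0        # outer kinematic gain, inner velocity gain
dt = 0.01
q, qd = 0.0, 0.0           # joint position and velocity
s_star = 0.3               # desired image feature (identity feature map)

for _ in range(500):                 # 5 s of simulated tracking
    s = q                            # trivial camera model: s = q
    v_ref = -lam * (s - s_star)      # outer kinematic visual servo law
    u = kp * (v_ref - qd)            # inner loop compensating the dynamics
    qdd = (u - b * qd) / m
    qd += qdd * dt
    q += qd * dt

err_final = abs(q - s_star)
```

    The inner loop is tuned much faster than the outer one (kp >> lam), so the joint velocity tracks v_ref closely and the feature error decays at roughly the outer-loop rate; the cited work replaces both loops with adaptive versions that tolerate unknown dynamics, depth, and camera mounting.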

    Visual guidance of unmanned aerial manipulators

    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map-generation tasks. Yet it was only in recent years that research in aerial robotics became mature enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform and one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, using visual information, that provide it with autonomous functionality. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real applications. Furthermore, localization methods with on-board sensors, exported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, a handicap for vehicles where size, load, and power consumption are important restrictions. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, light-weight, high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can satisfy not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability, and increase arm operability.
    The main contributions of this research are threefold: (1) a localization technique enabling autonomous navigation, designed specifically for aerial platforms with size, load and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual servo commands into a hierarchical control law that exploits the redundancy of the robot to accomplish secondary tasks during flight; these tasks, specific to aerial manipulators, are also defined. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
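    The hierarchical, redundancy-exploiting control mentioned in contribution (3) is commonly realized with null-space projections in velocity-resolved control. The following generic sketch (an assumption about the general technique, not the thesis's exact formulation) resolves two prioritized tasks for a redundant system using the classical recursion:

```python
import numpy as np

def task_priority_dq(J1, dx1, J2, dx2):
    """Joint velocities for two prioritized tasks: task 1 is met exactly
    (when feasible), while task 2 acts only in the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1   # projector onto null(J1)
    J2_bar = J2 @ N1                          # task-2 Jacobian restricted to null(J1)
    dq1 = J1_pinv @ dx1
    return dq1 + N1 @ np.linalg.pinv(J2_bar) @ (dx2 - J2 @ dq1)

rng = np.random.default_rng(1)
J1 = rng.normal(size=(3, 7))   # e.g. a 3-D visual-servo task on a redundant system
J2 = rng.normal(size=(2, 7))   # e.g. a lower-priority stability/operability task
dx1 = rng.normal(size=3)
dx2 = rng.normal(size=2)
dq = task_priority_dq(J1, dx1, J2, dx2)
```

    Because the secondary correction is projected through N1, it cannot disturb the primary task; when enough redundancy remains, both task velocities are achieved simultaneously, which is exactly the behavior the thesis exploits to stack flight-safety and operability tasks under the visual-servo command.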