
    Visual guidance of unmanned aerial manipulators

    Get PDF
    The ability to fly has greatly expanded the possibilities for robots to perform surveillance, inspection or map-generation tasks. Yet only in recent years has research in aerial robotics matured enough to allow active interaction with the environment. The robots responsible for these interactions are called aerial manipulators and usually combine a multirotor platform with one or more robotic arms. The main objective of this thesis is to formalize the concept of the aerial manipulator and to present guidance methods, based on visual information, that provide such vehicles with autonomous functionalities. A key competence for controlling an aerial manipulator is the ability to localize it in the environment. Traditionally, this localization has required an external sensor infrastructure (e.g., GPS or IR cameras), restricting real-world applications. Moreover, localization methods relying on on-board sensors, imported from other robotics fields such as simultaneous localization and mapping (SLAM), require large computational units, which is a handicap for vehicles in which size, payload and power consumption are tight constraints. In this regard, this thesis proposes a method to estimate the state of the vehicle (i.e., position, orientation, velocity and acceleration) by means of on-board, low-cost, lightweight and high-rate sensors. Given the physical complexity of these robots, advanced control techniques are required during navigation. Thanks to their redundant degrees of freedom, they can fulfil not only mobility requirements but also other tasks simultaneously and hierarchically, prioritized according to their impact on overall mission success. In this work we present such control laws and define a number of these tasks to drive the vehicle using visual information, guarantee the robot's integrity during flight, improve platform stability and increase arm operability.
    The main contributions of this research work are threefold: (1) a localization technique enabling autonomous navigation, specifically designed for aerial platforms with size, payload and computational restrictions; (2) control commands that drive the vehicle using visual information (visual servoing); and (3) the integration of the visual-servo commands into a hierarchical control law that exploits the robot's redundancy to accomplish secondary tasks during flight. These tasks are specific to aerial manipulators and are also provided. All the techniques presented in this document have been validated through extensive experimentation with real robotic platforms.
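
The hierarchical, redundancy-exploiting control described in this abstract is commonly realized with null-space projection. As a hedged illustration (not the thesis' actual control law; the Jacobians and errors below are hypothetical examples), the sketch drives a primary task with the Jacobian pseudoinverse and projects a secondary task into its null space so it cannot disturb the primary one:

```python
import numpy as np

def task_priority_velocities(J1, e1, J2, e2, gain1=1.0, gain2=1.0):
    """Two-task priority resolution: the secondary task is projected
    into the null space of the primary task so it cannot disturb it."""
    J1_pinv = np.linalg.pinv(J1)
    # Primary task: drive its error e1 to zero.
    q_dot = gain1 * (J1_pinv @ e1)
    # Null-space projector of the primary task (J1 @ N1 == 0).
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1
    # Secondary task acts only in the remaining degrees of freedom.
    q_dot = q_dot + N1 @ (gain2 * (np.linalg.pinv(J2) @ e2))
    return q_dot
```

Because `J1 @ N1 = 0`, the secondary contribution is invisible to the primary task, which is exactly the prioritization property the abstract refers to.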

    Sliding mode control for robust and smooth reference tracking in robot visual servoing

    Full text link
    [EN] An approach based on sliding mode control is proposed in this work for reference tracking in robot visual servoing. In particular, two sliding mode controllers are obtained, depending on whether joint accelerations or joint jerks are taken as the discontinuous control action. Both sliding mode controllers are extensively compared in a 3D simulated environment with their equivalent well-known continuous controls from the literature, to highlight their similarities and differences. The main advantages of the proposed method are smoothness, robustness, and low computational cost. The applicability and robustness of the proposed approach are substantiated by experimental results using a conventional 6R industrial manipulator (KUKA KR 6 R900 sixx [AGILUS]) for positioning and tracking tasks.
    Funding: Spanish Government, Grant/Award Number BES-2010-038486; Generalitat Valenciana, Grant/Award Numbers BEST/2017/029 and APOSTD/2016/044.
    Muñoz-Benavent, P.; Gracia, L.; Solanes, J. E.; Esparza, A.; Tornero, J. (2018). Sliding mode control for robust and smooth reference tracking in robot visual servoing. International Journal of Robust and Nonlinear Control, 28(5), 1728-1756. https://doi.org/10.1002/rnc.3981
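
As a rough sketch of the idea (assuming a simplified double-integrator model of a single feature error, not the paper's full multivariable formulation), taking the discontinuous action at the acceleration level means switching on a sliding surface that combines the error and its derivative:

```python
import numpy as np

def smc_accel(e, e_dot, lam=1.0, K=2.0):
    """Sliding surface s = e_dot + lam*e; the discontinuous control
    action is the commanded acceleration, switched by sign(s)."""
    s = e_dot + lam * e
    return -K * np.sign(s)

# Tiny double-integrator simulation of one feature-tracking error:
# reaching phase first, then sliding along s = 0 (e decays smoothly).
e, e_dot, dt = 1.0, 0.0, 1e-3
for _ in range(20000):
    a = smc_accel(e, e_dot)
    e_dot += a * dt
    e += e_dot * dt
```

Because the switching happens in acceleration, the velocity seen by the joints is the integral of a bounded discontinuous signal, which is the source of the smoothness the abstract highlights.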

    Unfalsified visual servoing for simultaneous object recognition and pose tracking

    Get PDF
    In a complex environment, simultaneous object recognition and tracking has been one of the most challenging topics in computer vision and robotics. Current approaches are usually fragile because of spurious feature matching and local convergence in pose determination, and once a failure happens they lack a mechanism to recover automatically. In this paper, data-driven unfalsified control is proposed to solve this problem in visual servoing. The method recognizes a target by matching image features against a 3-D model and then tracks them through dynamic visual servoing. Each feature can be falsified or unfalsified by a supervisory mechanism according to its tracking performance. Supervisory visual servoing is repeated until a consensus between the model and the selected features is reached, so that model recognition and object tracking are accomplished. Experiments show the effectiveness and robustness of the proposed algorithm in dealing with matching and tracking failures caused by various disturbances, such as fast motion, occlusions, and illumination variation.
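
A minimal sketch of the supervisory falsification step, under stated assumptions (hypothetical pixel coordinates and a hand-picked residual threshold, not the paper's actual criterion): features whose tracking residual exceeds the threshold are falsified and dropped from the consensus set:

```python
import numpy as np

def unfalsified_mask(predicted, measured, threshold=2.0):
    """Keep (unfalsify) features whose tracking residual stays below
    a performance threshold; falsify the rest as spurious matches."""
    residuals = np.linalg.norm(predicted - measured, axis=1)
    return residuals <= threshold

# Hypothetical predicted vs. measured image points (pixels);
# the third match is an outlier and should be falsified.
predicted = np.array([[100.0, 50.0], [200.0, 80.0], [300.0, 120.0]])
measured = np.array([[100.5, 50.2], [199.2, 80.9], [350.0, 160.0]])
mask = unfalsified_mask(predicted, measured)
```

In the paper's scheme this test would be repeated after each servoing step until the surviving features agree with the 3-D model.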

    Adaptive tracking control of a wheeled mobile robot via an uncalibrated camera system

    Full text link

    A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

    Get PDF
    Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that addresses this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion-capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system, combining the measurements of each sensor across the two phases of the robot task: a first phase in which the robot approaches the object to be grasped, and a second phase in which the object is manipulated. In both phases, the unexpected presence of humans is taken into account. The paper also presents the successful results obtained in several experimental setups, which verify the validity of the proposed approach.
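
A toy illustration of the two-phase combination with a human-safety override (the function, phase names and safety radius are assumptions for this sketch, not the paper's actual controller): vision dominates during approach, touch during manipulation, and human proximity pre-empts both:

```python
def hybrid_command(phase, visual_cmd, tactile_cmd,
                   human_distance, safety_radius=1.0):
    """Phase-dependent command selection with a safety override:
    a nearby human (from motion capture / indoor localization)
    always takes priority over the task command."""
    if human_distance < safety_radius:
        return "stop"  # collision avoidance has top priority
    return visual_cmd if phase == "approach" else tactile_cmd
```
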

    Modelling the Xbox 360 Kinect for visual servo control applications

    Get PDF
    A research report submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in partial fulfilment of the requirements for the degree of Master of Science in Engineering. Johannesburg, August 2016.
    There has been much interest in using the Microsoft Xbox 360 Kinect camera for visual servo control applications. It is a relatively cheap device with expected shortcomings. This work contributes to the practical considerations of using the Kinect for visual servo control. A comprehensive characterisation of the Kinect is synthesised from the existing literature and from the results of a nonlinear calibration procedure. The Kinect reduces the computational overhead of image-processing stages such as pose estimation or depth estimation. It is limited by its practical depth range of 0.8 m to 3.5 m, over which the depth resolution degrades quadratically from 1.8 mm to 35 mm. Since the Kinect uses an infra-red (IR) projector, a Class 1 laser, it should not be used outdoors, due to IR saturation, and objects from classes of non-IR-friendly surfaces should be avoided, due to IR refraction, absorption, or specular reflection. Problems of task stability caused by invalid depth measurements in Kinect depth maps, and by the practical depth-range limitations, can be reduced by preprocessing the depth maps and by activating classical visual servoing techniques when Kinect-based approaches are near task failure.
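
The quoted figures imply a roughly quadratic growth of the depth quantisation step with range. A small sketch fitting res(z) ≈ a·z² through the two endpoints reported in the abstract (the single-coefficient model is an assumption for illustration; the report's own calibration is more detailed):

```python
import numpy as np

# Endpoints reported in the text: 1.8 mm resolution at 0.8 m depth,
# 35 mm at 3.5 m (approximate values).
z_pts = np.array([0.8, 3.5])        # depth in metres
res_pts = np.array([1.8, 35.0])     # depth resolution in millimetres
a = np.mean(res_pts / z_pts**2)     # single quadratic coefficient

def kinect_depth_resolution_mm(depth_m):
    """Approximate Kinect depth quantisation step at a given range,
    valid only inside the practical 0.8 m to 3.5 m window."""
    if not 0.8 <= depth_m <= 3.5:
        raise ValueError("outside the Kinect's practical depth range")
    return a * depth_m**2
```

Such a model makes it easy to decide, per pixel, whether the depth uncertainty is acceptable for the servoing task or whether the classical fallback mentioned above should take over.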

    Robot Visual Servoing Using Discontinuous Control

    Full text link
    This work presents different proposals to deal with common problems in robot visual servoing, based on the application of discontinuous control methods. The feasibility and effectiveness of the proposed approaches are substantiated by simulation results and real experiments using a 6R industrial manipulator. The main contributions are: - Geometric invariance using sliding mode control (Chapter 3): the higher-order invariance defined here is used by the proposed approaches to tackle problems in visual servoing. Proofs of the invariance condition are presented. - Fulfillment of constraints in visual servoing (Chapter 4): the proposal uses sliding mode methods to satisfy mechanical and visual constraints in visual servoing, while a secondary task properly tracks the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the space allowed by the constraints. - Robust automatic tool change for industrial robots using visual servoing (Chapter 4): visual servoing and the proposed method for constraint fulfillment are applied to an automated solution for tool changing in industrial robots. The robustness of the proposed method stems from the visual servoing control law, which uses the information acquired by the vision system to close a feedback control loop. Furthermore, sliding mode control is simultaneously used at a prioritized level to satisfy the aforementioned constraints. Thus, the global control accurately places the tool in the warehouse while satisfying the robot constraints. - Sliding mode controller for reference tracking (Chapter 5): an approach based on sliding mode control is proposed for reference tracking in robot visual servoing using industrial robot manipulators.
    The novelty of the proposal is the introduction of a sliding mode controller that uses a high-order discontinuous control signal, i.e., joint accelerations or joint jerks, in order to obtain smoother behavior and ensure the stability of the robot system, which is demonstrated with a theoretical proof. - PWM and PFM for visual servoing in fully decoupled approaches (Chapter 6): discontinuous control based on pulse-width and pulse-frequency modulation is proposed for fully decoupled position-based visual servoing approaches, in order to obtain the same convergence time for camera translation and rotation. Moreover, other results obtained in visual servoing applications are also described.
    Muñoz Benavent, P. (2017). Robot Visual Servoing Using Discontinuous Control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90430
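
To give a flavour of the constraint-fulfilment idea from Chapter 4 (a deliberately simplified one-joint sketch under assumed gains and limits, not the thesis' multi-constraint formulation): a discontinuous sliding-mode correction pre-empts the secondary tracking velocity only when the joint approaches its limit:

```python
def constrained_velocity(q, q_track_vel, q_max=1.0, margin=0.05, K=0.5):
    """Sliding-mode-style constraint fulfilment: a discontinuous
    correction overrides the tracking velocity (the secondary task)
    only inside the boundary layer of the joint-limit constraint."""
    sigma = q - (q_max - margin)   # constraint surface
    if sigma >= 0:                 # constraint active: push back
        return -K
    return q_track_vel             # otherwise: track freely

# Simulate: the tracking command pushes toward the limit,
# but the switching term holds the joint inside the allowed space.
q, dt = 0.0, 1e-3
for _ in range(5000):
    q += constrained_velocity(q, q_track_vel=0.8) * dt
```

The joint ends up chattering on the constraint surface instead of violating the limit, which is the "full utilization of the allowed space" property claimed in the abstract.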

    From plain visualisation to vibration sensing: using a camera to control the flexibilities in the ITER remote handling equipment

    Get PDF
    Thermonuclear fusion is expected to play a key role in the energy market during the second half of this century, reaching 20% of electricity generation by 2100. For many years, fusion scientists and engineers have been developing the various technologies required to build nuclear power stations that sustain a fusion reaction. To the maximum possible extent, maintenance operations in fusion reactors are performed manually by qualified workers in full accordance with the "as low as reasonably achievable" (ALARA) principle. However, hands-on maintenance becomes impractical, difficult or simply impossible in many circumstances, such as high biological dose rates. In those cases, maintenance tasks will be performed with remote handling (RH) techniques. The International Thermonuclear Experimental Reactor ITER, to be commissioned in southern France around 2025, will be the first fusion experiment producing more power from fusion than the energy necessary to heat the plasma. Its main objective is "to demonstrate the scientific and technological feasibility of fusion power for peaceful purposes". However, ITER represents an unequalled challenge in terms of RH system design, since it will be much more demanding and complex than any remote maintenance system previously designed. The introduction of man-in-the-loop capabilities in the robotic systems designed for ITER maintenance would provide useful assistance during inspection, e.g. by giving the operator the ability and flexibility to locate and examine unplanned targets, or during handling operations, e.g. by making peg-in-hole tasks easier. Unfortunately, most transmission technologies able to withstand the very specific and extreme environmental conditions inside a fusion reactor are based on gears, screws, cables and chains, which make the whole system very flexible and subject to vibrations.
This effect is further increased as structural parts of the maintenance equipment are generally lightweight and slender structures, owing to the size of the reactor and its arduous accessibility. Several methodologies aiming at avoiding or limiting the effects of vibrations on RH system performance have been investigated over the past decade. These methods often rely on the use of vibration sensors such as accelerometers. However, a review of the market shows that there is no commercial off-the-shelf (COTS) accelerometer that meets the very specific requirements for vibration sensing in the ITER in-vessel RH equipment (resilience to a high total integrated dose, high sensitivity). The customisation and qualification of existing products, or the investigation of new concepts, might be considered; however, these options would inevitably involve high development costs. While an extensive amount of work was published on the modelling and control of flexible manipulators in the 1980s and 1990s, the possibility of using vision devices to stabilise an oscillating robotic arm has only been considered very recently, and this promising solution has not been discussed at length. In parallel, recent developments on machine vision systems in nuclear environments have been very encouraging. Although they do not deal directly with vibration sensing, they open up new prospects in the use of radiation-tolerant cameras. This thesis aims to demonstrate that vibration control of remote maintenance equipment operating in harsh environments such as ITER can be achieved without any extra sensor besides the on-board radiation-hardened cameras that will inevitably be used to provide real-time visual feedback to the operators. In other words, it is proposed to consider the radiation-tolerant vision devices as full sensors providing quantitative data that can be processed by the control scheme, and not merely as plain video feedback providing qualitative information.
The work conducted within the present thesis has confirmed that methods based on the tracking of visual features from an unknown environment are effective candidates for the real-time control of vibrations. Oscillations induced at the end effector are estimated by exploiting a simple physical model of the manipulator. Using a camera mounted in an eye-in-hand configuration, this model is adjusted using direct measurements of the tip oscillations with respect to the static environment. The primary contribution of this thesis consists of implementing a markerless tracker to determine the velocity of a tip-mounted camera in an unstructured environment in order to stabilise an oscillating long-reach robotic arm. In particular, this method involves modifying an existing online interaction matrix estimator to make it self-adjustable, and deriving a multimode dynamic model of a flexible rotating beam. An innovative vision-based method using sinusoidal regression to sense low-frequency oscillations is also proposed and tested. Finally, the problem of online estimation of the image capture delay for visual servoing applications with high dynamics is addressed, and an original approach based on the concept of cross-correlation is presented and experimentally validated.
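The sinusoidal-regression idea for sensing low-frequency oscillations can be illustrated with a minimal sketch. This is not the thesis's implementation: the grid search over candidate frequencies, the function name and the parameter choices are illustrative assumptions. The key observation is that, once a frequency is fixed, the model is linear in its remaining parameters and can be fitted by ordinary least squares:

```python
import numpy as np

def fit_sinusoid(t, y, freqs):
    """Sinusoidal regression on tip-displacement samples y(t).

    For each candidate frequency f, fit y ~ a*sin(2*pi*f*t) + b*cos(2*pi*f*t) + c
    by linear least squares (the model is linear in a, b, c once f is fixed)
    and keep the best fit. Returns (frequency, amplitude, phase, offset).
    """
    best = None
    for f in freqs:
        w = 2.0 * np.pi * f
        # Design matrix for the linearised model at this frequency.
        A = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = np.sum((A @ coef - y) ** 2)
        if best is None or residual < best[0]:
            best = (residual, f, coef)
    _, f, (a, b, c) = best
    # a*sin + b*cos == A*sin(wt + phi) with A = hypot(a, b), phi = atan2(b, a).
    return f, float(np.hypot(a, b)), float(np.arctan2(b, a)), float(c)
```

In practice the candidate grid would be narrow, centred on the known low-frequency structural mode, so the per-frame cost stays compatible with real-time use.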

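The cross-correlation approach to online image-capture delay estimation mentioned in the abstract above can also be sketched briefly. The sketch below is a simplified, hypothetical version (names and the discrete-lag search are assumptions): the delay between a reference signal (e.g. a commanded motion) and its delayed image-based measurement is taken as the lag maximising their cross-correlation:

```python
import numpy as np

def estimate_delay(reference, delayed, dt):
    """Estimate the lag of `delayed` relative to `reference`.

    Both signals are sampled at the same period dt. The estimate is the
    lag (in seconds) that maximises the discrete cross-correlation.
    """
    ref = reference - np.mean(reference)
    dly = delayed - np.mean(delayed)
    # Full cross-correlation; index (len(ref) - 1) corresponds to zero lag.
    corr = np.correlate(dly, ref, mode="full")
    lag = int(np.argmax(corr)) - (len(ref) - 1)
    return lag * dt
```

Resolution is limited to one sample period here; sub-sample accuracy would require interpolating around the correlation peak.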
    Design and Control of an Articulated Robotic Arm Using Visual Inspection for Replacement Activities

    Get PDF
    Design of robotic systems and their control for inspection and maintenance tasks is a highly complex activity involving the coordination of various sub-systems. In applications such as inspection inside fusion reactor vessels and deep-mining works, regular off-line maintenance is necessary in certain locations. Owing to the hostile environment inside, robotic systems must be deployed for such internal observations. In this regard, the current work focuses on a methodology for the maintenance of the first-wall blanket modules in a fusion reactor vessel using a manipulator system. A design is proposed for wall tile inspection in an idealised environment in which vacuum and temperature conditions are not accounted for and wall surface curvature is initially neglected. The overall design comprises four modules: (i) mathematical modelling, (ii) control system design, (iii) machine vision and image processing, and (iv) hardware development and testing. A five-axis articulated manipulator equipped with a vision camera in an eye-to-hand configuration is designed to perform pick-and-place operations on the defective tiles in a systematic manner. Kinematic and dynamic analyses of the system are first carried out, and a scaled prototype is fabricated for testing various operating issues. Forward kinematics of the manipulator allows estimation of the robot workspace and identification of singular regions during operation, while inverse kinematics is needed for the real-time manipulator control task. The manipulator dynamics are required for the design of model-based controllers. Interactive programs are developed in Matlab for kinematics and dynamics, and the three-dimensional manipulator assembly is modelled in SolidWorks. Motion analysis is conducted in ADAMS in order to compare the results with those obtained from classical kinematics.
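The forward kinematics step described above is conventionally built by chaining per-link homogeneous transforms. As a hedged sketch (the function names are invented here, and the two-link parameters in the usage below are illustrative rather than the thesis's actual five-axis geometry), using standard Denavit-Hartenberg parameters:

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg transform for one link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(joint_angles, dh_params):
    """Chain the link transforms for a serial arm with revolute joints.

    dh_params is a list of (d, a, alpha) per joint; returns the 4x4
    base-to-end-effector pose.
    """
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_params):
        T = T @ dh_transform(theta, d, a, alpha)
    return T
```

Sampling this map over the joint ranges yields the reachable workspace, and the rank of the associated Jacobian exposes the singular regions the abstract refers to.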
Two types of model-based control schemes (namely, Computed Torque Control and a Proportional Derivative-Sliding Mode Control approach), with and without external disturbances, are implemented to study the trajectory tracking performance of the arm for different input trajectories. A disturbance observer model is employed to minimise the tracking errors under external disturbances such as joint friction and payload. In order to experimentally study the inspection and replacement activities, a test set-up is developed using a vision camera and a microcontroller platform to guide the robot joint servos so as to perform a defective-tile replacement activity. The presence of a crack and the coordinates of the affected region are determined by means of image-processing operations. Using a high-resolution Basler camera mounted at a fixed distance from the tile surface, surface images are acquired and the image-processing module identifies the crack details using edge detection algorithms. The necessary motion of the end-effector is then commanded based on pose calculations using coordinate transformations. Both visual inspection and joint guidance are combined in a single application, and the results for a test case of tile replacement are presented sequentially using a test surface with uniform rectangular tiles.
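The computed torque scheme named above can be illustrated on the simplest case, a single-link arm; this is a generic textbook sketch under assumed dynamics (the inertia, friction and gravity parameters are invented for illustration), not the controller tuned in the thesis. The controller cancels the modelled nonlinearity so the tracking error obeys linear PD dynamics:

```python
import numpy as np

def simulate_computed_torque(q0, dq0, q_ref, kp=25.0, kd=10.0,
                             I=0.5, b=0.1, mgl=2.0, dt=1e-3, steps=4000):
    """Computed torque control of a single-link arm.

    Plant: I*ddq + b*dq + mgl*sin(q) = tau. The control law
    tau = I*v + b*dq + mgl*sin(q), with v a PD term on the error,
    linearises the closed loop to e'' + kd*e' + kp*e = 0.
    """
    q, dq = q0, dq0
    for _ in range(steps):
        # Desired acceleration from the PD law (constant set-point,
        # so the reference velocity and acceleration are zero).
        v = kp * (q_ref - q) + kd * (0.0 - dq)
        # Inverse-dynamics feedback linearisation.
        tau = I * v + b * dq + mgl * np.sin(q)
        # Integrate the true plant (semi-implicit Euler).
        ddq = (tau - b * dq - mgl * np.sin(q)) / I
        dq += ddq * dt
        q += dq * dt
    return q, dq
```

With kp = 25 and kd = 10 the error dynamics are critically damped, so a step set-point is reached without overshoot; on the real arm the cancellation is only as good as the dynamic model, which is why the abstract pairs the scheme with a disturbance observer.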