
    Measurement errors in visual servoing

    Abstract — In recent years, a number of hybrid visual servoing control algorithms have been proposed and evaluated. It has long been clear that the classical control approaches, image-based and position-based, have inherent problems; hybrid approaches try to combine them in order to overcome these problems. However, most of the proposed approaches concentrate mainly on the design of the control law, neglecting errors arising from the sensory system. This work deals with the effect of measurement errors in visual servoing. The particular contribution of this paper is an analysis of the propagation of image error through pose estimation and the visual servoing control law. We investigate the properties of the vision system and their effect on the performance of the control system. Two approaches are evaluated: i) position-based, and ii) 2 1/2 D visual servoing. We believe that our evaluation offers a valid tool for building and analyzing hybrid control systems based on, for example, switching [1] or partitioning [2].
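    The error-propagation idea above can be illustrated numerically. The sketch below is not the paper's position-based or 2 1/2 D analysis; as a simpler stand-in it propagates synthetic pixel noise through a classical point-feature image-based servo law (v = -λ L⁺ e) by Monte Carlo sampling. The feature positions, depths, and noise level are invented for illustration.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a point feature (x, y)
    in normalized image coordinates at depth Z, for a 6-DOF camera twist."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS law: v = -gain * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Monte Carlo: propagate image measurement noise through the control law.
rng = np.random.default_rng(0)
desired = [(-0.1, -0.1), (0.1, -0.1), (0.1, 0.1), (-0.1, 0.1)]
true = [(-0.15, -0.12), (0.12, -0.09), (0.08, 0.14), (-0.11, 0.13)]
depths = [1.0] * 4
sigma = 0.002  # measurement noise std (normalized image units)

samples = np.array([
    ibvs_velocity([(x + rng.normal(0, sigma), y + rng.normal(0, sigma))
                   for x, y in true], desired, depths)
    for _ in range(500)
])
print(samples.std(axis=0))  # per-axis spread of the commanded camera twist
```

    The per-axis standard deviation shows how uniform image noise is amplified unevenly across the translational and rotational degrees of freedom, which is the kind of sensitivity the paper analyzes for its two schemes.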

    Visual servoing of mobile robots using non-central catadioptric cameras

    This paper presents novel contributions on image-based control of a mobile robot using a general catadioptric camera model. A catadioptric camera is usually made up of a combination of a conventional camera and a curved mirror, resulting in an omnidirectional sensor capable of providing 360° panoramic views of a scene. Modeling such cameras has been the subject of significant research interest in the computer vision community, leading to a deeper understanding of the image properties and to different models for different types of configurations. Visual servoing applications using catadioptric cameras have essentially used central cameras and the corresponding unified projection model; so far, more general models have been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations, in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Using this model, which is valid for a large set of catadioptric cameras (central or non-central), new visual features are proposed to control the degrees of freedom of a mobile robot moving on a plane. In addition to several simulation results, a set of experiments was carried out on a Robot Operating System (ROS)-based platform, validating the applicability, effectiveness, and robustness of the proposed method for image-based control of a non-holonomic robot.
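    To make the idea of image-based control of a nonholonomic robot concrete, here is a minimal, generic sketch — it does not use the paper's radial-model features. The landmark's bearing in the camera frame plays the role of the visual feature, and the robot's angular velocity is set proportional to it; all numbers (target position, speed, gain) are arbitrary.

```python
import numpy as np

def simulate_bearing_servo(x, y, theta, target=(2.0, 1.0),
                           v=0.3, k=1.5, dt=0.05, steps=120):
    """Drive a unicycle so the landmark's image bearing goes to zero.
    The bearing (angle of the landmark in the camera frame) is the
    visual feature; omega = k * bearing is the image-based steering law."""
    tx, ty = target
    bearing = 0.0
    for _ in range(steps):
        bearing = np.arctan2(ty - y, tx - x) - theta
        bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to (-pi, pi]
        omega = k * bearing               # turn toward the feature
        theta += omega * dt               # unicycle kinematics (Euler step)
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
    return bearing

final_bearing = simulate_bearing_servo(0.0, 0.0, np.pi / 2)
print(abs(final_bearing))  # residual bearing after the run
```

    Despite the nonholonomic constraint (no sideways motion), regulating a single image feature suffices to steer the heading, which is the basic mechanism the paper's richer feature set builds on.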

    Sliding mode control for robust and smooth reference tracking in robot visual servoing

    [EN] An approach based on sliding mode control is proposed in this work for reference tracking in robot visual servoing. In particular, two sliding mode controls are obtained, depending on whether joint accelerations or joint jerks are taken as the discontinuous control action. Both sliding mode controls are extensively compared in a 3D-simulated environment with their equivalent well-known continuous controls from the literature, to highlight their similarities and differences. The main advantages of the proposed method are smoothness, robustness, and low computational cost. The applicability and robustness of the proposed approach are substantiated by experimental results using a conventional 6R industrial manipulator (KUKA KR 6 R900 sixx [AGILUS]) for positioning and tracking tasks.
    Muñoz-Benavent, P., Gracia, L., Solanes, J. E., Esparza, A., & Tornero, J. (2018). Sliding mode control for robust and smooth reference tracking in robot visual servoing. International Journal of Robust and Nonlinear Control, 28(5), 1728-1756. https://doi.org/10.1002/rnc.3981
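    The discontinuous control action mentioned in the abstract above can be sketched in one degree of freedom. This is a generic boundary-layer sliding mode tracking law on a double-integrator plant, not the paper's joint-acceleration/jerk controllers; the gains, reference, and disturbance are invented for illustration.

```python
import numpy as np

def smc_track(steps=2000, dt=0.001, K=5.0, lam=10.0, phi=0.05):
    """1-DOF sliding mode tracking of r(t) = sin(t).
    Sliding variable s = de + lam*e; the switching term -K*sat(s/phi)
    uses a saturation (boundary layer) instead of sign() to avoid
    chattering, giving a smooth but still robust control action."""
    x, dx = 0.0, 0.0
    for i in range(steps):
        t = i * dt
        r, dr, ddr = np.sin(t), np.cos(t), -np.sin(t)   # reference and derivatives
        e, de = x - r, dx - dr
        s = de + lam * e
        u = ddr - lam * de - K * np.clip(s / phi, -1.0, 1.0)
        d = 0.5 * np.sin(20 * t)          # bounded matched disturbance, |d| <= 0.5 < K
        ddx = u + d                       # double-integrator plant
        dx += ddx * dt                    # semi-implicit Euler integration
        x += dx * dt
    return abs(x - np.sin(steps * dt))    # final tracking error

err = smc_track()
print(err)
```

    Because the switching gain K dominates the disturbance bound, the state is driven into the boundary layer and tracks the reference closely despite the disturbance — the robustness property the paper exploits, there with acceleration or jerk as the discontinuous action.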

    A group-theoretic approach to formalizing bootstrapping problems

    The bootstrapping problem consists of designing agents that learn a model of themselves and the world, and utilize it to achieve useful tasks. It differs from other learning problems in that the agent starts with uninterpreted observations and commands, and with minimal prior information about the world. In this paper, we give a mathematical formalization of this aspect of the problem. We argue that the vague constraint of having "no prior information" can be recast as a precise algebraic condition on the agent: that its behavior is invariant to particular classes of nuisances on the world, which we show can be well represented by actions of groups (diffeomorphisms, permutations, linear transformations) on observations and commands. We then introduce the class of bilinear gradient dynamics sensors (BGDS) as a candidate for learning generic robotic sensorimotor cascades. We show how framing the problem as rejection of group nuisances allows a compact and modular analysis of typical preprocessing stages, such as learning the topology of the sensors. We demonstrate learning and using such models on real-world range-finder and camera data from publicly available datasets.
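    The permutation-nuisance idea can be demonstrated in a few lines. The sketch below (random data, not the paper's BGDS models) shows a correlation-based sensor-topology statistic that is equivariant to a permutation group acting on the observations: relabeling the sensors merely relabels the rows and columns of the learned structure.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.random((100, 16))           # 100 readings of a 16-element sensor array

def correlation_topology(y):
    """Recover sensor structure from correlations: sensors whose readings
    correlate strongly are candidates for being physical neighbors."""
    return np.corrcoef(y.T)           # 16 x 16 correlation matrix

perm = rng.permutation(16)            # a permutation nuisance on the observations
C = correlation_topology(obs)
C_perm = correlation_topology(obs[:, perm])

# Equivariance: permuting the sensors permutes the rows/columns of the
# statistic, so the recovered topology is the same up to relabeling.
print(np.allclose(C_perm, C[np.ix_(perm, perm)]))
```

    This is the sense in which a bootstrapping agent can "reject" the nuisance: any conclusion it draws from the correlation structure is independent of how its sensors happen to be wired.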

    Robotic micromanipulation for microassembly: modelling by sequential function chart and achievement by multiple-scale visual servoing.

    This paper investigates robotic assembly, focusing on the manipulation of microparts. The task is formalized through the notion of basic tasks, organized in a logical sequence represented by a function chart and interpreted as a model of the behavior of the experimental setup. The latter includes a robotic system, a gripping system, an imaging system, and a clean environment. The imaging system is a photon videomicroscope able to work at multiple scales. It is modelled by a linear projective model in which the relation between the scale factor and the magnification (zoom) is explicitly established, and the usual visual control law is modified to take this relation into account. The manipulation of silicon microparts (400 μm × 400 μm × 100 μm) by means of a distributed robotic system (an xyθ system and a ϕz system), a two-finger gripping system, and a videomicroscope with controllable zoom and focus demonstrates the relevance of these concepts. The 30% failure rate stems mainly from physical phenomena (electrostatic and capillary forces) rather than from control accuracy or occlusion of the microparts.
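    The role of the scale factor in the control law can be sketched as follows. This is a guessed minimal form, not the paper's actual law: pixel error is converted to metric error using a magnification-dependent scale before the gain is applied, so the commanded stage velocity does not change when the zoom does. The pixel pitch and all numbers are hypothetical.

```python
def stage_velocity(err_px, magnification, pixel_pitch_um=7.4, gain=2.0):
    """Proportional visual control with an explicit scale factor:
    at magnification M, one pixel spans pixel_pitch/M micrometres in
    the scene, so the image error is converted to metric units before
    the gain is applied (micrometres per second returned)."""
    um_per_px = pixel_pitch_um / magnification
    return -gain * err_px * um_per_px

# The same physical error seen at two zoom levels yields the same command:
v_low = stage_velocity(err_px=10.0, magnification=2.0)    # 10 px error at 2x
v_high = stage_velocity(err_px=50.0, magnification=10.0)  # 50 px error at 10x
print(v_low, v_high)
```

    Without the scale-factor correction, the effective loop gain would grow with the magnification, which is why the usual visual control law must be modified for a multiple-scale videomicroscope.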

    Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping

    The young infant explores its body, its sensorimotor system, and the immediately accessible parts of its environment, over the course of a few months creating a model of peripersonal space useful for reaching and grasping objects around it. Drawing on constraints from the empirical literature on infant behavior, we present a preliminary computational model of this learning process, implemented and evaluated on a physical robot. The learning agent explores the relationship between the configuration space of the arm, sensing joint angles through proprioception, and its visual perceptions of the hand and grippers. The resulting knowledge is represented as the peripersonal space (PPS) graph, where nodes represent states of the arm, edges represent safe movements, and paths represent safe trajectories from one pose to another. In our model, the learning process is driven by intrinsic motivation. When repeatedly performing an action, the agent learns the typical result, but also detects unusual outcomes, and is motivated to learn how to make those unusual results reliable. Arm motions typically leave the static background unchanged, but occasionally bump an object, changing its static position. The reach action is learned as a reliable way to bump and move an object in the environment. Similarly, once a reliable reach action is learned, it typically makes a quasi-static change in the environment, moving an object from one static position to another. The unusual outcome is that the object is accidentally grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically with the hand. Learning to make grasps reliable is more complex than for reaches, but we demonstrate significant progress. Our current results are steps toward autonomous sensorimotor learning of motion, reaching, and grasping in peripersonal space, based on unguided exploration and intrinsic motivation.
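    The PPS graph described above lends itself to a very small sketch: nodes are stored arm configurations, edges are movements observed to be safe, and a trajectory is a path search over the graph. The toy graph and joint values below are invented; the paper's graph is learned from exploration.

```python
from collections import deque

# Toy PPS graph: nodes are arm configurations (joint-angle tuples),
# edges connect configurations between which a safe move was observed.
edges = {
    (0.0, 0.0): [(0.2, 0.0), (0.0, 0.2)],
    (0.2, 0.0): [(0.0, 0.0), (0.2, 0.2)],
    (0.0, 0.2): [(0.0, 0.0)],
    (0.2, 0.2): [(0.2, 0.0)],
}

def safe_trajectory(start, goal):
    """Breadth-first search over the PPS graph: returns a shortest
    sequence of known-safe arm states from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

path = safe_trajectory((0.0, 0.0), (0.2, 0.2))
print(path)
```

    Representing only empirically safe moves means any returned path is a safe trajectory by construction, which is what makes the graph useful for reaching before any analytic model of the arm exists.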

    Overview of some Command Modes for Human-Robot Interaction Systems

    Interaction and command modes, as well as their combination, are essential features of modern and future robotic systems interacting with human beings in various dynamic environments. This paper presents a synthetic overview of the command modes most used in Human-Robot Interaction Systems (HRIS). It covers the earliest command modes, namely tele-manipulation, off-line robot programming, and traditional elementary teaching by demonstration. It then introduces more recent command modes, fostered by artificial intelligence techniques implemented on increasingly powerful computers. In this context, we specifically consider the following modes: interactive programming based on graphical user interfaces, and voice-based, pointing-on-image-based, gesture-based, and brain-based commands.

    MULTI-RATE VISUAL FEEDBACK ROBOT CONTROL

    [EN] This thesis deals with two characteristic problems in visual feedback robot control: 1) sensor latency; and 2) providing suitable trajectories for the robot and for the measurement in the image. All the approaches presented in this work are analyzed and implemented on a 6-DOF industrial robot manipulator and/or a wheeled robot. Regarding the sensor latency problem, this thesis proposes the use of dual-rate high-order holds within the control loop of robots. The main contributions are:
    - Dual-rate high-order holds based on primitive functions for robot control (Chapter 3): analysis of the system performance with and without this multi-rate technique from non-conventional control. In addition, as a consequence of using dual-rate holds, this work derives and validates multi-rate controllers, especially dual-rate PIDs.
    - Asynchronous dual-rate high-order holds based on primitive functions with time-delay compensation (Chapter 3): generalization of asynchronous dual-rate high-order holds by incorporating a component that compensates for the input-signal time delay, thereby improving the inter-sample estimates computed by the hold. An analysis of the properties of these holds is provided, comparing their estimates with those of the equivalent holds without compensation, together with their implementation and validation within the control loop of a 6-DOF industrial robot manipulator.
    - Multi-rate nonlinear high-order holds (Chapter 4): generalization of dual-rate high-order holds to nonlinear estimation models that include information about the plant to be controlled and the controller(s) and sensor(s) used, obtained with machine learning techniques. A methodology independent of the particular learning technique is described for obtaining such a nonlinear hold, validated here using artificial neural networks. Finally, the properties of these new holds are analyzed and compared with their equivalents based on primitive functions, and the holds are implemented and validated within the control loop of an industrial robot manipulator and a wheeled robot.
    Regarding the problem of providing suitable trajectories for the robot and for the measurement in the image, this thesis presents the novel reference features filtering control strategy and its generalization from a multi-rate point of view. The main contributions are:
    - Reference features filtering control strategy (Chapter 5): a new control strategy proposed to significantly enlarge the reachable task space of robot visual feedback control. The main idea is to use optimal trajectories, produced by a nonlinear EKF predictor-smoother (ERTS) based on the Rauch-Tung-Striebel (RTS) algorithm, as new feature references for an underlying visual feedback controller. Both the implementation algorithm and its validation on an industrial robot manipulator are provided.
    - Dual-rate reference features filtering control strategy (Chapter 5): a generalization of the reference features filtering approach from a multi-rate point of view, with a dual Kalman-smoother step based on the ratio of the sensor and controller frequencies, reducing the computational cost of the former algorithm while also addressing the sensor latency problem. The implementation algorithms and their analysis are described.
    Solanes Galbis, J. E. (2015). Multi-rate visual feedback robot control [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/57951
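    The dual-rate hold idea central to the thesis above can be sketched in its simplest (first-order) form: a slow sensor delivers samples, and the hold extrapolates fast-rate inter-sample estimates from the last two samples so the controller can run at the higher rate. This is a generic first-order hold, not the thesis's primitive-function or nonlinear holds.

```python
def first_order_hold(slow_samples, ratio):
    """Dual-rate first-order hold sketch: given measurements at the slow
    sensor rate, produce `ratio` fast-rate estimates per sensor period by
    continuing the slope of the last two received samples (extrapolation,
    so no extra delay is introduced waiting for the next sample)."""
    fast = []
    for k in range(1, len(slow_samples)):
        y0, y1 = slow_samples[k - 1], slow_samples[k]
        slope = y1 - y0                       # change per slow period
        for j in range(ratio):
            fast.append(y1 + slope * j / ratio)
    return fast

# A ramp signal is extrapolated exactly by a first-order hold:
est = first_order_hold([0.0, 2.0, 4.0, 6.0], ratio=4)
print(est)
```

    A zero-order hold would repeat the last sample and lag the signal by up to one sensor period; the higher-order holds in the thesis refine exactly this inter-sample estimation, and its nonlinear holds replace the slope continuation with a learned model of the plant and sensor.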