    Visual Servoing from Deep Neural Networks

    We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing. The paper describes how to create a dataset simulating various perturbations (occlusions and lighting conditions) from a single real-world image of the scene. A convolutional neural network is fine-tuned using this dataset to estimate the relative pose between two images of the same scene. The output of the network is then employed in a visual servoing control scheme. The method converges robustly even in difficult real-world settings with strong lighting variations and occlusions. A positioning error of less than one millimeter is obtained in experiments with a 6 DOF robot. Comment: fixed authors list.
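    A minimal sketch of the idea described in this abstract, assuming a PyTorch/torchvision setup (the class and function names are illustrative, not the authors' code): a pretrained CNN, fine-tuned on perturbed renderings of a single reference scene, maps the current image to the 6-DOF relative pose, and the estimate drives a classical pose-based servo law.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class RelativePoseNet(nn.Module):
    """CNN regressing a 6-DOF relative pose from the current view."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()        # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(512, 6)      # [tx, ty, tz, wx, wy, wz]

    def forward(self, image):              # image: (B, 3, H, W)
        return self.head(self.backbone(image))

def servo_step(net, image, gain=0.5):
    """One servo iteration: camera twist proportional to the pose error."""
    with torch.no_grad():
        pose_error = net(image.unsqueeze(0)).squeeze(0)   # 6-vector
    # Classical PBVS law v = -lambda * e; a real controller would map
    # this camera-frame twist through the robot Jacobian to joint space.
    return -gain * pose_error
```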

    Sim2Real View Invariant Visual Servoing by Recurrent Control

    Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://fsadeghi.github.io/Sim2RealViewInvariantServo
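    The recurrent-control idea lends itself to a compact sketch. The following is a hypothetical PyTorch outline, not the authors' architecture: an LSTM consumes per-frame visual features together with the previous action, so its hidden state can accumulate evidence about how actions move the arm under the unknown viewpoint.

```python
import torch
import torch.nn as nn

class RecurrentServoPolicy(nn.Module):
    """LSTM policy: visual features + previous action -> next action."""
    def __init__(self, feat_dim=256, action_dim=3, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in visual encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim + action_dim, hidden, batch_first=True)
        self.action_head = nn.Linear(hidden, action_dim)

    def forward(self, frames, prev_actions, state=None):
        # frames: (B, T, 3, H, W); prev_actions: (B, T, action_dim)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1)).view(B, T, -1)
        out, state = self.lstm(torch.cat([feats, prev_actions], -1), state)
        # The hidden state carries memory of past actions and their visual
        # effects, disambiguating the viewpoint over time.
        return self.action_head(out), state
```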

    Sliding mode control for robust and smooth reference tracking in robot visual servoing

    An approach based on sliding mode control is proposed in this work for reference tracking in robot visual servoing. In particular, two sliding mode controllers are obtained, depending on whether joint accelerations or joint jerks are taken as the discontinuous control action. Both sliding mode controllers are extensively compared in a 3D simulated environment against their well-known continuous counterparts from the literature, to highlight their similarities and differences. The main advantages of the proposed method are smoothness, robustness, and low computational cost. The applicability and robustness of the approach are substantiated by experimental results on a conventional 6R industrial manipulator (KUKA KR 6 R900 sixx [AGILUS]) performing positioning and tracking tasks.
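    For readers unfamiliar with the sliding-mode ingredient, here is an illustrative sketch of the generic mechanism the abstract builds on, not the paper's exact formulation: a sliding surface is built from the tracking error, and a discontinuous (here boundary-layer-saturated) switching term acts on the highest commanded derivative, e.g. joint acceleration or jerk.

```python
import numpy as np

def sliding_mode_action(error, error_dot, K=1.0, lam=2.0, phi=0.01):
    """Switching control for the sliding surface s = error_dot + lam*error.

    K   : switching gain; must dominate the bound on the uncertainty
    lam : slope of the sliding surface (error decay rate once s = 0)
    phi : boundary-layer width; sat(s/phi) instead of sign(s) trades a
          little precision for far less chattering
    """
    s = error_dot + lam * error
    sat = np.clip(s / phi, -1.0, 1.0)     # saturated sign function
    return -K * sat                       # discontinuous control action
```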

    Robot eye-hand coordination learning by watching human demonstrations: a task function approximation approach

    We present a robot eye-hand coordination learning method that can learn a visual task specification directly by watching human demonstrations. The task specification is represented as a task function, which is learned using inverse reinforcement learning (IRL) by inferring differential rewards between state changes. The learned task function is then used as continuous feedback in an uncalibrated visual servoing (UVS) controller designed for the execution phase. Our method can learn directly from raw videos, which removes the need for hand-engineered task specification, and it provides task interpretability by directly approximating the task function. Moreover, because the execution phase uses a traditional UVS controller, training is efficient and the learned policy is independent of any particular robot platform. Various experiments show that, for a task of a given DOF, our method adapts to variations in target position, background, illumination, and occlusion without retraining. Comment: accepted at ICRA.
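    A minimal sketch of the execution phase, under assumptions (the function names are hypothetical): the learned task function supplies the visual error, and the uncalibrated servoing loop estimates the image Jacobian online with a Broyden-style rank-1 secant update rather than a calibrated camera-robot model.

```python
import numpy as np

def broyden_update(J, dq, de, alpha=0.5):
    """Rank-1 secant update of the estimated Jacobian (de ~ J dq)."""
    dq, de = dq.reshape(-1, 1), de.reshape(-1, 1)
    return J + alpha * (de - J @ dq) @ dq.T / (dq.T @ dq + 1e-9)

def uvs_step(task_fn, obs, q, J, gain=0.2):
    """One servo iteration driven by the learned task function."""
    e = task_fn(obs)                        # learned visual error signal
    dq = -gain * np.linalg.pinv(J) @ e      # Gauss-Newton-style joint step
    return q + dq, dq                       # caller then re-estimates J via
                                            # broyden_update(J, dq, e_new - e)
```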

    Deep Drone Racing: From Simulation to Reality with Domain Randomization

    Dynamically changing environments, unreliable state estimation, and operation under severe resource constraints are fundamental challenges that limit the deployment of small autonomous drones. We address these challenges in the context of autonomous, vision-based drone racing in dynamic environments. A racing drone must traverse a track with possibly moving gates at high speed. We enable this functionality by combining the performance of a state-of-the-art planning and control system with the perceptual awareness of a convolutional neural network (CNN). The resulting modular system is both platform- and domain-independent: it is trained in simulation and deployed on a physical quadrotor without any fine-tuning. The abundance of simulated data, generated via domain randomization, makes our system robust to changes in illumination and gate appearance. To the best of our knowledge, our approach is the first to demonstrate zero-shot sim-to-real transfer on the task of agile drone flight. We extensively test the precision and robustness of our system, both in simulation and on a physical platform, and show significant improvements over the state of the art. Comment: accepted as a Regular Paper in the IEEE Transactions on Robotics.
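    One illustrative piece of such a modular pipeline, as a sketch under assumed interfaces (the CNN output convention is hypothetical, not the authors' code): mapping the network's prediction, a gate location in normalized image coordinates plus a desired speed, to a body-frame velocity command that the planning and control stack can track.

```python
import numpy as np

def velocity_command(gate_xy_norm, desired_speed, fov_rad=1.5, gain=1.0):
    """Turn a CNN prediction -- gate center in normalized image
    coordinates in [-1, 1]^2 plus a desired speed -- into a body-frame
    velocity vector (assumed convention: x forward, z up)."""
    # Bearing of the gate relative to the optical axis.
    yaw = 0.5 * fov_rad * gate_xy_norm[0]
    pitch = 0.5 * fov_rad * gate_xy_norm[1]
    direction = np.array([
        np.cos(pitch) * np.cos(yaw),    # forward
        np.cos(pitch) * np.sin(yaw),    # lateral
        -np.sin(pitch),                 # vertical; image y grows downward
    ])
    return gain * desired_speed * direction
```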

    Learning visual docking for non-holonomic autonomous vehicles

    This paper presents a new method for learning visual docking skills for non-holonomic vehicles by direct interaction with the environment. The method is based on a reinforcement learning algorithm that speeds up Q-learning by applying memory-based sweeping and enforcing the "adjoining property", a filtering mechanism that only allows transitions between states lying within a fixed distance of each other. The method overcomes some limitations of reinforcement learning techniques when they are applied to continuous non-linear systems, such as car-like vehicles. In particular, a good approximation to the optimal behaviour is obtained with a small look-up table. The algorithm is tested on a docking task within an image-based visual servoing framework; training took less than one hour on the real vehicle. The experiments show the satisfactory performance of the algorithm.
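    A minimal sketch of the learning mechanism described above, with assumed names and plain random replay standing in for the paper's memory-based sweeping schedule: tabular Q-learning whose updates are replayed from memory, with an "adjoining" filter that discards transitions between states farther apart than a fixed distance.

```python
import numpy as np
from collections import defaultdict

def q_update(Q, memory, s, a, r, s2, actions,
             alpha=0.1, gamma=0.95, d_max=1.0, sweeps=10):
    """One experience step plus `sweeps` replayed updates from memory."""
    if np.linalg.norm(np.subtract(s, s2)) > d_max:
        return Q                  # adjoining property: reject long jumps
    memory.append((s, a, r, s2))
    for _ in range(sweeps):       # replay remembered transitions
        ms, ma, mr, ms2 = memory[np.random.randint(len(memory))]
        target = mr + gamma * max(Q[(ms2, b)] for b in actions)
        Q[(ms, ma)] += alpha * (target - Q[(ms, ma)])
    return Q

# Typical setup: discretized states as tuples, Q as a small look-up table.
Q, memory = defaultdict(float), []
```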