137 research outputs found

    Vision-Based Control of Flexible Robot Systems

    This thesis covers the control of flexible robot systems using a camera as a measurement device. To this end, the estimation of the dynamic state variables of a flexible-link robot based on camera measurements is examined: an algorithm for estimating the dynamic state variables is proposed and tested in two flexible-link application examples. Flexible robots can exhibit very complex dynamic behavior during operation, which can lead to induced vibrations. Since the vibrations and their derivatives are not all measurable, the estimation of state variables plays a significant role in the state feedback control of flexible-link robots. A vision sensor (i.e., a camera) provides a contact-less measurement and can be used to measure the deflection of the flexible robot arm. Using a vision sensor, however, introduces new effects such as limited accuracy and time delay, which are the main inherent problems of applying vision sensors in this context. These effects and related compensation approaches are studied in this thesis. An indirect method for sensing the link deflection (i.e., the system states) is presented, using a vision system consisting of a CCD camera and an image processing unit. The main purpose of this thesis is to develop an estimation approach that combines suitable measurement devices which are easy to realize with improved reliability. It includes the design of two state estimators: the first for the traditional sensor type (negligible noise and time delay), and the second for the camera measurement, which accounts for the dynamic error due to the time delay. The estimation approach is first applied to a single-link flexible robot; the dynamic model of the flexible link is derived using a finite element method.
Based on the suggested estimation approach, the first observer estimates the vibrations using a strain gauge (fast and complete dynamics), and the second observer estimates the vibrations using vision data (slow dynamical parts). To achieve an optimal estimation, a proper combination process for the two estimated dynamical parts of the system dynamics is described. The simulation results for the estimations based on vision measurements show that the slow dynamical states can be estimated and that the observer can compensate for the dynamic errors caused by the time delay. It is also observed that an optimal estimation can be attained by combining the slow dynamical estimated states with those of the fast observer based on strain gauge measurements. Based on the suggested estimation approach, a vision-based control for an elastic ship-mounted crane is designed to regulate the motion of the payload. For the observer and controller design, a linear dynamic model of the elastic ship-mounted crane is employed, incorporating a finite element technique for modeling the flexible link. To estimate the dynamic state variables and the unknown disturbance, two state observers are designed: the first estimates the state variables using camera measurements (augmented Kalman filter), and the second uses potentiometer measurements (PI observer). To realize a multi-model approach for the elastic ship-mounted crane, a variable gain controller and variable gain observers are designed. The variable gain controller generates the required damping to control the system based on the estimated states and the roll angle. Simulation results show that the variable gain observers can adequately estimate the states and the unknown disturbance acting on the payload. It is further observed that the variable gain controller can effectively reduce the payload pendulations. Experiments are conducted using the camera to measure the link deflection of a scaled elastic ship-mounted crane system.
The results show that the variable gain controller based on the combined state observers mitigates the vibrations of the system and the swinging of the payload. The material presented above is embedded into an interrelated thesis. A concise introduction to vision-based control and state estimation problems is given in the first chapter. An extensive survey of available visual servoing algorithms, covering both rigid and flexible robot systems, is also presented. The conclusions of the work and suggestions for future research are provided in the last chapter of this thesis.
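The delay-compensation idea described above can be illustrated with a minimal numerical sketch. All values below (model, latency, observer gains) are assumed for illustration and are not taken from the thesis: a two-state flexible-mode model is observed through a vision measurement that arrives several samples late, and the observer applies its correction at the delayed time instant and then re-predicts forward with the model.

```python
# Minimal sketch of delay compensation in a vision-based observer.
# All numbers (model, delay, gains) are assumed for illustration only.

dt, wn, zeta = 0.01, 8.0, 0.05         # sample time, natural freq, damping
A = [[1.0, dt],                        # Euler-discretized flexible mode:
     [-wn * wn * dt, 1.0 - 2.0 * zeta * wn * dt]]   # x = [deflection, rate]

def step(x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

delay = 5                 # camera latency in samples (assumed)
L = [0.4, 2.0]            # observer gain, hand-tuned for this sketch

x_true, x_hat = [0.01, 0.0], [0.0, 0.0]
true_defl, est_hist = [], []
for k in range(400):
    true_defl.append(x_true[0])
    est_hist.append(list(x_hat))
    if k >= delay:
        y = true_defl[k - delay]              # delayed vision measurement
        innov = y - est_hist[k - delay][0]    # innovation at time k - delay
        xc = [est_hist[k - delay][0] + L[0] * innov,
              est_hist[k - delay][1] + L[1] * innov]
        for _ in range(delay):                # re-predict up to current time
            xc = step(xc)
        x_hat = xc
    x_true = step(x_true)
    x_hat = step(x_hat)

err = abs(x_hat[0] - x_true[0])               # residual deflection error
```

In the thesis the fast strain-gauge observer would run in parallel and supply the fast dynamical states; only the slow, delay-compensated vision path is sketched here.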

    Robust fulfillment of constraints in robot visual servoing

    In this work, an approach based on sliding mode ideas is proposed to satisfy constraints in robot visual servoing. In particular, different types of constraints are defined in order to fulfill the visibility constraints (camera field-of-view and occlusions) for the image features of the detected object, to avoid exceeding the joint range limits and maximum joint speeds, and to avoid forbidden areas in the robot workspace. Moreover, another, low-priority task is considered to track the target object. The main advantages of the proposed approach are low computational cost, robustness, and full utilization of the space allowed by the constraints. The applicability and effectiveness of the proposed approach are demonstrated by simulation results for a simple 2D case and a complex 3D case study. Furthermore, the feasibility and robustness of the proposed approach are substantiated by experimental results using a conventional 6R industrial manipulator. This work was supported in part by the Spanish Government under grant BES-2010-038486 and Project DPI2013-42302-R, and by the Generalitat Valenciana under grants VALi+d APOSTD/2016/044 and BEST/2017/029. Muñoz-Benavent, P.; Gracia Calandin, LI.; Solanes Galbis, JE.; Esparza Peidro, A.; Tornero Montserrat, J. (2018). Robust fulfillment of constraints in robot visual servoing. Control Engineering Practice. 71(1):79-95. https://doi.org/10.1016/j.conengprac.2017.10.017
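The core mechanism, a discontinuous corrective action that activates only on the constraint boundary, can be sketched for a hypothetical 1-DOF case. The gains, limits, and tracking law below are illustrative assumptions, not the paper's formulation:

```python
# Hypothetical 1-DOF sketch of sliding-mode constraint fulfillment:
# a low-priority tracking velocity drives the joint toward a target that
# violates the joint limit; a sliding-mode term activates on the
# constraint boundary and keeps the joint inside the allowed range.

dt = 0.001
q_max, margin = 1.0, 0.05     # joint limit and safety margin (assumed)
K = 2.0                       # sliding-mode gain, larger than any task velocity
q, target = 0.0, 1.5          # the target lies beyond the joint limit

for _ in range(3000):
    v_track = 1.0 * (target - q)       # low-priority tracking task
    sigma = q - (q_max - margin)       # constraint function, enforce sigma <= 0
    v = -K if sigma > 0 else v_track   # discontinuous constraint override
    q += v * dt
```

The sign-type action produces the well-known chattering, a small oscillation around the boundary; the paper's multi-constraint formulation generalizes this idea to field-of-view, joint-limit, and workspace constraints while the tracking task runs at lower priority.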

    Control of robot-camera system with actuator's dynamics to track moving object

    This study presents a solution to the control of a robot–camera system with actuator dynamics to track a moving object when many uncertain parameters exist in the system's dynamics. After modeling and analyzing the system, the paper suggests a new control method using an online learning neural network in closed loop to control the pan–tilt platform that moves the camera to keep tracking an unknown moving object. The control structure, based on the image feature error, determines the necessary rotational velocities of the pan and tilt joints and computes the voltages controlling the DC motors in the joints so that the object's image always remains at the center of the image plane. The global asymptotic stability of the closed loop is proven by Lyapunov's direct stability theory. Simulation results in Matlab show that the system tracks the object quickly and stably
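Stripped of the neural network and the motor-voltage loop, the underlying image-based structure maps the pixel error of the object to pan and tilt rates. The sketch below uses a small-angle pinhole approximation with an assumed gain and focal length; it is not the paper's controller, only the kinematic core of such a scheme:

```python
# Kinematic core of image-based pan-tilt tracking (illustrative values):
# pixel error of the target is mapped to joint rates that re-center it.

f = 800.0                  # focal length in pixels (assumed)
lam = 4.0                  # proportional visual-servoing gain (assumed)
dt = 0.01
pan, tilt = 0.0, 0.0       # current platform angles, radians
t_pan, t_tilt = 0.2, -0.1  # direction of a (momentarily static) target

for _ in range(500):
    # small-angle pinhole model: pixel error ~ focal length * angular error
    ex = f * (t_pan - pan)
    ey = f * (t_tilt - tilt)
    pan += (lam * ex / f) * dt     # commanded pan rate, integrated one step
    tilt += (lam * ey / f) * dt
```

In the paper this outer loop is augmented with an online-learning neural network that compensates the uncertain DC-motor dynamics, with stability shown by Lyapunov's direct method.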

    Brain–Machine Interface and Visual Compressive Sensing-Based Teleoperation Control of an Exoskeleton Robot

    This paper presents a teleoperation control for an exoskeleton robotic system based on a brain–machine interface and vision feedback. Vision compressive sensing, brain–machine reference commands, and adaptive fuzzy controllers in joint space are effectively integrated to enable the robot to perform manipulation tasks guided by the human operator's mind. First, a visual-feedback link is implemented via video captured by a camera, allowing the operator to visualize the manipulator's workspace and the movements being executed. The compressed images are then used as feedback errors in a non-vector space for producing steady-state visual evoked potential electroencephalography (EEG) signals; in contrast to traditional visual servoing, this requires no prior information on image features. The proposed EEG decoding algorithm generates control signals for the exoskeleton robot using features extracted from neural activity. Considering the coupled dynamics and actuator input constraints during robot manipulation, a local adaptive fuzzy controller is designed to drive the exoskeleton to track the trajectories intended by the human operator and to provide a convenient means of dynamics compensation with minimal knowledge of the dynamic parameters of the exoskeleton robot. Extensive experimental studies with three subjects have been performed to verify the validity of the proposed method

    Toward Vision-based Control of Heavy-Duty and Long-Reach Robotic Manipulators

    Heavy-duty mobile machines are an important part of the industry, and they are used for various work tasks in mining, construction, forestry, and agriculture. Many of these machines have heavy-duty, long-reach (HDLR) manipulators attached to them, which are used for work tasks such as drilling, lifting, and grabbing. A robotic manipulator, by definition, is a device used for manipulating materials without direct physical contact by a human operator. HDLR manipulators differ from manipulators of conventional industrial robots in the sense that they are subject to much larger kinematic and non-kinematic errors, which hinder the overall accuracy and repeatability of the robot’s tool center point (TCP). Kinematic errors result from modeling inaccuracies, while non-kinematic errors include structural flexibility and bending, thermal effects, backlash, and sensor resolution. Furthermore, conventional six degrees of freedom (DOF) industrial robots are more general-purpose systems, whereas HDLR manipulators are mostly designed for special (or single) purposes. HDLR manipulators are typically built as lightweight as possible while being able to handle significant load masses. Consequently, they have long reaches and high payload-to-own-weight ratios, which contribute to the increased errors compared to conventional industrial robots. For example, a joint angle measurement error of 0.5° associated with a 5-m-long rigid link results in an error of approximately 4.4 cm at the end of the link, with further errors resulting from flexibility and other non-kinematic aspects. The target TCP positioning accuracy for HDLR manipulators is in the sub-centimeter range, which is very difficult to achieve in practical systems. These challenges have somewhat delayed the automation of HDLR manipulators, while conventional industrial robots have long been commercially available. 
This is also attributed to the fact that machines with HDLR manipulators have much lower production volumes, and the work tasks are more non-repetitive in nature compared to conventional industrial robots in factories. Sensors are a key requirement in order to achieve automated operations and eventually full autonomy. For example, humans mostly rely on their visual perception in work tasks, while the collected information is processed in the brain. Much like humans, autonomous machines also require both sensing and intelligent processing of the collected sensor data. This dissertation investigates new visual sensing solutions for HDLR manipulators, which are striving toward increased automation levels in various work tasks. The focus is on visual perception and generic 6 DOF TCP pose estimation of HDLR manipulators in unknown (or unstructured) environments. Methods for increasing the robustness and reliability of visual perception systems are examined by exploiting sensor redundancy and data fusion. Vision-aided control using targetless, motion-based local calibration between an HDLR manipulator and a visual sensor is also proposed to improve the absolute positioning accuracy of the TCP despite the kinematic and non-kinematic errors present in the system. It is experimentally shown that a sub-centimeter TCP positioning accuracy was reliably achieved in the tested cases using a developed trajectory-matching-based method. Overall, this compendium thesis includes four publications and one unpublished manuscript related to these topics. Two main research problems, inspired by the industry, are considered and investigated in the presented publications. The outcome of this thesis provides insight into possible applications and benefits of advanced visual perception systems for HDLR manipulators in dynamic, unstructured environments. The main contribution is related to achieving sub-centimeter TCP positioning accuracy for an HDLR manipulator using a low-cost camera. 
The numerous challenges and complexities related to HDLR manipulators and visual sensing are also highlighted and discussed.
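The error figure quoted above follows from simple trigonometry, reproduced here as a check:

```python
import math

# A 0.5 degree joint-angle error at the base of a 5 m rigid link displaces
# the link tip by roughly L * sin(error), before any flexibility effects.
L = 5.0                                  # link length in meters
err = math.radians(0.5)                  # joint angle measurement error
tip_error_cm = L * math.sin(err) * 100.0
# about 4.4 cm, well above the sub-centimeter target accuracy
```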

    Industrial Robotics

    This book covers a wide range of topics relating to advanced industrial robotics, sensors, and automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest cutting-edge technologies and advancements in industrial robotics technology. This book covers topics such as networking, properties of manipulators, forward and inverse robot arm kinematics, motion path-planning, machine vision, and many other practical topics too numerous to list here. The authors and editor of this book wish to inspire people, especially young ones, to get involved with robotic and mechatronic engineering technology and to develop new and exciting practical applications, perhaps using the ideas and concepts presented herein.

    High-Speed Vision and Force Feedback for Motion-Controlled Industrial Manipulators

    Over the last decades, both force sensors and cameras have emerged as useful sensors for different applications in robotics. This thesis considers a number of dynamic visual tracking and control problems, as well as the integration of these techniques with contact force control. Different topics ranging from basic theory to system implementation and applications are treated. A new interface developed for external sensor control is presented, designed by making non-intrusive extensions to a standard industrial robot control system. The structure of these extensions is presented, the system properties are modeled and experimentally verified, and results from force-controlled stub grinding and deburring experiments are presented. A novel system for force-controlled drilling using a standard industrial robot is also demonstrated. The solution is based on the use of force feedback to control the contact forces and the sliding motions of the pressure foot, which would otherwise occur during the drilling phase. Basic methods for feature-based tracking and servoing are presented, together with an extension for constrained motion estimation based on a dual quaternion pose parametrization. A method for multi-camera real-time rigid body tracking with time constraints is also presented, based on an optimal selection of the measured features. The developed tracking methods are used as the basis for two different approaches to vision/force control, which are illustrated in experiments. Intensity-based techniques for tracking and vision-based control are also developed. A dynamic visual tracking technique based directly on the image intensity measurements is presented, together with new stability-based methods suitable for dynamic tracking and feedback problems. The stability-based methods outperform the previous methods in many situations, as shown in simulations and experiments.
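The dual quaternion pose parametrization mentioned above represents a rigid transform as a pair (real, dual) of quaternions and composes poses by dual-quaternion multiplication, which keeps rotation and translation on a single smooth manifold. A minimal, self-contained sketch follows; the conventions are assumed, not taken from the thesis:

```python
import math

# Minimal dual quaternion pose representation (conventions assumed):
# a rigid transform is the pair (qr, qd) with qd = 0.5 * t_quat * qr.

def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def dq_from_rt(qr, t):
    """Build a unit dual quaternion from rotation qr and translation t."""
    qd = qmul((0.0, *t), qr)
    return (qr, tuple(0.5 * c for c in qd))

def dq_mul(A, B):
    """Compose poses: applies B first, then A (like a matrix product)."""
    (qr1, qd1), (qr2, qd2) = A, B
    qr = qmul(qr1, qr2)
    qd = tuple(p + q for p, q in zip(qmul(qr1, qd2), qmul(qd1, qr2)))
    return (qr, qd)

def dq_translation(D):
    """Recover the translation part: t_quat = 2 * qd * conj(qr)."""
    qr, qd = D
    conj = (qr[0], -qr[1], -qr[2], -qr[3])
    return qmul(tuple(2.0 * c for c in qd), conj)[1:]

# demo: a unit translation along x followed by a 90-degree rotation
# about z; the translation is carried into the rotated frame
qz90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
D = dq_mul(dq_from_rt(qz90, (0.0, 0.0, 0.0)),
           dq_from_rt((1.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))
t = dq_translation(D)   # approximately (0, 1, 0)
```

In the constrained motion estimation setting, this joint parametrization lets pose constraints be imposed on one algebraic object instead of on separately estimated rotation and translation parameters.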