9 research outputs found

    Sliding Mode Control (SMC) of Image‐Based Visual Servoing for a 6DOF Manipulator

    Accuracy and stability are two fundamental concerns of a visual servoing control system. This chapter presents a sliding mode controller for image-based visual servoing (IBVS) that can increase the accuracy of a 6DOF robotic system with guaranteed stability. The proposed controller combines proportional derivative (PD) control with sliding mode control (SMC) for a 6DOF manipulator. Compared with a conventional proportional or SMC controller, this approach offers faster convergence and better disturbance rejection. Both simulation and experimental results show that the proposed controller can increase the accuracy and robustness of a 6DOF robotic system.
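
    The chapter's exact control law is not given in this abstract; as a minimal, generic sketch of how a PD term and a saturated sliding-mode term can be combined into an IBVS velocity command (an illustration, not the proposed controller), consider the following, where the gains, the boundary-layer width, and the function name are assumptions.

```python
import numpy as np

def pd_smc_ibvs_velocity(e, e_prev, L, dt, kp=0.8, kd=0.1, ks=0.05, phi=0.02):
    """Camera velocity command built from a PD term plus a saturated
    sliding-mode term, mapped through the pseudo-inverse of the interaction
    matrix L (shape (2N, 6)). All gains and the boundary-layer width phi
    are illustrative placeholders."""
    L_pinv = np.linalg.pinv(L)          # (6, 2N): maps image error to a camera twist
    e_dot = (e - e_prev) / dt           # numerical derivative of the feature error
    s = e + (kd / kp) * e_dot           # simple PD-type sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)   # boundary layer instead of sign() to limit chattering
    return -L_pinv @ (kp * e + kd * e_dot + ks * sat)   # 6-vector (vx, vy, vz, wx, wy, wz)
```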

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the application of robots in industrial automation has increased considerably. There is increasing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although there has been significant development in visual servoing, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection from the 3D scene to the 2D image, which occurs in the camera, creates one source of uncertainty in the system. Another source of uncertainty lies in the camera and robot manipulator parameters. Moreover, the camera's limited field of view (FOV) is another issue influencing control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new methods of image-based visual servoing (IBVS) that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where the image features go outside the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures the success of servoing in the case of feature loss. Next, in order to deal with external disturbances and uncertainties due to the depth of the features, a third new control method is designed that combines proportional derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-offline trajectory planning method is developed to perform IBVS tasks for a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile takes the robot to its desired position. This is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning of the robot. This allows a convex optimization problem, which leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints. This method also considers the limitation caused by the camera's FOV. All the developed algorithms in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
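
    As a concrete illustration of the feature-reconstruction idea mentioned above (and not the thesis's actual algorithm), the following minimal sketch tracks one image feature with a constant-velocity Kalman filter and falls back on the prediction whenever the feature leaves the camera's FOV; the class name, motion model, and noise levels are illustrative assumptions.

```python
import numpy as np

class FeaturePredictor:
    """Constant-velocity Kalman filter for one image feature (u, v, du, dv)."""

    def __init__(self, dt, q=1e-3, r=1.0):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)   # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # only pixel position is measured
        self.Q = q * np.eye(4)                            # process noise
        self.R = r * np.eye(2)                            # measurement noise
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, measurement=None):
        # Predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        if measurement is not None:                       # feature visible: correct
            y = measurement - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
        # When the feature has left the FOV (measurement is None), the
        # predicted position self.x[:2] can be fed to the IBVS loop instead.
        return self.x[:2]
```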

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot based on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of topics are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.
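
    As background for the image-based schemes discussed throughout this collection, the sketch below builds the standard 2x6 interaction matrix of a normalized image point and applies the classical proportional IBVS law v = -lambda * pinv(L) * e. This is textbook material rather than any particular chapter's contribution, and the point depth Z is assumed to be available from estimation or approximation.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z,  x * y,        -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z,  1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical proportional IBVS: camera twist v = -lambda * pinv(L) * e."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()   # stacked feature error
    return -lam * np.linalg.pinv(L) @ e
```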

    Hybrid Vision and Force Control in Robotic Manufacturing Systems

    The ability to provide a physical interaction between an industrial robot and a workpiece in the environment is essential for a successful manipulation task. In this context, a wide range of operations such as deburring, pushing, and polishing are considered. The key factor in successfully accomplishing such operations with a robot is to simultaneously control the position of the tool tip of the end-effector and the interaction force between the tool and the workpiece, which is a challenging task. This thesis aims to develop new reliable control strategies combining vision and force feedback to track a path on the workpiece while controlling the contact force. In order to fulfill this task, novel robust hybrid vision and force control approaches are presented for industrial robots subject to uncertainties and interacting with unknown workpieces. The main contributions of this thesis lie in several parts. In the first part of the thesis, a robust cascade vision and force approach is suggested to control industrial robots interacting with unknown workpieces under model uncertainties. This cascade structure, consisting of an inner vision loop and an outer force loop, avoids the conflict between force and vision control found in traditional hybrid methods without decoupling the force and vision systems. In the second part of the thesis, a novel image-based task-sequence/path planning scheme coupled with a robust vision and force control method is suggested for solving the multi-task operation problem of an eye-in-hand (EIH) industrial robot interacting with a workpiece. Each task is defined as tracking a predefined path or positioning to a single point on the workpiece's surface with a desired interaction force signal, i.e., interaction with the workpiece. The proposed method includes an optimal task-sequence planning scheme to carry out all the tasks and an optimal path planning method to generate a collision-free path between the tasks, i.e., when the robot performs free motion (pure vision control). In the third part of the project, a novel multi-stage method for robust hybrid vision and force control of industrial robots subject to model uncertainties is proposed. It aims to improve the performance of the three phases of the control process: a) free motion using image-based visual servoing (IBVS) before the interaction with the workpiece; b) the moment the end-effector touches the workpiece; and c) hybrid vision and force control during the interaction with the workpiece. In the fourth part of the thesis, a novel approach for hybrid vision and force control of eye-in-hand industrial robots is presented, which addresses the problem of the camera's field-of-view (FOV) limitation. The merit of the proposed method is that it effectively expands the portion of the workpiece that eye-in-hand industrial robots can work on, coping with the FOV limitation during interaction tasks on the workpiece. All the developed algorithms in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
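
    To make the contrast with "traditional hybrid methods" concrete, the sketch below implements one step of the classical selection-matrix hybrid scheme, in which vision commands the selected axes and a simple PI force law commands the complementary axes. This is the conventional baseline the thesis argues against, not the proposed cascade controller; all gains and names are illustrative assumptions.

```python
import numpy as np

def hybrid_vision_force_command(v_vision, f_meas, f_des, s_diag, f_int,
                                kf=0.002, ki=0.0002, dt=0.01):
    """One step of a classical selection-matrix hybrid vision/force scheme.

    v_vision : 6-vector twist produced by the visual servoing law
    f_meas   : measured 6-vector wrench, f_des : desired wrench
    s_diag   : six 0/1 flags, 1 = vision-controlled axis, 0 = force-controlled
    f_int    : running integral of the force error (returned updated)
    """
    S = np.diag(s_diag)
    e_f = np.asarray(f_des) - np.asarray(f_meas)     # force error
    f_int = f_int + e_f * dt                         # accumulate the integral term
    v_force = kf * e_f + ki * f_int                  # PI force-to-velocity law
    v_cmd = S @ v_vision + (np.eye(6) - S) @ v_force # split axes between the two loops
    return v_cmd, f_int
```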

    Robust Position-based Visual Servoing of Industrial Robots

    Recently, researchers have tried to use dynamic pose correction methods to improve the accuracy of industrial robots. Dynamic path tracking aims at adjusting the end-effector's pose by using a photogrammetry sensor and an eye-to-hand PBVS scheme. This study aims to enhance the accuracy of an industrial robot by designing a chattering-free digital sliding mode controller integrated with a novel adaptive robust Kalman filter (ARKF), validated on a Puma 560 model in simulation. The study includes Gaussian noise generation, pose estimation, design of the adaptive robust Kalman filter, and design of the chattering-free sliding mode controller. The designed control strategy has been validated and compared with other control strategies in Matlab 2018a Simulink on a 64-bit PC. The main contributions of the research work are summarized as follows. First, the noise removal in the pose estimation is carried out by the novel ARKF. The proposed ARKF deals with the experimental noise generated by the photogrammetry observation sensor C-track 780. It exploits the advantages of an adaptive estimation method for the state noise covariance (Q), least-squares identification for the measurement noise covariance (R), and a robust mechanism for the state error covariance (P). The Gaussian noise generation is based on data collected from the C-track while the robot is stationary. A novel method for estimating the covariance matrix R, considering the effects of both velocity and pose, is suggested. Next, a robust PBVS approach for industrial robots based on a fast discrete sliding mode controller (FDSMC) and the ARKF is proposed. The FDSMC takes advantage of a nonlinear reaching law, which results in faster and more accurate trajectory tracking compared to standard DSMC. Substituting the switching function with a continuous nonlinear reaching law leads to a continuous output and thus eliminates chattering. Additionally, the sliding surface dynamics are taken to be nonlinear, which increases the convergence speed and accuracy. Finally, analysis techniques related to various types of sliding mode controllers have been used for comparison. The kinematic and dynamic models of the Puma 560 with revolute joints are also built for simulation validation. Based on the computed performance indicators, it is shown that, after tuning the parameters of the designed controller, the chattering-free FDSMC integrated with the ARKF can substantially reduce the effect of uncertainties in the robot dynamic model and improve the tracking accuracy of the 6-degree-of-freedom (DOF) robot.
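
    The abstract does not give the FDSMC reaching law itself; as generic background on how a continuous nonlinear reaching law avoids chattering, the sketch below implements one discrete step of a common power-rate reaching law. The gains, the exponent, and the function name are illustrative assumptions and do not reproduce the paper's controller.

```python
import numpy as np

def power_rate_reaching_step(s, dt, k1=5.0, k2=2.0, alpha=0.5):
    """One discrete step of a generic power-rate reaching law:
    s_{k+1} = s_k - dt * (k1 * |s_k|^alpha * sign(s_k) + k2 * s_k).
    The term |s|^alpha * sign(s) is continuous at s = 0 for 0 < alpha < 1,
    so the resulting control output stays continuous and chattering is avoided."""
    s = np.asarray(s, dtype=float)
    reach = k1 * np.abs(s) ** alpha * np.sign(s) + k2 * s
    return s - dt * reach
```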

    Adaptive Finite-Time Model Estimation and Control for Manipulator Visual Servoing using Sliding Mode Control and Neural Networks

    Image-based visual servoing without a model of the system is challenging, since it is hard to obtain an accurate estimate of the hand-eye relationship from visual measurements alone. Yet the accuracy of the estimated hand-eye relationship, expressed in a local linear form as a Jacobian matrix, is important to the whole system's performance. In this article, we propose a finite-time controller together with a Jacobian matrix estimator that combines online and offline estimation. The local linear formulation is derived first. Then, a combination of online and offline methods is used to improve the estimation of the highly coupled and nonlinear hand-eye relationship, using data collected with a depth camera. A neural network (NN) is pre-trained to give a reasonably accurate initial estimate of the Jacobian matrix. An online updating method then modifies the offline-trained NN to obtain a more accurate estimate. Moreover, a sliding mode control algorithm is introduced to realize a finite-time controller. Compared with previous methods, our algorithm achieves better convergence speed. The proposed estimator offers excellent accuracy of the initial estimate and strong tracking capability for the time-varying Jacobian matrix compared with other data-driven estimators. The proposed scheme combines the neural network with the finite-time control effect, which yields faster convergence than exponentially convergent schemes. Another main feature of our algorithm is that the state signals of the system are proven to be semi-globally practically finite-time stable. Several experiments are carried out to validate the proposed algorithm's performance. Comment: 24 pages, 10 figures.
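
    As context for what a purely data-driven Jacobian estimator looks like, the sketch below implements a classical Broyden rank-one update, a common baseline for uncalibrated visual servoing; it is not the NN-plus-online estimator proposed in the article, and the step size and names are illustrative assumptions.

```python
import numpy as np

def broyden_update(J, delta_s, delta_q, beta=0.5, eps=1e-9):
    """Rank-one Broyden correction of an estimated image Jacobian J so that it
    better explains the last observed motion: delta_s ~= J @ delta_q.

    J       : current Jacobian estimate, shape (m, n)
    delta_s : observed change in image features, shape (m,)
    delta_q : corresponding change in joint (or camera) coordinates, shape (n,)
    beta    : update step size in (0, 1]
    """
    delta_q = np.asarray(delta_q, dtype=float)
    residual = np.asarray(delta_s, dtype=float) - J @ delta_q
    denom = delta_q @ delta_q + eps                 # guard against tiny motions
    return J + beta * np.outer(residual, delta_q) / denom
```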

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; and UAV-based change detection.

    XLIII Jornadas de Automática: proceedings: 7, 8, and 9 September 2022, Logroño (La Rioja)

    [Abstract] The Jornadas de Automática (JA) are the most important event of the Spanish Committee of Automatic Control (Comité Español de Automática, CEA), a scientific-technical organization with more than fifty years of history devoted to the dissemination and adoption of automatic control in society. This year marks the forty-third edition of the JA, which constitute the meeting point of the automatic control community in Spain. This edition gives visibility to the new challenges and results of the field and to their use in a large number of applications, including renewable energy, bioengineering, and assistive robotics. Beyond the scientific component, reflected in these proceedings, the JA are a meeting point for different generations of professors, researchers, and professionals, with a social component of vital importance. The 2022 edition of the JA is held in Logroño, the capital of La Rioja, a region known worldwide for the quality of its Denominación de Origen wines, which has taken on the challenge of gaining competitiveness through the green and digital transformation, and which is also known as the cradle of the Spanish language and for promoting the Valle de la Lengua with the help of new technologies, among them intelligent automation. The organizers of these JA, members of the Systems Engineering and Automatic Control area of the Department of Electrical Engineering of the Universidad de La Rioja (UR), are a fundamental pillar supporting the region in the study, implementation, and dissemination of these challenges. This edition, the first fully in-person one after the COVID-19 pandemic, has more than 200 attendees and is held between the Edificio Politécnico of the Escuela Técnica Superior de Ingeniería Industrial and the Monasterio de Yuso in San Millán de la Cogolla, two exceptional venues for the JA. As part of the scientific program, two plenary sessions address, respectively, control solutions for the new energy challenges and data quality for unbiased and trustworthy artificial intelligence (AI). Two round tables discuss applications of AI and the adoption of digital technology in professional practice. In addition, two master classes aligned with state-of-the-art technology are given by industry professionals. The JA also host two competitions: CEABOT, with humanoid robots, and the Control Engineering Contest, focused on UAVs. To all these activities must be added the meetings of the CEA thematic groups, the poster exhibitions of the communications presented at the JA, and the company exhibitors. Finally, during the event the "Premio Nacional de Automática" (2022 edition) and the "Premio CEA al Talento Femenino en Automática", sponsored by the Government of La Rioja (in its first edition), are awarded, together with various prizes within the activities of the CEA thematic groups. The proceedings of the XLIII Jornadas de Automática comprise a total of 143 communications, organized around the nine thematic groups and the two strategic lines of CEA. The selected papers underwent a peer-review process.

    Development and validation of a dynamic model for a PEM fuel cell

    JORNADAS DE AUTOMÁTICA (27) (27.2006.ALMERÍA). The objective of this work is to develop a detailed dynamic model of a PEM fuel cell with a nominal power of 1.2 kW. The developed model includes effects such as flooding and the temperature dynamics, and it is useful for designing and testing controllers for both the purge valve and the fan-based cooling of the stack. A novel treatment of the experimental equation that models the polarization curve has been developed, which considerably simplifies its characterization. Finally, the model has been validated with data taken from a real fuel cell.
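
    The paper's novel treatment of the polarization-curve equation is not reproduced here; as generic background, the sketch below evaluates a commonly used empirical polarization model in which the cell voltage is the open-circuit voltage minus activation, ohmic, and concentration losses. All parameter values are illustrative placeholders and are not fitted to the 1.2 kW stack described in the paper.

```python
import numpy as np

def cell_voltage(i, E_oc=1.0, A=0.03, i0=0.04, R_ohm=2.45e-4, m=2.1e-5, n=8e-3):
    """Common empirical polarization model (per cell):
    V = E_oc - A*ln(i/i0) - R_ohm*i - m*exp(n*i),
    i.e., open-circuit voltage minus activation, ohmic, and concentration losses.
    i is the current density in mA/cm^2; all parameter values are placeholders."""
    i = np.asarray(i, dtype=float)
    v_act = A * np.log(np.maximum(i, i0) / i0)   # activation loss (clamped near i = 0)
    v_ohm = R_ohm * i                            # ohmic loss
    v_conc = m * np.exp(n * i)                   # concentration (mass-transport) loss
    return E_oc - v_act - v_ohm - v_conc

# Example: stack voltage of a hypothetical 46-cell stack over a range of current densities
currents = np.linspace(0.0, 800.0, 5)            # mA/cm^2
print(46 * cell_voltage(currents))
```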