Visual Servoing
The goal of this book is to introduce current applications of machine vision by leading researchers worldwide and to offer knowledge that can be applied broadly to other fields. The book collects the principal contemporary studies in machine vision and makes a persuasive case for the applications in which machine vision is employed. The chapters demonstrate how machine vision theory is realized in different fields. Beginners will find it easy to follow the developments in visual servoing, while engineers, lecturers, and researchers can study the chapters and adapt the methods to further applications
A survey on fractional order control techniques for unmanned aerial and ground vehicles
In recent years, numerous science and engineering applications of fractional calculus to the modeling and control of unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) have been realized. The extra fractional-order derivative terms make it possible to optimize system performance. The review presented in this paper focuses on UAV and UGV control problems that have been addressed with fractional-order techniques over the last decade
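To illustrate how such extra fractional-order terms enter a controller, the sketch below implements a discrete PI^λD^μ law using a truncated Grünwald–Letnikov approximation. The gains, orders, and memory length are illustrative assumptions, not values taken from the survey:

```python
class FOPID:
    """Discrete fractional-order PI^lambda D^mu controller (sketch).

    The fractional integral and derivative of the error are approximated
    by truncated Grunwald-Letnikov sums over a finite error history."""

    def __init__(self, kp, ki, kd, lam, mu, dt, memory=200):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.lam, self.mu, self.dt = lam, mu, dt
        self.memory = memory
        # GL binomial coefficients via c_j = c_{j-1} * (1 - (alpha + 1)/j)
        self.ci = self._gl_coeffs(-lam, memory)  # integral: order -lambda
        self.cd = self._gl_coeffs(mu, memory)    # derivative: order +mu
        self.hist = []                           # newest error first

    @staticmethod
    def _gl_coeffs(alpha, n):
        c = [1.0]
        for j in range(1, n):
            c.append(c[-1] * (1.0 - (alpha + 1.0) / j))
        return c

    def update(self, error):
        self.hist.insert(0, error)
        del self.hist[self.memory:]
        integ = self.dt ** self.lam * sum(
            c * e for c, e in zip(self.ci, self.hist))
        deriv = self.dt ** -self.mu * sum(
            c * e for c, e in zip(self.cd, self.hist))
        return self.kp * error + self.ki * integ + self.kd * deriv
```

With `lam = mu = 1` the coefficients collapse to rectangular integration and a first-order difference, i.e. a classical PID, which is a quick sanity check on the approximation.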
Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain
Sensors provide robotic systems with the information required to perceive changes in unstructured environments and modify their actions accordingly. The robotic controllers that process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which define the most widespread robotic control strategies: visual servoing, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers
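The visual servoing strategy surveyed here is often expressed through the classic image-based control law v = -λ L⁺ (s - s*), in the Chaumette–Hutchinson formulation. A minimal sketch is given below; the point features, depths, and gain are illustrative assumptions, not taken from any surveyed system:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalised image point (x, y) at depth Z,
    following the standard Chaumette-Hutchinson formulation."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, targets, depths, gain=0.5):
    """One step of the classic IBVS law v = -gain * L^+ (s - s*).

    points/targets: current and desired normalised image coordinates,
    depths: estimated point depths. Returns a 6-D camera velocity twist."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(targets)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

When the features coincide with their targets the commanded velocity is zero; otherwise the pseudo-inverse distributes the feature error over the six camera degrees of freedom.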
Fast and robust image feature matching methods for computer vision applications
Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic, populated environments. Such tasks clearly cannot be modeled in all necessary detail as easily as industrial robot tasks; a service robotic system therefore has to be able to sense and interact with its physical surroundings through a multitude of sensors and actuators. Environment sensing is one of the core problems limiting the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile-robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich, relatively inexpensive sensors that can be used to solve a number of key problems in robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, and pose estimation. The key challenge in taking advantage of this powerful and inexpensive sensor is to devise algorithms that can reliably and quickly extract and match the visual information needed to interpret the environment automatically in real time. Although considerable research has been conducted in recent years on algorithms for computer and robot vision, open challenges remain with respect to reliability, accuracy, and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has attracted much attention in the computer vision community because SIFT features are highly distinctive and invariant to scale, rotation, and illumination changes.
In addition, SIFT features are relatively easy to extract and to match against a large database of local features. The SIFT algorithm has, however, two main drawbacks. The first is that its computational complexity increases rapidly with the number of key-points, especially at the matching step, owing to the high dimensionality of the SIFT descriptor. The second is that SIFT features are not robust to large viewpoint changes. These drawbacks limit the practical use of SIFT in robot vision applications, which often require real-time performance and must cope with large viewpoint changes. This dissertation proposes three new approaches to address these constraints: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications
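The matching step that dominates SIFT's cost is typically a nearest-neighbour search over high-dimensional descriptors filtered by Lowe's ratio test. A brute-force sketch follows; the toy descriptors and the 0.8 threshold are illustrative, and the dissertation's speeded-up matching scheme is not reproduced here:

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.8):
    """Brute-force descriptor matching with Lowe's ratio test.

    desc_a: (m, d) array, desc_b: (n, d) array of feature descriptors
    (e.g. 128-D SIFT vectors, n >= 2). Returns (i, j) index pairs where
    the nearest neighbour is sufficiently closer than the second nearest,
    which rejects ambiguous matches between repetitive structures."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # distance to every candidate
        j1, j2 = np.argsort(dists)[:2]              # two nearest neighbours
        if dists[j1] < ratio * dists[j2]:           # ratio test
            matches.append((i, int(j1)))
    return matches
```

The quadratic cost of this loop in the number of key-points is exactly the bottleneck the dissertation targets; practical systems replace it with approximate nearest-neighbour search.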
Robust Position-based Visual Servoing of Industrial Robots
Recently, researchers have tried to use dynamic pose-correction methods to improve the accuracy of industrial robots. Dynamic path tracking adjusts the end-effector's pose by using a photogrammetry sensor in an eye-to-hand PBVS scheme. This study aims to enhance the accuracy of industrial robots by designing a chattering-free digital sliding mode controller integrated with a novel adaptive robust Kalman filter (ARKF), validated in simulation on a Puma 560 model. The study covers Gaussian noise generation, pose estimation, design of the adaptive robust Kalman filter, and design of the chattering-free sliding mode controller. The designed control strategy has been validated and compared with other control strategies in MATLAB 2018a Simulink on a 64-bit PC. The main contributions of the research work are summarized as follows.
First, noise removal in the pose estimation is carried out by the novel ARKF. The proposed ARKF deals with experimental noise from the C-Track 780 photogrammetry sensor. It exploits the advantages of adaptive estimation of the process-noise covariance (Q), least-squares identification of the measurement-noise covariance (R), and a robust mechanism for the state-error covariance (P). The Gaussian noise generation is based on data collected from the C-Track while the robot is stationary. A novel method for estimating the covariance matrix R that considers the effects of both velocity and pose is suggested.
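One common way to adapt the measurement-noise variance online, sketched here for a scalar state, is to estimate it from the recent innovation sequence. This is only an illustrative stand-in for the paper's ARKF, whose Q, R, and P mechanisms are more elaborate:

```python
import numpy as np

def adaptive_kalman_1d(zs, q=1e-4, r0=1.0, window=20):
    """Scalar Kalman filter with innovation-based adaptation of the
    measurement-noise variance R (illustrative sketch, not the ARKF).

    State model: x_k = x_{k-1} + w (variance q), z_k = x_k + v (variance R).
    R is re-estimated as the sample innovation variance minus the
    predicted state variance, floored to stay positive."""
    x, p, r = zs[0], 1.0, r0
    innovations, estimates = [], []
    for z in zs[1:]:
        p_pred = p + q                          # predict
        nu = z - x                              # innovation
        innovations.append(nu)
        if len(innovations) >= window:          # adapt R from recent innovations
            c_v = np.mean(np.square(innovations[-window:]))
            r = max(c_v - p_pred, 1e-8)         # R ~ C_v - H P- H^T, floored
        k = p_pred / (p_pred + r)               # Kalman gain
        x = x + k * nu                          # update state
        p = (1.0 - k) * p_pred                  # update variance
        estimates.append(x)
    return np.array(estimates), r
```

The same innovation statistic drives most adaptive R schemes; the paper additionally conditions R on velocity and pose, which this sketch does not attempt.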
Next, a robust PBVS approach for industrial robots based on a fast discrete sliding mode controller (FDSMC) and the ARKF is proposed. The FDSMC takes advantage of a nonlinear reaching law that yields faster and more accurate trajectory tracking than a standard DSMC. Substituting a continuous nonlinear reaching law for the switching function produces a continuous output and thus eliminates chattering. Additionally, the sliding-surface dynamics are taken to be nonlinear, which increases the convergence speed and accuracy.
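The chattering-suppression idea can be illustrated with a discrete power-rate reaching law, in which the discontinuous sign term is replaced by a continuous |s|^α·sgn(s) term. The gains, exponent, and step size below are illustrative assumptions, not the paper's FDSMC:

```python
def power_reaching_step(s, k1=5.0, k2=2.0, alpha=0.5, dt=0.01):
    """One discrete step of a power-rate reaching law,
    s(k+1) = s(k) - dt * (k1 * |s|^alpha * sgn(s) + k2 * s).

    Replacing the discontinuous sign term of classic sliding mode
    control with |s|^alpha * sgn(s) keeps the control continuous
    near s = 0, which suppresses chattering while retaining a fast
    approach far from the sliding surface."""
    sgn = (s > 0) - (s < 0)
    return s - dt * (k1 * abs(s) ** alpha * sgn + k2 * s)

def reach(s0, steps=1000):
    """Iterate the reaching law and return the sliding-variable history."""
    hist = [s0]
    for _ in range(steps):
        hist.append(power_reaching_step(hist[-1]))
    return hist
```

Iterating from s = 1 drives the sliding variable into a small boundary layer around zero without the fixed-amplitude limit cycle a hard sign term would produce.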
Finally, analysis techniques for various types of sliding mode controllers are used for comparison, and kinematic and dynamic models of the Puma 560 with revolute joints are built for simulation-based validation. The computed performance indicators show that, after tuning the controller parameters, the chattering-free FDSMC integrated with the ARKF can substantially reduce the effect of uncertainties in the robot dynamic model and improve the tracking accuracy of the 6-degree-of-freedom (DOF) robot
Visual Feedback Stabilisation of a Cart Inverted Pendulum
Vision-based object stabilisation is an exciting and challenging area of research, and is one that promises great technical advancements in the field of computer vision. As humans, we are capable of a tremendous array of skilful interactions, particularly when balancing unstable objects that have complex, non-linear dynamics. These complex dynamics impose a difficult control problem, since the object must be stabilised through collaboration between applied forces and vision-based feedback. To coordinate our actions and facilitate delivery of precise amounts of muscle torque, we primarily use our eyes to provide feedback in a closed-loop control scheme. This ability to control an inherently unstable object by vision-only feedback demonstrates an exceptionally high degree of voluntary motor skill. Despite the pervasiveness of vision-based stabilisation in humans and animals, relatively little is known about the neural strategies used to achieve this task.
In the last few decades, with advancements in technology, we have tried to impart the skill of vision-based object stabilisation to machines, with varying degrees of success. Within the context of this research, we continue this pursuit by employing the classic Cart Inverted Pendulum, an inherently unstable, non-linear system, to investigate dynamic object balancing by vision-only feedback. The Inverted Pendulum is considered one of the most fundamental benchmark systems in control theory; as a platform, it provides a strong, well-established test bed for this research.
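A reduced version of the control problem can be sketched by commanding the cart's acceleration from a PD law on the measured pole angle. The model, gains, and time step below are illustrative assumptions standing in for the thesis's full vision pipeline:

```python
import math

def simulate_pendulum(theta0, kp=30.0, kd=8.0, g=9.81, l=0.5,
                      dt=0.001, steps=5000):
    """Stabilise the pole angle of a cart inverted pendulum by commanding
    cart acceleration a = kp*theta + kd*dtheta, as if theta came from a
    vision-based angle measurement.

    Reduced model (pole angle only, cart position ignored):
        l * dd_theta = g * sin(theta) - a * cos(theta)
    Linearised about upright, kp > g and kd > 0 give a stable loop.
    Illustrative sketch only, not the thesis's controller."""
    theta, dtheta = theta0, 0.0
    for _ in range(steps):
        a = kp * theta + kd * dtheta                              # PD control
        ddtheta = (g * math.sin(theta) - a * math.cos(theta)) / l # pole dynamics
        dtheta += ddtheta * dt          # semi-implicit Euler integration
        theta += dtheta * dt
    return theta, dtheta
```

From a 0.2 rad initial tilt the angle decays to essentially zero within the simulated five seconds, matching the overdamped poles of the linearised closed loop.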
We seek to discover what strategies are used to stabilise the Cart Inverted Pendulum, and to determine whether these strategies can be deployed in Real-Time using cost-effective solutions. The thesis confronts, and overcomes, the problems imposed by low-bandwidth USB cameras, such as poor colour balance, image noise and low frame rates, to successfully achieve vision-based stabilisation.
The thesis presents a comprehensive vision-based control system that is capable of balancing an inverted pendulum with a resting oscillation of approximately ±1°. We employ a novel, segment-based location and tracking algorithm, which was found to have excellent noise immunity and enhanced robustness. We successfully demonstrate the resilience of the tracking and pose estimation algorithm against visual disturbances in Real-Time, and with minimal recovery delay. The algorithm was evaluated against peer-reviewed research in terms of processing time, amplitude of oscillation, measurement accuracy and resting oscillation. On each key performance indicator, our system was found in many cases to be superior to those reported in the literature.
The thesis also delivers a complete test software environment, where vision-based algorithms can be evaluated. This environment includes a flexible tracking model generator to allow customisation of visual markers used by the system. We conclude by successfully performing off-line optimisation of our method by means of Artificial Neural Networks, to achieve a significant improvement in angle measurement accuracy.
Goodrich Engine Control Systems and Balfour Beatty Rail Technologie
- …