
    Hybrid Vision and Force Control in Robotic Manufacturing Systems

    The ability to physically interact with a workpiece in the environment is essential for an industrial robot to perform a successful manipulation task. A wide range of operations, such as deburring, pushing, and polishing, fall into this context. The key to accomplishing such operations with a robot is to simultaneously control the position of the end-effector's tool-tip and the interaction force between the tool and the workpiece, which is a challenging task. This thesis aims to develop new, reliable control strategies that combine vision and force feedback to track a path on the workpiece while controlling the contact force. To fulfill this task, novel robust hybrid vision and force control approaches are presented for industrial robots subject to uncertainties and interacting with unknown workpieces. The main contributions of this thesis lie in several parts. In the first part, a robust cascade vision and force approach is proposed to control industrial robots interacting with unknown workpieces under model uncertainties. This cascade structure, consisting of an inner vision loop and an outer force loop, avoids the conflict between force and vision control found in traditional hybrid methods without decoupling the force and vision systems. In the second part, a novel image-based task-sequence/path planning scheme, coupled with a robust vision and force control method, is proposed to solve the multi-task operation problem of an eye-in-hand (EIH) industrial robot interacting with a workpiece. Each task is defined as tracking a predefined path or positioning to a single point on the workpiece's surface with a desired interaction force signal, i.e., interaction with the workpiece.
The proposed method comprises an optimal task-sequence planning scheme to carry out all the tasks and an optimal path planning method to generate a collision-free path between tasks, i.e., when the robot performs free motion (pure vision control). In the third part, a novel multi-stage method for robust hybrid vision and force control of industrial robots subject to model uncertainties is proposed. It aims to improve the performance of the three phases of the control process: a) free motion using image-based visual servoing (IBVS) before interaction with the workpiece; b) the moment the end-effector touches the workpiece; and c) hybrid vision and force control during interaction with the workpiece. In the fourth part, a novel approach for hybrid vision and force control of eye-in-hand industrial robots is presented which addresses the camera's field-of-view (FOV) limitation. The merit of the proposed method is that it enables eye-in-hand industrial robots to perform interaction tasks on workpieces larger than the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
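    The cascade structure from the first contribution, an outer force loop wrapped around an inner vision loop, can be illustrated with a toy one-degree-of-freedom sketch in which the outer loop turns the force error into a position reference that the inner loop tracks. The stiffness model, gains, and time step below are illustrative assumptions, not values from the thesis.

```python
# Toy 1-DOF sketch of the cascade idea: an outer force loop turns the force
# error into a position reference that an inner (vision) position loop tracks.
# The contact-stiffness model, gains, and time step are illustrative
# assumptions, not values from the thesis.

k_env = 500.0            # assumed environment stiffness (N/m)
f_des = 5.0              # desired contact force (N)
dt = 0.001
kp_f, ki_f = 0.01, 0.1   # outer force-loop PI gains (assumed)
kp_v = 20.0              # inner position-loop gain (assumed)

x = 0.0                  # tool position along the surface normal (m)
x_ref = 0.0              # position reference produced by the outer loop
integ = 0.0

for _ in range(20000):                            # 20 s of simulated time
    f = max(0.0, k_env * x)                       # measured contact force
    e_f = f_des - f
    integ += e_f * dt
    x_ref += (kp_f * e_f + ki_f * integ) * dt     # outer loop shifts the reference
    x += kp_v * (x_ref - x) * dt                  # inner loop tracks the reference

f_final = max(0.0, k_env * x)                     # settles near the desired 5 N
```

    Because the force loop only shifts the reference that the inner loop tracks, the two controllers never issue conflicting commands along the same axis, which is the conflict the cascade structure is designed to avoid.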

    Multi-Robot Systems: Challenges, Trends and Applications

    This book is a printed edition of the Special Issue entitled “Multi-Robot Systems: Challenges, Trends, and Applications” that was published in Applied Sciences. The Special Issue collected seventeen high-quality papers that discuss the main challenges of multi-robot systems, present trends to address these issues, and report various relevant applications. Some of the topics addressed by these papers are robot swarms, mission planning, robot teaming, machine learning, immersive technologies, search and rescue, and social robotics.

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of issues are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems are currently being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.
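    At the heart of most image-based schemes discussed in this book is the control law v = -λL⁺e, which maps the image-feature error e to a camera velocity v through the pseudoinverse of the interaction matrix L. A minimal sketch for two point features under pure camera translation with known depths follows; the scene geometry, gain, and step size are illustrative assumptions.

```python
import numpy as np

def features(P):
    # Perspective projection of camera-frame 3-D points to normalized image coords.
    return np.concatenate([[X / Z, Y / Z] for X, Y, Z in P])

def interaction_matrix(P):
    # Translational columns of the classic point-feature interaction matrix.
    rows = []
    for X, Y, Z in P:
        x, y = X / Z, Y / Z
        rows.append([-1 / Z, 0.0, x / Z])
        rows.append([0.0, -1 / Z, y / Z])
    return np.array(rows)

P = np.array([[0.3, 0.2, 1.5], [-0.2, 0.1, 1.5]])       # current points (camera frame)
P_des = np.array([[0.1, 0.0, 1.0], [-0.4, -0.1, 1.0]])  # desired configuration
s_des = features(P_des)

lam, dt = 1.0, 0.02
for _ in range(1000):
    e = features(P) - s_des
    v = -lam * np.linalg.pinv(interaction_matrix(P)) @ e   # v = -lambda * L^+ * e
    P = P - v * dt    # translation only: points move opposite the camera velocity

err = np.linalg.norm(features(P) - s_des)   # image error shrinks toward zero
```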

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into the seven categories described next.
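    As a concrete illustration of activity (v), path planning, a minimal A* search on a 4-connected occupancy grid is one of the standard strategies; the grid, unit step costs, and Manhattan heuristic below are toy assumptions.

```python
import heapq

def astar(grid, start, goal):
    # A* on a 4-connected grid: cells with value 0 are free, 1 are obstacles.
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:                       # reconstruct the path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1                    # unit cost per move
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None                               # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))   # detours around the wall in row 1
```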

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is increasing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although there has been significant development in visual servoing, some challenges remain in making it fully functional in an industrial environment. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto a 2D image, which occurs in the camera, creates one source of uncertainty in the system. Another source of uncertainty lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing the control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image-feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where image features leave the camera's FOV.
The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures the success of servoing in the case of feature loss. Next, to deal with external disturbances and uncertainties due to the depth of the features, a third control method is designed that combines proportional-derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth method, semi-off-line trajectory planning, is developed to perform IBVS tasks for a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the profile takes the robot to its desired position, by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning. This yields a convex optimization problem, which leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints and also considers the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
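    The feature-reconstruction idea can be sketched with a generic constant-velocity Kalman filter that keeps predicting an image feature after it leaves the FOV; the motion model, noise covariances, and feature trajectory below are illustrative assumptions, not the thesis's actual filter.

```python
import numpy as np

# Generic constant-velocity Kalman filter tracking one image feature; once the
# feature leaves the FOV the filter runs prediction-only, reconstructing the
# feature position. Motion model, noise levels, and trajectory are assumed.

dt = 0.1
F = np.array([[1, 0, dt, 0],      # state: [u, v, u_dot, v_dot]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1.0]])
H = np.array([[1.0, 0, 0, 0],     # only the feature position is measured
              [0, 1.0, 0, 0]])
Q = 1e-4 * np.eye(4)              # process noise covariance (assumed)
R = 1e-2 * np.eye(2)              # measurement noise covariance (assumed)

x_est = np.zeros(4)
P_cov = np.eye(4)

true_vel = np.array([2.0, 1.0])   # true feature velocity, pixels per unit time
pos = np.array([0.0, 0.0])
rng = np.random.default_rng(0)

for k in range(80):
    pos = pos + true_vel * dt                  # true feature motion
    x_est = F @ x_est                          # predict
    P_cov = F @ P_cov @ F.T + Q
    if k < 50:                                 # feature visible for 50 steps only
        z = pos + rng.normal(0.0, 0.1, 2)      # noisy pixel measurement
        S = H @ P_cov @ H.T + R
        K = P_cov @ H.T @ np.linalg.inv(S)
        x_est = x_est + K @ (z - H @ x_est)    # update only while visible
        P_cov = (np.eye(4) - K @ H) @ P_cov

pred_err = np.linalg.norm(x_est[:2] - pos)     # error after 30 blind steps
```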

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the literature in the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
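    A toy propositional example (not from the paper, and far simpler than the ontology setting it studies) shows why weakening can be gentler than deletion: blocking the unwanted consequence while preserving more of the original knowledge.

```python
from itertools import product

# Toy illustration of deletion vs. weakening (not from the paper).
# KB = {p AND r, p -> q}; the unwanted consequence is q.

def entails(kb, phi, atoms=("p", "q", "r")):
    # KB entails phi iff every truth assignment satisfying KB also satisfies phi.
    for values in product([False, True], repeat=len(atoms)):
        m = dict(zip(atoms, values))
        if all(f(m) for f in kb) and not phi(m):
            return False
    return True

p_and_r = lambda m: m["p"] and m["r"]
p_implies_q = lambda m: (not m["p"]) or m["q"]
q = lambda m: m["q"]
r = lambda m: m["r"]

kb = [p_and_r, p_implies_q]
assert entails(kb, q)             # the unwanted consequence holds

# Classical contraction: delete "p and r" outright; q is gone, but so is r.
deleted = [p_implies_q]
assert not entails(deleted, q) and not entails(deleted, r)

# Pseudo-contraction / gentle repair: weaken "p and r" to "r".
# The entailment of q is still blocked, yet r is preserved.
weakened = [r, p_implies_q]
assert not entails(weakened, q) and entails(weakened, r)
```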

    Data-Driven Methods to Build Robust Legged Robots

    For robots to ever achieve significant autonomy, they need to be able to mitigate performance loss due to uncertainty, typically from a novel environment or morphological variation of their bodies. Legged robots, with their complex dynamics, are particularly challenging to control with principled theory. Hybrid events, uncertainty, and high dimension are all confounding factors for direct analysis of models. On the other hand, direct data-driven methods have proven to be equally difficult to employ. The high dimension and mechanical complexity of legged robots have proven challenging for hardware-in-the-loop strategies to exploit without significant effort by human operators. We advocate that we can exploit both perspectives by capitalizing on qualitative features of mathematical models applicable to legged robots, and use that knowledge to strongly inform data-driven methods. We show that the existence of these simple structures can greatly facilitate robust design of legged robots from a data-driven perspective. We begin by demonstrating that the factorial complexity of hybrid models can be elegantly resolved with computationally tractable algorithms, and establish that a novel form of distributed control is predicted. We then continue by demonstrating that a relaxed version of the famous templates and anchors hypothesis can be used to encode performance objectives in a highly redundant way, allowing robots that have suffered damage to autonomously compensate. We conclude with a deadbeat stabilization result that is quite general, and can be determined without equations of motion.

    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155053/1/gcouncil_1.pd
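    The closing deadbeat idea can be illustrated in a much simpler linear setting than the thesis's general result: for a discrete-time plant, a feedback gain K that makes A - BK nilpotent drives any initial state to the origin in n steps. The double-integrator plant and gain below are a textbook toy, not the thesis's construction.

```python
import numpy as np

# Deadbeat sketch on a discrete double integrator: a gain K making (A - B K)
# nilpotent drives any initial state to the origin in n = 2 steps.

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.5],
              [1.0]])
K = np.array([[1.0, 1.5]])        # places both closed-loop eigenvalues at zero

Acl = A - B @ K
assert np.allclose(Acl @ Acl, np.zeros((2, 2)))   # nilpotent: (A - BK)^2 = 0

x = np.array([[3.0], [-2.0]])     # arbitrary initial state
for _ in range(2):
    x = Acl @ x                   # closed loop: x_{k+1} = (A - B K) x_k
assert np.allclose(x, 0.0)        # at the origin after exactly n = 2 steps
```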

    Autonomous Visual Servo Robotic Capture of Non-cooperative Target

    This doctoral research develops, and validates experimentally, a vision-based control scheme for the autonomous capture of a non-cooperative target by robotic manipulators, aimed at active space debris removal and on-orbit servicing. It focuses on the final capture stage by robotic manipulators, after the orbital rendezvous and proximity maneuvers have been completed. Two challenges have been identified and investigated in this stage: dynamic estimation of the non-cooperative target and autonomous visual servo robotic control. First, an integrated algorithm combining photogrammetry and an extended Kalman filter is proposed for dynamic estimation of the non-cooperative target, whose motion is unknown in advance. To improve the stability and precision of the algorithm, the extended Kalman filter is enhanced by dynamically correcting the distribution of the filter's process noise. Second, the concept of incremental kinematic control is proposed to avoid the multiple solutions that arise in solving the inverse kinematics of robotic manipulators. The proposed target motion estimation and visual servo control algorithms are validated experimentally on a custom-built visual servo manipulator-target system. Electronic hardware for the robotic manipulator and computer software for the visual servo were custom designed and developed. The experimental results demonstrate the effectiveness and advantages of the proposed vision-based robotic control for the autonomous capture of a non-cooperative target. Furthermore, a preliminary study is conducted for a future extension of the robotic control that considers flexible joints.
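    The incremental-kinematic-control idea, updating the joint angles by small Jacobian-based steps instead of solving the full inverse kinematics (which admits multiple solutions), can be sketched on a planar two-link arm; the link lengths, gain, and target below are illustrative assumptions.

```python
import numpy as np

# Incremental kinematic control on a planar 2-link arm: joint angles are
# updated by small Jacobian-pseudoinverse steps toward the target, so no
# closed-form inverse kinematics (with its elbow-up/elbow-down branches)
# is needed. Link lengths, gain, and target are illustrative assumptions.

l1, l2 = 1.0, 1.0

def fk(q):
    # forward kinematics: end-effector position
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

q = np.array([0.3, 0.5])          # initial joint angles (rad)
target = np.array([0.8, 1.2])     # desired end-effector position (reachable)

for _ in range(200):
    dq = np.linalg.pinv(jacobian(q)) @ (target - fk(q))
    q = q + 0.2 * dq              # small incremental step, no IK branch choice

final_err = np.linalg.norm(fk(q) - target)
```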