69 research outputs found

    Direct Visual Servoing Framework based on Optimal Control for Redundant Joint Structures

    This paper presents a new framework, based on optimal control, for defining dynamic visual controllers to guide any serial-link structure. The method uses optimal control to obtain the desired behaviour in the joint space from a specified cost function that determines how the control effort is distributed over the joints. The approach allows new direct visual controllers to be derived for any mechanical joint system with redundancy. Finally, the authors show experimental results and verification on a real robotic system for several controllers derived from the framework. This work was funded by the Spanish Ministry of Economy, the European FEDER funds and the Valencia Regional Government through the research projects DPI2012-32390 and PROMETEO/2013/085.
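    The abstract does not spell out the cost function; one standard way to distribute effort over redundant joints is the weighted pseudoinverse, sketched below in Python. The Jacobian, weight matrix and task velocity are illustrative numbers, not taken from the paper.

```python
import numpy as np

def weighted_resolution(J, v, W):
    # Joint velocities minimizing qdot^T W qdot subject to J @ qdot = v.
    # W sets how the control effort is distributed over the joints:
    # a large diagonal entry penalizes motion of that joint.
    Winv = np.linalg.inv(W)
    lam = np.linalg.solve(J @ Winv @ J.T, v)  # Lagrange multipliers
    return Winv @ J.T @ lam

# 3-joint redundant structure, 2-D task-space demand (illustrative numbers)
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.8]])
v = np.array([0.3, -0.1])
W = np.diag([1.0, 1.0, 10.0])  # make joint 3 "expensive" to move
qdot = weighted_resolution(J, v, W)
```

    Any positive-definite W yields a different effort distribution, which is how one cost function can generate a family of controllers.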

    Dynamic visual servo control of a 4-axis joint tool to track image trajectories during machining complex shapes

    A large part of the new generation of computer numerical control systems has adopted an architecture based on robotic systems. This architecture improves the implementation of many manufacturing processes in terms of flexibility, efficiency, accuracy and speed. This paper presents a 4-axis robot tool based on a joint structure whose primary use is to machine complex shapes in non-contact processes. A new dynamic visual controller is proposed to control the 4-axis joint structure, with image information used in the control loop to guide the robot tool during the machining task. The controller also eliminates the chaotic joint behavior that appears while tracking the quasi-repetitive trajectories required in machining. Moreover, the robot tool can be coupled to a manipulator robot to form a multi-robot platform for complex manufacturing tasks: the robot tool can machine a piece grasped from the workspace by the manipulator, and the manipulator can in turn be guided by visual information from the robot tool, yielding an intelligent multi-robot platform controlled by a single camera. This work was funded by the Spanish Ministry of Science and Innovation through the research projects DPI2011-22766 and DPI2012-32390.
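    The paper's dynamic controller is not reproduced here, but the kinematic image-based law such controllers build on can be sketched. The interaction matrix below is the standard one for a normalized image point; the feature values and depths are invented for illustration.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Classical interaction (image Jacobian) matrix of a normalized image
    # point (x, y) at depth Z, relating the camera twist
    # [vx vy vz wx wy wz] to the feature velocity (xdot, ydot).
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classic image-based law: v = -gain * L^+ (s - s*),
    # driving the features toward their desired image positions.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ e
```

    When the features sit at their desired positions the commanded camera velocity is zero; tracking an image trajectory amounts to continuously updating the desired features.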

    Optimal Image-Based Guidance of Mobile Manipulators using Direct Visual Servoing

    This paper presents a direct image-based controller for guiding a mobile manipulator. An eye-in-hand camera is used to guide a mobile differential platform equipped with a seven-degree-of-freedom robot arm. The approach is based on an optimal control framework and is used to control mobile manipulators during the tracking of image trajectories while taking robot dynamics into account. The direct approach allows both the manipulator and base dynamics to be considered. The proposed image-based controllers optimize the motor signals sent to the mobile manipulator during tracking by minimizing the control force and torque. As the results show, the proposed direct visual servoing system uses the eye-in-hand camera images to control the base platform and the robot arm concurrently. The optimal framework allows different visual controllers, with different dynamical behaviors during tracking, to be derived. This research was supported by the Valencia Regional Government through project GV/2018/050.
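    A minimal sketch of the "one controller for base plus arm" idea, assuming a stacked Jacobian and a weighted pseudoinverse; the matrices are illustrative, and the paper's actual dynamics-level formulation is richer than this kinematic sketch.

```python
import numpy as np

def whole_body_rates(J_base, J_arm, v_img, W):
    # Stack base and arm Jacobians into one whole-body Jacobian, then
    # resolve the image-space demand with a weighted pseudoinverse so
    # effort can be traded between platform motion and arm motion.
    J = np.hstack([J_base, J_arm])
    Winv = np.linalg.inv(W)
    return Winv @ J.T @ np.linalg.solve(J @ Winv @ J.T, v_img)

# Illustrative numbers: 2 base rates, 3 arm joints, 2-D image velocity
J_base = np.array([[1.0, 0.0],
                   [0.0, 0.2]])
J_arm = np.array([[0.5, 0.1, 0.0],
                  [0.3, 0.7, 0.2]])
W = np.diag([5.0, 5.0, 1.0, 1.0, 1.0])  # base motion is "expensive"
u = whole_body_rates(J_base, J_arm, np.array([0.2, -0.1]), W)
```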

    Visual servo control on a humanoid robot

    This thesis deals with the control of a humanoid robot based on visual servoing. It seeks to confer a degree of autonomy on the robot in tasks such as reaching a desired position, and tracking and/or grasping an object. The autonomy of humanoid robots is crucial to the many services such robots can render, given their ability to combine dexterity and mobility in structured, unstructured or even hazardous environments. To this end, a humanoid robot is fully modeled and the control of its locomotion, conditioned by postural balance and gait stability, is studied. The presented approach accounts for all the joints of the biped robot. To reconcile the reference commands from visual servoing with the discrete locomotion mode of the robot, the study exploits a reactive omnidirectional walking pattern generator and a visual task Jacobian redefined with respect to a floating base on the humanoid robot instead of the stance foot. The redundancy stemming from the high number of degrees of freedom, coupled with the omnidirectional mobility of the robot, is handled within the task-priority framework, allowing configuration-dependent sub-objectives such as improving reachability and manipulability and avoiding joint limits. Beyond a kinematic formulation of visual servoing, the thesis explores a dynamic visual approach and proposes two new visual servoing laws. Lyapunov theory is used first to prove the stability and convergence of the visual closed loop, then to derive a robust adaptive controller for the combined robot-vision dynamics, yielding a uniformly ultimately bounded solution. Finally, all proposed schemes are validated in simulation and experimentally on the humanoid robot NAO.
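    The task-priority framework mentioned above is commonly realized with null-space projection; a sketch with two tasks follows (the matrices are illustrative, not from the thesis).

```python
import numpy as np

def two_task_priority(J1, dx1, J2, dx2):
    # Strict task priority: task 1 (e.g. the visual task) is met exactly;
    # task 2 (e.g. manipulability or joint-limit avoidance) acts only in
    # the null space of task 1, so it can never disturb task 1.
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1   # null-space projector of task 1
    q1 = J1p @ dx1
    q2 = np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ q1)
    return q1 + N1 @ q2

# Illustrative 5-DOF example
J1 = np.array([[1.0, 0.0, 0.5, 0.0, 0.2],
               [0.0, 1.0, 0.0, 0.4, 0.1]])
J2 = np.array([[0.0, 0.0, 1.0, 1.0, 1.0]])
qdot = two_task_priority(J1, np.array([0.1, -0.2]), J2, np.array([0.05]))
```

    Because the secondary command is projected through N1, the primary (visual) task velocity is reproduced exactly regardless of the secondary objective.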

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics and control theory are combined to control the motion of a robot based on visual information extracted from images captured by one or several cameras. On the vision side, ongoing research addresses issues such as the use of different types of image features (or of cameras such as RGBD cameras), high-speed image processing, and convergence properties. As shown in this book, new control schemes allow the system to behave more robustly, more efficiently or more compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking and sensor fusion are also addressed. Visual servoing systems are now being applied in a number of different domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for parallel robots, mobile manipulators and teleoperation, and the application of this type of control in new areas.

    GNC architecture solutions for robust operations of a free-floating space manipulator via image based visual servoing

    On-orbit servicing often requires the use of robotic arms, and a key asset in this kind of operation is autonomy. In this context, the use of optical devices is a solution already analyzed in many studies, both for autonomous rendezvous and docking and for control of the manipulator. In the present paper, simulations for assessing controller performance are carried out in a purpose-built, high-fidelity software architecture in which not only the selected 6-DOF space manipulator is modeled but also a virtual camera, which acquires images of the imported target CAD model in the loop, is included in the GNC loop. This approach exposes several problems that would not emerge in simulations with images of easily identifiable, purposely created markers. To this end, a specific GNC architecture based on finite-state-machine logic is developed. Under this architecture, two Image Based Visual Servoing strategies are performed alternately, commanding only the linear or only the angular velocity of the camera, and switching between the two control techniques when the "stack" or "divergence" condition is triggered. In this way, stable and robust accomplishment of the tasks is achieved for many configurations and for different target models.
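    The switching logic can be sketched as a small finite-state machine; the `stalled`/`diverging` predicates below are hypothetical stand-ins for the paper's "stack" and "divergence" trigger conditions.

```python
from enum import Enum

class Mode(Enum):
    TRANSLATE = 1  # command only linear camera velocity
    ROTATE = 2     # command only angular camera velocity

def next_mode(mode, stalled, diverging):
    # Either trigger condition flips the active IBVS strategy;
    # otherwise the current mode is kept.
    if stalled or diverging:
        return Mode.ROTATE if mode is Mode.TRANSLATE else Mode.TRANSLATE
    return mode
```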

    A neural network-based exploratory learning and motor planning system for co-robots

    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots build an internal model for motor planning and coordination from real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system, we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
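    Motor babbling can be illustrated with a toy linear plant: random commands are issued, their sensory effects recorded, a forward model fitted, and the model inverted for goal-directed planning. The plant matrix is invented for the example; the Calliope's real sensorimotor mapping is nonlinear and learned by a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_plant(u):
    # "Unknown" robot: hypothetical linear mapping from wheel/joint
    # velocities u to the resulting visual/proprioceptive change.
    A = np.array([[0.8, 0.1, 0.0],
                  [0.2, 0.9, 0.3]])
    return A @ u

# Motor babbling: issue random commands and record their sensory effect.
U = rng.uniform(-1, 1, size=(200, 3))
Y = np.array([true_plant(u) for u in U])

# Internal forward model fitted from the babbling data (least squares).
A_hat, *_ = np.linalg.lstsq(U, Y, rcond=None)
A_hat = A_hat.T

def plan(goal_delta):
    # Goal-directed planning: invert the learned forward model.
    return np.linalg.pinv(A_hat) @ goal_delta
```

    After babbling, commanding `plan(goal)` reproduces the requested sensory change on the toy plant, which is the essence of hand-eye-body coordination learned by doing.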

    A quasi-static model-based control methodology for articulated mechanical systems

    Hazardous environments encountered in nuclear clean-up tasks mandate the use of complex robotic systems in many situations. The operation of these systems is currently performed primarily under teleoperation, which is, at best, five times slower than equivalent direct human contact operations. One way to increase remote work efficiency is to automate specific tasks. However, the unstructured, complex nature of the environment, along with the inherent structural flexibility of mobile robot work systems, makes task automation difficult and in many cases impossible. This research considers a quasi-static macroscopic modeling methodology that can be combined with sensor-guided manipulation schemes to achieve the operational accuracies needed for remote work task automation. Application of the methodology begins with an off-line analysis phase in which the system is identified in terms of its ideal D-H parameters and its structural elements. The manipulator is modeled with fundamental components (e.g. beam elements, hydraulic elements) and then analyzed to determine load-dependent functions that predict deflections at each joint and at the end of each link. Next, forces applied at the end-effector and gravity loads are projected into local link coordinates using the undeflected pose of the manipulator. These local loads are then used to calculate deflections, which are expressed as 4-by-4 homogeneous transformations and inserted into the original manipulator transformations to predict end-effector position and orientation (an error/deflection vector). The error/deflection vector is then used to determine corrective actions based on the manipulator's flexibilities, pose and loading.
This corrective action alters the manipulator commands so that the end-effector is moved to the desired location, based on the error between the model predictions and the position commanded using the ideal kinematics. The modeling methodology can readily be applied to any kinematic chain. This allows a conceptual system to be analyzed in terms of basic mechanics and structural deflections. The methodology also allows components such as actuators or links to be interchanged in simulation so that alternative designs may be tested, a capability that could help avoid costly conceptual design flaws at a very early stage of the design process. Real-time compensation strategies have been developed to lessen concerns about structural deformation during use. The compensation strategies presented here show that the modeling methods can be used to increase end-effector accuracy by calculating the deflections and command adjustments iteratively in real time. The iterations show rapid convergence of the adjusted command positions to the desired end-effector location. The compensation methods discussed are easily adapted to systems of any complexity, requiring only changes in the number of variables and equations to solve. Most important, however, is that the modeling methodology, in conjunction with the compensation methods, can correct a significant fraction of the errors associated with manipulator flexibility effects. Implementation in a real-time system involves changes only in path planning, not in low-level control. The modeling methods and deflection predictions were verified using a sub-system of the Oak Ridge National Laboratory's Dual Arm Work Platform. The experimental method used simple, non-contact measurement devices that are minimally intrusive to the manipulator's workspace. The results show good correlation between model and experiment for some configurations.
Experimental results can be extrapolated to predict that errors could be reduced from several inches to several tenths of an inch for systems like the Dual Arm Work Platform in some configurations. Continuing work will investigate applications to selective automation of Decontamination and Dismantlement tasks, using this work as a necessary foundation.
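    The iterative command adjustment described above can be sketched in one dimension, with a hypothetical deflection model standing in for the identified beam/hydraulic elements; the numbers are purely illustrative.

```python
import numpy as np

def deflection(q):
    # Hypothetical load-dependent deflection model (stands in for the
    # beam/hydraulic element model identified off-line).
    return 0.05 * np.sin(q)

def compensate(x_des, iters=8):
    # Iteratively adjust the commanded position so that the *deflected*
    # pose reaches the desired one: q <- q + (x_des - x_predicted).
    q = x_des                       # start from the ideal-kinematics command
    for _ in range(iters):
        x_pred = q + deflection(q)  # model-predicted end-effector position
        q = q + (x_des - x_pred)
    return q
```

    Because the deflection is small relative to the commanded motion, the fixed-point iteration contracts quickly, matching the rapid convergence reported above.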

    Biomimetic Manipulator Control Design for Bimanual Tasks in the Natural Environment

    As robots become more prolific in the human environment, it is important that safe operational procedures are introduced at the same time; typical robot control methods are often very stiff to maintain good positional tracking, but this makes contact (purposeful or accidental) with the robot dangerous. In addition, if robots are to work cooperatively with humans, natural interaction between agents will make tasks easier to perform with less effort and learning time. Stability of the robot is particularly important in this situation, especially as outside forces are likely to affect the manipulator when in a close working environment; for example, a user leaning on the arm, or task-related disturbance at the end-effector. Recent research has discovered the mechanisms of how humans adapt the applied force and impedance during tasks. Studies have been performed to apply this adaptation to robots, with promising results showing an improvement in tracking and effort reduction over other adaptive methods. The basic algorithm is straightforward to implement, and allows the robot to be compliant most of the time and only stiff when required by the task. This allows the robot to work in an environment close to humans, but also suggests that it could create a natural work interaction with a human. In addition, no force sensor is needed, which means the algorithm can be implemented on almost any robot. This work develops a stable control method for bimanual robot tasks, which could also be applied to robot-human interactive tasks. A dynamic model of the Baxter robot is created and verified, which is then used for controller simulations. The biomimetic control algorithm forms the basis of the controller, which is developed into a hybrid control system to improve both task-space and joint-space control when the manipulator is disturbed in the natural environment. 
Fuzzy systems are implemented to remove the need for repetitive and time-consuming parameter tuning, and also allow the controller to actively improve performance during the task. Experimental simulations demonstrate that the hybrid task/joint-space controller performs better than either of its component parts under the same conditions. The fuzzy tuning method is then applied to the hybrid controller and is shown to slightly improve performance as well as automating the gain tuning process. In summary, a novel biomimetic hybrid controller is presented, with a fuzzy mechanism to avoid the gain tuning process, finalised with a demonstration of task suitability in a bimanual-type situation. EPSR
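    The human-like impedance adaptation referred to above is often modeled as stiffness that grows with tracking error and relaxes otherwise; a minimal sketch follows, where the update law and the gains `alpha` and `gamma` are illustrative, not the thesis's algorithm.

```python
import numpy as np

def biomimetic_gain_update(K, e, alpha=5.0, gamma=0.1):
    # Stiffness grows with the tracking error e (outer-product term) and
    # relaxes through the forgetting term gamma*K, so the arm stays
    # compliant by default and stiffens only when the task demands it.
    return K + alpha * np.outer(e, e) - gamma * K

K0 = np.diag([10.0, 10.0])
K_relaxed = biomimetic_gain_update(K0, np.zeros(2))         # no error: relax
K_stiff = biomimetic_gain_update(K0, np.array([1.0, 0.0]))  # error: stiffen
```

    This is also why no force sensor is needed: the update uses only the position tracking error, not measured contact forces.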

    Robot Manipulators

    Robot manipulators have developed more in the direction of industrial robots than of human workers. Recently, the applications of robot manipulators have been broadening in focus: for example, Da Vinci as a medical robot, ASIMO as a humanoid robot, and so on. There are many research topics within the field of robot manipulators, e.g. motion planning, cooperation with a human, and fusion with external sensors such as vision, haptics and force. These include both technical problems in industry and theoretical problems in academia. This book is a collection of papers presenting the latest research issues from around the world.