69 research outputs found

    Optimal Motion of Flexible Objects with Oscillations Elimination at the Final Point

    Get PDF
    In this article, a theoretical justification is presented for one type of skew-symmetric optimal translational motion (moving in the minimal acceptable time) that carries a flexible object, transported by a robot, from its initial position to its final position of absolute quiescence while eliminating oscillations at the end of the motion. The Hamilton-Ostrogradsky principle is used as the criterion for finding the optimal control. Experimental verification of the control is presented using the Orthoglide robot for translational motions, with several masses attached to a flexible beam.
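
    The abstract invokes the Hamilton-Ostrogradsky principle as the optimality criterion. For reference only, the generic stationary-action statement of that principle is given below; the symbols L, q, t_0, and t_f are generic placeholders and are not taken from the paper.

```latex
% Hamilton-Ostrogradsky (stationary action) principle, generic form.
% L is the Lagrangian of the robot-beam system, q its generalized coordinates;
% the specific Lagrangian and boundary data of the paper are not reproduced here.
\delta S = \delta \int_{t_0}^{t_f} L\bigl(q(t), \dot{q}(t), t\bigr)\,dt = 0,
\qquad \delta q(t_0) = \delta q(t_f) = 0.
```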

    Autonomous Robots for Active Removal of Orbital Debris

    Full text link
    This paper presents a vision guidance and control method for autonomous robotic capture and stabilization of orbital objects in a time-critical manner. The method takes into account various operational and physical constraints, including ensuring a smooth capture, handling line-of-sight (LOS) obstructions of the target, and staying within the acceleration, force, and torque limits of the robot. Our approach involves the development of an optimal control framework for an eye-to-hand visual servoing method, which integrates two sequential sub-maneuvers: a pre-capturing maneuver and a post-capturing maneuver, aimed at achieving the shortest possible capture time. Integrating both control strategies enables a seamless transition between them, allowing for real-time switching to the appropriate control system. Moreover, both controllers are adaptively tuned through vision feedback to account for the unknown dynamics of the target. The integrated estimation and control architecture also facilitates fault detection and recovery of the visual feedback in situations where the feedback is temporarily obstructed. The experimental results demonstrate the successful execution of pre- and post-capturing operations on a tumbling and drifting target, despite multiple operational constraints.
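
    The abstract describes two sequential sub-maneuvers with real-time switching between them driven by vision feedback. A minimal Python sketch of that switching logic is given below; the function names, state fields, and gains are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pre_capture_control(ee_pos, grasp_pos, ee_vel, grasp_vel, kp=1.0, kd=0.5):
    """Servo the end-effector toward the predicted grasp point (sketch)."""
    return kp * (grasp_pos - ee_pos) + kd * (grasp_vel - ee_vel)

def post_capture_control(target_twist, kd=2.0):
    """Damp out the residual tumbling motion once the target is grasped (sketch)."""
    return -kd * target_twist

def control_step(state):
    """Switch between the pre- and post-capture controllers from the estimated state."""
    if not state["captured"]:
        return pre_capture_control(state["ee_pos"], state["grasp_pos"],
                                   state["ee_vel"], state["grasp_vel"])
    return post_capture_control(state["target_twist"])
```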

    Image Based Visual Servoing Using Trajectory Planning and Augmented Visual Servoing Controller

    Get PDF
    Robots and automated manufacturing machinery have nowadays become an inseparable part of industry. However, robotic systems are generally limited to operating in highly structured environments. Although sensors such as laser trackers, indoor GPS, and 3D metrology and tracking systems are used for positioning and tracking in manufacturing and assembly tasks, these devices are highly limited by the working environment and the speed of operation, and they are generally very expensive. Thus, the integration of vision sensors with robotic systems, and visual servoing in general, allows robots to work in unstructured spaces by producing non-contact measurements of the working area. However, projecting a 3D space into a 2D image, which happens in the camera, causes the loss of one dimension of data. This creates challenges in vision-based control. Moreover, the nonlinearities and complex structure of a manipulator robot make the problem more challenging. This project aims to develop new reliable visual servoing methods that allow their use in real robotic tasks. The main contributions of this project are in two parts: the visual servoing controller and the trajectory planning algorithm. In the first part of the project, a new image-based visual servoing controller called Augmented Image Based Visual Servoing (AIBVS) is presented. A proportional derivative (PD) controller is developed to generate acceleration as the controlling command of the robot. The stability analysis of the controller is conducted using Lyapunov theory. The developed controller has been tested on a 6 DOF Denso robot. The experimental results on point features and image moment features demonstrate the performance of the proposed AIBVS. Experimental results show that a damped response can be achieved using a PD controller with acceleration output. Moreover, smoother feature and robot trajectories are observed compared to those in conventional IBVS controllers. Later on, this controller is used in a moving-object catching process. Visual servoing controllers have shown difficulty in stabilizing the system in global space. Hence, in the second part of the project, a trajectory planning algorithm is developed to achieve the global stability of the system. The trajectory planning is carried out by parameterizing the camera's velocity screw using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile guides the robot to its desired position. This is done by minimizing the error between the initial and desired features. This method provides a reliable path for the robot considering all robotic constraints. The developed algorithm is tested on a Denso robot. The results show that the trajectory planning algorithm is able to perform visual servoing tasks that are unstable when performed using visual servoing controllers alone.
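
    The first contribution generates camera acceleration, rather than velocity, as the control command from a PD law on the image-feature error. A minimal sketch of such a law follows; the interaction-matrix handling, gains, and function name are assumptions for illustration, not the AIBVS controller of the thesis.

```python
import numpy as np

def pd_acceleration_command(s, s_des, s_dot, L, kp=1.0, kd=0.5):
    """PD law in image space producing a camera acceleration command (sketch).

    s, s_des : current and desired image-feature vectors
    s_dot    : estimated feature velocity
    L        : image interaction (feature Jacobian) matrix
    """
    e = s - s_des                   # feature error
    e_dot = s_dot                   # desired features are constant
    return np.linalg.pinv(L) @ (-kp * e - kd * e_dot)
```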

    Docking Haptics: Extending the Reach of Haptics by Dynamic Combinations of Grounded and Worn Devices

    Full text link
    Grounded haptic devices can provide a variety of forces but have limited working volumes. Wearable haptic devices operate over a large volume but are relatively restricted in the types of stimuli they can generate. We propose the concept of docking haptics, in which different types of haptic devices are dynamically docked at run time. This creates a hybrid system, where the potential feedback depends on the user's location. We show a prototype docking haptic workspace, combining a grounded six degree-of-freedom force feedback arm with a hand exoskeleton. We are able to create the sensation of weight on the hand when it is within reach of the grounded device, but away from the grounded device, hand-referenced force feedback is still available. A user study demonstrates that users can successfully discriminate weight when using docking haptics, but not with the exoskeleton alone. Such hybrid systems would be able to change configuration further, for example docking two grounded devices to a hand in order to deliver twice the force or to extend the working volume. We suggest that the docking haptics concept can thus extend the practical utility of haptics in user interfaces.

    Compliant control of Uni/ Multi- robotic arms with dynamical systems

    Get PDF
    Accomplishment of many interactive tasks hinges on the compliance of humans. Humans demonstrate an impressive capability of making their behavior, and more particularly their motions, comply with the environment in everyday life. In humans, compliance emerges from different facets. For example, many daily activities involve reaching-for-grabbing tasks, where compliance appears in the form of coordination. Humans make their hands' motions comply with each other and with that of the object not only to establish a stable contact and to control the impact force but also to overcome sensorimotor imprecisions. Even though compliance has been studied from different aspects in humans, it is primarily related to impedance control in robotics. In this thesis, we leverage the properties of autonomous dynamical systems (DS) for immediate re-planning and introduce active compliant motion generators for controlling robots in three different scenarios, where compliance does not necessarily mean impedance and hence is not directly related to control in the force/velocity domain. In the first part of the thesis, we propose an active compliant strategy for catching objects in flight, which is less sensitive to the timely control of the interception. The soft catching strategy consists in having the robot follow the object for a short period of time. This leaves more time for the fingers to close on the object at the interception and offers more robustness than a "hard" catching method in which the hand waits for the object at the chosen interception point. We show theoretically that the resulting DS will intercept the object at the interception point, at the right time, and with the desired velocity direction. Stability and convergence of the approach are assessed through Lyapunov stability theory. In the second part, we propose a unified compliant control architecture for coordinately reaching for and grabbing a moving object with a multi-arm robotic system. Due to the complexity of the task and of the system, each arm complies not only with the object's motion but also with the motion of the other arms, in both task and joint spaces. At the task-space level, we propose a unified dynamical system that endows the multi-arm system with both synchronous and asynchronous behaviors and with the capability of smoothly transitioning between the two modes. At the joint-space level, the compliance between the arms is achieved by introducing a centralized inverse kinematics (IK) solver under self-collision avoidance constraints, formulated as a quadratic programming (QP) problem and solved in real time. In the last part, we propose a compliant dynamical system for stably transitioning from free motion to contact. In this part, by modulating the robot's velocity in three regions, we show theoretically and empirically that the robot can (I) stably touch the contact surface, (II) at a desired location, and (III) leave the surface or stop on the surface at a desired point.
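
    The last part modulates the robot's velocity in three regions to transition from free motion to contact. The following Python sketch illustrates that idea for a linear DS and a flat surface; the regions, gains, and fixed surface normal are assumptions for illustration only, not the dynamical system proposed in the thesis.

```python
import numpy as np

def modulated_velocity(x, x_attractor, surface_dist, d_band=0.05):
    """Modulate a nominal linear DS near a contact surface (sketch).

    Region 1: far from the surface, use the nominal DS.
    Region 2: inside a transition band, scale down the normal component.
    Region 3: on the surface, keep only the tangential component.
    """
    normal = np.array([0.0, 0.0, 1.0])       # assumed flat, horizontal surface
    v = -1.0 * (x - x_attractor)             # nominal linear DS toward the attractor
    if surface_dist > d_band:                # region 1: free motion
        return v
    v_n = (v @ normal) * normal              # normal component of the velocity
    v_t = v - v_n                            # tangential component
    if surface_dist > 0.0:                   # region 2: approach band
        return v_t + (surface_dist / d_band) * v_n
    return v_t                               # region 3: in contact
```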

    Advanced Mobile Robotics: Volume 3

    Get PDF
    Mobile robotics is a challenging field with great potential. It covers disciplines including electrical engineering, mechanical engineering, computer science, cognitive science, and social science. It is essential to the design of automated robots, in combination with artificial intelligence, vision, and sensor technologies. Mobile robots are widely used for surveillance, guidance, transportation and entertainment tasks, as well as medical applications. This Special Issue intends to concentrate on recent developments concerning mobile robots and the research surrounding them, in order to enhance studies on the fundamental problems observed in these robots. Various multidisciplinary approaches and integrative contributions, including navigation, learning and adaptation, networked systems, biologically inspired robots, and cognitive methods, are welcome in this Special Issue, both from a research and an application perspective.

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Get PDF
    Nowadays, the applications of robots in industrial automation have increased considerably. There is increasing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although there has been significant development in visual servoing, some challenges still exist in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto a 2D image, which occurs in the camera, creates a source of uncertainty in the system. Another source of uncertainty lies in the camera and robot manipulator's parameters. Moreover, the limited field of view (FOV) of the camera is another issue influencing the control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods that address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots in which the adaptive law deals with the uncertainties of the monocular camera in an eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method can increase the system response speed and improve the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle the situation where the image features go outside the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm can not only improve the system response speed and tracking performance of IBVS, but also ensure the success of servoing in the case of feature loss. Next, in order to deal with external disturbances and uncertainties due to the depth of the features, a third new control method is designed that combines proportional derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. A properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-off-line trajectory planning method is developed to perform IBVS tasks for a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parameterized using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile takes the robot to its desired position. This is done by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning of the robot. This results in a convex optimization problem, which leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints. This method also considers the limitation caused by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
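
    The second contribution reconstructs image features with a Kalman filter when they leave the camera's FOV. A minimal constant-velocity filter for a single feature is sketched below; the motion model, noise levels, and class interface are illustrative assumptions, not the algorithm developed in the thesis.

```python
import numpy as np

class FeaturePredictor:
    """Constant-velocity Kalman filter for one image feature (u, v) (sketch)."""

    def __init__(self, dt=0.02):
        self.x = np.zeros(4)                           # state [u, v, du, dv]
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.eye(2, 4)                          # only (u, v) is measured
        self.Q = 1e-3 * np.eye(4)
        self.R = 1e-1 * np.eye(2)

    def step(self, z=None):
        # Predict the feature with the constant-velocity model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update only when the feature is visible; otherwise keep the prediction.
        if z is not None:
            y = z - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                              # reconstructed (u, v)
```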

    Trajectory planning and control for robot manipulations

    Get PDF
    In order to perform a large variety of tasks in interaction with humans or in human environments, a robot needs to guarantee the safety and comfort of humans. In this context, the robot must adapt its behavior and react to environment changes and human activities. Robots based on learning or motion planning are not able to react fast enough, so we propose to introduce a trajectory controller as an intermediate control layer in the software architecture, between the low-level controller and the higher-level planner. The proposed trajectory controller, based on the concept of Online Trajectory Generation (OTG), allows real-time computation of trajectories and eases communication between the different components, in particular the path planner, the trajectory generator, the collision checker, and the controller. To avoid replanning an entire trajectory in reaction to a change induced by a human, the controller allows locally deforming the trajectory and modifying its time law to accelerate or decelerate the motion. The trajectory controller can also switch from the initial trajectory to a new trajectory. Cubic polynomial functions are used to describe the trajectories; they provide smooth motions and flexibility without requiring complex computations. Moreover, the proposed smoothing algorithms produce aesthetic, human-like motions. This work, conducted as part of the ANR project ICARO, has been integrated and validated on the KUKA LWR robots of the LAAS-CNRS robotic platform.
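
    The controller describes trajectories with cubic polynomial segments, which keeps the online computation simple. The sketch below shows one such segment matching boundary positions and velocities; the interface is an assumption for illustration and not the LAAS-CNRS implementation.

```python
def cubic_segment(q0, q1, v0, v1, T):
    """Coefficients of q(t) = a0 + a1*t + a2*t**2 + a3*t**3 on [0, T]
    that match the boundary positions (q0, q1) and velocities (v0, v1)."""
    a0, a1 = q0, v0
    a2 = (3.0 * (q1 - q0) - (2.0 * v0 + v1) * T) / T**2
    a3 = (2.0 * (q0 - q1) + (v0 + v1) * T) / T**3
    return a0, a1, a2, a3

def evaluate(coeffs, t):
    """Position and velocity of the cubic segment at time t."""
    a0, a1, a2, a3 = coeffs
    q = a0 + a1 * t + a2 * t**2 + a3 * t**3
    dq = a1 + 2.0 * a2 * t + 3.0 * a3 * t**2
    return q, dq
```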

    Autonomous Visual Servo Robotic Capture of Non-cooperative Target

    Get PDF
    This doctoral research develops and experimentally validates a vision-based control scheme for the autonomous capture of a non-cooperative target by robotic manipulators for active space debris removal and on-orbit servicing. It focuses on the final capture stage by robotic manipulators after the orbital rendezvous and proximity maneuvers have been completed. Two challenges have been identified and investigated in this stage: the dynamic estimation of the non-cooperative target and the autonomous visual servo robotic control. First, an integrated algorithm combining photogrammetry and an extended Kalman filter is proposed for the dynamic estimation of the non-cooperative target, because the target is unknown in advance. To improve the stability and precision of the algorithm, the extended Kalman filter is enhanced by dynamically correcting the distribution of its process noise. Second, the concept of incremental kinematic control is proposed to avoid the multiple solutions encountered in solving the inverse kinematics of robotic manipulators. The proposed target motion estimation and visual servo control algorithms are validated experimentally on a custom-built visual servo manipulator-target system. Electronic hardware for the robotic manipulator and computer software for the visual servoing are custom-designed and developed. The experimental results demonstrate the effectiveness and advantages of the proposed vision-based robotic control for the autonomous capture of a non-cooperative target. Furthermore, a preliminary study is conducted for a future extension of the robotic control that considers flexible joints.
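
    The incremental kinematic control idea, as described here, sidesteps the multiple solutions of a full inverse-kinematics solve by updating the joints with small Jacobian-based increments. A minimal Python sketch follows; the gain, step limit, and jacobian(q) interface are illustrative assumptions, not the controller developed in the thesis.

```python
import numpy as np

def incremental_joint_update(q, jacobian, pose_error, gain=0.5, step_limit=0.05):
    """One differential-kinematics step toward reducing the Cartesian error (sketch).

    Rather than solving the full inverse kinematics (which admits multiple
    solutions), the joint vector is nudged by a small increment mapped
    through the Jacobian pseudo-inverse.
    """
    J = jacobian(q)                                   # 6 x n manipulator Jacobian
    dq = gain * np.linalg.pinv(J) @ pose_error        # small joint increment
    dq = np.clip(dq, -step_limit, step_limit)         # keep the step incremental
    return q + dq
```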