204 research outputs found

    Neural Control of Bimanual Robots With Guaranteed Global Stability and Motion Precision

    Robots with coordinated dual arms can perform complicated tasks that a single manipulator could hardly achieve. However, more rigorous motion precision is required to guarantee effective cooperation between the two arms, especially when they grasp a common object; in this case, the internal forces applied to the object must be considered in addition to the external forces. Therefore, a prescribed tracking performance at both the transient and steady states is first specified, and a controller is then synthesized to rigorously guarantee the specified motion performance. In the presence of unknown dynamics of both the robot arms and the manipulated object, a neural network approximation technique is employed to compensate for the uncertainties. To extend the semiglobal stability achieved by conventional neural control to global stability, a switching mechanism is integrated into the control design. The effectiveness of the proposed control design has been shown through experiments carried out on the Baxter robot.
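    The prescribed-performance idea above can be sketched numerically: the tracking error is confined to an exponentially shrinking envelope, and feedback acts on a transformed, unconstrained error. The funnel parameters, gain, and disturbance below are illustrative choices, not values from the paper.

```python
import numpy as np

def funnel(t, rho0=1.0, rho_inf=0.05, decay=2.0):
    """Prescribed performance bound rho(t): an exponentially shrinking envelope."""
    return (rho0 - rho_inf) * np.exp(-decay * t) + rho_inf

def transformed_error(e, rho):
    """Map the constrained error e in (-rho, rho) to an unconstrained variable."""
    z = np.clip(e / rho, -0.999, 0.999)   # keep the log argument valid
    return np.log((1.0 + z) / (1.0 - z))

# Simulate the error dynamic e' = -u + d(t) under the transformed-error
# feedback u = k * eps (hypothetical gain k and disturbance d).
dt, k, t, e = 1e-3, 5.0, 0.0, 0.8         # initial error inside the funnel
inside = []
for _ in range(5000):
    eps = transformed_error(e, funnel(t))
    d = 0.2 * np.sin(3.0 * t)             # bounded unknown disturbance
    e += dt * (-k * eps + d)
    t += dt
    inside.append(abs(e) < funnel(t))
print(all(inside))
```

    Because the transformed error blows up as the raw error approaches the envelope, even a simple gain produces arbitrarily strong corrective action near the boundary, which is what keeps the transient inside the prescribed funnel.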

    Trajectory Tracking Control Design for Dual-Arm Robots Using Dynamic Surface Controller

    This paper presents a dynamic surface controller (DSC) for dual-arm robots (DARs) tracking desired trajectories. The DSC algorithm is based on the backstepping technique and the multiple-sliding-surface control principle, with one important addition: low-pass filters are included in the design to avoid the computational complexity of the “explosion of terms”, i.e., the number of terms in the control law rapidly getting out of hand. A controller constructed from this algorithm is simulated on a four-degrees-of-freedom (DOF) dual-arm robot with a complex dynamic model, and the stability of the control system is proved using Lyapunov theory. The simulation results show the effectiveness of the controller, which provides precise tracking performance for the manipulator.
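    The filter trick the abstract describes can be seen on a minimal chain of integrators: the virtual control is passed through a first-order low-pass filter, and the filter's output derivative replaces the analytic derivative that pure backstepping would require. The gains and filter constant below are illustrative, not the paper's.

```python
import numpy as np

# Dynamic surface control sketch for x1' = x2, x2' = u, tracking sin(t).
dt, tau, k1, k2 = 1e-3, 0.01, 5.0, 5.0
x1, x2, beta = 0.5, 0.0, 0.0            # beta: filtered virtual control
for i in range(20000):
    t = i * dt
    e1 = x1 - np.sin(t)
    alpha = -k1 * e1 + np.cos(t)        # virtual control for x2
    beta_dot = (alpha - beta) / tau     # the filter supplies the derivative...
    beta += dt * beta_dot               # ...so alpha is never differentiated
    e2 = x2 - beta
    u = -k2 * e2 + beta_dot             # no symbolic "explosion of terms"
    x1 += dt * x2
    x2 += dt * u
print(abs(x1 - np.sin(20000 * dt)) < 0.05)
```

    The residual tracking error is of the order of the filter constant tau, which is the usual DSC trade-off between implementation simplicity and exactness.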

    Geometry-aware Manipulability Learning, Tracking and Transfer

    Body posture influences human and robot performance in manipulation tasks, as appropriate poses facilitate motion or force exertion along different axes. In robotics, manipulability ellipsoids arise as a powerful descriptor for analyzing, controlling and designing robot dexterity as a function of the joint configuration. This descriptor can be shaped according to different task requirements, such as tracking a desired position or applying a specific force. In this context, this paper presents a novel manipulability transfer framework, a method that allows robots to learn and reproduce manipulability ellipsoids from expert demonstrations. The proposed learning scheme is built on a tensor-based formulation of a Gaussian mixture model that takes into account that manipulability ellipsoids lie on the manifold of symmetric positive-definite matrices. Learning is coupled with a geometry-aware tracking controller that allows robots to follow a desired profile of manipulability ellipsoids. Extensive evaluations in simulation with redundant manipulators, a robotic hand and humanoid agents, as well as an experiment with two real dual-arm systems, validate the feasibility of the approach. Comment: Accepted for publication in the International Journal of Robotics Research (IJRR). Website: https://sites.google.com/view/manipulability. Code: https://github.com/NoemieJaquier/Manipulability. 24 pages, 20 figures, 3 tables, 4 appendices.
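    The central object here, the velocity manipulability ellipsoid M = J Jᵀ, and a simple geometry-aware comparison of two ellipsoids can be sketched as follows. The planar 2-link Jacobian and the log-Euclidean distance are illustrative stand-ins; the paper works with full SPD-manifold machinery and tensor-based GMMs.

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Planar 2-link arm Jacobian (an illustrative stand-in robot)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def manipulability(J):
    """Velocity manipulability ellipsoid M = J J^T, an SPD matrix."""
    return J @ J.T

def spd_log_dist(A, B):
    """Log-Euclidean distance between SPD matrices: a simple geometry-aware
    dissimilarity (a stand-in for the paper's richer SPD-manifold tools)."""
    def logm(S):
        w, V = np.linalg.eigh(S)
        return V @ np.diag(np.log(w)) @ V.T
    return np.linalg.norm(logm(A) - logm(B), "fro")

M1 = manipulability(jacobian_2link([0.3, 1.2]))
M2 = manipulability(jacobian_2link([0.3, 0.4]))
print(np.all(np.linalg.eigvalsh(M1) > 0), spd_log_dist(M1, M2) > 0.0)
```

    Treating ellipsoids as points on the SPD manifold, rather than as flat matrices, is what lets learning and tracking respect positive-definiteness by construction.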

    Collaborative Bimanual Manipulation Using Optimal Motion Adaptation and Interaction Control: Retargetting Human Commands to Feasible Robot Control References

    This article presents a robust and reliable human–robot collaboration (HRC) framework for bimanual manipulation. We propose an optimal motion adaptation method to retarget arbitrary human commands to feasible robot pose references while maintaining payload stability. The framework comprises three modules: 1) a task-space sequential equilibrium and inverse kinematics optimization (task-space SEIKO) for retargeting human commands and enforcing feasibility constraints, 2) an admittance controller to facilitate compliant human–robot physical interactions, and 3) a low-level controller that improves stability during physical interactions. Experimental results show that the proposed framework successfully adapted infeasible and dangerous human commands into continuous motions within safe boundaries and achieved stable grasping and maneuvering of large and heavy objects on a real dual-arm robot via teleoperation and physical interaction. Furthermore, the framework demonstrated its capability in an assembly task with building blocks and an insertion task with industrial power connectors.
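    The admittance module (2) can be sketched in one degree of freedom: a measured interaction force is mapped through a virtual mass-damper-spring into a compliant displacement of the commanded pose. All parameters below are illustrative, not the paper's.

```python
# Minimal 1-DOF admittance law M x'' + D x' + K (x - x_ref) = f_ext.
M, D, K, dt = 2.0, 20.0, 50.0, 1e-3
x, xd, x_ref = 0.0, 0.0, 0.0
for i in range(4000):
    f_ext = 10.0 if i < 2000 else 0.0     # the human pushes, then releases
    xdd = (f_ext - D * xd - K * (x - x_ref)) / M
    xd += dt * xdd
    x += dt * xd
    if i == 1999:
        x_pushed = x                      # pose after 2 s of pushing
print(x_pushed > 0.1, abs(x) < 0.05)      # complies, then returns to x_ref
```

    Tuning M, D and K trades responsiveness against stability of the coupled human-robot system; the critically damped choice here avoids oscillation when the human lets go.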

    Distributed Observer-Based Prescribed Performance Control for Multi-Robot Deformable Object Cooperative Teleoperation

    In this paper, a distributed observer-based prescribed performance control method is proposed for manipulating a common deformable object with a multi-robot teleoperation system. To achieve stable position tracking and the desired cooperative operational performance, we first define a new hybrid error matrix for both the relative distances and the absolute positions of the robots, and then decompose the matrix into two new error terms for cooperative and independent robot control. We then improve the Kelvin-Voigt (K-V) contact model based on the new error terms. Because the center position and the deformation of the object cannot be measured, the object dynamics are expressed through the relative distances of the robots and an equivalent impedance term. Each robot incorporates an observer to estimate the contact force and the object dynamics from its own measurements. To address the position errors caused by biases in the force estimation, we improve the barrier Lyapunov functions (BLFs) by incorporating these errors into the system control, which allows us to achieve a predefined position-tracking performance. An experiment verifies the proposed controller's ability in a dual-telerobot cooperative manipulation task, even when the object is subjected to unknown disturbances. Note to Practitioners — This article is motivated by the limitations of multi-telerobot manipulation of a deformable object, where the deformation of the object cannot be measured directly and force sensors, especially 6-axis force sensors, are very expensive. So that an object manipulated by multiple robots matches the state commanded on the leader side, we propose an object-centric teleoperation framework based on estimates of the contact forces and object dynamics and on improved barrier Lyapunov functions (BLFs).
This framework makes two practical contributions: 1) a control scheme for cooperative teleoperation of a deformable object by multiple robots when the object's center position and deformation are unmeasurable; 2) an improved BLF controller based on the estimation of the contact force and the robot dynamics. The estimation errors are transferred through an equivalent impedance and integrated into the Lyapunov function to minimize both force- and motion-tracking errors. The experimental results verify the effectiveness of the proposed method, and the developed framework can be used in industrial applications with similar scenarios.
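    The sensorless force-estimation idea can be illustrated on a 1-DOF stand-in: the contact obeys a Kelvin-Voigt model, and a momentum-style observer recovers the contact force from the robot's own motion measurements, with no force sensor. Everything below (plant, gains, observer form) is an illustrative sketch, not the paper's distributed design.

```python
# 1-DOF robot m*x'' = u + f_c pressed against a Kelvin-Voigt contact.
m, k, d, dt, Ko = 1.0, 200.0, 5.0, 1e-4, 200.0
x, xd, r, integral = 0.0, 0.0, 0.0, 0.0
wall = -0.01                     # contact begins when x < wall
for _ in range(30000):
    u = -2.0                     # constant push into the contact
    delta = min(0.0, x - wall)   # Kelvin-Voigt deformation
    f_c = -k * delta - d * (xd if delta < 0.0 else 0.0)
    xd += dt * (u + f_c) / m
    x += dt * xd
    integral += dt * (u + r)
    r = Ko * (m * xd - integral)   # residual: first-order estimate of f_c
print(abs(r - f_c) < 0.1)
```

    The residual r obeys r' = Ko (f_c - r), so it tracks the true contact force with bandwidth Ko; the biases such a filter introduces are what the paper's improved BLFs absorb.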


    Imitation Learning of Motion Coordination in Robots: A Dynamical System Approach

    The ease with which humans coordinate all their limbs is fascinating. This simplicity is the result of a complex process of motor coordination, i.e. the ability to resolve the biomechanical redundancy in an efficient and repeatable manner. Coordination enables a wide variety of everyday human activities, from filling a glass with water to pair figure skating. Therefore, it is highly desirable to endow robots with similar skills. Despite the apparent diversity of coordinated motions, all of them share a crucial similarity: these motions are dictated by underlying constraints. The constraints shape the formation of the coordination patterns between the different degrees of freedom. Coordination constraints may take a spatio-temporal form, for instance during bimanual object reaching or while catching a ball on the fly. They may also relate to the dynamics of the task, for instance when one applies a specific force profile to carry a load. In this thesis, we develop a framework for teaching coordination skills to robots. Coordination may take different forms; here, we focus on teaching a robot intra-limb and bimanual coordination, as well as coordination with a human during physical collaborative tasks. We use tools from well-established domains of Bayesian semiparametric learning (Gaussian mixture models and regression, hidden Markov models), nonlinear dynamics, and adaptive control. We take a biologically inspired approach to robot control. Specifically, we adopt an imitation learning perspective on skill transfer, which offers a seamless and intuitive way of capturing the constraints contained in natural human movements. As the robot is taught from motion data provided by a human teacher, we exploit evidence from human motor control that the temporal evolution of human motions may be described by dynamical systems. Throughout this thesis, we demonstrate that the dynamical system view on movement formation facilitates coordination control in robots. 
We explain how our framework for teaching coordination to a robot is built up, starting from intra-limb coordination and control, moving to bimanual coordination, and finally to physical interaction with a human. The dissertation opens with the discussion of learning discrete task-level coordination patterns, such as spatio-temporal constraints emerging between the two arms in bimanual manipulation tasks. The encoding of bimanual constraints occurs at the task level and proceeds through a discretization of the task as sequences of bimanual constraints. Once the constraints are learned, the robot utilizes them to couple the two dynamical systems that generate kinematic trajectories for the hands. Explicit coupling of the dynamical systems ensures accurate reproduction of the learned constraints, and proves to be crucial for successful accomplishment of the task. In the second part of this thesis, we consider learning one-arm control policies. We present an approach to extracting non-linear autonomous dynamical systems from kinematic data of arbitrary point-to-point motions. The proposed method aims to tackle the fundamental questions of learning robot coordination: (i) how to infer a motion representation that captures a multivariate coordination pattern between degrees of freedom and that generalizes this pattern to unseen contexts; (ii) whether the policy learned directly from demonstrations can provide robustness against spatial and temporal perturbations. Finally, we demonstrate that the developed dynamical system approach to coordination may go beyond kinematic motion learning. We consider physical interactions between a robot and a human in situations where they jointly perform manipulation tasks; in particular, the problem of collaborative carrying and positioning of a load. We extend the approach proposed in the second part of this thesis to incorporate haptic information into the learning process. 
As a result, the robot adapts its kinematic motion plan according to human intentions expressed through the haptic signals. Even after the robot has learned the task model, the human still remains a complex contact environment. To ensure robustness of the robot behavior in the face of the variability inherent to human movements, we wrap the learned task model in an adaptive impedance controller with automatic gain tuning. The techniques developed in this thesis have been applied to enable learning of unimanual and bimanual manipulation tasks on the robotic platforms HOAP-3, KATANA, and iCub, as well as to endow a pair of simulated robots with the ability to perform a manipulation task in physical collaboration.
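The robustness-to-perturbation property claimed for dynamical-system policies can be shown with a minimal sketch: a time-invariant system x' = A (x - x_goal) with stable A converges to the goal from any start, because the policy depends on state, not on a time index. In the thesis A is learned from demonstrations; here it is a hand-picked example.

```python
import numpy as np

A = np.array([[-2.0,  1.0],
              [-1.0, -2.0]])            # eigenvalues -2 ± i: stable spiral
x_goal = np.array([1.0, 0.5])

def rollout(x0, dt=1e-2, steps=1000):
    """Integrate the motion generator x' = A (x - x_goal) from x0."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * A @ (x - x_goal)      # state-dependent, not time-indexed
    return x

# Two very different starts (e.g. after a spatial perturbation) reach the goal.
print(np.allclose(rollout([5.0, -3.0]), x_goal, atol=1e-3),
      np.allclose(rollout([-4.0, 4.0]), x_goal, atol=1e-3))
```

    Displacing the state mid-motion simply restarts the same vector field from a new point, which is why these policies tolerate spatial and temporal perturbations without replanning.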

    Human-robot co-carrying using visual and force sensing

    In this paper, we propose a hybrid framework using visual and force sensing for human-robot co-carrying tasks. Visual sensing is utilized to obtain the human motion, and an observer is designed to estimate the human's control input, which generates the robot's desired motion toward the human's intended motion. An adaptive impedance-based control strategy is proposed for trajectory tracking, with neural networks (NNs) used to compensate for uncertainties in the robot's dynamics. Motion synchronization is achieved, and this approach yields a stable and efficient interaction behavior between human and robot, decreases the human's control effort, and avoids interfering with the human during the interaction. The proposed framework is validated on a co-carrying task in simulations and experiments.
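    The NN-compensation ingredient can be sketched on a scalar plant x' = u + g(x) with g unknown: the controller uses u = -k*e - W @ phi(x) and adapts the RBF weights W online. The plant, features, and gains below are illustrative assumptions, not the paper's design.

```python
import numpy as np

centers = np.linspace(-2.0, 2.0, 11)
def phi(x):                              # Gaussian RBF features
    return np.exp(-((x - centers) ** 2) / 0.5)

g = lambda x: 1.5 * np.sin(2.0 * x)      # "unknown" dynamics term
k, gamma, dt, x_ref = 4.0, 20.0, 1e-3, 0.8
x, W = 1.5, np.zeros(11)
errs = []
for _ in range(20000):
    e = x - x_ref
    u = -k * e - W @ phi(x)              # feedback plus NN compensation
    W += dt * gamma * phi(x) * e         # gradient-type adaptation law
    x += dt * (u + g(x))
    errs.append(abs(e))
print(errs[-1] < 0.05)
```

    Without the adaptive term, the unknown g(x) would leave a steady-state offset of g(x_ref)/k; the adaptation integrates that offset away.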

    Bimanual robot skills: MP encoding, dimensionality reduction and reinforcement learning

    In our culture, robots have featured in novels and cinema for a long time, but it has been especially in the last two decades that improvements in hardware - better computational power and components - and advances in artificial intelligence (AI) have allowed robots to start sharing spaces with humans. Such situations require, aside from ethical considerations, that robots be able to move with both compliance and precision, and learn at different levels, such as perception, planning, and motion, the latter being the focus of this work. The first issue addressed in this thesis is inverse kinematics for redundant robot manipulators, i.e., positioning the robot joints so as to reach a certain end-effector pose. We opt for iterative solutions based on the inversion of the kinematic Jacobian of a robot, and propose to filter and limit the gains in the spectral domain, while also unifying this approach within a continuous, multipriority scheme. This inverse kinematics method is then used to derive manipulability over the whole workspace of an anthropomorphic arm, and the coordination of two arms is subsequently optimized by finding their best relative positioning. Having solved the kinematic issues, a robot learning within a human environment needs to move compliantly, with a limited amount of force, in order not to harm any humans or cause any damage, while being as precise as possible. Therefore, we developed two dynamic models for the same redundant arm we had analysed kinematically: the first based on local models with Gaussian projections, and the second characterizing the most problematic term of the dynamics, namely friction. These models allowed us to implement feed-forward controllers, in which we can actively change the weights in the compliance-precision tradeoff. Moreover, we used these models to predict external forces acting on the robot, without the use of force sensors. 
Afterwards, we noticed that bimanual robots must coordinate their components (or limbs) and be able to adapt to new situations with ease. Over the last decade, a number of successful applications for learning robot motion tasks have been published. However, due to the complexity of a complete system including all the required elements, most of these applications involve only simple robots with a large number of high-end sensors, or consist of very simple tasks in a controlled environment. Using our previous framework for kinematics and control, we relied on two types of movement primitives to encapsulate robot motion. Such movement primitives are very suitable for reinforcement learning. In particular, we used direct policy search, which uses the motion parametrization as the policy itself. To improve the learning speed in real-robot applications, we generalized a policy search algorithm to give some importance to samples yielding a bad result, and we paid special attention to the dimensionality of the motion parametrization. We reduced this dimensionality with linear methods, using the rewards obtained through motion repetition and execution. We tested this framework on a bimanual task performed by two anthropomorphic arms, the folding of garments, showing how a reduced dimensionality can provide qualitative information about robot couplings and help to speed up the learning of tasks when robot motion executions are costly.
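The Jacobian-inversion step described above can be sketched with an SVD-filtered (damped) pseudo-inverse: singular values near zero are damped so joint steps stay bounded near singularities. The planar 2-link arm and the damping rule are an illustrative stand-in for the thesis' spectral filtering scheme.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def filtered_pinv(J, lam=0.05):
    """Pseudo-inverse with damped spectrum: 1/s becomes s/(s^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(J)
    s_inv = s / (s ** 2 + lam ** 2)     # finite even as s -> 0
    return Vt.T @ np.diag(s_inv) @ U.T

q = np.array([0.2, 2.5])
target = np.array([1.2, 0.8])
for _ in range(200):
    q = q + 0.5 * (filtered_pinv(jac(q)) @ (target - fk(q)))
print(np.linalg.norm(target - fk(q)) < 1e-3)
```

    Damping the small singular values trades a slight bias in well-conditioned configurations for bounded joint velocities near singular ones, which is the essence of filtering the gains in the spectral domain.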