Deep Model Predictive Variable Impedance Control
The capability to adapt compliance by varying muscle stiffness is crucial for
dexterous manipulation skills in humans. Incorporating compliance in robot
motor control is crucial to performing real-world force interaction tasks with
human-level dexterity. This work presents a Deep Model Predictive Variable
Impedance Controller for compliant robotic manipulation which combines Variable
Impedance Control with Model Predictive Control (MPC). A generalized Cartesian
impedance model of a robot manipulator is learned using an exploration strategy
maximizing the information gain. This model is used within an MPC framework to
adapt the impedance parameters of a low-level variable impedance controller to
achieve the desired compliance behavior for different manipulation tasks
without any retraining or finetuning. The deep Model Predictive Variable
Impedance Control approach is evaluated using a Franka Emika Panda robotic
manipulator operating on different manipulation tasks in simulations and real
experiments. The proposed approach was compared with model-free and model-based
reinforcement learning approaches to variable impedance control in terms of
transferability between tasks and performance.
Comment: Preprint submitted to the journal Robotics and Autonomous Systems
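The Cartesian impedance behavior that the MPC layer adapts can be illustrated with a minimal one-dimensional sketch. The point-mass plant, gains, and set-point below are illustrative assumptions, not values from the paper:

```python
def impedance_force(k, d, x_des, x, v_des, v):
    """Impedance law in 1-D: a spring-damper toward the desired motion.
    k and d are the variable stiffness and damping an MPC layer adapts."""
    return k * (x_des - x) + d * (v_des - v)

def settle(k, d, steps=500, dt=1e-3, mass=1.0):
    """Regulate a unit point mass toward x_des = 1 for 0.5 s."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        f = impedance_force(k, d, 1.0, x, 0.0, v)
        v += (f / mass) * dt
        x += v * dt
    return x

soft = settle(k=10.0, d=5.0)     # compliant setting: tracks loosely
stiff = settle(k=200.0, d=30.0)  # stiff setting: tracks tightly
```

Varying k and d online trades tracking accuracy against compliance, which is exactly the degree of freedom the MPC planner exploits across tasks.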
Robotic Manipulator Control in the Presence of Uncertainty
This research focuses on the problem of manipulator control in the presence of uncertainty and aims to compare different approaches for handling uncertainty while developing robust and adaptive methods that can control the robot without explicit knowledge of uncertainty bounds. Uncertainty is a pervasive challenge in robotics, arising from various sources such as sensor noise, modeling errors, and external disturbances. Effectively addressing uncertainty is crucial for achieving accurate and reliable manipulator control.
The research will explore and compare existing methods for uncertainty handling such as robust feedback linearization, sliding mode control, and robust adaptive control. These methods provide mechanisms to model and compensate for uncertainty in the control system. Additionally, modified robust and adaptive control methods will be developed that can dynamically adjust control laws based on the observed states, without requiring explicit knowledge of uncertainty bounds.
To evaluate the performance of the different approaches, comprehensive experiments will be conducted on a manipulator platform. Various manipulation tasks will be performed under different levels of uncertainty, and the performance of each control approach will be assessed in terms of accuracy, stability, and adaptability. Comparative analysis will be conducted to highlight the strengths and weaknesses of each method and identify the most effective approach for handling uncertainty in manipulator control.
The outcomes of this research will contribute to the advancement of manipulator control by providing insights into the effectiveness of different approaches for uncertainty handling. The development of new robust and adaptive control methods will enable manipulators to operate in uncertain environments without requiring explicit knowledge of uncertainty bounds. Ultimately, this research will facilitate the deployment of more reliable and adaptive robotic systems capable of handling uncertainty and improving their performance in various real-world applications.
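One of the surveyed methods, sliding mode control, can be sketched for a double integrator subject to an unknown bounded disturbance. The gains, boundary-layer width, and disturbance below are illustrative assumptions, not taken from the research:

```python
import math

def smc_control(e, e_dot, lam=5.0, K=10.0, phi=0.05):
    """Sliding mode law u = -K * sat(s/phi) on the surface s = e_dot + lam*e.
    K must dominate the disturbance bound; the tanh boundary layer of
    width phi smooths the switching to limit chatter."""
    s = e_dot + lam * e
    return -K * math.tanh(s / phi)

def regulate(dist_amp=3.0, steps=4000, dt=1e-3):
    """Double integrator x'' = u + d with unknown |d| <= dist_amp < K."""
    x, v, t = 1.0, 0.0, 0.0
    for _ in range(steps):
        d = dist_amp * math.sin(7.0 * t)  # disturbance, unseen by the controller
        a = smc_control(x, v) + d
        v += a * dt
        x += v * dt
        t += dt
    return abs(x)

final_err = regulate()
```

Note the controller only assumes a bound on the disturbance, not its form; removing even that bound assumption is precisely what the adaptive variants discussed above target.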
Investigations into an optimal approach for on-line robot trajectory planning and control.
The purpose of this thesis is to present a comprehensive and practical approach for the time-optimal motion planning and control of a general purpose industrial manipulator. In particular, the case of point-to-point path unconstrained motions is considered, with special emphasis towards strategies suitable for efficient on-line implementations. From a dynamic model description of the plant, and using an advanced graphical robotics simulation environment, the control algorithms are formulated. Experimental work is then conducted to verify the proposed algorithms, by interfacing the industrial manipulator to the master controller, implemented on a personal computer. The full rigid-body non-linear dynamics of the open-chain manipulator have been accommodated into the modelling, analysis and design of the control algorithms. For path unconstrained motions, this leads to a model-based regulating strategy between set points, which combines conventional trajectory planning and subsequent control tracking stages into one. Theoretical insights into these two robot motion disciplines are presented, and some are experimentally demonstrated on a CRS A251 industrial arm.
A critical evaluation of current approaches which yield optimal trajectory planning and control of robot manipulators is undertaken, leading to the design of a control solution which is shown to be a combination of Pontryagin's Maximum Principle and state-space methods of design. However, in a real world setting, consideration of the relationship between optimal control and on-line viability highlights the need to approximate manipulator dynamics by a piecewise linear and decoupled function, hence rendering a near-time-optimal solution in feedback form.
The on-line implementation of the proposed controller is presented together with a comparison between simulation and experimental results. Furthermore, these are compared with measurements from the industrial controller. It is shown that the model-based near-time-optimal feedback control algorithms allow faster manipulator motions, with an average speed-up of 14%, clearly outperforming current industrial controller practices in terms of increased productivity. This result was obtained by setting an acceptable absolute error limit on the target location of the joint (position and velocity) to within [2.0E-02 rad, 8.7E-03 rad/s], at which the joint was regarded as at rest.
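The structure of the time-optimal solution can be illustrated in its simplest setting: a double integrator with a symmetric acceleration bound, where Pontryagin's Maximum Principle yields a bang-bang profile switching at mid-move. This is a textbook sketch, not the thesis's full nonlinear formulation:

```python
import math

def bang_bang(dist, a_max):
    """Pontryagin's Maximum Principle on a double integrator gives a
    bang-bang law: full acceleration, then full deceleration, switching
    halfway through a rest-to-rest move of length `dist`."""
    t_switch = math.sqrt(dist / a_max)
    return t_switch, 2.0 * t_switch  # (switch time, total time)

def simulate(dist, a_max, dt=1e-4):
    """Integrate the profile to confirm it arrives at rest on target."""
    t_sw, t_f = bang_bang(dist, a_max)
    x, v, t = 0.0, 0.0, 0.0
    while t < t_f:
        a = a_max if t < t_sw else -a_max
        v += a * dt
        x += v * dt
        t += dt
    return x, v

x_end, v_end = simulate(dist=1.0, a_max=2.0)
```

Approximating the full rigid-body dynamics by piecewise linear, decoupled models, as the thesis proposes, reduces each joint to essentially this structure, which is what makes a near-time-optimal feedback form viable on-line.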
Learning Event-triggered Control from Data through Joint Optimization
We present a framework for model-free learning of event-triggered control
strategies. Event-triggered methods aim to achieve high control performance
while only closing the feedback loop when needed. This enables resource
savings, e.g., network bandwidth if control commands are sent via communication
networks, as in networked control systems. Event-triggered controllers consist
of a communication policy, determining when to communicate, and a control
policy, deciding what to communicate. It is essential to jointly optimize the
two policies since individual optimization does not necessarily yield the
overall optimal solution. To address this need for joint optimization, we
propose a novel algorithm based on hierarchical reinforcement learning. The
resulting algorithm is shown to accomplish high-performance control in line
with resource savings and scales seamlessly to nonlinear and high-dimensional
systems. The method's applicability to real-world scenarios is demonstrated
through experiments on a six degrees of freedom real-time controlled
manipulator. Further, we propose an approach towards evaluating the stability
of the learned neural network policies.
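The split into a communication policy and a control policy can be made concrete with a scalar toy system. The hand-coded threshold rule below stands in for the learned policies, and the plant and gains are illustrative assumptions:

```python
def run(threshold, steps=500, a=0.95):
    """Closed loop for the scalar plant x+ = a*x + u. The communication
    policy transmits the state only when it drifts more than `threshold`
    from the last transmitted value; the control policy computes u from
    that held value (zero-order hold between transmissions)."""
    x, x_sent, sends = 1.0, 1.0, 0
    for _ in range(steps):
        if abs(x - x_sent) > threshold:  # communication policy
            x_sent, sends = x, sends + 1
        u = -0.5 * x_sent                # control policy on the held state
        x = a * x + u
    return abs(x), sends

periodic_err, periodic_sends = run(threshold=0.0)   # transmit on any change
event_err, event_sends = run(threshold=0.05)
```

Even in this toy loop the two policies are coupled: a looser threshold saves transmissions but forces the same feedback gain to tolerate a larger holding error, which is why optimizing them jointly, rather than one at a time, matters.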
Learning to Navigate Cloth using Haptics
We present a controller that allows an arm-like manipulator to navigate
deformable cloth garments in simulation through the use of haptic information.
The main challenge of such a controller is to avoid getting tangled in, tearing
or punching through the deforming cloth. Our controller aggregates force
information from a number of haptic-sensing spheres all along the manipulator
for guidance. Based on haptic forces, each individual sphere updates its target
location, and the conflicts that arise among this set of desired positions are
resolved by solving an inverse kinematics problem with constraints.
Reinforcement learning is used to train the controller for a single
haptic-sensing sphere, where a training run is terminated (and thus penalized)
when large forces are detected due to contact between the sphere and a
simplified model of the cloth. In simulation, we demonstrate successful
navigation of a robotic arm through a variety of garments, including an
isolated sleeve, a jacket, a shirt, and shorts. Our controller outperforms two
baseline controllers: one without haptics and another that was trained based on
large forces between the sphere and cloth, but without early termination.
Comment: Supplementary video available at https://youtu.be/iHqwZPKVd4A.
Related publications http://www.cc.gatech.edu/~karenliu/Robotic_dressing.htm
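The per-sphere target update and the reconciliation step can be sketched as follows. The damped least-squares solve is a simplified stand-in for the paper's constrained inverse-kinematics formulation, and all dimensions and gains are illustrative:

```python
import numpy as np

def retreat_targets(positions, forces, gain=0.05):
    """Each haptic-sensing sphere shifts its target opposite to the
    sensed contact force, so the arm yields instead of tearing cloth."""
    return positions - gain * forces

def resolve_joint_update(jacobians, deltas, damping=1e-2):
    """Blend the spheres' (possibly conflicting) desired displacements
    into a single joint update via damped least squares -- a simplified
    stand-in for the constrained inverse-kinematics solve."""
    J = np.vstack(jacobians)
    dx = np.concatenate(deltas)
    H = J.T @ J + damping * np.eye(J.shape[1])
    return np.linalg.solve(H, J.T @ dx)

# Two spheres requesting opposite motion along the same joint direction
# resolve to a near-zero compromise:
dq = resolve_joint_update(
    [np.array([[1.0, 0.0]]), np.array([[1.0, 0.0]])],
    [np.array([1.0]), np.array([-1.0])],
)
```

When sphere requests conflict, the least-squares compromise keeps the whole arm consistent rather than letting any single contact dominate.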
Collision-free inverse kinematics of the redundant seven-link manipulator used in a cucumber picking robot
The paper presents results of research on an inverse kinematics algorithm that has been used in a functional model of a cucumber-harvesting robot consisting of a redundant P6R manipulator. Within a first generic approach, the inverse kinematics problem was reformulated as a non-linear programming problem and solved with a Genetic Algorithm (GA). Although solutions were easily obtained, the considerable calculation time needed to solve the problem prevented on-line implementation. To circumvent this problem, a second, less generic, approach was developed which consisted of a mixed numerical-analytic solution of the inverse kinematics problem exploiting the particular structure of the P6R manipulator. Using the latter approach, calculation time was considerably reduced. During the early stages of the cucumber-harvesting project, this inverse kinematics algorithm was used off-line to evaluate the ability of the robot to harvest cucumbers using 3D-information obtained from a cucumber crop in a real greenhouse. Thereafter, the algorithm was employed successfully in a functional model of the cucumber harvester to determine if cucumbers were hanging within the reachable workspace of the robot and to determine a collision-free harvest posture to be used for motion control of the manipulator during harvesting. The inverse kinematics algorithm is presented and demonstrated with some illustrative examples of cucumber harvesting, both off-line during the design phase and on-line during a field test.
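The "inverse kinematics as a nonlinear program" idea can be illustrated on a planar 2R arm, minimizing the squared end-effector error by plain gradient descent. The paper applies a genetic algorithm to the full constrained, redundant P6R problem; the link lengths, target, and solver here are illustrative simplifications:

```python
import math

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2R arm -- a simplified stand-in
    for the redundant P6R cucumber-picking manipulator."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return x, y

def ik_descent(target, q=(0.3, 0.3), step=0.2, iters=3000):
    """Inverse kinematics posed as a nonlinear program: minimize the
    squared end-effector error via gradient descent, q += step * J^T e."""
    q = list(q)
    for _ in range(iters):
        x, y = fk(q)
        ex, ey = target[0] - x, target[1] - y
        s1, c1 = math.sin(q[0]), math.cos(q[0])
        s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
        # analytic Jacobian of the 2R arm
        j11, j12 = -s1 - s12, -s12
        j21, j22 = c1 + c12, c12
        q[0] += step * (j11 * ex + j21 * ey)
        q[1] += step * (j12 * ex + j22 * ey)
    return q

q_sol = ik_descent(target=(1.2, 0.8))
```

Iterative solvers like this (or the GA) trade generality for run time; the paper's mixed numerical-analytic route regains on-line speed by exploiting the manipulator's specific structure instead.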
Whole-Body MPC for a Dynamically Stable Mobile Manipulator
Autonomous mobile manipulation offers a dual advantage of mobility provided
by a mobile platform and dexterity afforded by the manipulator. In this paper,
we present a whole-body optimal control framework to jointly solve the problems
of manipulation, balancing and interaction as one optimization problem for an
inherently unstable robot. The optimization is performed using a Model
Predictive Control (MPC) approach; the optimal control problem is transcribed
at the end-effector space, treating the position and orientation tasks in the
MPC planner, and skillfully planning for end-effector contact forces. The
proposed formulation evaluates how the control decisions aimed at end-effector
tracking and environment interaction will affect the balance of the system in
the future. We showcase the advantages of the proposed MPC approach on the
example of a ball-balancing robot with a robotic manipulator and validate our
controller in hardware experiments for tasks such as end-effector pose tracking
and door opening.
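Evaluating how current control decisions affect future balance is the defining property of MPC. A minimal receding-horizon sketch for a generic open-loop-unstable linear system (the matrices are illustrative, not the robot's dynamics) looks like:

```python
import numpy as np

def mpc_step(A, B, Q, R, x, horizon=30):
    """One receding-horizon step: run the finite-horizon Riccati
    recursion backward over the horizon, apply only the first
    feedback action, and replan from the new state next step."""
    P = Q.copy()
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x

# Illustrative unstable plant (open-loop eigenvalues 1.1 and 1.05),
# standing in for the linearized balancing dynamics.
A = np.array([[1.10, 0.10], [0.00, 1.05]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[0.1]])

x = np.array([1.0, 0.0])
for _ in range(150):
    x = A @ x + B @ mpc_step(A, B, Q, R, x)
```

Because the horizon cost accounts for where today's action sends the unstable modes, the replanned feedback keeps the system balanced, which is the property the whole-body formulation relies on at the scale of the full robot.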