9 research outputs found

    A Bio-Inspired Manipulator with Claw Prototype for Winged Aerial Robots: Benchmark for Design and Control

    Nature exhibits many examples of birds, insects and flying mammals whose flapping wings and limbs offer additional functionality. Although robotics offers some examples of flying robots with wings, it has not yet been a goal to add manipulation capabilities similar to those exhibited by birds. The flying robot (ornithopter) that we propose improves on existing aerial manipulators based on multirotor platforms in terms of longer mission flight duration and safety in proximity to humans. Moreover, its manipulation capabilities allow it to perch in inaccessible places and perform tasks while the body is perched. This work presents a first prototype of a lightweight manipulator to be mounted on an ornithopter, together with a new control methodology to balance the system while perched and to follow a desired path with the end effector, imitating a bird's beak. This enables several applications, such as contact inspection along a path with an ultrasonic sensor mounted on the end effector. The manipulator prototype imitates a bird with two-link legs and a body link with an actuated limb; all links are active except the first, passive one, which carries a grabbing mechanism at its base imitating a claw. Unlike standard manipulators, the lightweight requirement limits the frame size and makes it necessary to use micro motors. Successful experimental results with this prototype are reported.

    European Research Council 78824

    Intelligent model-based control of complex three-link mechanisms

    The aim of this study is to understand the complexity and control challenges of the locomotion of a three-link robot mechanism. To this end, a three-link robot gymnast (Robogymnast) has been built at Cardiff University. The Robogymnast is composed of three links (one arm, one torso, one leg) and is powered by two geared DC motors. The robot currently has three potentiometers to measure the relative angles between adjacent links and only one tachometer to measure the relative angular velocity of the first link. A mathematical model of the robot is derived using Lagrange's equations. Since the model is inherently nonlinear and multivariable, modelling the Robogymnast and addressing its motion control problems present significant challenges. The proposed approach to control system design is based on a discrete-time linear model around the upright position of the Robogymnast. To study the swinging motion, a new technique is proposed that manipulates the frequency and amplitude of the sinusoidal signals driving the motors. Because many combinations of frequency and amplitude are possible, an optimisation method is required to find the optimal set. The Bees Algorithm (BA), a swarm-based optimisation technique, is used to enhance the performance of the swinging motion by optimising the manipulated parameters of the control actions. The best time achieved to reach the upright position is 128 seconds. Two control methods are adopted to study the balancing/stabilising of the Robogymnast in both the downward and upright configurations. The first is optimal control using the Linear Quadratic Regulator (LQR) technique with integrators, which help achieve and maintain the set of reference trajectories. The second is a combination of Local Control (LC) and LQR.
Each controller is implemented via a reduced-order state observer that estimates the unmeasured states, namely the relative angular velocities. From the data identified in the relative angular positions during upright balancing control, the maximum deviation amplitudes in the relative angles are, on average, approximately 7.5° for the first link and 18° for the second link. The third link deviates by approximately 2.5° when using only the LQR controller, and shows no significant deviation when using LQR with LC. To combine the swinging and balancing motions, a switching mechanism between the swinging and balancing algorithms is proposed. This is achieved by dividing the controller into three stages: the first is swinging control; the next is transition control, accomplished using the Independent Joint Control (IJC) technique; and the final stage is balancing control, achieved with the LQR. The best transition time for the controller to track the Robogymnast's reference trajectory is found to be within 0.4 seconds. An external disturbance is applied to each link of the Robogymnast separately in order to study the controller's ability to reject the disturbance and to study the controller response. The simulation of the Robogymnast and the experimental realisation of the controllers are implemented in MATLAB® and C++, respectively.
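The upright balancing described above rests on a discrete-time LQR gain computed from a linearised model. As a minimal sketch, assuming a discretised double integrator in place of the real multi-link Robogymnast model (the actual A and B matrices would come from the Lagrangian model), the gain can be obtained by iterating the Riccati difference equation:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati difference equation to
    convergence, then return the state-feedback gain K (u = -K x)."""
    P = Q.copy()
    for _ in range(iters):
        BtP = B.T @ P
        K = np.linalg.solve(R + BtP @ B, BtP @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Hypothetical stand-in for the linearised upright model: a discretised
# double integrator (the real Robogymnast state is higher-dimensional).
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))

# Closed-loop poles of A - B K must lie strictly inside the unit circle.
poles = np.linalg.eigvals(A - B @ K)
assert np.all(np.abs(poles) < 1.0)
```

Reference tracking with integrators, as in the thesis, would augment the state with the integrated tracking error before computing the gain.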

    Dynamic balancing of underactuated robots

    This thesis presents the control of planar underactuated systems that have one fewer control input than degrees of freedom. The underactuated robots are studied to achieve the dynamically stable motions commonly encountered during robot locomotion. This work emphasizes the relation between underactuated systems and biped locomotion and builds on previous work in the literature on underactuated robot locomotion. Two planar system models are treated: an acrobatic robot and a compass biped with a torso. The dynamic stability of fast periodic trajectories of these systems is regulated by designing asymptotically stable feedback controllers. The resulting internal dynamics of the systems are analyzed and shaped to achieve energy efficiency and robustness of the closed-loop trajectories. In particular, Bézier polynomial approximations and parameter optimization methods are used to systematically construct the internal dynamics of the systems. Simulation results are presented for dynamically stable orbits of the acrobatic robot and the compass biped with torso.
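The Bézier parameterisation mentioned above can be made concrete with a short sketch. Assuming, purely for illustration, that one output of the internal dynamics (say a torso angle) is encoded by a degree-5 Bézier polynomial in a normalised phase variable s in [0, 1], evaluation by de Casteljau's recursion looks like:

```python
def bezier(coeffs, s):
    """Evaluate a Bezier polynomial at phase s in [0, 1] using
    de Casteljau's recursion on the coefficient list."""
    pts = list(coeffs)
    while len(pts) > 1:
        pts = [(1.0 - s) * a + s * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical degree-5 coefficients for one output trajectory; in the
# thesis such coefficients are found by parameter optimization.
alpha = [0.0, 0.1, 0.35, 0.55, 0.45, 0.5]
theta_mid = bezier(alpha, 0.5)
```

A convenient property exploited by this parameterisation is that the endpoint values equal the first and last coefficients, so boundary conditions of a periodic orbit map directly onto alpha[0] and alpha[-1].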

    Advanced Strategies for Robot Manipulators

    Amongst robotic systems, robot manipulators have proven to be of increasing importance and are widely adopted to substitute for humans in repetitive and/or hazardous tasks. Modern manipulators have complicated designs and must perform increasingly precise, crucial and critical tasks, so simple traditional control methods are no longer sufficient, and advanced control strategies that take special constraints into account need to be established. Despite the groundbreaking research carried out in this field to date, many novel aspects remain to be explored.

    Friction compensation in the swing-up control of viscously damped underactuated robotics

    A dissertation submitted to the Faculty of Engineering and the Built Environment, University of the Witwatersrand, Johannesburg, in fulfilment of the requirements for the degree of Master of Science in Engineering in the Control Research Group, School of Electrical and Information Engineering, Johannesburg, 2017.

    In this research, we observed a torque-related limitation in the swing-up control of underactuated mechanical systems with viscous damping in the unactuated joint. The objective of this research project was thus to develop a practical work-around for this limitation. The nth-order underactuated robotic system is represented in this research as a collection of compounded pendulums with n-1 actuators, one placed at each joint except the first. This system is referred to as the PAn-1 robot (Passive first joint, followed by n-1 Active joints), with the Acrobot (PA1 robot) and the PAA robot (or PA2 robot) among the best-known examples. A number of friction models exist in the literature, including, but not limited to, the Coulomb and Stribeck effect models; the viscous damping model was selected for this research since it is more extensively covered in the existing literature. The effectiveness of swing-up control using Lyapunov's direct method applied to the undamped PAn-1 robot has been rigorously demonstrated in the existing literature, but no literature discusses the swing-up control of viscously damped systems. We show, however, that satisfactory swing-up control using Lyapunov's direct method is constrained to underactuated systems that are either undamped or actively damped (viscous damping integrated into the actuated joints only).
Violating this constraint yields either a torque expression that cannot be solved for (the invertibility problem, for systems with n > 2) or a torque expression containing a conditional singularity (the singularity problem, for systems with n = 2). This constraint is formally summarised as the matched damping condition and highlights a clear limitation in the Lyapunov-related swing-up control of underactuated mechanical systems. The condition has significant implications for the practical realisation of swing-up control, which justifies investigating a possible work-around. We thus show that the limitation highlighted by the matched damping condition can be overcome through the implementation of the partial feedback linearisation (PFL) technique. Two key contributions result from this research: the gain selection criterion (for traditional collocated PFL) and the convergence algorithm (for noncollocated PFL). The gain selection criterion is an analytical solution composed of a set of inequalities that map out a geometric region of appropriate gains in the swing-up gain space. Selecting a gain combination within this region ensures that the fully-pendent equilibrium point (FPEP) is unstable, a necessary condition for swing-up control when the system is initialised near the FPEP. The convergence algorithm is an experimental solution that, once executed, provides the distal pendulum's angular initial condition required to swing up a robot with a particular angular initial condition of the proximal pendulum, along with the minimum gain required to execute the swing-up control in that configuration. Significant future contributions on this topic may result from the inclusion of more complex friction models.
Additionally, the degree of actuation of the system may be reduced through the implementation of energy-storing components, such as torsional springs, at the joints. In summary, we present two contributions, the gain selection criterion and the convergence algorithm, which circumnavigate the limitation formalised as the matched damping condition. This condition pertains to the Lyapunov-related swing-up control of underactuated mechanical systems with viscous damping in the unactuated joint.

CK201
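The matched damping condition is easiest to see in the fully actuated limit. As a minimal sketch (a single, directly actuated pendulum with hand-picked parameters, not the PAn-1 robots or the PFL contributions of the dissertation), energy-based swing-up remains well-posed precisely because the viscous term can be cancelled through the actuated joint, which is what the condition demands:

```python
import math

def swing_up_step(theta, omega, E_des, k=1.0, b=0.05, m=1.0, l=1.0, g=9.81):
    """One control step of energy-based swing-up for a single viscously
    damped pendulum. The b*omega term cancels the viscous friction, so
    the energy-shaping term acts on an effectively undamped plant."""
    E = 0.5 * m * l**2 * omega**2 - m * g * l * math.cos(theta)
    u_energy = k * (E_des - E) * omega   # pump or remove energy
    u_friction = b * omega               # compensate viscous damping
    return u_energy + u_friction

def simulate(T=20.0, dt=0.001, m=1.0, l=1.0, g=9.81, b=0.05):
    """Euler simulation: m*l^2*theta'' = u - b*omega - m*g*l*sin(theta)."""
    theta, omega = 0.1, 0.0              # start near the fully-pendent point
    E_des = m * g * l                    # energy of the upright equilibrium
    for _ in range(int(T / dt)):
        u = swing_up_step(theta, omega, E_des, b=b, m=m, l=l, g=g)
        alpha = (u - b * omega - m * g * l * math.sin(theta)) / (m * l**2)
        omega += alpha * dt
        theta += omega * dt
    E = 0.5 * m * l**2 * omega**2 - m * g * l * math.cos(theta)
    return E, E_des
```

When the damping instead acts on an unactuated joint, no such cancellation term is available, which is the obstruction the dissertation formalises.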

    Combining reinforcement learning and optimal control for the control of nonlinear dynamical systems

    This thesis presents a novel hierarchical learning framework, Reinforcement Learning Optimal Control, for controlling nonlinear dynamical systems with continuous states and actions. The approach mimics the neural computations that allow the brain to bridge the divide between symbolic action selection and low-level actuation control by operating at two levels of abstraction. First, current findings demonstrate that, at the level of limb coordination, human behaviour is explained by linear optimal feedback control theory, where cost functions match the energy and timing constraints of tasks. Second, humans learn cognitive tasks involving symbolic-level action selection in terms of both model-free and model-based reinforcement learning algorithms. We postulate that the ease with which humans learn complex nonlinear tasks arises from combining these two levels of abstraction. The Reinforcement Learning Optimal Control framework learns the local task dynamics from naive experience, using an expectation-maximization algorithm for the estimation of linear dynamical systems, and forms locally optimal Linear Quadratic Regulators that produce continuous low-level control. A high-level reinforcement learning agent uses these controllers as actions and learns how to combine them in state space while maximizing a long-term reward. The optimal control costs form training signals for the high-level symbolic learner. The algorithm demonstrates that a small number of locally optimal linear controllers can be combined intelligently to solve global nonlinear control problems, and it forms a proof of principle for how the brain may bridge the divide between low-level continuous control and high-level symbolic action selection. It competes in terms of computational cost and solution quality with state-of-the-art control methods, as illustrated by solutions to benchmark problems.
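The two-level architecture can be sketched in a few lines. Assuming a toy one-dimensional nonlinear plant and two hand-written linear feedback laws standing in for the EM-fitted LQR controllers of the thesis (every name and parameter here is illustrative), a tabular Q-learning agent learns which low-level controller to activate in each region of the state space:

```python
import math, random

random.seed(0)
dt = 0.1

def step(x, u):
    # Toy nonlinear plant (hypothetical stand-in for the benchmarks):
    # x' = sin(x) + u, integrated with explicit Euler steps.
    return x + dt * (math.sin(x) + u)

# Two fixed local linear controllers; in the thesis these would be
# locally optimal LQRs fitted from naive experience via EM.
controllers = [lambda x: -3.0 * x,               # regulates toward x = 0
               lambda x: -3.0 * (x - math.pi)]   # regulates toward x = pi

def bucket(x, n=40, lo=-4.0, hi=4.0):
    """Discretise the continuous state for the high-level tabular agent."""
    return min(n - 1, max(0, int((x - lo) / (hi - lo) * n)))

Q = [[0.0, 0.0] for _ in range(40)]

# High-level Q-learning: actions are choices of low-level controller,
# and the quadratic control cost supplies the (negative) reward.
for episode in range(300):
    x = random.uniform(-3.0, 3.0)
    for t in range(60):
        s = bucket(x)
        if random.random() < 0.2:                     # epsilon-greedy
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[s][i])
        u = controllers[a](x)
        x2 = step(x, u)
        r = -(x2 ** 2 + 0.1 * u ** 2)                 # LQR-style cost
        Q[s][a] += 0.1 * (r + 0.95 * max(Q[bucket(x2)]) - Q[s][a])
        x = x2
```

With the origin-regulating cost used here, the agent should come to prefer the first controller near x = 0, illustrating how a handful of local linear controllers can be composed by a symbolic learner.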