
    An Application of Modified T2FHC Algorithm in Two-Link Robot Controller

    Parallel robotic systems have shown advantages over traditional serial robots, such as high payload capacity, high speed, and high precision, and their applications range from transportation to manufacturing. Most recent studies of parallel robots therefore focus on finding the best way to improve system accuracy. Improving accuracy, however, remains the biggest challenge in controlling a parallel robot, owing to the complexity of the system's mathematical model. In this paper, we present a novel solution to this problem: a Type 2 Fuzzy Coherent Controller Network (T2FHC), composed of a Type 2 Cerebellar Model Coupling Controller (CMAC), with its fast convergence, and a Brain Emotional Learning Controller (BELC) that uses a Lyapunov-based weight-updating rule. In addition, the T2FHC is combined with a surface generator to increase the system's flexibility. To evaluate its real-world applicability, the proposed controller was tested on a Quanser 2-DOF robot system in three case studies: no load, a 180 g load, and a 360 g load. The proposed structure achieved superior performance compared to existing algorithms such as CMAC and the Novel Self-Organizing Fuzzy CMAC (NSOF CMAC). The Root Mean Square Error (RMSE) of the system, 2.20E-06 for angle A and 2.26E-06 for angle B, and the tracking errors, -6.42E-04 for angle A and 2.27E-04 for angle B, demonstrate the good stability and high accuracy of the proposed T2FHC. Given these results, the proposed method is promising for many applications involving nonlinear systems.
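    At the heart of any CMAC-style controller is a tile-coded associative memory whose active weights are adjusted in proportion to the tracking error. The Python sketch below illustrates only that generic mechanism, assuming a plain binary-receptive-field CMAC with the classic LMS-style update (which a Lyapunov argument on V = e^2/2 also yields, with a bounded gain); the class and parameter names are invented for illustration, and the paper's type-2 fuzzy receptive fields, BELC branch, and surface generator are not reproduced here.

        import numpy as np

        class CMAC:
            """Minimal single-input CMAC: several staggered tilings of the
            input range, one weight table per tiling (a generic sketch,
            not the paper's T2FHC)."""

            def __init__(self, n_tilings=8, n_tiles=16,
                         x_min=-np.pi, x_max=np.pi, learning_rate=0.1):
                self.n_tilings = n_tilings
                self.n_tiles = n_tiles
                self.x_min, self.x_max = x_min, x_max
                self.eta = learning_rate
                self.w = np.zeros((n_tilings, n_tiles + 1))

            def _active_tiles(self, x):
                """Index of the active tile in each staggered tiling."""
                tile_w = (self.x_max - self.x_min) / self.n_tiles
                idx = []
                for t in range(self.n_tilings):
                    offset = t * tile_w / self.n_tilings
                    i = int((x - self.x_min + offset) / tile_w)
                    idx.append(min(max(i, 0), self.n_tiles))
                return idx

            def output(self, x):
                """Output = sum of the weights of the active tiles."""
                return sum(self.w[t, i]
                           for t, i in enumerate(self._active_tiles(x)))

            def update(self, x, error):
                """LMS update: the n_tilings active weights share the
                correction, so the output moves by eta * error."""
                for t, i in enumerate(self._active_tiles(x)):
                    self.w[t, i] += self.eta * error / self.n_tilings

        if __name__ == "__main__":
            cmac = CMAC()
            for _ in range(2000):                     # learn sin(x) online
                x = np.random.uniform(-np.pi, np.pi)
                cmac.update(x, np.sin(x) - cmac.output(x))
            xs = np.linspace(-np.pi, np.pi, 5)
            print([round(float(np.sin(x) - cmac.output(x)), 3) for x in xs])

    Because only a handful of tiles are active for any input, each update is local and cheap, which is the property that gives CMAC-family controllers their fast convergence in real-time control loops.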

    Markov Chain Monte Carlo Algorithm for Bayesian Policy Search

    The fundamental aim in Reinforcement Learning (RL) is to find optimal parameters of a given parameterized policy. Policy search algorithms have made RL applicable to complex dynamical systems, such as the robotics domain, where the environment comprises high-dimensional state and action spaces. Although many policy search techniques are based on the widespread policy gradient methods, owing to their suitability for such complex environments, their performance can suffer from slow convergence or local optima, because they require computing the gradient components of the parameterized policy. In this study, we take a Bayesian approach to the policy search problem in the RL framework. The problem of interest is to control a discrete-time Markov decision process (MDP) with continuous state and action spaces. We contribute to the field by proposing a Particle Markov Chain Monte Carlo (P-MCMC) algorithm that generates samples of the policy parameters from a posterior distribution, instead of performing gradient approximations. To do so, we adopt a prior density over the policy parameters and target the posterior distribution in which the `likelihood' is taken to be the expected total reward. In risk-sensitive scenarios, where a multiplicative (rather than cumulative) expected total reward measures the performance of the policy, our methodology is fit for purpose: with a reward function in multiplicative form, sequential Monte Carlo (SMC), known as the particle filter, can be fully exploited within the iterations of the P-MCMC. It is worth mentioning that these methods have been widely used in statistical and engineering applications in recent years. Furthermore, to deal with the challenging problem of policy search in high-dimensional state spaces, an Adaptive MCMC algorithm is proposed.
    This research is organized as follows. In Chapter 1, we give a general introduction and motivation for the work and outline the topics to be covered. In Chapter 2, a literature review pursuant to the context of the thesis is conducted. In Chapter 3, a brief review of some popular policy-gradient-based RL methods is provided. We proceed with the notion of Bayesian inference and present Markov Chain Monte Carlo methods in Chapter 4; the original work of the thesis is formulated in this chapter, where a novel SMC algorithm for policy search in the RL setting is advocated. To exhibit the effectiveness of the proposed algorithm in learning a parameterized policy, numerical simulations are presented in Chapter 5. To validate the real-time applicability of the proposed method, it is implemented on a control problem on a physical setup of a two-degree-of-freedom (2-DoF) robotic manipulator, with the corresponding results appearing in Chapter 6. Finally, concluding remarks and future work are given in the final chapter.
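    To make the pseudo-marginal mechanism concrete, the sketch below pairs an SMC estimator of a multiplicative expected total reward with a Metropolis-Hastings chain over a single policy parameter. Everything here is an illustrative assumption rather than the thesis's actual construction: a toy 1-D linear-Gaussian MDP, a linear policy u = theta * x, a per-step reward exp(-x^2), a standard-normal prior, and invented names such as smc_likelihood and pmcmc.

        import numpy as np

        rng = np.random.default_rng(0)

        T, N, SIGMA = 30, 200, 0.3     # horizon, particles, noise std

        def smc_likelihood(theta):
            """Particle-filter estimate of log E[prod_t r(x_t)] under the
            policy u = theta * x; per-step rewards act as observation
            weights, and resampling focuses on high-reward trajectories."""
            x = np.full(N, 1.0)                     # all particles at x0 = 1
            log_Z = 0.0
            for _ in range(T):
                x = x + theta * x + SIGMA * rng.standard_normal(N)
                w = np.exp(-x ** 2) + 1e-300        # multiplicative reward
                log_Z += np.log(w.mean())
                x = x[rng.choice(N, size=N, p=w / w.sum())]   # resample
            return log_Z

        def pmcmc(n_iters=500, step=0.2):
            """Particle-marginal Metropolis-Hastings over theta, with the
            unbiased SMC estimate standing in for the intractable expected
            total reward, and a N(0, 1) prior on theta."""
            log_prior = lambda th: -0.5 * th ** 2
            theta, log_Z = 0.0, smc_likelihood(0.0)
            samples = []
            for _ in range(n_iters):
                prop = theta + step * rng.standard_normal()
                log_Z_prop = smc_likelihood(prop)
                log_alpha = (log_Z_prop + log_prior(prop)
                             - log_Z - log_prior(theta))
                if np.log(rng.random()) < log_alpha:
                    theta, log_Z = prop, log_Z_prop
                samples.append(theta)
            return np.array(samples)

        if __name__ == "__main__":
            s = pmcmc()
            # theta near -1 cancels the drift and keeps x near 0
            print("posterior mean of theta:", s[len(s) // 2:].mean())

    Note that the chain stores and reuses the current estimate log_Z rather than re-running SMC for the retained state; this is what makes the pseudo-marginal construction target the intended posterior despite the likelihood being estimated.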

    Advanced Strategies for Robot Manipulators

    Amongst robotic systems, robot manipulators have proven increasingly important and are widely adopted to substitute for humans in repetitive and/or hazardous tasks. Modern manipulators have complicated designs and must perform ever more precise and critical tasks, so simple traditional control methods are no longer sufficient, and advanced control strategies that account for special constraints need to be established. Although groundbreaking research has been carried out in this realm, there are still many novel aspects to be explored.

    Design of Stable Adaptive Fuzzy Control

    by John Tak Kuen Koo. Thesis (M.Phil.)--Chinese University of Hong Kong, 1994. Includes bibliographical references (leaves 217-[220]).

    Chapter 1: Introduction
        1.1 Introduction
        1.2 Robust, Adaptive and Fuzzy Control
        1.3 Adaptive Fuzzy Control
        1.4 Object of Study
        1.5 Scope of the Thesis
    Chapter 2: Background on Adaptive Control and Fuzzy Logic Control
        2.1 Adaptive control
            2.1.1 Model reference adaptive systems
            2.1.2 MIT Rule (see the note following this outline)
            2.1.3 Model Reference Adaptive Control (MRAC)
        2.2 Fuzzy Logic Control
            2.2.1 Fuzzy sets and logic
            2.2.2 Fuzzy Relation
            2.2.3 Inference Mechanisms
            2.2.4 Defuzzification
    Chapter 3: Explicit Form of a Class of Fuzzy Logic Controllers
        3.1 Introduction
        3.2 Construction of a class of fuzzy controller
        3.3 Explicit form of the fuzzy controller
        3.4 Design criteria on the fuzzy controller
        3.5 B-Spline fuzzy controller
    Chapter 4: Model Reference Adaptive Fuzzy Control (MRAFC)
        4.1 Introduction
        4.2 Fuzzy Controller, Plant and Reference Model
        4.3 Derivation of the MRAFC adaptive laws
        4.4 Extension to the Multi-Input, Multi-Output Case
        4.5 Simulation
    Chapter 5: MRAFC on a Class of Nonlinear Systems: Type I
        5.1 Introduction
        5.2 Choice of Controller
        5.3 Derivation of the MRAFC adaptive laws
        5.4 Example: Stabilization of a pendulum
    Chapter 6: MRAFC on a Class of Nonlinear Systems: Type II
        6.1 Introduction
        6.2 Fuzzy System as Function Approximator
        6.3 Construction of MRAFC for the nonlinear systems
        6.4 Input-Output Linearization
        6.5 MRAFC with Input-Output Linearization
        6.6 Example
    Chapter 7: Analysis of MRAFC System
        7.1 Averaging technique
        7.2 Parameter convergence
        7.3 Robustness
        7.4 Simulation
    Chapter 8: Application of MRAFC scheme on Manipulator Control
        8.1 Introduction
        8.2 Robot Manipulator Control
        8.3 MRAFC on Robot Manipulator Control
            8.3.1 Part A: Nonlinear-function feedback fuzzy controller
            8.3.2 Part B: State-feedback fuzzy controller
        8.4 Simulation
    Chapter 9: Conclusion
    Appendix A: Implementation of MRAFC Scheme with Practical Issues
        A.1 Rule Generation by MRAFC scheme
        A.2 Implementation Considerations
        A.3 MRAFC System Design Procedure
    Bibliography
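    The MIT rule listed in Section 2.1.2 is the classic gradient-based adaptation law of model reference adaptive control. For orientation, its standard textbook form (stated here as background, not quoted from the thesis) drives each adjustable controller parameter \theta along the negative gradient of the instantaneous cost J = \frac{1}{2} e^2:

        \frac{d\theta}{dt} = -\gamma \, e \, \frac{\partial e}{\partial \theta}, \qquad e = y - y_m,

    where y is the plant output, y_m the reference-model output, and \gamma > 0 the adaptation gain.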