201 research outputs found

    Hybrid LQG-Neural Controller for Inverted Pendulum System

    The paper presents a hybrid system controller that combines a neural controller and an LQG controller. The neural controller was optimized by genetic algorithms directly on the inverted pendulum system. The failure-free optimization process yielded a relatively small region of asymptotic stability for the neural controller, concentrated around the regulation point. The presented hybrid controller combines the benefits of a genetically optimized neural controller and an LQG controller in a single system controller: high regulation quality is achieved through the neural controller, while stability during transient processes and a wide operating range are ensured by the LQG controller. The hybrid controller has been validated on a simulation model of an inherently unstable inverted pendulum system.
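    The switching structure this abstract describes can be sketched in a few lines; the region threshold and both feedback laws below are hypothetical placeholders, not the controllers from the paper:

```python
import numpy as np

# Hybrid switching sketch: the neural controller is trusted only inside a small
# region around the regulation point; the LQG controller covers everything else.
# NEURAL_REGION and both gain vectors are illustrative assumptions.
NEURAL_REGION = 0.1  # rad: assumed region of attraction of the neural controller

def lqg_control(x):
    """Placeholder linear feedback standing in for the LQG controller."""
    return -np.array([10.0, 2.0]) @ x

def neural_control(x):
    """Placeholder standing in for the genetically optimized neural network."""
    return -np.array([12.0, 2.5]) @ x

def hybrid_control(x):
    """Use the neural controller near equilibrium, LQG during large transients."""
    if abs(x[0]) < NEURAL_REGION:  # x[0]: pendulum angle
        return neural_control(x)
    return lqg_control(x)
```

    In a real system the switching surface would be chosen inside the verified stability region of the neural controller, possibly with hysteresis to avoid chattering.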

    Synthesis of Minimal Error Control Software

    Software implementations of controllers for physical systems are at the core of many embedded systems. The design of controllers uses the theory of dynamical systems to construct a mathematical control law that ensures that the controlled system has certain properties, such as asymptotic convergence to an equilibrium point, while optimizing some performance criteria. However, owing to quantization errors arising from the use of fixed-point arithmetic, the implementation of this control law can only guarantee practical stability: under the actions of the implementation, the trajectories of the controlled system converge to a bounded set around the equilibrium point, and the size of the bounded set is proportional to the error in the implementation. The problem of verifying whether a controller implementation achieves practical stability for a given bounded set has been studied before. In this paper, we change the emphasis from verification to automatic synthesis. Using synthesis, the need for formal verification can be considerably reduced, thereby reducing the design time as well as the design cost of embedded control software. We give a methodology and a tool to synthesize embedded control software that is Pareto optimal w.r.t. both performance criteria and practical stability regions. Our technique is a combination of static analysis to estimate quantization errors for specific controller implementations and stochastic local search over the space of possible controllers using particle swarm optimization. The effectiveness of our technique is illustrated using examples of various standard control systems: in most examples, we achieve controllers with performance close to LQR-LQG but with implementation errors, and hence regions of practical stability, several times smaller. Comment: 18 pages, 2 figures
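    The effect described here, the practical-stability region shrinking with implementation error, can be reproduced on a toy scalar plant; the plant, gain, and fixed-point formats below are illustrative assumptions, not the paper's benchmarks:

```python
import numpy as np

# Scalar discrete-time plant x+ = a*x + u with a stabilizing gain k, where the
# control is rounded to a fixed-point grid (illustrative parameters).
a, k = 1.2, 1.0  # open loop unstable; ideal closed loop x+ = (a - k) x = 0.2 x

def quantize(u, frac_bits):
    """Round-to-nearest fixed-point arithmetic with the given fractional bits."""
    step = 2.0 ** (-frac_bits)
    return np.round(u / step) * step

def practical_bound(frac_bits, x0=1.0, steps=400):
    """Largest |x| over the tail of a trajectory: the practical-stability set."""
    x, tail = x0, []
    for i in range(steps):
        x = a * x + quantize(-k * x, frac_bits)
        if i >= steps - 100:
            tail.append(abs(x))
    return max(tail)

# A finer fixed-point implementation yields a smaller set: the quantization
# error is at most step/2, so |x| is ultimately bounded by ~step/(2(1-|a-k|)).
```

    The synthesis approach in the paper searches over such implementations, trading this bound against closed-loop performance.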

    Feedback control of unsupported standing in paraplegia. Part II: experimental results

    For pt. I see ibid., vol. 5, no. 4, pp. 331-40 (1997). This is the second of a pair of papers describing an investigation into the feasibility of providing artificial balance to paraplegics using electrical stimulation of the paralyzed muscles. By bracing the body above the shanks, only stimulation of the plantar flexors is necessary; this arrangement also prevents any influence from the intact neuromuscular system above the spinal cord lesion. Here, the authors present experimental results from intact and paraplegic subjects.

    Synthesizing Stable Reduced-Order Visuomotor Policies for Nonlinear Systems via Sums-of-Squares Optimization

    We present a method for synthesizing dynamic, reduced-order output-feedback polynomial control policies for control-affine nonlinear systems which guarantees runtime stability to a goal state, when using visual observations and a learned perception module in the feedback control loop. We leverage Lyapunov analysis to formulate the problem of synthesizing such policies. This problem is nonconvex in the policy parameters and the Lyapunov function that is used to prove the stability of the policy. To solve this problem approximately, we propose two approaches: the first solves a sequence of sum-of-squares optimization problems to iteratively improve a policy which is provably stable by construction, while the second directly performs gradient-based optimization on the parameters of the polynomial policy, and its closed-loop stability is verified a posteriori. We extend our approach to provide stability guarantees in the presence of observation noise, which realistically arises due to errors in the learned perception module. We evaluate our approach on several underactuated nonlinear systems, including pendula and quadrotors, showing that our guarantees translate to empirical stability when controlling these systems from images, while baseline approaches can fail to reliably stabilize the system. Comment: IEEE Conference on Decision and Control (CDC), Singapore, December 2023 (accepted)
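    At the heart of the approach is a Lyapunov certificate. For a linear closed loop the same certificate reduces to a Lyapunov equation, which gives a minimal (non-SOS) sketch of what the optimization must establish; the dynamics below are an illustrative stand-in, not one of the paper's systems:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Closed-loop dynamics of a stabilized pendulum-like system (illustrative).
A_cl = np.array([[0.0, 1.0],
                 [-2.0, -3.0]])
Q = np.eye(2)

# Solve A_cl^T P + P A_cl = -Q; a positive definite P certifies that
# V(x) = x^T P x decreases along trajectories, i.e. asymptotic stability.
P = solve_continuous_lyapunov(A_cl.T, -Q)

assert np.all(np.linalg.eigvalsh(P) > 0)  # P > 0: the certificate holds
```

    The sum-of-squares machinery in the paper generalizes exactly this check to polynomial dynamics and policies, where the quadratic V becomes a polynomial Lyapunov function and positivity is enforced via SOS constraints.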

    Modeling and design of an observer-based robust controller for a low-cost inverted pendulum based on the H∞ approach

    The inverted pendulum is a mechanical system with a simple configuration and an inherently nonlinear, unstable nature, widely used in control theory as a research benchmark. The current research presents the modeling of a low-cost, commercially available inverted pendulum and the design of a robust state-feedback controller and a robust observer following the H∞ principle. Simulated results compare the performance of the designed controller and observer with a conventional LQR controller and observer.
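    A minimal sketch of the observer-design step, using pole placement on a generic linearized pendulum rather than the paper's H∞ synthesis or its identified low-cost hardware model (all matrices and pole locations below are illustrative):

```python
import numpy as np
from scipy.signal import place_poles

# Linearized pendulum about the upright position (illustrative: g/l = 19.6),
# with only the angle measured.
A = np.array([[0.0, 1.0],
              [19.6, 0.0]])
C = np.array([[1.0, 0.0]])

# Observer gain L via pole placement on the dual system (A^T, C^T), so the
# estimation-error dynamics e' = (A - L C) e get the assigned poles.
placed = place_poles(A.T, C.T, [-8.0, -9.0])
L = placed.gain_matrix.T

error_poles = np.linalg.eigvals(A - L @ C)  # should equal the assigned poles
```

    An H∞ design would instead choose L to minimize the worst-case gain from disturbances to estimation error, but the resulting observer structure is the same.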

    Control of a modified double inverted pendulum using machine learning based model predictive control

    Abstract: A machine learning-based controller (MLC) has been developed for a modified double inverted pendulum on a cart (MDIPC). First, the governing differential equations of the system are derived using the Lagrangian method. Then, a dataset is generated to train and test the machine learning-based models of the plant. Different types of machine learning models, such as artificial neural networks (ANN), deep neural networks (DNN), long short-term memory neural networks (LSTM), gated recurrent units (GRU), and recurrent neural networks (RNN), are employed to capture the system’s dynamics. DNN and LSTM are selected due to their superior performance compared to the other models. Finally, different variations of the model predictive controller (MPC) are designed, and their performance is evaluated in terms of running time and tracking error. The proposed control methods are shown to have an advantage over conventional nonlinear and linear model predictive control methods in simulation. Paper presented at the international congress held jointly by the Canadian Society for Mechanical Engineering (CSME) and the Computational Fluid Dynamics Society of Canada (CFD Canada), at Université de Sherbrooke (Quebec), May 28-31, 2023
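    The model-learning step, fitting a one-step predictor from input-state data, can be illustrated with ordinary least squares in place of the paper's ANN/DNN/LSTM models; the "true" system below is a hypothetical linear stand-in for the MDIPC dynamics:

```python
import numpy as np

# Hypothetical discrete-time plant generating the training data.
rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.02],
                   [0.2, 0.99]])
B_true = np.array([[0.0],
                   [0.02]])

# Dataset of (state, input) -> next-state samples, as in the paper's pipeline.
X = rng.normal(size=(500, 2))
U = rng.normal(size=(500, 1))
Y = X @ A_true.T + U @ B_true.T

# Least-squares fit of the one-step model x+ = A x + B u from the data.
Phi = np.hstack([X, U])
W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
A_fit, B_fit = W[:2].T, W[2:].T
```

    An MPC then rolls the fitted model forward over a horizon and optimizes the input sequence; the paper replaces the linear fit with DNN and LSTM predictors to capture the nonlinear dynamics.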

    Robust and versatile humanoid locomotion based on analytical control and residual physics

    Humanoid robots are made to resemble humans, but their locomotion abilities are far from ours in terms of agility and versatility. When humans walk on complex terrains or face external disturbances, they combine a set of strategies, unconsciously and efficiently, to regain stability. This thesis tackles the problem of developing a robust omnidirectional walking framework able to generate versatile and agile locomotion on complex terrains. We designed and developed model-based and model-free walk engines, formulated the controllers using different approaches, including classical and optimal control schemes, and validated their performance through simulations and experiments. These frameworks have hierarchical structures composed of several layers, each containing several interconnected modules, which reduces complexity and increases the flexibility of the proposed frameworks. Additionally, they can be easily and quickly deployed on different platforms. We believe that using machine learning on top of analytical approaches is key to enabling humanoid robots to step out of laboratories. We proposed a tight coupling between analytical control and deep reinforcement learning: we augmented our analytical controller with reinforcement learning modules that learn to regulate the walk engine parameters (planners and controllers) adaptively and to generate residuals that adjust the robot’s target joint positions (residual physics). The effectiveness of the proposed frameworks was demonstrated and evaluated across a set of challenging simulation scenarios. The robot was able to generalize what it learned in one scenario, displaying human-like locomotion skills in unforeseen circumstances, even in the presence of noise and external pushes.

    Dual Mode Control of an Inverted Pendulum: Design, Analysis and Experimental Evaluation

    We present an inverted pendulum design using readily available V-slot rail components and 3D-printed custom parts. To enable the examination of different pendulum characteristics, we constructed three pendulum poles of different lengths. We implemented a brake mechanism to modify sliding friction resistance and built a paddle that can be attached to the ends of the pendulum poles. A testing rig was also developed to apply disturbances consistently by tapping the pendulum pole, characterizing balancing performance. We perform a comprehensive analysis of the behavior and control of the pendulum, beginning with its dynamics: the nonlinear differential equation that describes the system, its linearization, and its representation in the s-domain. The primary focus of this work is the development of two distinct control modes for the pendulum: a velocity control mode, designed to balance the pendulum while the cart is in motion, and a position control mode, aimed at maintaining the pendulum cart at a specific location. For this, we derived two different state-space models, one for each control mode. In the position control mode, integral action applied to the cart position ensures that the inverted pendulum remains balanced and maintains its desired position on the rail. For both models, linear observer-based state-feedback controllers were implemented. The control laws are designed as linear quadratic regulators (LQR), and the systems are simulated in MATLAB. To actuate the physical pendulum system, a stepper motor was used, and its controller was assembled in a DIN rail panel to simplify the integration of all necessary components. We examined how the optimized performance, achieved with the medium-length pendulum pole, translates to poles of other lengths. Our findings reveal distinct behavioral differences between the control modes.
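    The LQR design step mentioned in this abstract can be sketched for a generic linearized cart-pendulum; the physical parameters and weights below are illustrative, not those of the V-slot rig, and the original work used MATLAB rather than Python:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Cart-pendulum linearized about the upright equilibrium (illustrative
# parameters): state x = [cart pos, cart vel, pole angle, pole ang. vel].
m, M, l, g = 0.2, 1.0, 0.5, 9.81
A = np.array([[0, 1, 0, 0],
              [0, 0, -m * g / M, 0],
              [0, 0, 0, 1],
              [0, 0, (M + m) * g / (M * l), 0]])
B = np.array([[0], [1 / M], [0], [-1 / (M * l)]])

# LQR: penalize cart position and pole angle, solve the Riccati equation,
# and form the state-feedback gain K = R^-1 B^T P.
Q = np.diag([10.0, 1.0, 100.0, 1.0])
R = np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed loop A - B K is guaranteed Hurwitz (all eigenvalues stable).
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

    The position control mode described above would augment this state with the integral of the cart-position error before computing K; the observer is designed on the same (A, B) pair.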