
    Cascaded control for balancing an inverted pendulum on a flying quadrotor

    SUMMARY: This paper focuses on the flying inverted pendulum problem, i.e., how to balance a pendulum on a flying quadrotor. After analyzing the system dynamics, a three-loop cascade control strategy is proposed based on active disturbance rejection control (ADRC). Both pendulum balancing and trajectory tracking of the flying quadrotor are implemented using the proposed control strategy. A 3D mechanical-system simulation platform is used to verify the control performance and robustness of the proposed strategy, including a comparison with a linear quadratic regulator (LQR). Finally, a real quadrotor flying with a pendulum demonstrates that the proposed method can keep the system at equilibrium and is strongly robust against disturbances.
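The LQR baseline mentioned in this abstract can be sketched in a few lines. The model below is an illustrative Euler-discretized linearization of a pendulum balanced on a moving base, a simplified planar stand-in for the quadrotor-pendulum system; the parameters, cost weights, and gain computation are assumptions for the sketch, not the paper's.

```python
import numpy as np

# Illustrative linearized model (assumed, not from the paper).
# State x = [pendulum angle, angular rate, base position, base velocity],
# input u = commanded base acceleration, Euler-discretized about upright.
g, L, dt = 9.81, 0.5, 0.01          # gravity, pendulum length, step size
A = np.array([[1.0,          dt,  0.0, 0.0],
              [(g / L) * dt, 1.0, 0.0, 0.0],
              [0.0,          0.0, 1.0, dt],
              [0.0,          0.0, 0.0, 1.0]])
B = np.array([[0.0], [-dt / L], [0.0], [dt]])

Q = np.diag([10.0, 1.0, 1.0, 1.0])  # penalize tilt most
R = np.array([[0.1]])

# Discrete-time LQR gain via fixed-point iteration of the Riccati equation.
P = Q.copy()
for _ in range(2000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed-loop simulation from a 0.2 rad initial tilt.
x = np.array([0.2, 0.0, 0.0, 0.0])
for _ in range(4000):                # 40 s at dt = 0.01
    x = A @ x + B @ (-K @ x)
```

The gain `K` stabilizes the discretized model, driving both the pendulum tilt and the base position back to zero; the paper's ADRC cascade is compared against exactly this kind of regulator.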

    Neural Lyapunov Control

    We propose new methods for learning control policies and neural network Lyapunov functions for nonlinear control problems, with provable guarantees of stability. The framework consists of a learner that attempts to find the control and Lyapunov functions, and a falsifier that finds counterexamples to quickly guide the learner towards solutions. The procedure terminates when no counterexample is found by the falsifier, in which case the controlled nonlinear system is provably stable. The approach significantly simplifies the process of Lyapunov control design, provides an end-to-end correctness guarantee, and can obtain much larger regions of attraction than existing methods such as LQR and SOS/SDP. We show experiments in which the new methods obtain high-quality solutions for challenging control problems. (NeurIPS 2019)
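The learner/falsifier loop described in this abstract can be shrunk to a toy setting to show its structure: learn a quadratic Lyapunov function V(x) = xᵀPx for a known stable linear system dx/dt = Ax. The paper uses neural networks and a formal falsifier; the numpy sketch below, with an assumed toy system and a random-sampling falsifier, only illustrates the counterexample-guided loop.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [-1.0, -1.0]])   # toy stable dynamics (assumed)
eps = 0.1                                   # required margin on both conditions

def violations(P, x):
    # V must be positive and decrease along trajectories (checked for |x| = 1)
    V = x @ P @ x
    Vdot = 2 * x @ P @ (A @ x)
    return max(0.0, Vdot + eps), max(0.0, eps - V)

def falsify(P, n=2000):
    # falsifier: random direction search for a Lyapunov-condition violation
    # (both conditions are homogeneous, so unit vectors cover all states)
    for ang in rng.uniform(0.0, 2.0 * np.pi, size=n):
        x = np.array([np.cos(ang), np.sin(ang)])
        if max(violations(P, x)) > 0:
            return x
    return None

P = np.eye(2)                               # learner's initial candidate
examples = []
for _ in range(150):
    cx = falsify(P)
    if cx is None:
        break                               # falsifier finds no counterexample
    examples.append(cx)
    for _ in range(80):                     # learner: subgradient steps on hinges
        G = np.zeros((2, 2))
        for x in examples:
            v_dec, v_pos = violations(P, x)
            Ax = A @ x
            if v_dec > 0:
                G += np.outer(Ax, x) + np.outer(x, Ax)   # push Vdot down
            if v_pos > 0:
                G -= np.outer(x, x)                       # push V up
        if not G.any():
            break
        P -= 0.05 * G
        P = (P + P.T) / 2                   # keep the candidate symmetric
```

Because V and V̇ are both linear in P, each hinge constraint is convex, so this miniature version is a convex feasibility problem; the neural-network case in the paper keeps the same loop but loses that convexity.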

    ROS Based High Performance Control Architecture for an Aerial Robotic Testbed

    The purpose of this thesis is to show the development of an aerial testbed based on the Robot Operating System (ROS). Such a testbed provides the flexibility to control heterogeneous vehicles, since the robots can communicate directly with each other on the High Level (HL) control side. ROS runs on an embedded computer on board each quadrotor. This eliminates the need for a Ground Base Station, since the complete HL control runs on board the Unmanned Aerial Vehicle (UAV). The architecture of the system is explained throughout the thesis, with detailed descriptions of the specific hardware and software used. The implementation on two different quadrotor models is documented and shows that even though they have different components, they can be controlled in the same way by the framework. The user is able to control every unit of the testbed with position, velocity, and/or acceleration data. To demonstrate this independence, control architectures are presented and implemented, and extensive tests verify their effectiveness. The flexibility of the proposed aerial testbed is demonstrated by implementing several applications that require high-performance control. Additionally, a framework for a flying inverted pendulum on a quadrotor using robust hybrid control is presented. The goal is a universal controller that can swing up and balance an off-centered pendulum attached to the UAV both linearly and rotationally. The complete dynamic model is derived and a control strategy is presented. The performance of the controller is demonstrated in realistic simulation studies. The realization in the testbed is documented, including the modifications made to the quadrotor to attach the pendulum, and first flight tests are conducted and presented. The possibilities of using a ROS-based framework are shown at every step; it has many advantages for implementation purposes, especially in a heterogeneous robotic environment with many agents. Real-time data from the robot is provided by ROS topics and can be used at any point in the system. The control architecture has been validated and verified with different practical tests, which also allowed improving the system by tuning the specific control parameters.

    Robust Intelligent Sensing and Control Multi Agent Analysis Platform for Research and Education

    The aim of this thesis is the development and implementation of a controlled testing platform for the Robust Intelligent Sensing and Controls (RISC) Lab at Utah State University (USU). This will be an open-source, adaptable, and expandable robotics platform usable for both education and research. It differs from many other platforms in that the entire platform software will be made open source. This open-source software will encourage collaboration among other universities and enable researchers to essentially pick up where others have left off, without the need to replicate months or even years of work. The expected results of this research will create a foundation for diverse robotics investigation at USU as well as enable attempts at novel methods of control, estimation, and optimization. It will also contribute a complete software testbed setup to the already vibrant open-source robotics research community. This thesis first outlines the platform setup and the novel developments therein. The second stage provides an example of how the platform has been used in education, including an example curriculum implementing modern control techniques. The third section presents exploratory research in trajectory control and state estimation of the tip of an inverted pendulum atop a small unmanned aerial vehicle, as well as bearing-only cooperative localization experimentation. Finally, a conclusion and future work are discussed.
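The bearing-only localization mentioned in this abstract rests on a simple geometric idea: each observer measures only the angle to the target, and two such bearings from known positions pin the target down by triangulation. The sketch below (illustrative, not the thesis' method; positions and geometry are made up) solves the intersection as a small least-squares problem.

```python
import numpy as np

# Two observers at known positions measure bearings to an unknown target.
observers = np.array([[0.0, 0.0], [4.0, 0.0]])
target = np.array([2.0, 3.0])                       # ground truth (for the demo)
bearings = np.arctan2(target[1] - observers[:, 1],
                      target[0] - observers[:, 0])  # simulated noise-free measurements

# Each bearing constrains the target to a line through its observer:
#   sin(theta) * (x - ox) - cos(theta) * (y - oy) = 0
# Stacking these lines gives a linear system A_ls @ [x, y] = b.
A_ls = np.column_stack([np.sin(bearings), -np.cos(bearings)])
b = np.sum(A_ls * observers, axis=1)
estimate, *_ = np.linalg.lstsq(A_ls, b, rcond=None)
```

With noisy bearings or more than two observers the same least-squares formulation still applies, returning the point closest (in this linearized sense) to all bearing lines.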

    Reinforcement Learning and Planning for Preference Balancing Tasks

    Robots are often highly nonlinear dynamical systems with many degrees of freedom, making solving motion problems computationally challenging. One solution has been reinforcement learning (RL), which learns through experimentation to automatically perform the near-optimal motions that complete a task. However, high-dimensional problems and task formulation often prove challenging for RL. We address these problems with PrEference Appraisal Reinforcement Learning (PEARL), which solves Preference Balancing Tasks (PBTs). PBTs define a problem as a set of preferences that the system must balance to achieve a goal. The method is appropriate for acceleration-controlled systems with continuous state spaces and either discrete or continuous action spaces with unknown system dynamics. We show that PEARL learns a sub-optimal policy on a subset of states and actions and transfers that policy to the expanded domain to produce a more refined plan on a class of robotic problems. We establish convergence to task goal conditions and, even when preconditions are not verifiable, show that this is a valuable method to use before other, more expensive approaches. Evaluation is done on several robotic problems, such as Aerial Cargo Delivery, Multi-Agent Pursuit, Rendezvous, and Inverted Flying Pendulum, both in simulation and experimentally. Additionally, PEARL is leveraged outside of robotics as an array-sorting agent. The results demonstrate high accuracy and fast learning times on a large set of practical applications.
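The preference-balancing idea in this abstract can be illustrated in miniature (this is not the PEARL algorithm itself): an acceleration-controlled 1-D point mass must reach the origin (one preference) while keeping its speed low (a competing preference). The policy greedily picks the discrete acceleration that maximizes a weighted sum of preference features; the weights below are hand-chosen assumptions standing in for learned ones.

```python
import numpy as np

dt = 0.1
actions = (-1.0, 0.0, 1.0)            # discrete accelerations
w = np.array([1.0, 0.5])              # one assumed weight per preference

def features(pos, vel):
    return np.array([-pos**2, -vel**2])   # "be at the goal", "move slowly"

def step(pos, vel, a):
    # acceleration-controlled double-integrator dynamics
    return pos + vel * dt, vel + a * dt

def policy(pos, vel):
    # greedy choice with a two-step lookahead, so an action's effect
    # on position (not just velocity) is visible in the score
    best_score, best_a = -np.inf, 0.0
    for a in actions:
        p2, v2 = step(*step(pos, vel, a), a)
        score = w @ features(p2, v2)
        if score > best_score:
            best_score, best_a = score, a
    return best_a

pos, vel = 2.0, 0.0
for _ in range(400):
    pos, vel = step(pos, vel, policy(pos, vel))
```

The two preferences pull in opposite directions (reaching the goal requires moving; moving is penalized), and the weighted score trades them off, which is the balancing act PBTs formalize.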