
    Robust position control of a tilt-wing quadrotor

    This paper presents a robust position controller for a tilt-wing quadrotor to track desired trajectories under external wind and aerodynamic disturbances. Wind effects are modeled using the Dryden model and are included in the dynamic model of the vehicle. Robust position control is achieved by introducing a disturbance observer which estimates the total disturbance acting on the system. In the design of the disturbance observer, the nonlinear terms which appear in the dynamics of the aerial vehicle are also treated as disturbances and included in the total disturbance. Utilization of the disturbance observer yields a linear model with nominal parameters. Since the resulting dynamics are linear, only simple PID-type controllers are designed for position and attitude control. Simulations and experimental results show that the performance of the observer-based position control system is quite satisfactory.
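The core idea of the abstract (lump nonlinearities and wind into a total disturbance, estimate it with an observer, and compensate so that simple PID controllers suffice) can be sketched for a single axis. This is an illustrative minimal disturbance observer, not the paper's implementation; the nominal mass `m`, filter time constant `tau`, and step size `dt` are assumed values.

```python
class DisturbanceObserver:
    """Minimal discrete-time disturbance observer for m*a = u + d (sketch)."""

    def __init__(self, m=1.0, tau=0.1, dt=0.01):
        self.m = m                      # nominal mass (assumed)
        self.alpha = dt / (tau + dt)    # first-order low-pass coefficient
        self.d_hat = 0.0                # current disturbance estimate

    def update(self, accel_meas, u):
        # Any mismatch between the measured force m*a and the commanded
        # input u is attributed to the lumped disturbance, then low-pass
        # filtered to reject measurement noise.
        residual = self.m * accel_meas - u
        self.d_hat += self.alpha * (residual - self.d_hat)
        return self.d_hat


# Usage: feed back -d_hat so the compensated plant looks like the nominal
# linear model, which a plain PID can then control.
obs = DisturbanceObserver(m=1.0, tau=0.05, dt=0.01)
d_true = 2.0                            # constant unknown disturbance
for _ in range(1000):
    u = -obs.d_hat                      # disturbance compensation term only
    accel = (u + d_true) / obs.m        # simulated plant response
    est = obs.update(accel, u)
```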

    A 64mW DNN-based Visual Navigation Engine for Autonomous Nano-Drones

    Fully autonomous miniaturized robots (e.g., drones) with artificial intelligence (AI) based visual navigation capabilities are extremely challenging drivers of Internet-of-Things edge intelligence capabilities. Visual navigation based on AI approaches, such as deep neural networks (DNNs), is becoming pervasive for standard-size drones, but is considered out of reach for nano-drones with a size of a few cm². In this work, we present the first (to the best of our knowledge) demonstration of a navigation engine for autonomous nano-drones capable of closed-loop, end-to-end DNN-based visual navigation. To achieve this goal we developed a complete methodology for parallel execution of complex DNNs directly on board resource-constrained, milliwatt-scale nodes. Our system is based on GAP8, a novel parallel ultra-low-power computing platform, and a 27 g commercial, open-source CrazyFlie 2.0 nano-quadrotor. As part of our general methodology we discuss the software mapping techniques that enable the state-of-the-art deep convolutional neural network presented in [1] to be fully executed on-board within a strict 6 fps real-time constraint with no compromise in terms of flight results, while all processing is done with only 64 mW on average. Our navigation engine is flexible and can be used to span a wide performance range: at its peak performance corner it achieves 18 fps while still consuming on average just 3.5% of the power envelope of the deployed nano-aircraft. Comment: 15 pages, 13 figures, 5 tables, 2 listings; accepted for publication in the IEEE Internet of Things Journal (IEEE IoTJ).

    Robust hovering control of a quad tilt-wing UAV

    This paper presents the design of a robust hovering controller for a quad tilt-wing UAV to hover at a desired position under external wind and aerodynamic disturbances. Wind and aerodynamic disturbances are modeled using the Dryden model. In order to increase the robustness of the system, a disturbance observer is utilized to estimate the unknown disturbances acting on the system. Nonlinear terms which appear in the dynamics of the vehicle are also treated as disturbances and included in the total disturbance. Proper compensation of the disturbances yields a linear model with nominal parameters. Thus, for robust hovering control, only simple PID-type controllers have been employed, and their performance has been found very satisfactory. The proposed hovering controller has been verified with several simulations and experiments.

    Accurate Tracking of Aggressive Quadrotor Trajectories using Incremental Nonlinear Dynamic Inversion and Differential Flatness

    Autonomous unmanned aerial vehicles (UAVs) that can execute aggressive (i.e., high-speed and high-acceleration) maneuvers have attracted significant attention in the past few years. This paper focuses on accurate tracking of aggressive quadcopter trajectories. We propose a novel control law for tracking of position and yaw angle and their derivatives of up to fourth order, specifically, velocity, acceleration, jerk, and snap, along with yaw rate and yaw acceleration. Jerk and snap are tracked using feedforward inputs for angular rate and angular acceleration based on the differential flatness of the quadcopter dynamics. Snap tracking requires direct control of body torque, which we achieve using closed-loop motor speed control based on measurements from optical encoders attached to the motors. The controller utilizes incremental nonlinear dynamic inversion (INDI) for robust tracking of linear and angular accelerations despite external disturbances, such as aerodynamic drag forces. Hence, prior modeling of aerodynamic effects is not required. We rigorously analyze the proposed control law through response analysis, and we demonstrate it in experiments. The controller enables a quadcopter UAV to track complex 3D trajectories, reaching speeds up to 12.9 m/s and accelerations up to 2.1g, while keeping the root-mean-square tracking error down to 6.6 cm, in a flight volume that is roughly 18 m by 7 m and 3 m tall. We also demonstrate the robustness of the controller by attaching a drag plate to the UAV in flight tests and by pulling on the UAV with a rope during hover. Comment: To be published in IEEE Transactions on Control Systems Technology. Revision: new set of experiments at increased speed (up to 12.9 m/s), updated controller design using quaternion representation, new video available at https://youtu.be/K15lNBAKDC
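The INDI principle described above (increment the previous control input by the inverted acceleration error, so that unmodeled forces such as drag are cancelled without being explicitly modeled) can be sketched for a single acceleration channel. The control effectiveness `b` and the simulated disturbance are illustrative assumptions, not the paper's values.

```python
def indi_step(u_prev, a_des, a_meas, b=1.0):
    """One incremental nonlinear dynamic inversion (INDI) update (sketch).

    u_prev: previous control input
    a_des:  desired acceleration for this channel
    a_meas: measured acceleration (from the accelerometer)
    b:      nominal control effectiveness (assumed known)
    """
    # The measured acceleration already contains every disturbance acting on
    # the vehicle, so inverting only the *increment* cancels them implicitly.
    return u_prev + (a_des - a_meas) / b


# Usage: plant a = b*u + d, with d an unknown disturbance (e.g. drag).
u, d, a_des = 0.0, -1.0, 2.0
for _ in range(3):
    a_meas = 1.0 * u + d        # simulated measurement
    u = indi_step(u, a_des, a_meas, b=1.0)
```

Note the design choice the abstract highlights: because the correction is driven by measured acceleration rather than a drag model, robustness comes for free, at the cost of needing clean, low-latency acceleration (and, for snap tracking, motor-speed) measurements.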

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: novel coupling of perception and control via multiple relevant and diverse interpretations of the scene around the robot, leveraging recent advances in machine learning to showcase anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our novel pipeline via real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
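The receding-horizon scheme sketched in the abstract repeatedly scores a library of candidate trajectories against the predicted scene and executes the best one for a short horizon. The following is an illustrative selection step only; the cost terms, the 0.5 m clearance threshold, and the trajectory names are hypothetical, not the paper's.

```python
import math

def select_trajectory(min_clearance, goal_cost):
    """Pick the motion primitive minimizing collision + goal cost (sketch).

    min_clearance: {traj_id: minimum predicted obstacle distance (m) along
                    the trajectory, e.g. from monocular depth prediction}
    goal_cost:     {traj_id: cost-to-goal at the trajectory endpoint}
    """
    def cost(traj_id):
        d = min_clearance[traj_id]
        # Trajectories that pass too close to predicted obstacles are
        # forbidden; otherwise nearer obstacles cost more.
        collision = math.inf if d < 0.5 else 1.0 / d
        return collision + goal_cost[traj_id]

    return min(min_clearance, key=cost)
```

In a receding-horizon loop, the selected primitive would be flown for a fraction of its length before re-planning with the next depth prediction.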