
    Real-time UAV Complex Missions Leveraging Self-Adaptive Controller with Elastic Structure

    Expectations for unmanned aerial vehicles (UAVs) are pushing their operating environments into narrow spaces, where the vehicles may fly very close to an object and interact with it. Such proximity changes the UAV dynamics: the thrust and drag coefficients of the propellers can vary with the distance to nearby surfaces. At the same time, UAVs may need to follow time-based trajectories under external disturbances. Under these challenging conditions, a standard fixed-structure controller may not handle every mission, and its parameters may need to be adjusted for each case. Motivated by this, the practical implementation and evaluation of an autonomous controller applied to a quadrotor UAV are proposed in this work. The self-adaptive controller is based on a composite control scheme that combines sliding mode control (SMC) with an evolving neuro-fuzzy controller. The parameter vector of the neuro-fuzzy controller is updated adaptively based on the sliding surface of the SMC. The autonomous controller possesses a new elastic structure, in which the number of fuzzy rules grows or is pruned based on a bias and variance balance. The interaction of the UAV is evaluated experimentally in real time, considering the ground effect, the ceiling effect, and flight through a strong fan-generated wind while following time-based trajectories.
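
    Purely to make the adaptation idea above concrete, here is a minimal Python sketch of a sliding-surface-driven update for an evolving fuzzy controller. The Gaussian rule model, the gains, and the simple growth/pruning thresholds are illustrative assumptions standing in for the paper's bias-variance criterion, not the authors' implementation.

```python
import numpy as np

class EvolvingFuzzyController:
    def __init__(self, gamma=0.5, add_thresh=0.8, prune_thresh=1e-3):
        self.centers = []            # rule centers in the input space
        self.weights = []            # consequent parameters (the adaptive vector)
        self.gamma = gamma           # adaptation gain (assumed value)
        self.add_thresh = add_thresh
        self.prune_thresh = prune_thresh

    def firing(self, x):
        # Gaussian membership of the input to each rule (assumed rule shape)
        return np.array([np.exp(-np.linalg.norm(x - c) ** 2) for c in self.centers])

    def control(self, x):
        if not self.centers:
            return 0.0
        phi = self.firing(x)
        return float(np.dot(self.weights, phi) / (phi.sum() + 1e-9))

    def adapt(self, x, s, dt):
        """Sliding-surface-driven parameter update plus rule growth/pruning."""
        phi = self.firing(x) if self.centers else np.array([])
        if phi.size == 0 or phi.max() < self.add_thresh:
            # current state is poorly covered by existing rules -> grow a rule
            self.centers.append(np.array(x, dtype=float))
            self.weights.append(0.0)
            phi = self.firing(x)
        norm = phi / (phi.sum() + 1e-9)
        for i in range(len(self.weights)):
            # adaptation law driven by the SMC sliding surface s
            self.weights[i] += -self.gamma * s * norm[i] * dt
        # crude pruning step standing in for the bias/variance criterion
        keep = [i for i, w in enumerate(self.weights) if abs(w) > self.prune_thresh]
        if keep:
            self.centers = [self.centers[i] for i in keep]
            self.weights = [self.weights[i] for i in keep]

# Example on one tracking-error channel: s = e_dot + lam * e
lam, dt = 2.0, 0.01
ctrl = EvolvingFuzzyController()
e, e_dot = 0.3, -0.1
s = e_dot + lam * e
x = np.array([e, e_dot])
u_fuzzy = ctrl.control(x)   # neuro-fuzzy component of the composite control signal
ctrl.adapt(x, s, dt)
```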

    Immunity-Based Framework for Autonomous Flight in GPS-Challenged Environment

    In this research, the artificial immune system (AIS) paradigm is used to develop a conceptual framework for autonomous flight when vehicle position and velocity are not available from direct sources such as global navigation satellite systems or external landmarks and systems. The AIS is expected to provide corrections to velocity and position estimates that are based only on the outputs of onboard inertial measurement units (IMU). The AIS comprises sets of artificial memory cells that simulate the function of memory T- and B-cells in the biological immune system of vertebrates. The innate immune system uses information about invading antigens and the antibodies needed to counter them; this information is encoded and sorted by T- and B-cells. The immune system also has an adaptive component that can accelerate and intensify the immune response upon subsequent infection with the same antigen. The artificial memory cells attempt to mimic these characteristics for estimation error compensation and are constructed under normal conditions, when all sensor systems function accurately, including those providing vehicle position and velocity information. The artificial memory cells consist of two main components: a collection of instantaneous measurements of relevant vehicle features, representing the antigen, and a set of instantaneous estimation errors or correction features, representing the antibodies. The antigen characterizes the dynamics of the system and is assumed to be correlated with the required corrections of the position and velocity estimates, or antibodies. When the navigation source is unavailable, the currently measured vehicle features from the onboard sensors are matched against the AIS antigens, and the corresponding corrections are extracted and used to adjust the position and velocity estimation algorithm and to provide the corrected estimate as actual measurement feedback to the vehicle’s control system. The proposed framework is implemented and tested through simulation in two versions: with corrections applied to the output or to the input of the estimation scheme. For both approaches, the vehicle feature or antigen sets include increments of the body-axis components of acceleration and angular rate. The correction feature or antibody sets include vehicle position and velocity adjustments and vehicle acceleration adjustments, respectively. The impact of essential elements such as the path generation method, matching algorithm, feature set, and IMU grade on the performance of the proposed methodology was investigated. The findings demonstrated that, in all cases, the proposed methodology can significantly reduce the accumulation of dead-reckoning errors and can become a viable solution in situations where direct, accurate measurements and other sources of information are not available. The functionality of the proposed methodology and its promising outcomes were successfully illustrated using the West Virginia University unmanned aerial system simulation environment.
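
    To illustrate the memory-cell idea described above, the following minimal Python sketch pairs antigens (IMU-derived feature vectors) with antibodies (estimation corrections) and retrieves corrections by nearest-neighbour matching. The feature layout, the matching rule, and all numeric values are illustrative assumptions, not the matching algorithm studied in the thesis.

```python
import numpy as np

class ImmunityMemory:
    def __init__(self):
        self.antigens = []     # feature vectors recorded while GPS/GNSS was available
        self.antibodies = []   # corresponding estimation corrections

    def train(self, features, correction):
        """Store a memory cell under normal (fully sensed) conditions."""
        self.antigens.append(np.asarray(features, dtype=float))
        self.antibodies.append(np.asarray(correction, dtype=float))

    def correct(self, features):
        """Match current IMU features against stored antigens, return a correction."""
        if not self.antigens:
            return None
        A = np.stack(self.antigens)
        d = np.linalg.norm(A - np.asarray(features, dtype=float), axis=1)
        return self.antibodies[int(np.argmin(d))]

# Example: antigen = increments of body-axis accelerations and angular rates,
# antibody = velocity correction (all values made up for illustration).
ais = ImmunityMemory()
ais.train([0.02, -0.01, 0.00, 0.001, 0.002, -0.001], [0.05, -0.02, 0.00])
ais.train([0.10,  0.04, 0.01, 0.004, 0.000,  0.002], [0.15,  0.03, 0.01])

# During GPS-denied flight, adjust the dead-reckoned velocity estimate:
v_est = np.array([10.0, 0.5, -0.1])
dv = ais.correct([0.03, -0.01, 0.00, 0.001, 0.002, -0.001])
v_corrected = v_est + dv
```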

    Automatic Landing System for a Quadcopter Using a Sliding Mode Controller

    A quadcopter is a highly nonlinear system that is affected by unexpected disturbances, such as wind reflected off the ground during take-off or landing. A robust control strategy is therefore needed to improve quadcopter performance. In this study, the Sliding Mode Control (SMC) algorithm is used to solve the outdoor automatic landing problem in a stable manner. The quadcopter has six degrees of freedom (6-DoF) but only four independent inputs, which makes it impossible to control all six degrees of freedom directly and simultaneously. To handle this, a multilevel control structure with inner-loop and outer-loop controllers is proposed. The inner loop controls the rotational dynamics subsystem (3-DoF), while the outer loop controls the translational dynamics subsystem (3-DoF) and is designed in conjunction with the generation of attitude-angle set-points. Automatic landing also reduces the risk of accidents with a quadcopter. The SMC technique applied to automatic quadcopter landing yields errors of about ±0.05 radians in roll, ±0.03 radians in pitch, less than 0.3 radians in yaw, and about ±0.2 meters in translational motion along the z-axis.
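
    As a rough illustration of the two-loop structure described above, the sketch below shows an outer translational loop that converts position errors into attitude set-points and a first-order sliding-mode law for the inner loop. The gains, the tanh smoothing of the sign function, and the sign conventions are illustrative assumptions, not the controller tuned in the study.

```python
import numpy as np

def smc(error, error_dot, lam=2.0, k=1.5, eps=0.1):
    """First-order sliding-mode law: s = e_dot + lam * e, u = -k * sat(s)."""
    s = error_dot + lam * error
    return -k * np.tanh(s / eps)   # tanh approximates sign() to limit chattering

def outer_loop(pos_err_xy, vel_xy, yaw, kp=0.3, kd=0.2, max_tilt=0.3):
    """Translational loop: map x/y position errors to roll/pitch set-points (rad)."""
    ax = kp * pos_err_xy[0] - kd * vel_xy[0]   # desired horizontal accelerations
    ay = kp * pos_err_xy[1] - kd * vel_xy[1]
    # rotate into the body frame; sign conventions depend on the chosen axes
    pitch_sp = np.clip(ax * np.cos(yaw) + ay * np.sin(yaw), -max_tilt, max_tilt)
    roll_sp = np.clip(ax * np.sin(yaw) - ay * np.cos(yaw), -max_tilt, max_tilt)
    return roll_sp, pitch_sp

# The outer loop generates attitude set-points; the inner loop tracks them with SMC.
roll_sp, pitch_sp = outer_loop(pos_err_xy=(1.0, -0.5), vel_xy=(0.2, 0.0), yaw=0.0)
roll, roll_rate = 0.05, 0.01
u_roll = smc(error=roll - roll_sp, error_dot=roll_rate)
```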

    Autonomous Drone Landings on an Unmanned Marine Vehicle using Deep Reinforcement Learning

    This thesis describes the integration of an Unmanned Surface Vehicle (USV) and an Unmanned Aerial Vehicle (UAV, commonly known as a drone) into a single Multi-Agent System (MAS). In marine robotics, the advantage offered by a MAS consists of exploiting the key features of one robot to compensate for the shortcomings of the other. In this way, a USV can serve as the landing platform to alleviate the need for a UAV to be airborne for long periods of time, whilst the latter can increase the overall environmental awareness thanks to its ability to cover large portions of the surrounding environment with one or more onboard cameras. There are numerous potential applications in which this system can be used, such as deployment in search and rescue missions, water and coastal monitoring, and reconnaissance and force protection, to name but a few. The theory developed is of a general nature. The landing manoeuvre has been accomplished mainly by identifying, through artificial vision techniques, a fiducial marker placed on a flat surface serving as the landing platform. The raison d'etre for the thesis was to propose a new solution for autonomous landing that relies solely on onboard sensors and requires minimal or no communication between the vehicles. To this end, initial work solved the problem using only data from the cameras mounted on the in-flight drone. In situations where the tracking of the marker is interrupted, the current position of the USV is estimated and integrated into the control commands. The limitations of the classic control theory used in this approach suggested the need for a new solution that leverages the flexibility of intelligent methods, such as fuzzy logic or artificial neural networks. The recent achievements of deep reinforcement learning (DRL) techniques in end-to-end control, such as playing the Atari video-game suite, represented a fascinating yet challenging new way to frame and address the landing problem. Therefore, novel architectures were designed for approximating the action-value function of a Q-learning algorithm and used to map raw input observations to high-level navigation actions. In this way, the UAV learnt how to land from high altitude without any human supervision, using only low-resolution grey-scale images, with a high level of accuracy and robustness. Both approaches have been implemented on a simulated test-bed based on the Gazebo simulator and the model of the Parrot AR-Drone. The solution based on DRL was further verified experimentally using the Parrot Bebop 2 in a series of trials. The outcomes demonstrate that both of these innovative methods are feasible and practicable, not only in outdoor marine scenarios but also in indoor ones.
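
    To make the end-to-end DRL idea above concrete, here is a minimal PyTorch-based sketch of a DQN-style action-value network that maps stacked low-resolution grey-scale frames to discrete navigation actions, selected epsilon-greedily. The architecture, the 84x84 input size, and the action set are illustrative assumptions, not the novel architectures developed in the thesis.

```python
import random
import torch
import torch.nn as nn

# Hypothetical high-level navigation actions for the landing task.
ACTIONS = ["forward", "backward", "left", "right", "descend", "land"]

class QNetwork(nn.Module):
    def __init__(self, n_frames=4, n_actions=len(ACTIONS)):
        super().__init__()
        # Atari-style convolutional trunk over stacked grey-scale frames.
        self.conv = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),   # one Q-value per navigation action
        )

    def forward(self, frames):           # frames: (batch, n_frames, 84, 84)
        return self.head(self.conv(frames))

def select_action(q_net, frames, epsilon=0.1):
    """Epsilon-greedy policy over the high-level navigation actions."""
    if random.random() < epsilon:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q_net(frames).argmax(dim=1).item())

# Example: one stack of four 84x84 grey-scale frames from the downward camera.
q_net = QNetwork()
obs = torch.zeros(1, 4, 84, 84)
action = ACTIONS[select_action(q_net, obs)]
```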

    Path Planning and Control of UAV using Machine Learning and Deep Reinforcement Learning Techniques

    Uncrewed Aerial Vehicles (UAVs) are playing an increasingly significant role in modern life. In the past decades, many commercial and scientific communities around the world have been developing autonomous UAV techniques for a broad range of applications, such as forest fire monitoring, parcel delivery, disaster rescue, natural resource exploration, and surveillance. This brings a large number of opportunities and challenges for UAVs to improve their abilities in path planning, motion control, and fault-tolerant control (FTC). Meanwhile, owing to the powerful decision-making, adaptive learning, and pattern recognition capabilities of machine learning (ML) and deep reinforcement learning (DRL), the use of ML and DRL has developed rapidly and achieved major successes in a variety of applications. However, there is little research on ML and DRL in the field of motion control and real-time path planning of UAVs. This thesis focuses on the development of ML and DRL for the path planning, motion control, and FTC of UAVs. A number of contributions pertaining to state-space definition, reward-function design, and training-method improvement are made in this thesis, which improve the effectiveness and efficiency of applying DRL to UAV motion control problems. In addition to the control problems, this thesis also presents real-time path planning contributions, including a relative state-space definition and a human-pedestrian-inspired reward function, which provide a reliable and effective solution for real-time path planning in a complex environment.
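
    As a loose illustration of the two path-planning ingredients named above, the sketch below builds a relative (UAV-centred) state vector and a shaped reward that trades goal progress against obstacle clearance. The terms, weights, and thresholds are illustrative assumptions, not the thesis formulation.

```python
import numpy as np

def relative_state(uav_pos, uav_vel, goal_pos, nearest_obstacle_pos):
    """State expressed relative to the UAV: goal vector, own velocity, obstacle vector."""
    to_goal = np.asarray(goal_pos, dtype=float) - np.asarray(uav_pos, dtype=float)
    to_obst = np.asarray(nearest_obstacle_pos, dtype=float) - np.asarray(uav_pos, dtype=float)
    return np.concatenate([to_goal, np.asarray(uav_vel, dtype=float), to_obst])

def shaped_reward(prev_dist_to_goal, dist_to_goal, dist_to_obstacle,
                  reached=False, collided=False,
                  w_progress=1.0, w_clearance=0.5, safe_radius=2.0):
    """Reward progress toward the goal; penalise closing in on obstacles."""
    if collided:
        return -100.0
    if reached:
        return 100.0
    progress = w_progress * (prev_dist_to_goal - dist_to_goal)
    clearance_penalty = w_clearance * max(0.0, safe_radius - dist_to_obstacle)
    return progress - clearance_penalty

# Example step with made-up geometry:
s = relative_state(uav_pos=(0, 0, 10), uav_vel=(1, 0, 0),
                   goal_pos=(50, 20, 10), nearest_obstacle_pos=(5, 1, 10))
r = shaped_reward(prev_dist_to_goal=54.0, dist_to_goal=53.0, dist_to_obstacle=5.1)
```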

    UAV or Drones for Remote Sensing Applications in GPS/GNSS Enabled and GPS/GNSS Denied Environments

    The design of novel UAV systems and the use of UAV platforms integrated with robotic sensing and imaging techniques, as well as the development of processing workflows and the capacity of ultra-high temporal and spatial resolution data, have enabled a rapid uptake of UAVs and drones across several industries and application domains. This book provides a forum for high-quality peer-reviewed papers that broaden awareness and understanding of single- and multiple-UAV developments for remote sensing applications, and associated developments in sensor technology, data processing and communications, and UAV system design and sensing capabilities in GPS-enabled and, more broadly, Global Navigation Satellite System (GNSS)-enabled and GPS/GNSS-denied environments. Contributions include: UAV-based photogrammetry, laser scanning, multispectral imaging, hyperspectral imaging, and thermal imaging; UAV sensor applications; spatial ecology; pest detection; reef; forestry; volcanology; precision agriculture; wildlife species tracking; search and rescue; target tracking; atmosphere monitoring; chemical, biological, and natural disaster phenomena; fire prevention; flood prevention; volcanic monitoring; pollution monitoring; microclimates; and land use; wildlife and target detection and recognition from UAV imagery using deep learning and machine learning techniques; UAV-based change detection.