6 research outputs found

    UAV Maneuvering Target Tracking in Uncertain Environments based on Deep Reinforcement Learning and Meta-learning

    Get PDF
    This paper combines Deep Reinforcement Learning (DRL) with Meta-learning and proposes a novel approach, named Meta Twin Delayed Deep Deterministic policy gradient (Meta-TD3), to realize the control of an Unmanned Aerial Vehicle (UAV), allowing the UAV to quickly track a target whose motion is uncertain. This approach can be applied to a variety of scenarios, such as wildlife protection, emergency aid, and remote sensing. We use a multi-task experience replay buffer to provide data for multi-task learning of the DRL algorithm, and we combine it with Meta-learning to develop a multi-task reinforcement learning update method that ensures the generalization capability of the learned policy. Compared with the state-of-the-art algorithms Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic policy gradient (TD3), experimental results show that the Meta-TD3 algorithm achieves a substantial improvement in both convergence value and convergence rate. In a UAV target tracking problem, Meta-TD3 requires only a few training steps to enable the UAV to adapt quickly to a new target movement mode and maintain better tracking effectiveness.
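
    The abstract describes the method only in prose. As a rough illustration of the kind of multi-task meta-update it outlines, the Python sketch below applies a Reptile-style outer update to a TD3-like actor across several target-motion tasks. This is a minimal sketch, not the authors' implementation: the Actor architecture, the Task interface (sample_batch, actor_loss), and all hyperparameters are assumptions for illustration.

    import copy
    import torch
    import torch.nn as nn

    class Actor(nn.Module):
        """Deterministic policy: maps a UAV state to a continuous control action."""
        def __init__(self, state_dim=8, action_dim=2):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, action_dim), nn.Tanh(),
            )

        def forward(self, state):
            return self.net(state)

    def meta_update(meta_actor, tasks, inner_steps=10, inner_lr=1e-3, meta_lr=0.1):
        """Adapt a copy of the actor on each task, then move the meta-parameters
        toward the average of the adapted parameters (Reptile-style outer step)."""
        adapted = []
        for task in tasks:                          # one task per target movement mode
            actor = copy.deepcopy(meta_actor)
            opt = torch.optim.Adam(actor.parameters(), lr=inner_lr)
            for _ in range(inner_steps):
                batch = task.sample_batch()             # hypothetical replay-buffer call
                loss = task.actor_loss(actor, batch)    # hypothetical TD3 actor loss
                opt.zero_grad()
                loss.backward()
                opt.step()
            adapted.append([p.detach().clone() for p in actor.parameters()])
        # Outer step: theta <- theta + meta_lr * mean_k(theta_k' - theta)
        with torch.no_grad():
            for i, p in enumerate(meta_actor.parameters()):
                mean_adapted = torch.stack([a[i] for a in adapted]).mean(dim=0)
                p.add_(meta_lr * (mean_adapted - p))

    After each meta-update, the actor can be fine-tuned on a new motion pattern with a handful of inner steps, which is the fast-adaptation behavior the abstract claims.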

    Self-Evolving Integrated Vertical Heterogeneous Networks

    Full text link
    6G and beyond networks tend towards a fully intelligent and adaptive design in order to provide better operational agility in maintaining universal wireless access and supporting a wide range of services and use cases while dealing with network complexity efficiently. Such enhanced network agility will require developing a self-evolving capability in designing both the network architecture and resource management to intelligently utilize resources, reduce operational costs, and achieve the coveted quality of service (QoS). To enable this capability, considering an integrated vertical heterogeneous network (VHetNet) architecture appears inevitable due to its high inherent agility. Moreover, employing an intelligent framework is another crucial requirement for self-evolving networks to deal with real-time network optimization problems. Hence, in this work, to provide better insight into network architecture design in support of self-evolving networks, we highlight the merits of the integrated VHetNet architecture while proposing an intelligent framework for self-evolving integrated vertical heterogeneous networks (SEI-VHetNets). The impact of the challenges associated with the SEI-VHetNet architecture on network management is also studied considering a generalized network model. Furthermore, the current literature on network management of integrated VHetNets, along with recent advancements in artificial intelligence (AI)/machine learning (ML) solutions, is discussed. Accordingly, the core challenges of integrating AI/ML in SEI-VHetNets are identified. Finally, potential future research directions for advancing the autonomous and self-evolving capabilities of SEI-VHetNets are discussed.

    Modified Structured Domain Randomization in a Synthetic Environment for Learning Algorithms

    Get PDF
    Deep Reinforcement Learning (DRL) has the capability to solve many complex tasks in robotics, self-driving cars, smart grids, finance, healthcare, and intelligent autonomous systems. During training, DRL agents interact freely with the environment to arrive at an inference model. Under real-world conditions this training raises safety, cost, and time concerns. Training in synthetic environments helps overcome these difficulties; however, a synthetic environment only approximates real-world conditions, resulting in a ‘reality gap’. The synthetic training of agents has proven advantageous but requires methods to bridge this reality gap. This work addressed the problem through a methodology which supports agent learning. A framework incorporating a modifiable synthetic environment integrated with an unmodified DRL algorithm was used to train, test, and evaluate agents using a modified Structured Domain Randomization (SDR+) technique. It was hypothesized that applying environment domain randomization (DR) during the learning process would allow the agent to learn variability and adapt accordingly. Experiments using the SDR+ technique included naturalistic and physics-based DR while applying the concept of context-aware elements (CAE) to guide and speed up agent training. Drone racing served as the use case. The experimental framework workflow generated the following results. First, a baseline was established by training and validating an agent in a generic synthetic environment devoid of DR and CAE. The agent was then tested in environments with DR, which showed degradation of performance. This validated the reality gap phenomenon under synthetic conditions and established a metric for comparison. Second, an SDR+ agent was successfully trained and validated under various applications of DR and CAE. Ablation studies determined that most of the applied DR and CAE effects had equivalent effects on agent performance. Under comparison, the SDR+ agent’s performance exceeded that of the baseline agent in every test where single or combined DR effects were applied. These tests indicated that the SDR+ agent’s performance did improve in environments with applied DR of the same order as received during training. The last result came from testing the SDR+ agent’s inference model in a completely new synthetic environment with more extreme and additional DR effects applied. The SDR+ agent’s performance degraded to the point where it was inconclusive whether generalization had occurred in the form of learning to adapt to variations. If the agent’s navigational capabilities, the control/feedback from the DRL algorithm, and the use of visual sensing were improved, future work could exhibit indications of generalization using the SDR+ technique.
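
    To make the structured randomization idea concrete, the Python sketch below samples one randomized training scene per episode, narrowing task-critical ranges when context-aware elements are enabled. This is a minimal sketch under assumed parameters; the scene fields (lighting, fog, gate texture, gate jitter) and their ranges are hypothetical stand-ins, not the dissertation's actual configuration.

    import random
    from dataclasses import dataclass

    @dataclass
    class SceneConfig:
        light_intensity: float   # relative scene brightness
        fog_density: float       # 0 = clear, 1 = opaque
        gate_texture: str        # surface applied to the racing gates
        gate_jitter_m: float     # random displacement of each gate, in meters

    def sample_scene(context_aware: bool = True) -> SceneConfig:
        """Draw one randomized scene. With context-aware elements (CAE) enabled,
        parameters that directly affect the task (gate placement) are drawn from
        a narrower, task-relevant range, while background appearance varies freely."""
        return SceneConfig(
            light_intensity=random.uniform(0.3, 1.5),
            fog_density=random.uniform(0.0, 0.4),
            gate_texture=random.choice(["checker", "brick", "metal", "foliage"]),
            gate_jitter_m=random.uniform(0.0, 0.5 if context_aware else 2.0),
        )

    # Example: regenerate the scene at the start of each training episode.
    for episode in range(3):
        print(sample_scene())

    The design choice illustrated here is that CAE constrains randomization where it would otherwise destroy the learning signal, while leaving appearance-level DR broad enough to encourage robustness.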