
    Velocity field path-planning for single and multiple unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs) have seen rapid growth in utilisation for reconnaissance, mostly using single UAVs. However, future utilisation of UAVs for applications such as bistatic synthetic aperture radar and stereoscopic imaging will require multiple UAVs acting cooperatively to achieve mission goals. In addition, de-skilling the operation of UAVs for certain applications will require migrating path-planning functions from the ground to the UAV. This paper details a computationally efficient algorithm that enables path-planning for single UAVs and the forming and re-forming of UAV formations with active collision avoidance. The algorithm presented extends classical potential field methods, used in other domains, to the UAV path-planning problem. It is demonstrated that a range of tasks can be executed autonomously, allowing high-level tasking of single and multiple UAVs in formation, with the formation commanded as a single entity.

    Automatic Curriculum Learning For Deep RL: A Short Survey

    Automatic Curriculum Learning (ACL) has become a cornerstone of recent successes in Deep Reinforcement Learning (DRL). These methods shape the learning trajectories of agents by challenging them with tasks adapted to their capacities. In recent years, they have been used to improve sample efficiency and asymptotic performance, to organize exploration, to encourage generalization, and to solve sparse-reward problems, among others. The ambition of this work is twofold: 1) to present a compact and accessible introduction to the Automatic Curriculum Learning literature and 2) to draw a bigger picture of the current state of the art in ACL to encourage the cross-breeding of existing concepts and the emergence of new ideas. Comment: Accepted at IJCAI 2020
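One common ACL recipe the survey covers is to sample tasks in proportion to the agent's absolute learning progress, so training concentrates on tasks of intermediate difficulty. A minimal sketch of that idea, with hypothetical class and parameter names chosen for illustration:

```python
import random

class ProgressCurriculum:
    """Sample tasks proportionally to absolute learning progress
    (the recent change in mean reward on each task)."""

    def __init__(self, tasks, eps=0.1):
        self.tasks = tasks
        self.eps = eps  # exploration floor: sometimes sample uniformly
        # Per task, [previous, current] exponentially smoothed mean reward.
        self.history = {t: [0.0, 0.0] for t in tasks}

    def sample(self):
        # Absolute learning progress per task.
        lp = {t: abs(h[1] - h[0]) for t, h in self.history.items()}
        total = sum(lp.values())
        if total == 0 or random.random() < self.eps:
            return random.choice(self.tasks)
        r = random.random() * total
        for t in self.tasks:
            r -= lp[t]
            if r <= 0:
                return t
        return self.tasks[-1]

    def update(self, task, reward, alpha=0.3):
        # Shift current -> previous, then smooth in the new reward.
        old, new = self.history[task]
        self.history[task] = [new, new + alpha * (reward - new)]
```

In use, the outer training loop calls `sample()` to pick the next task and `update()` with the episode return; tasks whose performance is still moving (up or down) get sampled more often, while mastered or hopeless tasks fade out.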

    Technology assessment of advanced automation for space missions

    Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.

    Mapping Instructions and Visual Observations to Actions with Reinforcement Learning

    We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent's exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants. Comment: In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2017
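The contextual-bandit training setup described above can be sketched in miniature: the policy observes a context, samples a single action, receives an immediate (possibly shaped) reward, and applies a one-step policy-gradient update with no bootstrapping. The linear softmax policy, reward function, and dimensions below are toy assumptions, not the paper's neural model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, dim = 4, 8
W = np.zeros((n_actions, dim))          # linear softmax policy weights

def policy_probs(W, x):
    logits = W @ x
    z = np.exp(logits - logits.max())   # numerically stable softmax
    return z / z.sum()

def bandit_step(W, x, reward_fn, lr=0.1):
    """Contextual bandit: sample one action, observe an immediate
    shaped reward, apply a single REINFORCE-style update."""
    p = policy_probs(W, x)
    a = rng.choice(n_actions, p=p)
    r = reward_fn(x, a)
    grad = -np.outer(p, x)              # d log pi(a|x) / dW
    grad[a] += x
    return W + lr * r * grad

def shaped_reward(x, a):
    # Toy task: the "correct" action is the argmax of the first few
    # features; shaping gives partial credit to adjacent actions.
    target = int(np.argmax(x[:n_actions]))
    return 1.0 if a == target else (0.5 if abs(a - target) == 1 else 0.0)

for _ in range(3000):
    W = bandit_step(W, rng.normal(size=dim), shaped_reward)
```

The shaping term here plays the role the abstract assigns to auxiliary supervision: it densifies the otherwise sparse task reward so single-sample bandit updates still carry a useful learning signal.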