
    Three-dimensional in situ observations of compressive damage mechanisms in syntactic foam using X-ray microcomputed tomography

    Funding: Royal Society grant number RG140680; Lloyd's Register Foundation (GB); Oil and Gas Academy of Scotland. Open access via Springer Compact Agreement. Peer reviewed. Publisher PDF.

    Agent Modeling as Auxiliary Task for Deep Reinforcement Learning

    In this paper we explore how actor-critic methods in deep reinforcement learning, in particular Asynchronous Advantage Actor-Critic (A3C), can be extended with agent modeling. Inspired by recent works on representation learning and multiagent deep reinforcement learning, we propose two architectures to perform agent modeling: the first one based on parameter sharing, and the second one based on agent policy features. Both architectures aim to learn other agents' policies as auxiliary tasks, besides the standard actor (policy) and critic (values). We performed experiments in both cooperative and competitive domains. The former is a problem of coordinated multiagent object transportation and the latter is a two-player mini version of the Pommerman game. Our results show that the proposed architectures stabilize learning and outperform the standard A3C architecture when learning a best response in terms of expected rewards.
    Comment: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'19).
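    A minimal sketch of the auxiliary-head idea follows, assuming PyTorch; the class name, layer sizes, and loss weights are illustrative, not the paper's exact architecture. A shared encoder feeds the standard actor and critic heads plus a third head trained to predict the other agent's actions, so the auxiliary gradient shapes the shared representation.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class A3CWithAgentModeling(nn.Module):
            """Actor-critic network with an auxiliary opponent-policy head (hypothetical names)."""
            def __init__(self, obs_dim, n_actions, n_opp_actions, hidden=128):
                super().__init__()
                # shared encoder: gradients from all three heads flow into these
                # features, which is how the auxiliary task aids representation learning
                self.encoder = nn.Sequential(
                    nn.Linear(obs_dim, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                )
                self.actor = nn.Linear(hidden, n_actions)          # policy logits
                self.critic = nn.Linear(hidden, 1)                 # state value
                self.opponent = nn.Linear(hidden, n_opp_actions)   # auxiliary head

            def forward(self, obs):
                h = self.encoder(obs)
                return self.actor(h), self.critic(h), self.opponent(h)

        def a3c_am_loss(logits, value, opp_logits, actions, returns, opp_actions,
                        aux_weight=0.5):
            advantage = returns - value.squeeze(-1)
            log_pi = F.log_softmax(logits, dim=-1)
            policy_loss = -(log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
                            * advantage.detach()).mean()
            value_loss = advantage.pow(2).mean()
            # auxiliary task: cross-entropy against the opponent's observed actions,
            # i.e. learn the other agent's policy alongside the actor and critic
            aux_loss = F.cross_entropy(opp_logits, opp_actions)
            return policy_loss + 0.5 * value_loss + aux_weight * aux_loss

    The parameter-sharing variant named in the abstract would instead reuse one set of weights across agents; the extra-head sketch above is closer in spirit to the agent-policy-features idea.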

    The influence of pulsed laser powder bed fusion process parameters on Inconel 718 material properties

    Funding: This publication was made possible by the sponsorship and support of Lloyd's Register Foundation, United Kingdom. The work was enabled through, and undertaken at, the National Structural Integrity Research Centre (NSIRC), United Kingdom, a postgraduate engineering facility for industry-led research into structural integrity established and managed by TWI through a network of both national and international universities. Lloyd's Register Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.
    Data availability: The raw and processed data required to reproduce these findings cannot be shared at this time, as they also form part of an ongoing study.
    Peer reviewed. Postprint.

    Action Guidance with MCTS for Deep Reinforcement Learning

    Deep reinforcement learning has achieved great successes in recent years; however, one main challenge is sample inefficiency. In this paper, we focus on how to use action guidance by means of a non-expert demonstrator to improve sample efficiency in a domain with sparse, delayed, and possibly deceptive rewards: the recently proposed multi-agent benchmark of Pommerman. We propose a new framework where even a non-expert simulated demonstrator, e.g., planning algorithms such as Monte Carlo tree search with a small number of rollouts, can be integrated within asynchronous distributed deep reinforcement learning methods. Compared to a vanilla deep RL algorithm, our proposed methods both learn faster and converge to better policies on a two-player mini version of the Pommerman game.
    Comment: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'19). arXiv admin note: substantial text overlap with arXiv:1904.05759, arXiv:1812.0004
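    One rough way such guidance can enter training is as an extra imitation term in the worker loss, sketched below in PyTorch; the function names, the fixed demo_weight, and treating guidance as a plain cross-entropy term are assumptions for illustration, not the paper's exact formulation, and the distributed framework itself is not reproduced here.

        import torch
        import torch.nn.functional as F

        def guided_a3c_loss(logits, value, actions, returns, demo_actions,
                            demo_weight=0.1):
            """Usual A3C terms plus an imitation term toward demonstrator actions."""
            advantage = returns - value.squeeze(-1)
            log_pi = F.log_softmax(logits, dim=-1)
            policy_loss = -(log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
                            * advantage.detach()).mean()
            value_loss = advantage.pow(2).mean()
            # guidance: cross-entropy toward the action a small-budget MCTS
            # (the non-expert demonstrator) selected in the same states
            guidance_loss = F.cross_entropy(logits, demo_actions)
            return policy_loss + 0.5 * value_loss + demo_weight * guidance_loss

    In an asynchronous setup, presumably only a subset of workers would query the demonstrator, since even shallow MCTS rollouts are costly; that scheduling detail is omitted here.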

    Terminal Prediction as an Auxiliary Task for Deep Reinforcement Learning

    Deep reinforcement learning has achieved great successes in recent years, but there are still open challenges, such as convergence to locally optimal policies and sample inefficiency. In this paper, we contribute a novel self-supervised auxiliary task, Terminal Prediction (TP), which estimates temporal closeness to terminal states for episodic tasks. The intuition is to help representation learning by letting the agent predict how close it is to a terminal state while learning its control policy. Although TP could be integrated with multiple algorithms, this paper focuses on Asynchronous Advantage Actor-Critic (A3C), demonstrating the advantages of A3C-TP. Our extensive evaluation covers a set of Atari games, the BipedalWalker domain, and a mini version of the recently proposed multi-agent Pommerman game. Our results on the Atari games and the BipedalWalker domain suggest that A3C-TP outperforms standard A3C in most of the tested domains and performs comparably in the others. In Pommerman, our proposed method provides significant improvements both in learning efficiency and in converging to better policies against different opponents.
    Comment: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE'19). arXiv admin note: text overlap with arXiv:1812.0004
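    A minimal sketch of the TP target and loss follows, again assuming PyTorch; the linear t/T target and the MSE weighting are inferred from the stated intuition ("temporal closeness to terminal states") rather than copied from the paper, and all names are illustrative.

        import torch
        import torch.nn.functional as F

        def terminal_prediction_targets(episode_len):
            """Per-step targets rising from ~0 at the start to 1 at the terminal step."""
            t = torch.arange(1, episode_len + 1, dtype=torch.float32)
            return t / episode_len

        def a3c_tp_loss(logits, value, tp_pred, actions, returns, tp_targets,
                        tp_weight=0.5):
            advantage = returns - value.squeeze(-1)
            log_pi = F.log_softmax(logits, dim=-1)
            policy_loss = -(log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
                            * advantage.detach()).mean()
            value_loss = advantage.pow(2).mean()
            # auxiliary regression: predict temporal closeness to the terminal state
            tp_loss = F.mse_loss(tp_pred.squeeze(-1), tp_targets)
            return policy_loss + 0.5 * value_loss + tp_weight * tp_loss

    Since the episode length T is only known once the episode terminates, the targets would presumably be filled in retrospectively over the stored rollout before the loss is computed.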

    Representative volume element (RVE) based crystal plasticity study of void growth on phase boundary in titanium alloys

    The author thanks the University of Aberdeen for the award of an Elphinstone Scholarship, which covered the tuition fees of the author's PhD study. Peer reviewed. Postprint.