
    Knowledge Extracted from Recurrent Deep Belief Network for Real Time Deterministic Control

    Recently, the market for deep learning, covering not only software but also hardware, has been developing rapidly. Big data are collected through IoT devices, and industry will analyze them to improve manufacturing processes. Deep learning uses a hierarchical network architecture to represent the complicated features of input patterns. Although deep learning shows high capability in classification, prediction, and related tasks, it must typically be implemented on GPU devices, so we face a trade-off between the higher precision of deep learning and the higher cost of GPU hardware. We succeed in extracting knowledge from a trained deep network with high classification capability: the knowledge that enables faster inference with the pre-trained deep network is extracted as IF-THEN rules from the network's signal flow for given input data. Experimental results on benchmark time-series data sets showed the effectiveness of the proposed method with respect to computational speed.
    Comment: 6 pages, 10 figures. arXiv admin note: text overlap with arXiv:1807.0395
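
    The rule-extraction idea can be illustrated with a minimal sketch: discretise the inputs, record which class the trained network predicts for each discrete pattern, and answer later queries from that rule table, falling back to the full network for unseen patterns. The network shape, sign-based binarisation, and random data below are illustrative assumptions, not the paper's recurrent deep belief network or its actual rule format.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for a pre-trained classifier: one hidden sigmoid layer.
        # The weights are random placeholders, not a trained network.
        W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
        W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

        def predict(x):
            """Full network forward pass returning a class label."""
            h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
            return int(np.argmax(h @ W2 + b2))

        def extract_rules(inputs):
            """Build IF-THEN rules: IF the input, binarised by sign,
            matches this pattern THEN output the network's predicted class."""
            rules = {}
            for x in inputs:
                pattern = tuple((x > 0).astype(int))
                rules.setdefault(pattern, predict(x))
            return rules

        def rule_inference(x, rules):
            """Fast path: a dictionary lookup replaces the forward pass;
            fall back to the full network when no rule covers the pattern."""
            pattern = tuple((x > 0).astype(int))
            return rules[pattern] if pattern in rules else predict(x)

        data = rng.normal(size=(200, 4))   # placeholder inputs
        rules = extract_rules(data)
        print(len(rules), "rules,", rule_inference(rng.normal(size=4), rules))

    At inference time the dominant cost becomes a hash lookup rather than matrix multiplications, which is the kind of speed-up the abstract attributes to the extracted rules.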

    A Deep Hierarchical Approach to Lifelong Learning in Minecraft

    We propose a lifelong learning system that has the ability to reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledge base. Knowledge is transferred by learning reusable skills to solve tasks in Minecraft, a popular video game which is an unsolved and high-dimensional lifelong learning problem. These reusable skills, which we refer to as Deep Skill Networks, are then incorporated into our novel Hierarchical Deep Reinforcement Learning Network (H-DRLN) architecture using two techniques: (1) a deep skill array and (2) skill distillation, our novel variation of policy distillation (Rusu et al., 2015) for learning skills. Skill distillation enables the H-DRLN to efficiently retain knowledge and therefore scale in lifelong learning, by accumulating knowledge and encapsulating multiple reusable skills into a single distilled network. The H-DRLN exhibits superior performance and lower learning sample complexity compared to the regular Deep Q-Network (Mnih et al., 2015) in sub-domains of Minecraft.
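
    Skill distillation builds on policy distillation, in which a student network is trained to match the temperature-softened action distribution of a pre-trained teacher. The sketch below shows that core loss; the network sizes, temperature, and random placeholder batch are assumptions for illustration, not the H-DRLN's actual configuration.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        STATE_DIM, N_ACTIONS, TEMPERATURE = 16, 6, 0.01

        # Stand-ins for a pre-trained skill (teacher) and the distilled
        # network (student) that should absorb the skill's policy.
        teacher = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                nn.Linear(64, N_ACTIONS))
        student = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                                nn.Linear(32, N_ACTIONS))
        optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

        def distillation_loss(student_q, teacher_q, temperature):
            """KL divergence between temperature-softened teacher and
            student action distributions."""
            teacher_probs = F.softmax(teacher_q / temperature, dim=-1)
            student_log_probs = F.log_softmax(student_q / temperature, dim=-1)
            return F.kl_div(student_log_probs, teacher_probs,
                            reduction="batchmean")

        for step in range(200):
            states = torch.randn(64, STATE_DIM)   # placeholder replay batch
            with torch.no_grad():
                teacher_q = teacher(states)       # teacher Q-values to imitate
            loss = distillation_loss(student(states), teacher_q, TEMPERATURE)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    Repeating this over the teacher networks for several skills, with a single shared student, is the sense in which distillation lets one network encapsulate multiple reusable skills.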