Domain Adaptive Transfer Learning for Fault Diagnosis
Thanks to digitization of industrial assets in fleets, the ambitious goal of
transferring fault diagnosis models from one machine to another has raised
great interest. Solving these domain adaptive transfer learning tasks has the
potential to save large efforts on manually labeling data and modifying models
for new machines in the same fleet. Although data-driven methods have shown
great potential in fault diagnosis applications, their ability to generalize
to new machines and new working conditions is limited by their tendency to
overfit to the training set. One promising solution to this problem is domain
adaptation, which aims to improve model performance on the new target machine.
Inspired by its successful
implementation in computer vision, we introduced Domain-Adversarial Neural
Networks (DANN) to our context, along with two other popular methods existing
in previous fault diagnosis research. We then carefully justify the
applicability of these methods in realistic fault diagnosis settings, and offer
a unified experimental protocol for a fair comparison between domain adaptation
methods for fault diagnosis problems.
Comment: Presented at the 2019 Prognostics and System Health Management
Conference (PHM 2019) in Paris, France
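The core mechanism behind DANN is a gradient reversal layer: it acts as the identity in the forward pass, but flips (and scales) gradients in the backward pass so the feature extractor learns domain-invariant representations while an attached domain classifier tries to tell source from target apart. A minimal sketch of that layer, independent of any deep learning framework (the class name and `lam` parameter are illustrative, not from the paper):

```python
import numpy as np

class GradientReversal:
    """Identity in the forward pass; scales gradients by -lam on the way back.

    This is the trick DANN uses so the feature extractor is trained to
    *confuse* the domain classifier while the label classifier trains normally.
    """

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        # Forward pass: features flow through unchanged.
        return x

    def backward(self, grad_output):
        # Backward pass: gradient from the domain classifier is reversed,
        # pushing the feature extractor toward domain-invariant features.
        return -self.lam * grad_output


grl = GradientReversal(lam=0.5)
x = np.array([1.0, -2.0, 3.0])
y = grl.forward(x)                      # identical to x
g = grl.backward(np.ones_like(x))       # each gradient scaled by -0.5
```

In a full DANN model this layer sits between the shared feature extractor and the domain classifier; `lam` is typically annealed from 0 to 1 during training.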
Efficient Deep Reinforcement Learning via Adaptive Policy Transfer
Transfer Learning (TL) has shown great potential to accelerate Reinforcement
Learning (RL) by leveraging prior knowledge from past learned policies of
relevant tasks. Existing transfer approaches either explicitly compute the
similarity between tasks or select appropriate source policies to provide
guided exploration for the target task. However, methods that directly
optimize the target policy by selectively drawing on knowledge from
appropriate source policies, without explicitly measuring similarity, are
still missing. In
this paper, we propose a novel Policy Transfer Framework (PTF) to accelerate RL
by taking advantage of this idea. Our framework learns when and which source
policy is best to reuse for the target task, and when to terminate that reuse,
by modeling multi-policy transfer as an option learning problem. PTF can be
easily combined with existing deep RL approaches. Experimental results show it
significantly accelerates the learning process and surpasses state-of-the-art
policy transfer methods in terms of learning efficiency and final performance
in both discrete and continuous action spaces.
Comment: Accepted by IJCAI'2020
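The "when and which source policy to reuse, and when to terminate it" idea can be illustrated with a toy option-style selector: each source policy is treated as an option whose value on the target task is estimated from observed returns, and reuse is terminated once that estimate falls below a threshold. This is only a sketch under assumed mechanics (epsilon-greedy selection, running-average value estimates, a fixed termination threshold), not the authors' PTF algorithm:

```python
import random

class OptionPolicyReuse:
    """Toy sketch: treat each source policy as an option.

    values[i] is a running estimate of the return obtained while reusing
    source policy i on the target task; reuse of option i terminates once
    its estimated value drops below term_threshold. All names hypothetical.
    """

    def __init__(self, n_sources, eps=0.1, lr=0.1, term_threshold=0.0):
        self.values = [0.0] * n_sources
        self.eps = eps
        self.lr = lr
        self.term_threshold = term_threshold

    def select(self):
        # Epsilon-greedy choice of which source policy (option) to reuse.
        if random.random() < self.eps:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, i, episode_return):
        # Running average of the return observed while reusing option i.
        self.values[i] += self.lr * (episode_return - self.values[i])

    def should_terminate(self, i):
        # Stop reusing an option whose estimated value has fallen too low.
        return self.values[i] < self.term_threshold


selector = OptionPolicyReuse(n_sources=3, eps=0.0, lr=0.5)
selector.update(1, 10.0)   # reusing source policy 1 paid off
choice = selector.select() # greedy choice picks the best-valued option
selector.update(2, -4.0)   # reusing source policy 2 hurt performance
```

In the paper's formulation the termination condition is itself learned as part of the option framework, rather than fixed as a threshold here.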
Active Learning: Effects of Core Training Design Elements on Self-Regulatory Processes, Learning, and Adaptability
This research describes a comprehensive examination of the cognitive, motivational, and emotional processes underlying active learning approaches, their effects on learning and transfer, and the core training design elements (exploration, training frame, emotion-control) and individual differences (cognitive ability, trait goal orientation, trait anxiety) that shape these processes. Participants (N = 350) were trained to operate a complex computer-based simulation. Exploratory learning and error-encouragement framing had a positive effect on adaptive transfer performance and interacted with cognitive ability and dispositional goal orientation to influence trainees' metacognition and state goal orientation. Trainees who received the emotion-control strategy had lower levels of state anxiety. Implications for developing an integrated theory of active learning, learner-centered design, and research extensions are discussed.