Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
Neural networks are among the most accurate supervised learning methods in
use today, but their opacity makes them difficult to trust in critical
applications, especially when conditions in training differ from those in test.
Recent work on explanations for black-box models has produced tools (e.g. LIME)
to show the implicit rules behind predictions, which can help us identify when
models are right for the wrong reasons. However, these methods do not scale to
explaining entire datasets and cannot correct the problems they reveal. We
introduce a method for efficiently explaining and regularizing differentiable
models by examining and selectively penalizing their input gradients, which
provide a normal to the decision boundary. We apply these penalties both based
on expert annotation and in an unsupervised fashion that encourages diverse
models with qualitatively different decision boundaries for the same
classification problem. On multiple datasets, we show our approach generates
faithful explanations and models that generalize much better when conditions
differ between training and test.
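The input-gradient penalty described in this abstract can be sketched in a few lines. The following is a minimal PyTorch illustration, not the authors' reference implementation: the names rrr_loss, model, mask, and lam are illustrative, with mask assumed to be a binary annotation marking input features the model should not rely on.

    import torch
    import torch.nn.functional as F

    def rrr_loss(model, x, y, mask, lam=10.0):
        # Cross-entropy plus a penalty on input gradients in annotated regions,
        # in the spirit of the "right for the right reasons" objective.
        x = x.clone().requires_grad_(True)
        log_probs = F.log_softmax(model(x), dim=1)
        ce = F.nll_loss(log_probs, y)
        # Input gradients of the summed log-probabilities act as a normal to
        # the decision boundary in input space.
        grads, = torch.autograd.grad(log_probs.sum(), x, create_graph=True)
        # Penalize gradient mass on features the annotation marks as irrelevant.
        penalty = (mask * grads).pow(2).sum()
        return ce + lam * penalty

Because the gradient is taken with create_graph=True, the penalty itself is differentiable and can be minimized jointly with the cross-entropy term during training.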
Interpretation of partial discharge activity in the presence of harmonics
Recent work has identified that the circumstances of equipment operation can radically change condition monitoring data. This contribution investigates the significance of accounting for these operating circumstances in the diagnostic interpretation of such condition monitoring data. Electrical treeing partial discharge data have been subjected to a data mining investigation, providing a platform for classification of harmonic-influenced partial discharge patterns. The Total Harmonic Distortion (THD) index was varied to a maximum of 40%. The results show progressive development in the interpretation of condition monitoring data, improving the asset manager's holistic view of an asset's health.
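For reference, the 40% figure refers to the standard THD index (the definition below is the conventional one, not taken from the abstract):

    \mathrm{THD} = \frac{\sqrt{\sum_{h=2}^{H} V_h^2}}{V_1}

where V_1 is the RMS amplitude of the fundamental and V_h the RMS amplitude of the h-th harmonic, so a THD of 40% means the combined harmonic content is 0.4 times the fundamental.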
Hierarchical Temporal Representation in Linear Reservoir Computing
Recently, studies on deep Reservoir Computing (RC) highlighted the role of
layering in deep recurrent neural networks (RNNs). In this paper, the use of
linear recurrent units allows us to bring more evidence on the intrinsic
hierarchical temporal representation in deep RNNs through frequency analysis
applied to the state signals. The potentiality of our approach is assessed on
the class of Multiple Superimposed Oscillator tasks. Furthermore, our
investigation provides useful insights to open a discussion on the main aspects
that characterize the deep learning framework in the temporal domain.
Comment: This is a pre-print of the paper submitted to the 27th Italian Workshop on Neural Networks, WIRN 2017
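As a rough illustration of the frequency-analysis idea, the NumPy sketch below stacks linear reservoirs, drives the first layer with a superimposed-oscillator signal, and inspects the spectrum of each layer's state signals. The weight initialization, layer sizes, input frequencies, and the plain (non-leaky) linear update are assumptions made for the sketch, not the authors' setup.

    import numpy as np

    def deep_linear_reservoir(u, n_layers=3, n_units=50, rho=0.9, seed=0):
        # Stack of reservoirs with linear recurrent units; layer l receives the
        # previous layer's states as input (the first layer receives u).
        rng = np.random.default_rng(seed)
        states = []
        inp = u[:, None]  # shape (T, 1)
        for _ in range(n_layers):
            W_in = rng.uniform(-1, 1, (n_units, inp.shape[1]))
            W = rng.uniform(-1, 1, (n_units, n_units))
            W *= rho / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius
            x = np.zeros(n_units)
            layer_states = []
            for t in range(inp.shape[0]):
                x = W_in @ inp[t] + W @ x  # linear state update (no nonlinearity)
                layer_states.append(x)
            layer_states = np.array(layer_states)
            states.append(layer_states)
            inp = layer_states  # feed this layer's states to the next layer
        return states

    # Frequency analysis of the state signals, layer by layer.
    T = 2000
    t = np.arange(T)
    u = sum(np.sin(2 * np.pi * f * t / T) for f in (11, 23, 47))  # superimposed oscillators
    for l, X in enumerate(deep_linear_reservoir(u)):
        spectrum = np.abs(np.fft.rfft(X, axis=0)).mean(axis=1)
        print(f"layer {l}: dominant frequency bin = {spectrum[1:].argmax() + 1}")

Because the units are linear, the Fourier spectrum of each layer's states directly exposes which input frequencies dominate at each depth, which is the kind of evidence the frequency analysis in the abstract is after.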