TFN: An Interpretable Neural Network with Time-Frequency Transform Embedded for Intelligent Fault Diagnosis
Convolutional Neural Networks (CNNs) are widely used in fault diagnosis of
mechanical systems due to their powerful feature extraction and classification
capabilities. However, the CNN is a typical black-box model, and the mechanism
of its decision-making is not clear, which limits its application in fault
diagnosis scenarios that require high reliability. To tackle this issue, we
propose a novel interpretable neural network, termed the Time-Frequency Network
(TFN), in which a physically meaningful time-frequency transform (TFT) method
is embedded into the traditional convolutional layer as an adaptive
preprocessing layer. This preprocessing layer, named the time-frequency
convolutional (TFconv) layer, is constrained by a well-designed kernel function
to extract fault-related time-frequency information. It not only improves
diagnostic performance but also reveals the logical foundation of the CNN's
predictions in the frequency domain. Different TFT methods correspond to
different kernel functions of the TFconv layer. In this study, four typical TFT
methods are considered to formulate TFNs, and their effectiveness and
interpretability are demonstrated through three mechanical fault diagnosis
experiments. Experimental results also show that the proposed TFconv layer
generalizes easily to other CNNs of different depths. The code of TFN is
available at https://github.com/ChenQian0618/TFN.
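
To make the idea concrete, here is a minimal PyTorch sketch of a TFconv-style
layer (not the authors' released implementation, which lives at the GitHub link
above). The windowed-cosine kernel function, the `TFConv` name, and all
hyperparameters are illustrative assumptions; the paper itself derives kernel
functions from four different TFT methods.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TFConv(nn.Module):
    """Sketch of a TFconv-style layer: kernels come from a kernel function.

    Each output channel's kernel is a Hann-windowed cosine with a learnable
    center frequency, loosely mimicking a short-time Fourier basis, so the
    learned parameters stay physically interpretable.
    """

    def __init__(self, out_channels: int, kernel_size: int = 65):
        super().__init__()
        self.kernel_size = kernel_size
        # One learnable normalized center frequency per output channel
        # (assumption: the real TFN constrains kernels via the chosen TFT).
        self.freqs = nn.Parameter(torch.linspace(0.01, 0.5, out_channels))
        # Fixed Hann window, stored as a buffer so it follows .to(device).
        self.register_buffer("window", torch.hann_window(kernel_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time). Rebuild kernels from the current frequencies
        # on every call so gradients flow into the frequency parameters.
        t = torch.arange(self.kernel_size, device=x.device, dtype=x.dtype)
        kernels = torch.cos(2 * torch.pi * self.freqs[:, None] * t) * self.window
        kernels = kernels.unsqueeze(1)  # (out_channels, in_channels=1, k)
        return F.conv1d(x, kernels, padding=self.kernel_size // 2)


# Usage: place the layer in front of an ordinary 1-D CNN backbone.
layer = TFConv(out_channels=32)
signal = torch.randn(8, 1, 1024)   # batch of raw vibration signals
features = layer(signal)           # -> (8, 32, 1024) time-frequency features
```

Because the kernels are generated rather than free-form, each channel can be
read off after training as a center frequency, which is what makes the layer's
frequency-domain behavior inspectable.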
Learning spectro-temporal representations of complex sounds with parameterized neural networks
Deep learning models have become potential candidates for auditory
neuroscience research, thanks to their recent successes on a variety of
auditory tasks. Yet, these models often lack the interpretability needed to
fully understand the exact computations they perform. Here, we propose a
parameterized neural network layer that computes specific spectro-temporal
modulations based on Gabor kernels (Learnable STRFs) and that is fully
interpretable. We evaluated the predictive capabilities of this layer on
Speech Activity Detection, Speaker Verification, Urban Sound Classification,
and Zebra Finch Call Type Classification. We found that models based on
Learnable STRFs are on par with task-specific toplines on all tasks and obtain
the best performance for Speech Activity Detection. Because the layer is fully
interpretable, we used quantitative measures to describe the distribution of
the learned spectro-temporal modulations. The filters adapted to each task and
focused mostly on low temporal and spectral modulations. The analyses show
that the filters learned on human speech have spectro-temporal parameters
similar to those measured directly in the human auditory cortex. Finally, we
observed that the tasks organized themselves in a meaningful way: the human
vocalization tasks lay close to each other, while bird vocalizations lay far
from both the human vocalization and urban sound tasks.
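
As a rough illustration of what such a parameterized Gabor-STRF layer could
look like, here is a hedged PyTorch sketch; the `LearnableSTRF` name, the
parameterization, and the initial values are assumptions for illustration, not
the paper's released code. Each filter is a 2-D Gabor over a spectrogram whose
temporal and spectral modulation frequencies are learnable and therefore
directly readable after training.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableSTRF(nn.Module):
    """Sketch of a Gabor-parameterized spectro-temporal filter bank."""

    def __init__(self, n_filters: int, kernel_size: int = 9):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable temporal/spectral modulation frequencies (cycles per bin),
        # phase, and Gaussian envelope width; initializations are arbitrary.
        self.omega_t = nn.Parameter(torch.rand(n_filters) * 0.4)
        self.omega_f = nn.Parameter(torch.rand(n_filters) * 0.4)
        self.phase = nn.Parameter(torch.zeros(n_filters))
        self.log_sigma = nn.Parameter(torch.zeros(n_filters))

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, 1, freq, time). Build 2-D Gabor kernels on the fly.
        k = self.kernel_size
        ax = torch.arange(k, device=spec.device, dtype=spec.dtype) - k // 2
        f, t = torch.meshgrid(ax, ax, indexing="ij")   # (k, k) grids
        f, t = f[None], t[None]                        # broadcast over filters
        sigma = self.log_sigma.exp()[:, None, None]
        envelope = torch.exp(-(t**2 + f**2) / (2 * sigma**2))
        carrier = torch.cos(
            2 * torch.pi * (self.omega_t[:, None, None] * t
                            + self.omega_f[:, None, None] * f)
            + self.phase[:, None, None]
        )
        kernels = (envelope * carrier).unsqueeze(1)    # (n_filters, 1, k, k)
        return F.conv2d(spec, kernels, padding=k // 2)


# Usage: apply to spectrograms, then inspect the learned (omega_t, omega_f)
# pairs to see which spectro-temporal modulations each filter covers.
strf = LearnableSTRF(n_filters=16)
spec = torch.randn(4, 1, 64, 128)   # (batch, channel, mel bins, frames)
out = strf(spec)                     # -> (4, 16, 64, 128)
```

The design choice that enables the paper's analyses is visible here: because
every filter is fully described by a handful of named parameters, the learned
filter bank can be summarized quantitatively in modulation space rather than
inspected kernel by kernel.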