Feature Importance Explanations for Temporal Black-Box Models
Models in the supervised learning framework may capture rich and complex
representations over the features that are hard for humans to interpret.
Existing methods to explain such models are often specific to architectures and
data where the features do not have a time-varying component. In this work, we
propose TIME, a method to explain models that are inherently temporal in
nature. Our approach (i) uses a model-agnostic, permutation-based procedure to
analyze global feature importance, (ii) identifies the importance of salient
features with respect to their temporal ordering as well as localized windows
of influence, and (iii) uses hypothesis testing to provide statistical rigor.
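
As a rough illustration of the three ingredients above, the sketch below implements a generic permutation-based importance score for a temporal model, optionally restricted to a localized time window, together with a crude permutation-test p-value. All names (`permutation_importance`, `model_fn`, the toy data) and the scoring choice (mean squared error) are assumptions made for illustration only; this is not the TIME implementation described in the paper.

```python
import numpy as np

def permutation_importance(model_fn, X, y, feature, window=None,
                           n_repeats=50, seed=0):
    """Mean increase in MSE when `feature` is permuted across samples.

    X has shape (n_samples, n_timesteps, n_features). `window` is an
    optional (start, stop) pair restricting the permutation to a
    localized span of timesteps; None permutes the whole series.
    Returns the mean score drop and a crude one-sided p-value.
    """
    rng = np.random.default_rng(seed)
    t0, t1 = window if window is not None else (0, X.shape[1])
    baseline = np.mean((model_fn(X) - y) ** 2)
    drops = np.empty(n_repeats)
    for k in range(n_repeats):
        Xp = X.copy()
        perm = rng.permutation(len(X))
        # Shuffle the feature across samples (only inside the window),
        # breaking its association with the labels.
        Xp[:, t0:t1, feature] = X[perm][:, t0:t1, feature]
        drops[k] = np.mean((model_fn(Xp) - y) ** 2) - baseline
    # p-value: fraction of repeats in which permuting did not hurt the
    # model, i.e. evidence that the feature (window) is unimportant.
    p_value = (np.sum(drops <= 0) + 1) / (n_repeats + 1)
    return float(drops.mean()), float(p_value)

# Toy usage: a "model" that only reads feature 0 during timesteps 5-10.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20, 3))
model_fn = lambda Z: Z[:, 5:10, 0].mean(axis=1)
y = model_fn(X)
print(permutation_importance(model_fn, X, y, feature=0))   # large drop, small p
print(permutation_importance(model_fn, X, y, feature=1))   # ~0 drop, p ~ 1
print(permutation_importance(model_fn, X, y, feature=0, window=(5, 10)))
```

Repeating the permutation many times is what makes the hypothesis test possible: the repeats form an empirical null distribution for the score drop, so a window whose permutation never degrades the model can be flagged as unimportant with an explicit p-value rather than an unqualified score.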