Interpreting Differentiable Latent States for Healthcare Time-series Data
Machine learning makes it possible to extract clinical insights from large
temporal datasets; applications include identifying disease patterns and
predicting patient outcomes. However, limited interpretability poses
challenges for deploying advanced machine learning models in digital
healthcare. Because latent states are assumed to capture underlying patterns
in the data, understanding what they encode is crucial for interpreting such
models. In this paper, we present a concise algorithm that allows for i)
interpreting latent states using highly related input features; ii)
interpreting predictions using subsets of input features via latent states;
and iii) interpreting changes in latent states over time. The proposed
algorithm applies to any differentiable model. We demonstrate that this
approach identifies a daytime behavioral pattern that predicts nocturnal
behavior in a real-world healthcare dataset.
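The core requirement, differentiability, can be illustrated with a minimal sketch: if the latent states are a differentiable function of the inputs, the Jacobian of latent states with respect to input features ranks which features a given latent dimension is most sensitive to. The toy tanh encoder, the finite-difference Jacobian, and the feature ranking below are all illustrative assumptions, not the paper's actual model or algorithm.

```python
import numpy as np

# Hypothetical toy encoder: one tanh layer h = tanh(W x), standing in
# for any differentiable model that maps input features to latent states.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))   # 4 latent dimensions, 6 input features
x = rng.normal(size=6)        # one time step of input features

def latent(x):
    """Latent states for input x (toy stand-in for a learned encoder)."""
    return np.tanh(W @ x)

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian d f(x) / d x, shape (latent, features).

    A real implementation would use exact autodiff gradients; finite
    differences keep this sketch dependency-free.
    """
    base = f(x)
    J = np.zeros((base.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - base) / eps
    return J

J = jacobian(latent, x)
# Rank input features by how strongly they drive latent dimension 0:
# the highest-sensitivity features are candidate interpretations of it.
ranking = np.argsort(-np.abs(J[0]))
print(ranking)
```

Applying the same sensitivity computation at successive time steps would, in the same spirit, attribute changes in a latent state over time to changes in specific input features.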