In this study, we address the interpretability of complex, black-box machine
learning models applied to sequence data. We introduce the Model-Based Tree
Hidden Semi-Markov Model (MOB-HSMM), an inherently interpretable model designed
to detect high-mortality-risk events and to discover hidden patterns associated
with mortality risk in Intensive Care Units (ICUs). The model
leverages knowledge distilled from Deep Neural Networks (DNNs) to enhance
predictive performance while offering clear explanations. Our experimental
results show that the performance of Model-Based trees (MOB trees) improves
when a Long Short-Term Memory (LSTM) network is used to learn sequential
patterns, which are then transferred to the MOB trees. Integrating MOB trees
with the Hidden Semi-Markov Model (HSMM) in the MOB-HSMM enables the discovery
of potential, explainable sequences from the available information.
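
As a rough illustration of the distillation step described above (a sketch, not the paper's implementation), the following Python snippet fits an interpretable tree to soft risk scores produced by a sequence model. Here scikit-learn's DecisionTreeRegressor is assumed as a stand-in for a model-based (MOB) tree, the teacher LSTM's outputs are simulated with synthetic values, and the HSMM fitted over the tree's states is omitted.

```python
# Hedged sketch: distill soft "teacher" risk scores into an interpretable tree.
# DecisionTreeRegressor stands in for a MOB tree; teacher scores are simulated.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy ICU-style features: one row per patient time step (e.g. vitals, labs).
X = rng.normal(size=(500, 8))

# Simulated teacher soft labels: in the described pipeline these would be
# mortality-risk probabilities produced by a trained LSTM, not random values.
teacher_risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]
                                 + rng.normal(scale=0.1, size=500))))

# "Student" tree distilled from the teacher's soft predictions.
student = DecisionTreeRegressor(max_depth=3, min_samples_leaf=25)
student.fit(X, teacher_risk)

# The leaf index per time step gives a discrete state sequence over which a
# hidden semi-Markov model could subsequently be fitted (omitted here).
leaf_states = student.apply(X)
print("distinct tree states:", np.unique(leaf_states).size)
```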