Machine learning models often suffer a deterioration in performance when used to predict outcomes on data whose distribution differs from the data they were trained on. Such scenarios frequently arise in the real world, where the distribution of data changes gradually or shifts abruptly due to major events such as a pandemic. Machine learning research has produced many techniques designed to be resilient to such concept drift. However, there is no principled framework for identifying the drivers behind the drift in model performance. In this paper, we
propose a novel framework, DBShap, that uses Shapley values to identify the main contributors to the drift and to quantify their respective contributions. The
proposed framework not only quantifies the importance of individual features in
driving the drift but also includes the change in the underlying relation
between the input and the output as a possible driver. The explanations provided by DBShap can be used to understand the root cause of the drift and to make the model resilient to it.
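To make the idea concrete, the following is a minimal sketch, not the paper's exact formulation, of Shapley-based drift attribution in a toy setting. Each feature's marginal shift, plus the change in the input-output relation (the "concept" player), is treated as a player; the value of a coalition is the model's loss when only the drivers in that coalition have drifted. All names (`X_ref`, `X_new`, `concept_ref`, the hybrid-data value function) are illustrative assumptions, and the column-swapping trick assumes independent features.

```python
# Toy sketch of Shapley-based drift attribution (illustrative, not DBShap's
# exact formulation). Players: the marginal shift of each feature, plus the
# change in the input-output relation ("concept").
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

def concept_ref(X):
    return (X[:, 0] > 0).astype(int)             # old input-output relation

def concept_new(X):
    return (X[:, 0] + X[:, 2] > 0).astype(int)   # drifted relation

# Reference period: train the model here.
X_ref = rng.normal(size=(4000, 3))
model = LogisticRegression(max_iter=1000).fit(X_ref, concept_ref(X_ref))

# Drifted period: feature 1's marginal shifts AND the relation changes.
X_new = rng.normal(size=(4000, 3))
X_new[:, 1] += 2.0

CONCEPT = "concept"
players = (0, 1, 2, CONCEPT)

def value(coalition):
    """Model loss when only the drivers in `coalition` have drifted.

    Columns of drifted features are swapped into the reference data
    (valid here because the toy features are independent).
    """
    X = X_ref.copy()
    for j in coalition:
        if j != CONCEPT:
            X[:, j] = X_new[:, j]
    y = concept_new(X) if CONCEPT in coalition else concept_ref(X)
    return log_loss(y, model.predict_proba(X)[:, 1], labels=[0, 1])

def shapley(player):
    """Exact Shapley value of one drift driver (feasible for few players)."""
    others = [p for p in players if p != player]
    n, phi = len(players), 0.0
    for k in range(len(others) + 1):
        for S in combinations(others, k):
            w = factorial(k) * factorial(n - k - 1) / factorial(n)
            phi += w * (value(S + (player,)) - value(S))
    return phi

for p in players:
    print(p, round(shapley(p), 4))
```

By the efficiency axiom, the four attributions sum to the total increase in loss between the two periods; in this toy setup most of the drift is attributed to the concept player, since the shifted feature carries little weight in the trained model.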