Despite recent studies on understanding deep neural networks (DNNs), many
questions remain about how DNNs generate their predictions. In particular,
given similar predictions on different input samples, are the underlying
mechanisms generating those predictions the same? In this work, we propose
NeuCEPT, a method to locally discover critical neurons that play a major role
in the model's predictions and to identify the model's mechanisms in generating
those predictions. We first formulate the critical-neuron identification problem as
maximizing a sequence of mutual-information objectives and provide a
theoretical framework to efficiently solve for critical neurons while keeping
the precision under control. NeuCEPT next heuristically learns different
model's mechanisms in an unsupervised manner. Our experimental results show
that neurons identified by NeuCEPT not only have strong influence on the
model's predictions but also carry meaningful information about the model's
mechanisms.
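The core idea of ranking neurons by how informative their activations are about the model's predictions can be illustrated with a minimal sketch. This is not NeuCEPT's actual algorithm (which maximizes a sequence of mutual-information objectives with precision control); it is a simplified, hypothetical illustration that scores each neuron by an estimated mutual information between its discretized activation and the predicted label, then keeps the top-k. All function names and the toy data are assumptions for illustration.

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in (histogram) MI estimate, in nats, between two discrete arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))  # joint probability
            px, py = np.mean(x == xv), np.mean(y == yv)  # marginals
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def critical_neurons(activations, predictions, k=3):
    """Rank neurons (columns of `activations`) by MI with `predictions`.

    Hypothetical simplification of critical-neuron identification:
    discretize each neuron at its mean activation, score by MI, keep top-k.
    """
    binarized = (activations > activations.mean(axis=0)).astype(int)
    scores = np.array([mutual_information(binarized[:, j], predictions)
                       for j in range(activations.shape[1])])
    return np.argsort(scores)[::-1][:k], scores

# Toy data: neuron 0 closely tracks the predicted label; the rest are noise.
rng = np.random.default_rng(0)
preds = rng.integers(0, 2, size=200)
acts = rng.normal(size=(200, 5))
acts[:, 0] = preds + 0.01 * rng.normal(size=200)  # strongly informative neuron
top, scores = critical_neurons(acts, preds, k=2)
print(top[0])  # the informative neuron (index 0) ranks first
```

In practice one would use activations from an actual hidden layer and a better MI estimator; the greedy top-k ranking here also ignores redundancy between neurons, which a sequential objective would account for.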