Investigating the Compositional Structure of Deep Neural Networks
The current understanding of deep neural networks can only partially explain
how input structure, network parameters, and optimization algorithms jointly
contribute to the strong generalization power typically observed in many
real-world applications. To improve the comprehension and interpretability of
deep neural networks, we introduce a novel theoretical framework based on the
compositional structure of piecewise linear activation functions. By defining
a directed acyclic graph that represents the composition of activation
patterns through the network layers, it is possible to characterize the
instances of the input data with respect to both the predicted label and the
specific (linear) transformation used to make the prediction. Preliminary
tests on the MNIST dataset show that our method can group input instances
according to their similarity in the internal representation of the neural
network, providing an intuitive measure of input complexity.
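
To make the core idea concrete, the sketch below (not the authors' implementation; the toy network, weights, and grouping criterion are all illustrative assumptions) records the binary ReLU activation pattern of a small feed-forward network for each input and groups inputs that share a pattern. Inputs with identical patterns lie in the same linear region of the piecewise linear function the network computes, so they are transformed by the same linear map.

```python
# Minimal sketch: group inputs of a toy ReLU network by activation pattern.
# Everything here (architecture, random weights, pattern key) is a stand-in
# for illustration only; the paper's DAG construction is more elaborate.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU network with random (untrained) weights.
W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((4, 16)), rng.standard_normal(4)

def activation_pattern(x):
    """Concatenated binary on/off pattern of the ReLU units for input x."""
    h1 = W1 @ x + b1
    p1 = h1 > 0                         # active units in layer 1
    h2 = W2 @ np.maximum(h1, 0) + b2
    p2 = h2 > 0                         # active units in layer 2
    return np.concatenate([p1, p2])

# Inputs sharing a pattern are processed by the same linear transformation,
# i.e., they fall in the same linear region of the network.
X = rng.standard_normal((100, 8))
groups = {}
for i, x in enumerate(X):
    key = activation_pattern(x).tobytes()
    groups.setdefault(key, []).append(i)

print(f"{len(groups)} distinct linear regions among {len(X)} inputs")
```

In this simplified view, a similarity between two inputs' internal representations could be measured, for example, by the Hamming distance between their activation patterns; how exactly the paper aggregates patterns into its graph and its complexity measure is not specified in the abstract.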