Characterizing the Shape of Activation Space in Deep Neural Networks
The representations learned by deep neural networks are difficult to
interpret in part due to their large parameter space and the complexities
introduced by their multi-layer structure. We introduce a method for computing
persistent homology over the graphical activation structure of neural networks,
which provides access to the task-relevant substructures activated throughout
the network for a given input. This topological perspective provides unique
insights into the distributed representations encoded by neural networks in
terms of the shape of their activation structures. We demonstrate the value of
this approach by showing an alternative explanation for the existence of
adversarial examples. By studying the topology of network activations across
multiple architectures and datasets, we find that adversarial perturbations do
not add activations that target the semantic structure of the adversarial class
as previously hypothesized. Rather, adversarial examples are explainable as
alterations to the dominant activation structures induced by the original
image, suggesting the class representations learned by deep networks are
problematically sparse on the input space.
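The core computation behind this approach is persistent homology over a weighted graph of activations. The sketch below shows only the simplest piece of that machinery, and is not the authors' implementation: 0-dimensional persistence (component births and deaths) over an edge filtration in which strongly co-activated units connect first. The function name, edge format, and descending-weight filtration order are illustrative assumptions.

```python
def h0_persistence(num_vertices, weighted_edges):
    """0-dimensional persistent homology of a weighted activation graph.

    Hypothetical sketch: edges are (weight, u, v) triples, and the
    filtration adds edges in DESCENDING weight, so the most strongly
    co-activated units merge into components first.
    Returns (death_weights, surviving_components): each death weight is
    the filtration value at which a component bar dies; surviving
    components correspond to infinite bars.
    """
    # Union-find with path halving tracks connected components.
    parent = list(range(num_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    deaths = []
    for w, u, v in sorted(weighted_edges, key=lambda e: -e[0]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv          # merging two components kills one bar
            deaths.append(w)
    surviving = num_vertices - len(deaths)
    return deaths, surviving
```

Higher-dimensional persistence (needed to capture the "shape" claims in the abstract) requires a full boundary-matrix reduction, typically delegated to a library such as GUDHI or Ripser; the 0-dimensional case above is the part that fits in a few lines.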
Path homologies of deep feedforward networks
We provide a characterization of two types of directed homology for
fully-connected, feedforward neural network architectures. These exact
characterizations of the directed homology structure of a neural network
architecture are the first of their kind. We show that the directed flag
homology of deep networks reduces to computing the simplicial homology of the
underlying undirected graph, which is explicitly given by Euler characteristic
computations. We also show that the path homology of these networks is
non-trivial in higher dimensions and depends on the number and size of the
layers within the network. These results provide a foundation for investigating
homological differences between neural network architectures and their realized
structure as implied by their parameters.

Comment: To appear in the proceedings of IEEE ICMLA 2019
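The reduction described in this abstract can be made concrete for the underlying undirected graph of a fully-connected feedforward network: that graph is a chain of complete bipartite blocks, hence triangle-free, so its clique (flag) complex is the graph itself and the Euler characteristic is simply V - E. The sketch below computes the resulting Betti numbers from the layer widths; the function name is an illustrative assumption, and it assumes at least two layers of positive width (so the graph is connected).

```python
def mlp_graph_betti(layer_sizes):
    """Betti numbers (b0, b1) of the underlying undirected graph of a
    fully-connected feedforward network with the given layer widths.

    The graph is a chain of complete bipartite blocks, so it contains no
    triangles; its clique complex therefore has only vertices and edges,
    and chi = V - E = b0 - b1. Assumes >= 2 layers of positive width,
    which makes the graph connected (b0 = 1).
    """
    V = sum(layer_sizes)
    E = sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))
    b0 = 1            # consecutive complete bipartite blocks share vertices
    b1 = E - V + b0   # rearranged from chi = V - E = b0 - b1
    return b0, b1
```

For example, a 2-3-2 network has V = 7 vertices and E = 6 + 6 = 12 edges, giving b1 = 6 independent cycles; a 1-1 network is a single edge with no cycles. The path-homology result in the abstract is a genuinely different (directed) invariant and does not reduce to this computation.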