Characterizing the Shape of Activation Space in Deep Neural Networks
The representations learned by deep neural networks are difficult to
interpret in part due to their large parameter space and the complexities
introduced by their multi-layer structure. We introduce a method for computing
persistent homology over the graphical activation structure of neural networks,
which provides access to the task-relevant substructures activated throughout
the network for a given input. This topological perspective provides unique
insights into the distributed representations encoded by neural networks in
terms of the shape of their activation structures. We demonstrate the value of
this approach by showing an alternative explanation for the existence of
adversarial examples. By studying the topology of network activations across
multiple architectures and datasets, we find that adversarial perturbations do
not add activations that target the semantic structure of the adversarial class
as previously hypothesized. Rather, adversarial examples are explainable as
alterations to the dominant activation structures induced by the original
image, suggesting the class representations learned by deep networks are
problematically sparse on the input space.
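The abstract does not spell out the computation, but the core object is a filtration over the network's activation graph. As a loose illustration, the sketch below computes zero-dimensional persistent homology (connected-component births and deaths) over a weighted bipartite graph whose edge strengths are hypothetical per-edge contributions |W[j,i] * a[i]|; the graph construction, the contribution score, and every name here are assumptions for illustration, not the authors' definitions.

```python
import numpy as np

def h0_persistence(num_nodes, edges):
    """Zero-dimensional persistence of a graph filtration via union-find.

    edges: iterable of (filtration_value, u, v). Every vertex is born at
    filtration 0; when an edge merges two components, one bar dies at
    that edge's filtration value.
    """
    parent = list(range(num_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    deaths = []
    for value, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv          # components merge: one bar dies here
            deaths.append(value)
    infinite_bars = num_nodes - len(deaths)  # components that never merge
    return deaths, infinite_bars

# Hypothetical activation graph for one dense layer with 4 inputs and
# 3 outputs: edge strength is the magnitude of the weighted contribution
# |W[j, i] * a[i]|. Filtering by *decreasing* strength (strongest
# substructures enter first) is done by negating the strength.
rng = np.random.default_rng(0)
a = rng.standard_normal(4)           # input activations
W = rng.standard_normal((3, 4))      # layer weights
edges = [(-abs(W[j, i] * a[i]), i, 4 + j)  # inputs are nodes 0-3, outputs 4-6
         for j in range(3) for i in range(4)]
deaths, infinite_bars = h0_persistence(7, edges)
```

Long bars in the resulting diagram would correspond to activation substructures that persist across many strength thresholds, the kind of dominant structure the abstract argues adversarial perturbations alter.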
Compositional Distributional Semantics with Long Short Term Memory
We propose an extension of the recursive neural network that makes use
of a variant of the long short-term memory architecture. The extension allows
information low in parse trees to be stored in a memory register (the
'memory cell') and used much later, higher up in the parse tree. This provides a
solution to the vanishing gradient problem and allows the network to capture
long range dependencies. Experimental results show that our composition
outperformed the traditional neural-network composition on the Stanford
Sentiment Treebank.
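As a hedged sketch of the kind of composition function described, not the authors' exact formulation, here is a minimal binary Tree-LSTM cell in PyTorch: the memory cell c carries information stored low in the parse tree upward, gated by a separate forget gate per child. The gate layout and dimension handling are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    """Composes two child (h, c) pairs into a parent (h, c) pair."""

    def __init__(self, dim):
        super().__init__()
        # One affine map over the concatenated child hidden states,
        # producing five gates: input, output, update, and one forget
        # gate per child.
        self.gates = nn.Linear(2 * dim, 5 * dim)

    def forward(self, left, right):
        h_l, c_l = left
        h_r, c_r = right
        i, o, u, f_l, f_r = self.gates(
            torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
        i, o = torch.sigmoid(i), torch.sigmoid(o)
        f_l, f_r = torch.sigmoid(f_l), torch.sigmoid(f_r)
        # The memory cell lets information from low in the tree pass
        # largely unchanged to nodes much higher up, which is what eases
        # the vanishing gradient problem.
        c = i * torch.tanh(u) + f_l * c_l + f_r * c_r
        h = o * torch.tanh(c)
        return h, c

# Composing a tiny parse tree ((w1 w2) w3) bottom-up, with random leaf
# states standing in for embedded words.
dim = 8
cell = BinaryTreeLSTMCell(dim)

def leaf():
    return (torch.randn(dim), torch.zeros(dim))

h12 = cell(leaf(), leaf())           # compose w1 and w2
h_root, c_root = cell(h12, leaf())   # compose the result with w3
```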