We compute how small input perturbations affect the output of deep neural
networks, exploring an analogy between deep networks and dynamical systems,
where the growth or decay of local perturbations is characterised by
finite-time Lyapunov exponents. We show that the maximal exponent forms
geometrical structures in input space, akin to coherent structures in dynamical
systems. Ridges of large positive exponents divide input space into different
regions that the network associates with different classes. These ridges
visualise the geometry that deep networks construct in input space, shedding
light on the fundamental mechanisms underlying their learning capabilities.
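
For concreteness, a minimal sketch of the quantity involved, with notation assumed here rather than taken from the paper: writing the trained network as a map $\mathbf{F}$ with $L$ layers (layer index playing the role of time) from input $\mathbf{x}$ to output, the maximal finite-time Lyapunov exponent can be expressed through the input-output Jacobian as
\[
  \lambda_1(\mathbf{x}) \;=\; \frac{1}{L}\,\ln \sigma_{\max}\!\bigl(\mathbf{J}(\mathbf{x})\bigr),
  \qquad
  \mathbf{J}(\mathbf{x}) \;=\; \frac{\partial \mathbf{F}(\mathbf{x})}{\partial \mathbf{x}},
\]
where $\sigma_{\max}$ denotes the largest singular value, so that a small perturbation $\delta\mathbf{x}$ along the most unstable direction grows (or decays) roughly as $e^{L\lambda_1}\,|\delta\mathbf{x}|$ after $L$ layers. Large positive $\lambda_1$ then marks the ridges in input space described above.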