How do neural networks extract patterns from pixels? Feature visualizations
attempt to answer this important question by visualizing highly activating
patterns through optimization. Today, visualization methods form the foundation
of our knowledge about the internal workings of neural networks, as a type of
mechanistic interpretability. Here we ask: How reliable are feature
visualizations? We start our investigation by developing network circuits that
trick feature visualizations into showing arbitrary patterns that are
completely disconnected from normal network behavior on natural input. We then
provide evidence for a similar phenomenon occurring in standard, unmanipulated
networks: feature visualizations are processed very differently from standard
input, casting doubt on their ability to "explain" how neural networks process
natural images. We underpin this empirical finding with theory proving that the
set of functions that can be reliably understood by feature visualization is
extremely small and does not include general black-box neural networks.
Therefore, a promising way forward could be the development of networks that
enforce certain structures in order to ensure more reliable feature
visualizations.
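
For readers unfamiliar with the technique, the following is a minimal sketch of
what "visualizing highly activating patterns through optimization" (activation
maximization) can look like in practice. It is an illustrative example, not the
procedure used in this work: the model, layer, channel, and hyperparameters are
arbitrary placeholders, and practical visualization methods add regularization
and image transformations that are omitted here.

```python
# Minimal activation-maximization sketch (illustrative only).
# Assumes a pretrained torchvision ResNet-18; layer, channel, and
# hyperparameters are arbitrary choices for demonstration.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the input image is optimized

# Capture activations of an intermediate layer via a forward hook.
activations = {}
def hook(module, inp, out):
    activations["feat"] = out
model.layer3.register_forward_hook(hook)

# Start from random noise and optimize the input so that one channel's
# mean activation becomes as large as possible.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([x], lr=0.05)
channel = 42  # arbitrary unit to visualize

for step in range(256):
    optimizer.zero_grad()
    model(x)
    loss = -activations["feat"][0, channel].mean()  # maximize activation
    loss.backward()
    optimizer.step()

visualization = x.detach()  # the "highly activating pattern" for that unit
```

The resulting synthetic image is what the abstract refers to as a feature
visualization; the paper's central question is how faithfully such optimized
inputs reflect the network's behavior on natural images.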