Scale Alone Does not Improve Mechanistic Interpretability in Vision Models
In light of the recent widespread adoption of AI systems, understanding the
internal information processing of neural networks has become increasingly
critical. Most recently, machine vision has seen remarkable progress by scaling
neural networks to unprecedented levels in dataset and model size. Here we ask
whether this extraordinary increase in scale has also benefited the field
of mechanistic interpretability. In other words, has our understanding of the
inner workings of scaled neural networks improved as well? We use a
psychophysical paradigm to quantify one form of mechanistic interpretability
for a diverse suite of nine models and find no scaling effect for
interpretability - neither for model nor dataset size. Specifically, none of
the investigated state-of-the-art models are easier to interpret than the
GoogLeNet model from almost a decade ago. Latest-generation vision models
appear even less interpretable than older architectures, hinting at a
regression rather than improvement, with modern models sacrificing
interpretability for accuracy. These results highlight the need for models
explicitly designed to be mechanistically interpretable and the need for more
helpful interpretability methods to increase our understanding of networks at
an atomic level. We release a dataset containing more than 130,000 human
responses from our psychophysical evaluation of 767 units across nine models.
This dataset facilitates research on automated instead of human-based
interpretability evaluations, which can ultimately be leveraged to directly
optimize the mechanistic interpretability of models.
Comment: Spotlight at NeurIPS 2023. The first two authors contributed equally. Code available at https://brendel-group.github.io/imi
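The released human-response data lends itself to simple automated analyses. Below is a hypothetical sketch, not the authors' pipeline, of turning per-trial human responses into a per-model interpretability score; the file name and the columns "model", "unit", and "correct" are assumptions about the data layout, not the published schema.

```python
# Hypothetical sketch (assumed schema, not the released file format):
# aggregate per-trial human responses into a per-model interpretability score.
import pandas as pd

# Assumed columns: "model", "unit", "correct" (1 if the participant made the
# correct psychophysical choice for that unit, 0 otherwise).
responses = pd.read_csv("human_responses.csv")  # placeholder path

# Fraction of correct choices per unit, then averaged over a model's units.
unit_accuracy = responses.groupby(["model", "unit"])["correct"].mean()
model_score = unit_accuracy.groupby(level="model").mean()

print(model_score.sort_values(ascending=False))
```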
An Interventional Perspective on Identifiability in Gaussian LTI Systems with Independent Component Analysis
We investigate the relationship between system identification and
intervention design in dynamical systems. While previous research demonstrated
how identifiable representation learning methods, such as Independent Component
Analysis (ICA), can reveal cause-effect relationships, it relied on a passive
perspective without considering how to collect data. Our work shows that in
Gaussian Linear Time-Invariant (LTI) systems, the system parameters can be
identified by introducing diverse intervention signals in a multi-environment
setting. By harnessing appropriate diversity assumptions motivated by the ICA
literature, we connect experiment design and representational
identifiability in dynamical systems. We corroborate our findings on synthetic
and (simulated) physical data. Additionally, we show that Hidden Markov Models,
in general, and (Gaussian) LTI systems, in particular, fulfil a generalization
of the Causal de Finetti theorem with continuous parameters.
Comment: CLeaR 2024 camera-ready. Code available at https://github.com/rpatrik96/lti-ic
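As a rough illustration of the multi-environment idea (not the paper's method or its identifiability argument), the sketch below simulates a Gaussian LTI system x_{t+1} = A x_t + B u_t + w_t under a differently scaled intervention signal in each environment and recovers (A, B) by pooled least squares; all dimensions and noise scales are arbitrary choices.

```python
# Minimal sketch: estimate the parameters of a Gaussian LTI system
# x_{t+1} = A x_t + B u_t + w_t from several environments, each driven by a
# differently scaled intervention signal u_t. Illustrative only; see the
# repository linked above for the paper's actual code.
import numpy as np

rng = np.random.default_rng(0)
d_x, d_u, T, n_envs = 3, 2, 500, 4

A = rng.normal(size=(d_x, d_x))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))   # keep the dynamics stable
B = rng.normal(size=(d_x, d_u))

feats, targets = [], []
for env in range(n_envs):
    U = rng.normal(scale=1.0 + env, size=(T, d_u))  # per-environment intervention
    x = np.zeros(d_x)
    for t in range(T):
        x_next = A @ x + B @ U[t] + 0.05 * rng.normal(size=d_x)
        feats.append(np.concatenate([x, U[t]]))
        targets.append(x_next)
        x = x_next

F, Y = np.asarray(feats), np.asarray(targets)
theta, *_ = np.linalg.lstsq(F, Y, rcond=None)       # solve Y = F @ theta
A_hat, B_hat = theta[:d_x].T, theta[d_x:].T

print("max |A_hat - A|:", np.abs(A_hat - A).max())
print("max |B_hat - B|:", np.abs(B_hat - B).max())
```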
Covariant boost and structure functions of baryons in Gross-Neveu models
Baryons in the large N limit of two-dimensional Gross-Neveu models are
reconsidered. The time-dependent Dirac-Hartree-Fock approach is used to boost a
baryon to any inertial frame and shown to yield the covariant energy-momentum
relation. Momentum distributions are computed exactly in arbitrary frames and
used to interpolate between the rest frame and the infinite momentum frame,
where they are related to structure functions. Effects from the Dirac sea
depend sensitively on the occupation fraction of the valence level and the bare
fermion mass and do not vanish at infinite momentum. In the case of the kink
baryon, they even lead to divergent quark and antiquark structure functions at
x=0.
Comment: 13 pages, 12 figures; v2: minor correction
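For orientation, the two relations the abstract alludes to can be written in generic parton-model notation (baryon mass M_B, baryon momentum P, single-particle momentum distribution \rho_P); this is standard notation for illustration, not the paper's specific expressions.

```latex
% Covariant dispersion relation recovered from the boosted Hartree-Fock solution:
\begin{equation}
  E(P) = \sqrt{M_B^2 + P^2}.
\end{equation}
% Schematic identification of the infinite-momentum-frame momentum distribution
% with a structure function, in terms of the momentum fraction x = p/P:
\begin{equation}
  f(x) = \lim_{P \to \infty} P\, \rho_P(xP), \qquad 0 \le x \le 1.
\end{equation}
```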