In-context learning (ICL) has shown impressive results in few-shot learning
tasks, yet its underlying mechanism is still not fully understood. A recent
line of work suggests that ICL performs gradient descent (GD)-based
optimization implicitly. While appealing, much of the research focuses on
simplified settings, where the parameters of a shallow model are optimized. In
this work, we revisit evidence for ICL-GD correspondence on realistic NLP tasks
and models. We find gaps in evaluation, both in terms of problematic metrics
and insufficient baselines. We show that, surprisingly, even untrained models
achieve comparable ICL-GD similarity scores despite not exhibiting ICL. Next,
we explore a major discrepancy between ICL and GD in how information flows
through the model, which we term Layer Causality. We propose a simple GD-based
optimization procedure that respects layer causality, and show it improves
similarity scores significantly.

Comment: Accepted to NAACL 2024 main conference.
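To make the layer-causality idea concrete, below is a minimal, hypothetical sketch on a toy MLP contrasting vanilla GD (a single backward pass updates all layers at once, so each layer's update depends on gradients flowing through the layers above it) with a layer-ordered variant that updates layers bottom-to-top, recomputing the forward pass between updates. The toy model, data, learning rate, and update schedule are illustrative assumptions, not the paper's exact procedure.

```python
# Minimal, hypothetical sketch of a layer-ordered GD step vs. vanilla GD.
# This is NOT the paper's exact algorithm; it only illustrates updating layers
# bottom-to-top so each layer's step is taken after the layers below it.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data standing in for a small "demonstration" set.
x = torch.randn(32, 16)
y = torch.randn(32, 1)

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
loss_fn = nn.MSELoss()
lr = 1e-2


def vanilla_gd_step(model):
    # One backward pass updates all layers simultaneously, so each layer's
    # update depends on gradients backpropagated through the layers above it.
    loss = loss_fn(model(x), y)
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad


def layer_ordered_gd_step(model):
    # Illustrative layer-ordered variant: update one layer at a time, from the
    # bottom up, recomputing the forward pass after each update so a layer's
    # step is computed only after the layers below it have been modified.
    layers = [m for m in model if isinstance(m, nn.Linear)]
    for layer in layers:  # bottom-to-top order
        loss = loss_fn(model(x), y)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in layer.parameters():
                p -= lr * p.grad


vanilla_gd_step(model)
layer_ordered_gd_step(model)
```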