Many neural nets appear to represent data as linear combinations of "feature
vectors." Algorithms for discovering these vectors have seen impressive recent
success. However, we argue that this success is incomplete without an
understanding of relational composition: how (or whether) neural nets combine
feature vectors to represent more complicated relationships. To facilitate
research in this area, this paper offers a guided tour of various relational
mechanisms that have been proposed, along with a preliminary analysis of how such
mechanisms might affect the search for interpretable features. We end with a
series of promising areas for empirical research, which may help determine how
neural networks represent structured data.