A significant number of neural architectures for reading comprehension have
recently been developed and evaluated on large cloze-style datasets. We present
experiments supporting the emergence of "predication structure" in the hidden
state vectors of these readers. More specifically, we provide evidence that the
hidden state vectors represent atomic formulas Φ[c] where Φ is a
semantic property (predicate) and c is a constant symbol identifying an entity.

Comment: Accepted for Repl4NLP: 2nd Workshop on Representation Learning for NLP
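To make the claim concrete, here is a minimal illustrative sketch, not the paper's method: it assumes the predication structure takes an additive form, i.e. the vector for an atomic formula Φ[c] is approximately the sum of a predicate vector and an entity-identifier vector, and shows that the entity c can then be recovered by an inner product against the entity embeddings. All names, embeddings, and the additive composition itself are hypothetical illustrations.

```python
# Illustrative sketch only: assumes an additive composition h ~= e_phi + e_c,
# which is one possible reading of the abstract's claim, not necessarily the
# paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Hypothetical embeddings: a few "predicate" vectors and entity-identifier vectors.
predicates = {name: rng.normal(size=dim) for name in ["capital_of", "born_in"]}
entities = {name: rng.normal(size=dim) for name in ["@entity1", "@entity2", "@entity3"]}

def atomic_formula(phi: str, c: str) -> np.ndarray:
    """Compose a vector for the atomic formula phi[c] under the additive assumption."""
    return predicates[phi] + entities[c]

def recover_entity(h: np.ndarray) -> str:
    """Return the entity whose embedding has the largest inner product with h."""
    return max(entities, key=lambda c: float(h @ entities[c]))

h = atomic_formula("capital_of", "@entity2")
print(recover_entity(h))  # "@entity2" for random, roughly orthogonal embeddings
```

With random high-dimensional embeddings the predicate and entity components are nearly orthogonal, so the inner product with the correct entity dominates; this is only meant to illustrate why a vector of the form Φ[c] can carry both a semantic property and an entity identity at once.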