Cognitive psychologists often use the term fluid intelligence to
describe the ability of humans to solve novel tasks without any prior training.
In contrast to humans, deep neural networks can perform cognitive tasks only
after extensive (pre-)training with a large number of relevant examples.
Motivated by fluid intelligence research in the cognitive sciences, we built a
benchmark task, which we call sequence consistency evaluation (SCE), that can be
used to address this gap. Solving the SCE task requires the ability to extract
simple rules from sequences, a basic computation that, in humans, is required
for solving various intelligence tests. We tested untrained (naive)
deep learning models on the SCE task. Specifically, we tested two networks that
can learn latent relations: Relation Networks (RN) and Contrastive Predictive
Coding (CPC). We found that the latter, which imposes a causal structure on the
latent relations, performs better. We then show that naive few-shot learning of
sequences can be successfully used for anomaly detection in two different
tasks, one visual and one auditory, without any prior training.
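
To make the CPC-style consistency scoring concrete, below is a minimal sketch in PyTorch. It is an illustration under stated assumptions, not the authors' implementation: the module names (FrameEncoder, CPCScorer), layer sizes, and the use of a GRU as the autoregressive context model are all hypothetical. The core idea it shows is the one in the abstract: an untrained (naive) encoder maps each frame to a latent, a causal context model summarizes the sequence so far, and a bilinear map scores how consistent a candidate continuation is with the predicted next latent.

```python
# Minimal sketch of naive CPC-style sequence consistency scoring.
# All names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Maps each frame to a latent vector z_t (hypothetical architecture)."""
    def __init__(self, in_dim=64, latent_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, latent_dim))

    def forward(self, x):      # x: (T, in_dim)
        return self.net(x)     # (T, latent_dim)

class CPCScorer(nn.Module):
    """CPC head: a GRU summarizes z_1..z_T into a causal context c_T, and a
    bilinear map predicts the next latent; a higher score means the candidate
    frame is more consistent with the sequence."""
    def __init__(self, latent_dim=32, context_dim=32):
        super().__init__()
        self.encoder = FrameEncoder(latent_dim=latent_dim)
        self.gru = nn.GRU(latent_dim, context_dim, batch_first=True)
        self.W = nn.Linear(context_dim, latent_dim, bias=False)

    def score(self, context_frames, candidate_frame):
        z = self.encoder(context_frames)         # (T, latent_dim)
        _, c = self.gru(z.unsqueeze(0))          # final hidden state: (1, 1, context_dim)
        z_hat = self.W(c.squeeze(0))             # predicted next latent: (1, latent_dim)
        z_cand = self.encoder(candidate_frame.unsqueeze(0))
        return (z_hat * z_cand).sum()            # bilinear consistency score (logit)

# Usage: with an *untrained* model, pick the candidate continuation whose
# latent best matches the prediction from the observed sequence.
torch.manual_seed(0)
model = CPCScorer()                              # naive: no (pre-)training
seq = torch.randn(5, 64)                         # five observed frames
candidates = [torch.randn(64) for _ in range(4)]
scores = [model.score(seq, cand).item() for cand in candidates]
print("most consistent candidate:", max(range(4), key=lambda i: scores[i]))
```

The same scoring rule supports the anomaly-detection reading of the abstract: a frame whose score falls well below the scores of the other frames in a sequence is flagged as inconsistent with the extracted rule.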