Deep learning uses an increasing amount of computation and data to solve very
specific problems. In stark contrast, human minds solve a wide range of
problems using a fixed amount of computation and limited experience. One
ability that seems crucial to this kind of general intelligence is
meta-reasoning, i.e., our ability to reason about reasoning. To make deep
learning do more from less, we propose the differentiable logical meta
interpreter (DLMI). The key idea is to realize a meta-interpreter using
differentiable forward-chaining reasoning in first-order logic. This directly
allows DLMI to reason and even learn about its own operations. This is
different from performing object-level deep reasoning and learning, which
refers in some way to entities external to the system. In contrast, DLMI is
able to reflect or introspect, i.e., to shift from meta-reasoning to
object-level reasoning and vice versa. Among many other experimental
evaluations, we illustrate this behavior using the novel task of "repairing
Kandinsky patterns," i.e., editing the objects in an image so that the image
agrees with a given logical concept.
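To give a flavor of the differentiable forward-chaining reasoning mentioned above, here is a minimal sketch. It is not the DLMI implementation: all names and rules are illustrative assumptions. Ground atoms carry soft truth values in [0, 1] instead of booleans; a rule fires via product (soft AND) and updates its head via max (an idempotent soft OR), so each inference step is a differentiable operation.

```python
# Toy differentiable forward chaining over a tiny ancestor program.
# Atoms, facts, and rules are hypothetical examples, not from the paper.

# Soft truth values for ground atoms (0 = false, 1 = true).
v = {
    "parent(a,b)": 0.9,
    "parent(b,c)": 0.8,
    "ancestor(a,b)": 0.0,
    "ancestor(b,c)": 0.0,
    "ancestor(a,c)": 0.0,
}

def step(v):
    """One forward-chaining step: apply every ground rule once."""
    new = dict(v)
    # Rule: ancestor(X,Y) :- parent(X,Y).
    for p, a in [("parent(a,b)", "ancestor(a,b)"),
                 ("parent(b,c)", "ancestor(b,c)")]:
        new[a] = max(new[a], v[p])          # soft OR via max
    # Rule: ancestor(X,Z) :- ancestor(X,Y), parent(Y,Z).
    body = new["ancestor(a,b)"] * v["parent(b,c)"]  # soft AND via product
    new["ancestor(a,c)"] = max(new["ancestor(a,c)"], body)
    return new

# Two steps reach a fixpoint for this tiny program.
for _ in range(2):
    v = step(v)
print(round(v["ancestor(a,c)"], 3))
```

Because every operation is a product or a max, the final truth value of `ancestor(a,c)` is differentiable with respect to the input fact values, which is the property that lets gradient-based learning flow through logical inference.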