Neural Diagrammatic Reasoning
Diagrams have been shown to be effective tools for humans to represent and reason about
complex concepts. They have been widely used to represent concepts in science teaching, to
communicate workflow in industries and to measure human fluid intelligence. Mechanised
reasoning systems typically encode diagrams into symbolic representations that can be
easily processed with rule-based expert systems. This relies on human experts to define the
framework of diagram-to-symbol mapping and the set of rules to reason with the symbols.
This means the reasoning systems cannot be easily adapted to other diagrams without
a new set of human-defined representation mappings and reasoning rules. Moreover, such
systems are not able to cope with diagram inputs as raw and possibly noisy images. The
need for human input and the lack of robustness to noise significantly limit the applications
of mechanised diagrammatic reasoning systems.
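The brittleness described above can be illustrated with a toy sketch (all names and rules here are hypothetical, for illustration only; real mechanised reasoners are far richer): a human-defined diagram-to-symbol mapping feeding a hand-written rule table, where every new diagram type or relation requires a human to extend both.

```python
# Toy sketch of a mechanised diagrammatic reasoner (hypothetical, for illustration):
# diagrams are pre-encoded as symbolic set relations, and reasoning is a
# hand-written rule table over those symbols.

# Human-defined diagram-to-symbol mapping: each Euler diagram of two circles
# is encoded as a named relation between two sets.
DIAGRAM_ENCODING = {
    "circle B inside circle A": ("subset", "B", "A"),
    "circle C inside circle B": ("subset", "C", "B"),
    "circles C and B disjoint": ("disjoint", "C", "B"),
}

# Human-defined reasoning rules (a tiny fragment of syllogistic):
# returns the conclusion relation, or None when no hand-written rule applies.
def combine(r1, r2):
    kind1, x, y = r1
    kind2, y2, z = r2
    if y != y2:                      # rules only chain through a shared middle term
        return None
    if kind1 == "subset" and kind2 == "subset":
        return ("subset", x, z)      # All x are y; All y are z => All x are z (Barbara)
    if kind1 == "subset" and kind2 == "disjoint":
        return ("disjoint", x, z)    # All x are y; No y are z => No x are z (Celarent)
    return None                      # any other pattern: a human must add a new rule

# "All C are B; All B are A" therefore "All C are A"
p1 = DIAGRAM_ENCODING["circle C inside circle B"]
p2 = DIAGRAM_ENCODING["circle B inside circle A"]
print(combine(p1, p2))   # ('subset', 'C', 'A')
```

Note how a diagram pattern outside the encoded vocabulary, or a premise pair outside the rule table, simply yields no answer: exactly the adaptability limit the paragraph above describes.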
A key research question then arises: can we develop human-like reasoning systems that
learn to reason robustly without predefined reasoning rules? To answer this question, I
propose Neural Diagrammatic Reasoning, a new family of diagrammatic reasoning
systems which does not have the drawbacks of mechanised reasoning systems. The new
systems are based on deep neural networks, a recently popular machine learning method
that has achieved human-level performance on a range of perception tasks such as object
detection, speech recognition and natural language processing. The proposed systems are
able to learn both diagram to symbol mapping and implicit reasoning rules only from data,
with no prior human input about symbols and rules in the reasoning tasks. Specifically, I
developed EulerNet, a novel neural network model that solves Euler diagram syllogism
tasks with 99.5% accuracy. Experiments show that EulerNet learns useful representations
of the diagrams and tasks, and is robust to noise and deformation in the input data. I
also developed MXGNet, a novel multiplex graph neural architecture that solves Raven
Progressive Matrices (RPM) tasks. MXGNet achieves state-of-the-art accuracies on two
popular RPM datasets. In addition, I developed Discrete-AIR, an unsupervised learning
architecture that learns semi-symbolic representations of diagrams without any labels.
Lastly, I designed a novel inductive bias module that can be readily used in today's deep
neural networks to improve their generalisation capability on relational reasoning tasks.

EPSRC Studentship and Cambridge Trust Scholarship
Strategy analysis of non-consequence inference with Euler diagrams
How can Euler diagrams support non-consequence inferences? Although an inference to non-consequence, in which people are asked to judge whether no valid conclusion can be drawn from the given premises (e.g., All B are A; No C are B), is one of the two sides of logical inference, it has received remarkably little attention in research on human diagrammatic reasoning; how diagrams are really manipulated for such inferences remains unclear. We hypothesized that people naturally make these inferences by enumerating possible diagrams, based on the logical notion of self-consistency, in which every (simple) Euler diagram is true (satisfiable) in a set-theoretical interpretation. The work is divided into three parts, each exploring a particular condition or scenario. In condition 1, we asked participants to directly manipulate diagrams with size-fixed circles as they solved syllogistic tasks, with the result that more reasoners used the enumeration strategy. In condition 2, another type of size-fixed diagram was used. The diagram layout change interfered with accurate task performance and with the use of the enumeration strategy; however, the enumeration strategy was still dominant among those who could correctly perform the tasks. In condition 3, we used size-scalable diagrams (with the same default size as in condition 2), which reduced the interfering effect of diagram layout and enhanced participants' selection of the enumeration strategy. These results provide evidence that non-consequence inferences can be achieved by diagram enumeration, exploiting the self-consistency of Euler diagrams. An alternative strategy based on counter-example construction with Euler diagrams, as well as effects of diagram layout on inferential processes, are also discussed.
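The enumeration idea can be sketched computationally (a minimal model-enumeration analogue of diagram enumeration, not the authors' experimental materials; the three-element universe is an assumption that happens to suffice for the refutations here): list every assignment of sets satisfying the premises "All B are A; No C are B", then check whether any standard syllogistic conclusion relating A and C holds in all of them.

```python
from itertools import product

# A small finite universe; each possible "diagram" corresponds to an
# assignment of the sets A, B, C over this universe.
UNIVERSE = range(3)
SUBSETS = [frozenset(x for x in UNIVERSE if mask >> x & 1) for mask in range(8)]

def all_are(x, y):  return x <= y        # "All X are Y"
def no_are(x, y):   return not (x & y)   # "No X are Y"
def some_are(x, y): return bool(x & y)   # "Some X are Y"
def some_not(x, y): return bool(x - y)   # "Some X are not Y"

# Enumerate every assignment satisfying the premises "All B are A; No C are B"
# (the computational analogue of enumerating the possible Euler diagrams).
models = [(a, b, c) for a, b, c in product(SUBSETS, repeat=3)
          if all_are(b, a) and no_are(c, b)]

# A conclusion is valid only if it holds in *every* premise-satisfying model.
conclusions = {f"{q.__name__}(A, C)": all(q(a, c) for a, b, c in models)
               for q in (all_are, no_are, some_are, some_not)}
print(conclusions)   # each candidate fails in some model: no valid conclusion
```

Finding a premise-satisfying model in which a candidate conclusion is false is exactly a counter-example; that every candidate has one is what licenses the "no valid conclusion" judgement for this premise pair.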