Neural Diagrammatic Reasoning
Diagrams have been shown to be effective tools for humans to represent and reason about
complex concepts. They have been widely used to represent concepts in science teaching, to
communicate workflow in industries and to measure human fluid intelligence. Mechanised
reasoning systems typically encode diagrams into symbolic representations that can be
easily processed with rule-based expert systems. This relies on human experts to define the
framework of diagram-to-symbol mapping and the set of rules to reason with the symbols.
This means the reasoning systems cannot be easily adapted to other diagrams without
a new set of human-defined representation mappings and reasoning rules. Moreover, such
systems cannot cope with diagram inputs given as raw, possibly noisy images. The
need for human input and the lack of robustness to noise significantly limit the applications
of mechanised diagrammatic reasoning systems.
A key research question then arises: can we develop human-like reasoning systems that
learn to reason robustly without predefined reasoning rules? To answer this question, I
propose Neural Diagrammatic Reasoning, a new family of diagrammatic reasoning
systems which does not have the drawbacks of mechanised reasoning systems. The new
systems are based on deep neural networks, a recently popular machine learning method
that achieved human-level performance on a range of perception tasks such as object
detection, speech recognition and natural language processing. The proposed systems are
able to learn both diagram to symbol mapping and implicit reasoning rules only from data,
with no prior human input about symbols and rules in the reasoning tasks. Specifically, I
developed EulerNet, a novel neural network model that solves Euler diagram syllogism
tasks with 99.5% accuracy. Experiments show that EulerNet learns useful representations
of the diagrams and tasks, and is robust to noise and deformation in the input data. I
also developed MXGNet, a novel multiplex graph neural architecture that solves Raven
Progressive Matrices (RPM) tasks. MXGNet achieves state-of-the-art accuracies on two
popular RPM datasets. In addition, I developed Discrete-AIR, an unsupervised learning
architecture that learns semi-symbolic representations of diagrams without any labels.
Lastly, I designed a novel inductive bias module that can be readily used in today's deep
neural networks to improve their generalisation capability on relational reasoning tasks.
EPSRC Studentship and Cambridge Trust Scholarship
Backprop as Functor: A compositional perspective on supervised learning
A supervised learning algorithm searches over a set of functions A → B
parametrised by a space P to find the best approximation to some ideal
function f : A → B. It does this by taking examples (a, f(a)) ∈ A × B, and updating the parameter according to some rule. We define a
category where these update rules may be composed, and show that gradient
descent---with respect to a fixed step size and an error function satisfying a
certain property---defines a monoidal functor from a category of parametrised
functions to this category of update rules. This provides a structural
perspective on backpropagation, as well as a broad generalisation of neural
networks.
Comment: 13 pages + 4 page appendix
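The composition of update rules described in this abstract can be sketched concretely: a "learner" bundles a parameter with implement/update/request maps, and sequential composition threads the second learner's requested input back as the first learner's training target. The scalar gradient-descent instance below is a hypothetical illustration under those assumptions (the names Learner, compose, and scale_learner are mine, not the paper's), not the paper's categorical construction.

```python
from dataclasses import dataclass
from typing import Any, Callable

# A "learner" in the informal sense of the abstract: a parameter together
# with implement / update / request maps.
@dataclass
class Learner:
    param: Any
    implement: Callable[[Any, float], float]       # (p, a) -> b
    update: Callable[[Any, float, float], Any]     # (p, a, b_target) -> p'
    request: Callable[[Any, float, float], float]  # (p, a, b_target) -> a'

def compose(l2: Learner, l1: Learner) -> Learner:
    """Sequential composition l2 after l1, with parameter space P1 x P2."""
    def implement(pq, a):
        p, q = pq
        return l2.implement(q, l1.implement(p, a))
    def update(pq, a, c):
        p, q = pq
        b = l1.implement(p, a)
        # l2 'requests' the intermediate value l1 should train towards
        b_req = l2.request(q, b, c)
        return (l1.update(p, a, b_req), l2.update(q, b, c))
    def request(pq, a, c):
        p, q = pq
        b = l1.implement(p, a)
        return l1.request(p, a, l2.request(q, b, c))
    return Learner((l1.param, l2.param), implement, update, request)

# Gradient descent on squared error for the scalar map a -> p * a,
# with fixed step size eps (an illustrative concrete instance).
def scale_learner(p0: float, eps: float) -> Learner:
    imp = lambda p, a: p * a
    upd = lambda p, a, b: p - eps * 2 * (p * a - b) * a  # step down dE/dp
    req = lambda p, a, b: a - eps * 2 * (p * a - b) * p  # step down dE/da
    return Learner(p0, imp, upd, req)

l = compose(scale_learner(0.5, 0.05), scale_learner(0.5, 0.05))
p = l.param
for _ in range(200):
    p = l.update(p, 2.0, 8.0)  # train the composite to send 2 -> 8
print(l.implement(p, 2.0))
```

Training only ever calls the composite's update map; the request map is what lets the outer learner's error information reach the inner learner, which is the compositional structure the paper formalises.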
How Researchers Use Diagrams in Communicating Neural Network Systems
Neural networks are a prevalent and effective machine learning component, and
their application is leading to significant scientific progress in many
domains. As the field of neural network systems is fast growing, it is
important to understand how advances are communicated. Diagrams are key to
this, appearing in almost all papers describing novel systems. This paper
reports on a study into the use of neural network system diagrams, through
interviews, card sorting, and qualitative feedback structured around
ecologically-derived examples. We find high diversity of usage, perception and
preference in both creation and interpretation of diagrams, examining this in
the context of existing design, information visualisation, and user experience
guidelines. Considering the interview data alongside existing guidance, we
propose guidelines aiming to improve the way in which neural network system
diagrams are constructed.
Comment: 19 pages, 6 tables, 3 figures
Dagstuhl News January - December 2000
"Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic.
Extrapolatable Relational Reasoning With Comparators in Low-Dimensional Manifolds
While modern deep neural architectures generalise well when test data is sampled from the same distribution as training data, they fail badly when the test data distribution differs from the training distribution even along a few dimensions. This lack of out-of-distribution (o.o.d.) generalisation is increasingly manifested when the tasks become more abstract and complex, such as in relational reasoning. In this paper we propose a neuroscience-inspired inductive bias module that can be readily amalgamated with current neural network architectures to improve o.o.d. generalisation performance on relational reasoning tasks. This module learns to project high-dimensional object representations to low-dimensional manifolds for more efficient and generalisable relational comparisons. We show that neural nets with this inductive bias achieve considerably better o.o.d. generalisation performance for a range of relational reasoning tasks. We finally analyse the proposed inductive bias module to understand the importance of lower dimension projection, and propose an augmentation to the algorithmic alignment theory to better measure algorithmic alignment with generalisation.
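The core mechanism described here can be sketched in a few lines: rather than comparing two high-dimensional object embeddings directly, both are passed through a shared low-dimensional projection and compared there. The sketch below is a minimal illustration of that idea with a fixed random projection (the names LowDimComparator, project, and compare are mine, and in the paper the projection would be learned end-to-end, not random).

```python
import numpy as np

rng = np.random.default_rng(0)

class LowDimComparator:
    """Compare object embeddings in a shared low-dimensional space."""
    def __init__(self, in_dim: int, low_dim: int):
        # Shared projection matrix; random here purely for illustration.
        self.W = rng.normal(scale=1.0 / np.sqrt(in_dim),
                            size=(in_dim, low_dim))

    def project(self, x: np.ndarray) -> np.ndarray:
        # Map a high-dimensional embedding onto the low-dim manifold.
        return x @ self.W

    def compare(self, x: np.ndarray, y: np.ndarray) -> np.ndarray:
        # The relational comparison happens in the low-dim space,
        # so downstream layers see a k-dim vector, not a d-dim one.
        return self.project(x) - self.project(y)

d, k = 128, 4
comparator = LowDimComparator(d, k)
a, b = rng.normal(size=d), rng.normal(size=d)
rel = comparator.compare(a, b)
print(rel.shape)
```

Because the comparison output lives in a k-dimensional space with k much smaller than d, the layers consuming it have far fewer directions along which to overfit to the training distribution, which is the intuition behind the claimed o.o.d. benefit.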
Categorical Vector Space Semantics for Lambek Calculus with a Relevant Modality
We develop a categorical compositional distributional semantics for Lambek
Calculus with a Relevant Modality !L*, which has a limited edition of the
contraction and permutation rules. The categorical part of the semantics is a
monoidal biclosed category with a coalgebra modality, very similar to the
structure of a Differential Category. We instantiate this category to finite
dimensional vector spaces and linear maps via "quantisation" functors and work
with three concrete interpretations of the coalgebra modality. We apply the
model to construct categorical and concrete semantic interpretations for the
motivating example of !L*: the derivation of a phrase with a parasitic gap. The
effectiveness of the concrete interpretations is evaluated via a
disambiguation task, on an extension of a sentence disambiguation dataset to
parasitic gap phrases, using BERT, Word2Vec, and FastText vectors and
relational tensors.