Extrapolatable Relational Reasoning With Comparators in Low-Dimensional Manifolds
While modern deep neural architectures generalise well when test data is sampled from the same distribution as training data, they fail badly when the test distribution differs from the training distribution, even along only a few dimensions. This lack of out-of-distribution generalisation becomes increasingly pronounced as tasks grow more abstract and complex, as in relational reasoning. In this paper we propose a neuroscience-inspired inductive-bias module that can be readily combined with current neural network architectures to improve out-of-distribution (o.o.d.) generalisation on relational reasoning tasks. The module learns to project high-dimensional object representations onto low-dimensional manifolds, enabling more efficient and generalisable relational comparisons. We show that neural networks with this inductive bias achieve considerably better o.o.d. generalisation across a range of relational reasoning tasks. Finally, we analyse the proposed module to understand the importance of low-dimensional projection, and propose an augmentation to algorithmic alignment theory to better measure how algorithmic alignment relates to generalisation.
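The core mechanism of the abstract, comparing objects only after projecting them onto a learned low-dimensional manifold, lends itself to a compact illustration. The following is a minimal sketch under stated assumptions, not the authors' published implementation: the module name LowDimComparator, the pairwise-difference comparator, the mean aggregation and the two-dimensional manifold are all choices made for illustration.

```python
import torch
import torch.nn as nn

class LowDimComparator(nn.Module):
    """Illustrative sketch: compare objects in a learned low-dim manifold.

    Hypothetical module, not the paper's reference implementation.
    """

    def __init__(self, obj_dim: int, manifold_dim: int = 2, hidden: int = 64):
        super().__init__()
        # Learned projection from high-dimensional object features to a
        # low-dimensional space where relations extrapolate more easily.
        self.project = nn.Sequential(
            nn.Linear(obj_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, manifold_dim),
        )
        # Relation head operating on pairwise comparisons in the manifold.
        self.relate = nn.Sequential(
            nn.Linear(manifold_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, objects: torch.Tensor) -> torch.Tensor:
        # objects: (batch, n_objects, obj_dim)
        z = self.project(objects)                  # (batch, n, manifold_dim)
        # Compare every ordered pair by its difference in the manifold.
        diffs = z.unsqueeze(2) - z.unsqueeze(1)    # (batch, n, n, manifold_dim)
        scores = self.relate(diffs).squeeze(-1)    # (batch, n, n)
        # Aggregate pairwise relation scores into a task-level output.
        return scores.mean(dim=(1, 2))

# Usage: attach to the object features of an existing backbone network.
cmp = LowDimComparator(obj_dim=128)
out = cmp(torch.randn(8, 5, 128))   # 8 scenes, 5 objects each
```

The design point the sketch captures is that the comparator never sees the raw high-dimensional features; all relational comparison happens in the low-dimensional space, which is what the paper credits for the improved extrapolation.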
Neural Diagrammatic Reasoning
Diagrams have been shown to be effective tools for humans to represent and reason about
complex concepts. They have been widely used to represent concepts in science teaching, to
communicate workflow in industries and to measure human fluid intelligence. Mechanised
reasoning systems typically encode diagrams into symbolic representations that can be
easily processed with rule-based expert systems. This relies on human experts to define the
framework of diagram-to-symbol mapping and the set of rules to reason with the symbols.
This means the reasoning systems cannot be easily adapted to other diagrams without
a new set of human-defined representation mappings and reasoning rules. Moreover, such
systems cannot cope with diagram inputs given as raw, possibly noisy images. The
need for human input and the lack of robustness to noise significantly limit the applications
of mechanised diagrammatic reasoning systems.
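To make this brittleness concrete, a classical mechanised pipeline for Euler diagram syllogisms might look like the sketch below: both the diagram-to-symbol mapping and the inference rule are hand-written. The symbolic encoding and the single transitivity rule shown are assumptions for illustration, not taken from any particular system.

```python
# Illustrative sketch of a hand-crafted diagrammatic reasoner.
# Diagrams are assumed to be pre-converted (by a human-defined mapping)
# into symbolic set relations; only one syllogism rule is encoded here.

# Symbolic encoding of two premise diagrams: "All A are B", "All B are C".
premises = {("A", "B"): "subset", ("B", "C"): "subset"}

def infer(premises: dict) -> set:
    """Apply the transitivity rule: A is a subset of B and B is a subset of C
    entails A is a subset of C."""
    conclusions = set()
    for (a, b1), r1 in premises.items():
        for (b2, c), r2 in premises.items():
            if r1 == r2 == "subset" and b1 == b2:
                conclusions.add((a, c, "subset"))
    return conclusions

print(infer(premises))   # {('A', 'C', 'subset')} -- "All A are C"
```

Every new diagram type or relation requires another hand-written mapping and rule, and the pipeline cannot consume raw pixel input at all, which is exactly the gap the neural approach described next is designed to close.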
A key research question then arises: can we develop human-like reasoning systems that
learn to reason robustly without predefined reasoning rules? To answer this question, I
propose Neural Diagrammatic Reasoning, a new family of diagrammatic reasoning
systems which does not have the drawbacks of mechanised reasoning systems. The new
systems are based on deep neural networks, a machine learning approach that has recently
achieved human-level performance on a range of perception tasks such as object detection,
speech recognition and natural language processing. The proposed systems are able to learn
both the diagram-to-symbol mapping and implicit reasoning rules purely from data, with no
prior human input about the symbols and rules of the reasoning tasks. Specifically, I
developed EulerNet, a novel neural network model that solves Euler diagram syllogism
tasks with 99.5% accuracy. Experiments show that EulerNet learns useful representations
of the diagrams and tasks, and is robust to noise and deformation in the input data. I
also developed MXGNet, a novel multiplex graph neural architecture that solves Raven
Progressive Matrices (RPM) tasks. MXGNet achieves state-of-the-art accuracies on two
popular RPM datasets. In addition, I developed Discrete-AIR, an unsupervised learning
architecture that learns semi-symbolic representations of diagrams without any labels.
Lastly, I designed a novel inductive bias module that can be readily used in today’s deep
neural networks to improve their generalisation capability on relational reasoning tasks.

EPSRC Studentship and Cambridge Trust Scholarship
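In contrast with the rule-based sketch shown earlier, the neural approach learns the diagram-to-symbol mapping and the reasoning jointly from pixels. The sketch below is a deliberately simplified, hypothetical stand-in in the spirit of EulerNet; the published architecture differs. It maps two raw premise diagram images directly to a conclusion class, with the number of conclusion classes chosen arbitrarily here.

```python
import torch
import torch.nn as nn

class DiagramSyllogismNet(nn.Module):
    """Hypothetical simplified model in the spirit of EulerNet:
    raw premise diagrams in, conclusion class out, no hand-written rules."""

    def __init__(self, n_conclusions: int = 4):
        super().__init__()
        # Shared convolutional encoder: learns the diagram-to-feature mapping.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Reasoning head: combines both premise encodings into a conclusion.
        self.head = nn.Sequential(
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_conclusions),
        )

    def forward(self, premise1, premise2):
        h = torch.cat([self.encoder(premise1), self.encoder(premise2)], dim=-1)
        return self.head(h)

net = DiagramSyllogismNet()
logits = net(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64))
```

Because the encoder and the head are trained end-to-end on image-conclusion pairs, neither the symbol vocabulary nor the inference rules are specified by a human, which is the property the thesis argues for.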
Classification of Explainable Artificial Intelligence Methods through Their Output Formats
Machine and deep learning have proven their utility for generating data-driven models with high accuracy and precision. However, their non-linear, complex structures are often difficult to interpret. Consequently, scholars have developed a plethora of methods to explain their functioning and the logic of their inferences. This systematic review aimed to organise these methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension: the output format. The reviewed scientific papers were retrieved through an initial search on Google Scholar with the keywords “explainable artificial intelligence”, “explainable machine learning” and “interpretable machine learning”, followed by an iterative search through the bibliographies of these articles. The addition of the explanation-format dimension makes the proposed classification system a practical tool for scholars, supporting them in selecting the most suitable type of explanation format for the problem at hand. Given the wide variety of challenges faced by researchers, existing XAI methods provide several solutions to requirements that differ considerably across the users, problems and application fields of artificial intelligence (AI). Identifying the most appropriate explanation can be daunting, hence the need for a classification system that helps with the selection of methods. This work concludes by critically identifying the limitations of the explanation formats and by providing recommendations and possible future research directions on how to build a more generally applicable XAI method. Future work should be flexible enough to meet the many requirements posed by the widespread use of AI in several fields and by new regulations.
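To make the output-format dimension concrete: the same underlying model can be explained numerically, visually, as rules, or in text. The sketch below shows one such format, a numerical feature-attribution explanation produced by permutation importance, for a generic model exposing a scikit-learn-style predict method. The function name, the dataset and the model are placeholders, not artefacts from the review.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Numerical explanation format: one importance score per feature,
    measured as the drop in performance when that feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])   # break the feature-target association
            drops.append(baseline - metric(y, model.predict(Xp)))
        scores[j] = np.mean(drops)
    return scores   # higher = the model relied more on that feature

# Usage (assuming a fitted scikit-learn-style classifier `clf`):
# from sklearn.metrics import accuracy_score
# scores = permutation_importance(clf, X_test, y_test, accuracy_score)
```

The same scores could instead be rendered as a bar chart (a visual format) or verbalised (a textual format); choosing among such renderings is precisely what the review's classification system is meant to support.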