The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence
Intelligent systems based on first-order logic on the one hand, and on
artificial neural networks (also called connectionist systems) on the other,
differ substantially. It would be very desirable to combine the robust neural
networking machinery with symbolic knowledge representation and reasoning
paradigms like logic programming in such a way that the strengths of either
paradigm will be retained. Current state-of-the-art research, however, fails by
far to achieve this ultimate goal. As one of the main obstacles to be overcome
we perceive the question of how symbolic knowledge can be encoded by means of
connectionist systems: Satisfactory answers to this will naturally lead the way
to knowledge extraction algorithms and to integrated neural-symbolic systems.
Comment: In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear. 12 pages
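One well-known line of answers to this encoding question is the "core method" of Hölldobler and Kalinke, in which a propositional logic program is compiled into a feed-forward network whose forward pass computes one application of the immediate-consequence operator T_P. The sketch below is a plain-Python caricature of that idea, not the abstract's own system; the example program and all names are illustrative.

```python
# A minimal sketch (not the authors' system): compile a propositional
# logic program into a network-like structure whose forward pass applies
# the immediate-consequence operator T_P once. Each clause body acts as
# a hidden unit whose threshold equals the number of body atoms.

program = {            # head -> list of bodies (a body is a list of atoms)
    "p": [[]],         # p.            (fact: empty body, always fires)
    "q": [["p"]],      # q :- p.
    "r": [["p", "q"]], # r :- p, q.
}
atoms = sorted({a for head, bodies in program.items()
                for body in bodies for a in [head] + body})

def tp_step(state):
    """One forward pass: a head unit fires iff some body unit reaches
    its threshold, i.e. all of that body's atoms are currently true."""
    new_state = {a: False for a in atoms}
    for head, bodies in program.items():
        for body in bodies:
            if sum(state[a] for a in body) >= len(body):
                new_state[head] = True
    return new_state

state = {a: False for a in atoms}
for _ in range(len(atoms)):   # iterate up to the least fixed point
    state = tp_step(state)
print(state)  # {'p': True, 'q': True, 'r': True}
```

Knowledge extraction would run in the opposite direction, reading clauses back off trained weights; that is the harder half of the integration the abstract describes.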
Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress
in the recent past. In particular the understanding of ways to deal with
symbolic knowledge within connectionist systems (also called artificial neural
networks) has reached a critical mass which enables the community to strive for
applicable implementations and use cases. Recent work has covered a great
variety of logics used in artificial intelligence and provides a multitude of
techniques for dealing with them within the context of artificial neural
networks. We present a comprehensive survey of the field of neural-symbolic
integration, including a new classification of systems according to their
architectures and abilities.
Comment: 28 pages
A connectionist representation of first-order formulae with dynamic variable binding
The relationship between symbolicism and connectionism has been one of the major
issues in recent Artificial Intelligence research. An increasing number of researchers
from each side have tried to adopt desirable characteristics of the other. These efforts
have produced a number of different strategies for interfacing connectionist and symbolic AI. One of them is connectionist symbol processing, which attempts to replicate symbol processing functionalities using connectionist components.

In this direction, this thesis develops a connectionist inference architecture which performs standard symbolic inference on a subclass of first-order predicate calculus. Our
primary interest is in understanding how formulas which are described in a limited
form of first-order predicate calculus may be implemented using a connectionist architecture. Our chosen knowledge representation scheme is a subset of first-order Horn
clause expressions which is a set of universally quantified expressions in first-order
predicate calculus. As a focus of attention we are developing techniques for compiling
first-order Horn clause expressions into a connectionist network. This offers practical
benefits but also forces limitations on the scope of the compiled system, since we are, in
fact, merging an interpreter into the connectionist networks. The compilation process
has to take into account not only first-order Horn clause expressions themselves but
also the strategy which we intend to use for drawing inferences from them. Thus, this
thesis explores the extent to which this type of translation can build a connectionist inference model to accommodate the desired symbolic inference.

This work first involves constructing efficient connectionist mechanisms to represent
basic symbol components, dynamic bindings, basic symbolic inference procedures, and
devising a set of algorithms which automatically translates input descriptions to neural
networks using the above connectionist mechanisms. These connectionist mechanisms
are built by taking an existing temporal synchrony mechanism and extending it further
to obtain desirable features to represent and manipulate basic symbol structures. The
existing synchrony mechanism represents dynamic bindings very efficiently using temporal synchronous activity between neuron elements, but it has fundamental limitations in supporting standard symbolic inference. The extension addresses these limitations.

The ability of the connectionist inference model was tested using various types of first-order Horn clause expressions. The results showed that the proposed connectionist inference model was able to encode significant sets of first-order Horn clause expressions
and replicated basic symbolic styles of inference in a connectionist manner. The system
successfully demonstrated not only forward chaining but also backward chaining over
the networks encoding the input expressions. The results, however, also showed that
implementing a connectionist mechanism for full unification among groups of unifying arguments in rules, or encoding some types of rules, is difficult to achieve in a connectionist manner and needs additional mechanisms. In addition, some difficult issues, such as encoding rules having recursive definitions, remained untouched.
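The temporal-synchrony binding idea the thesis builds on can be caricatured in a few lines: each active entity owns a firing phase within a cycle, an argument node is bound to the entity whose phase it fires in, and applying a rule copies phases from the antecedent's argument nodes to the consequent's. All predicates and names below are invented for illustration.

```python
# Toy sketch of dynamic binding via temporal synchrony (SHRUTI-style):
# a binding is "argument node fires in the same phase as its filler".

entity_phase = {"john": 0, "mary": 1, "book": 2}  # phases within one cycle

# give(giver, recipient, object) with its arguments bound by phase
give_bindings = {"giver": entity_phase["john"],
                 "recipient": entity_phase["mary"],
                 "object": entity_phase["book"]}

def apply_rule(bindings):
    """Rule give(x, y, z) => owns(y, z): propagate bindings by copying
    firing phases role-to-role, without copying any symbols."""
    return {"owner": bindings["recipient"], "owned": bindings["object"]}

owns_bindings = apply_rule(give_bindings)
phase_to_entity = {ph: e for e, ph in entity_phase.items()}
print({role: phase_to_entity[ph] for role, ph in owns_bindings.items()})
# {'owner': 'mary', 'owned': 'book'}
```

Backward chaining, full unification, and recursion are exactly where this simple phase-copying picture breaks down, which is the gap the thesis's extensions address.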
End-to-End Differentiable Proving
We introduce neural networks for end-to-end differentiable proving of queries
to knowledge bases by operating on dense vector representations of symbols.
These neural networks are constructed recursively by taking inspiration from
the backward chaining algorithm as used in Prolog. Specifically, we replace
symbolic unification with a differentiable computation on vector
representations of symbols using a radial basis function kernel, thereby
combining symbolic reasoning with learning subsymbolic vector representations.
By using gradient descent, the resulting neural network can be trained to infer
facts from a given incomplete knowledge base. It learns to (i) place
representations of similar symbols in close proximity in a vector space, (ii)
make use of such similarities to prove queries, (iii) induce logical rules, and
(iv) use provided and induced logical rules for multi-hop reasoning. We
demonstrate that this architecture outperforms ComplEx, a state-of-the-art
neural link prediction model, on three out of four benchmark knowledge bases
while at the same time inducing interpretable function-free first-order logic
rules.
Comment: NIPS 2017 camera-ready
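The soft-unification step at the heart of this approach can be sketched in a few lines: two symbols unify not by symbolic equality but to the degree given by an RBF kernel on the distance between their embeddings. The embeddings and kernel width below are made-up stand-ins, not the paper's trained parameters.

```python
import numpy as np

# Hedged sketch of differentiable unification: similarity of two symbol
# embeddings under an RBF-style kernel plays the role of a unification
# success score (1.0 for identical symbols, approaching 0 as they diverge).

rng = np.random.default_rng(0)
emb = {s: rng.normal(size=8)
       for s in ["grandpaOf", "grandfatherOf", "bornIn"]}

def unify_score(a, b, mu=1.0):
    """RBF kernel on embedding distance; differentiable in both embeddings."""
    d = np.linalg.norm(emb[a] - emb[b])
    return float(np.exp(-(d ** 2) / (2 * mu ** 2)))

print(unify_score("grandpaOf", "grandpaOf"))  # 1.0: identical symbols
```

Because the score is differentiable, gradient descent can pull the embeddings of grandpaOf and grandfatherOf together whenever treating them as unifiable helps prove facts in the training knowledge base.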
Connectionist Inference Models
The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
Holistic processing of hierarchical structures in connectionist networks
Despite the success of connectionist systems to model some aspects of cognition, critics argue that the lack of symbol processing makes them inadequate for modelling
high-level cognitive tasks which require the representation and processing of hierarchical structures. In this thesis we investigate four mechanisms for encoding hierarchical structures in distributed representations that are suitable for processing in
connectionist systems: Tensor Product Representation, Recursive Auto-Associative
Memory (RAAM), Holographic Reduced Representation (HRR), and Binary Spatter
Code (BSC). In these four schemes representations of hierarchical structures are either
learned in a connectionist network or constructed by means of various mathematical
operations from binary or real-valued vectors.

It is argued that the resulting representations carry structural information without being themselves syntactically structured. The structural information about a represented
object is encoded in the position of its representation in a high-dimensional representational space. We use Principal Component Analysis and constructivist networks to
show that well-separated clusters consisting of representations for structurally similar
hierarchical objects are formed in the representational spaces of RAAMs and HRRs.
The spatial structure of HRRs and RAAM representations supports the holistic yet
structure-sensitive processing of them. Holistic operations on RAAM representations
can be learned by backpropagation networks. However, holistic operators over HRRs,
Tensor Products, and BSCs have to be constructed by hand, which is not a desirable situation. We propose two new algorithms for learning holistic transformations of HRRs
from examples. These algorithms are able to generalise the acquired knowledge to
hierarchical objects of higher complexity than the training examples. Such generalisations exhibit systematicity of a degree which, to the best of our knowledge, has not yet been achieved by any other comparable learning method.

Finally, we outline how a number of holistic transformations can be learned in parallel and applied to representations of structurally different objects. The ability to distinguish and perform a number of different structure-sensitive operations is one step
towards a connectionist architecture that is capable of modelling complex high-level
cognitive tasks such as natural language processing and logical inference.
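The HRR operations discussed above are concrete enough to sketch: circular convolution binds a role vector to a filler, and convolving the trace with the role's approximate inverse (its involution) recovers a noisy copy of the filler, which a full system would clean up by nearest-neighbour lookup. A minimal sketch, assuming random vectors with elements of variance 1/n; all names are illustrative.

```python
import numpy as np

# Sketch of HRR binding/unbinding. Circular convolution is computed via
# the FFT; the involution a*[i] = a[-i mod n] acts as an approximate
# inverse of a under circular convolution.

rng = np.random.default_rng(1)
n = 1024
role = rng.normal(0.0, 1.0 / np.sqrt(n), n)
filler = rng.normal(0.0, 1.0 / np.sqrt(n), n)

def cconv(a, b):
    """Circular convolution (the HRR binding operator)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def involution(a):
    """Approximate inverse: reverse all elements except the first."""
    return np.concatenate(([a[0]], a[:0:-1]))

trace = cconv(role, filler)               # bind role to filler
decoded = cconv(involution(role), trace)  # unbind with the role's inverse

cos = decoded @ filler / (np.linalg.norm(decoded) * np.linalg.norm(filler))
print(cos > 0.5)  # decoded vector clearly resembles the original filler
```

The decoded vector is only an approximation of the filler, which is why HRR systems pair this machinery with a clean-up memory of known fillers; that trade is what lets the representation stay fixed-width while encoding nested structure.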
Faith in the Algorithm, Part 1: Beyond the Turing Test
Since the Turing test was first proposed by Alan Turing in 1950, the primary
goal of artificial intelligence has been predicated on the ability for
computers to imitate human behavior. However, the majority of uses for the
computer can be said to fall outside the domain of human abilities and it is
exactly outside of this domain where computers have demonstrated their greatest
contribution to intelligence. Another goal for artificial intelligence is one
that is not predicated on human mimicry, but instead, on human amplification.
This article surveys various systems that contribute to the advancement of
human and social intelligence