The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence
Intelligent systems based on first-order logic on the one hand, and on
artificial neural networks (also called connectionist systems) on the other,
differ substantially. It would be very desirable to combine the robust neural
networking machinery with symbolic knowledge representation and reasoning
paradigms like logic programming in such a way that the strengths of either
paradigm will be retained. Current state-of-the-art research, however, fails by
far to achieve this ultimate goal. As one of the main obstacles to be overcome
we perceive the question of how symbolic knowledge can be encoded by means of
connectionist systems: satisfactory answers to this will naturally lead the way
to knowledge extraction algorithms and to integrated neural-symbolic systems.

Comment: In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear. 12 pages.
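One well-known baseline for encoding symbolic knowledge in a network, sketched below at the propositional level, is to realize the immediate-consequence operator T_P of a Horn program as a layer of threshold units and iterate it to a fixed point. The three-clause program here is a made-up illustration, not an example from the paper:

```python
# A minimal sketch: encode a propositional Horn program so that one
# forward pass computes the immediate-consequence operator T_P, then
# iterate to the least fixed point. The program below is hypothetical.

program = {          # head atom -> list of alternative clause bodies
    "b": [[]],       # fact:  b <-
    "c": [["b"]],    # rule:  c <- b
    "a": [["b", "c"]],  # rule:  a <- b, c
}

def t_p(interp):
    """One pass through the clause layer: a clause 'fires' iff all its
    body atoms are active; an atom becomes active iff some clause with
    that head fired."""
    return {head for head, bodies in program.items()
            if any(all(atom in interp for atom in body) for body in bodies)}

interp = set()
while True:                 # iterate T_P until nothing changes
    nxt = t_p(interp)
    if nxt == interp:
        break
    interp = nxt
print(sorted(interp))       # ['a', 'b', 'c']
```

In an actual connectionist realization, each clause corresponds to a hidden threshold unit and each atom to an output unit; the Python sets here merely simulate that layer.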
End-to-End Differentiable Proving
We introduce neural networks for end-to-end differentiable proving of queries
to knowledge bases by operating on dense vector representations of symbols.
These neural networks are constructed recursively by taking inspiration from
the backward chaining algorithm as used in Prolog. Specifically, we replace
symbolic unification with a differentiable computation on vector
representations of symbols using a radial basis function kernel, thereby
combining symbolic reasoning with learning subsymbolic vector representations.
By using gradient descent, the resulting neural network can be trained to infer
facts from a given incomplete knowledge base. It learns to (i) place
representations of similar symbols in close proximity in a vector space, (ii)
make use of such similarities to prove queries, (iii) induce logical rules, and
(iv) use provided and induced logical rules for multi-hop reasoning. We
demonstrate that this architecture outperforms ComplEx, a state-of-the-art
neural link prediction model, on three out of four benchmark knowledge bases
while at the same time inducing interpretable function-free first-order logic
rules.

Comment: NIPS 2017 camera-ready.
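The soft-unification step described above can be sketched as follows; the symbol names, 2-d embeddings, and kernel bandwidth `mu` below are made up for illustration (in the actual system the embeddings are learned by gradient descent):

```python
import math

def rbf_unify(a, b, mu=1.0):
    """Soft unification score via a radial basis function kernel over
    symbol embeddings. Identical embeddings unify with score 1.0;
    dissimilar ones approach 0, and the computation is differentiable."""
    sq_dist = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-sq_dist / (2 * mu ** 2))

# Toy embeddings (hypothetical, for illustration only).
emb = {
    "grandpaOf": [1.0, 0.2],
    "grandfatherOf": [0.9, 0.3],
    "bornIn": [-1.0, 0.8],
}

close = rbf_unify(emb["grandpaOf"], emb["grandfatherOf"])
far = rbf_unify(emb["grandpaOf"], emb["bornIn"])
print(close > far)  # similar symbols receive the higher unification score
```

Because the score is a smooth function of the embeddings, gradients flow through every unification in a proof, which is what lets the network pull representations of interchangeable symbols together during training.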
Dimensions of Neural-symbolic Integration - A Structured Survey
Research on integrated neural-symbolic systems has made significant progress
in the recent past. In particular the understanding of ways to deal with
symbolic knowledge within connectionist systems (also called artificial neural
networks) has reached a critical mass which enables the community to strive for
applicable implementations and use cases. Recent work has covered a great
variety of logics used in artificial intelligence and provides a multitude of
techniques for dealing with them within the context of artificial neural
networks. We present a comprehensive survey of the field of neural-symbolic
integration, including a new classification of systems according to their
architectures and abilities.

Comment: 28 pages.
Combining Representation Learning with Logic for Language Processing
The current state-of-the-art in many natural language processing and
automated knowledge base completion tasks is held by representation learning
methods which learn distributed vector representations of symbols via
gradient-based optimization. They require little or no hand-crafted features,
thus avoiding the need for most preprocessing steps and task-specific
assumptions. However, in many cases representation learning requires a large
amount of annotated training data to generalize well to unseen data. Such
labeled training data is provided by human annotators who often use formal
logic as the language for specifying annotations. This thesis investigates
different combinations of representation learning methods with logic for
reducing the need for annotated training data, and for improving
generalization.

Comment: PhD Thesis, University College London. Submitted and accepted in 201
A connectionist representation of first-order formulae with dynamic variable binding
The relationship between symbolicism and connectionism has been one of the major
issues in recent Artificial Intelligence research. An increasing number of researchers
from each side have tried to adopt desirable characteristics of the other. These efforts
have produced a number of different strategies for interfacing connectionist and
symbolic AI. One of them is connectionist symbol processing, which attempts to
replicate symbol processing functionalities using connectionist components.

In this direction, this thesis develops a connectionist inference architecture
which performs standard symbolic inference on a subclass of first-order
predicate calculus. Our
primary interest is in understanding how formulas which are described in a limited
form of first-order predicate calculus may be implemented using a connectionist
architecture. Our chosen knowledge representation scheme is a subset of
first-order Horn
clause expressions which is a set of universally quantified expressions in first-order
predicate calculus. As a focus of attention we are developing techniques for compiling
first-order Horn clause expressions into a connectionist network. This offers practical
benefits but also forces limitations on the scope of the compiled system, since
we are, in fact, merging an interpreter into the connectionist networks. The
compilation process
has to take into account not only first-order Horn clause expressions themselves but
also the strategy which we intend to use for drawing inferences from them. Thus, this
thesis explores the extent to which this type of translation can build a
connectionist inference model to accommodate the desired symbolic inference.

This work first involves constructing efficient connectionist mechanisms to
represent
basic symbol components, dynamic bindings, basic symbolic inference procedures, and
devising a set of algorithms which automatically translate input descriptions to neural
networks using the above connectionist mechanisms. These connectionist mechanisms
are built by taking an existing temporal synchrony mechanism and extending it further
to obtain desirable features to represent and manipulate basic symbol structures. The
existing synchrony mechanism represents dynamic bindings very efficiently using
temporal synchronous activity between neuron elements, but it has fundamental
limitations in supporting standard symbolic inference. The extension addresses
these limitations.

The ability of the connectionist inference model was tested using various types
of first-order Horn clause expressions. The results showed that the proposed
connectionist inference model was able to encode significant sets of
first-order Horn clause expressions
and replicated basic symbolic styles of inference in a connectionist manner. The system
successfully demonstrated not only forward chaining but also backward chaining over
the networks encoding the input expressions. The results, however, also showed
that implementing full unification among groups of unifying arguments in rules,
or encoding some types of rules, is difficult to achieve in a connectionist
manner and needs additional mechanisms. In addition, some difficult issues,
such as encoding rules with recursive definitions, remained untouched.
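The idea of dynamic binding via temporal synchrony can be illustrated with a toy sketch; this is not the thesis's actual mechanism, only an illustration of the core idea that a variable and the constant bound to it fire in the same phase slot of an oscillatory cycle, so a binding is shared timing rather than a dedicated connection. The node names and number of phase slots are assumptions:

```python
# Toy model of dynamic binding via temporal synchrony: a binding between
# a variable node and a constant node is represented by assigning both
# to the same phase slot, mimicking synchronous firing within a cycle.

NUM_PHASES = 4  # assumed number of distinct phase slots per cycle

def bind(bindings, var, const):
    """Assign variable and constant to the same free phase slot."""
    used = set(bindings.values())
    phase = next(p for p in range(NUM_PHASES) if p not in used)
    bindings[var] = phase
    bindings[const] = phase
    return bindings

def bound_together(bindings, a, b):
    """Two nodes are bound iff they fire in synchrony (same phase slot)."""
    return bindings.get(a) == bindings.get(b)

b = {}
bind(b, "X", "john")
bind(b, "Y", "book")
print(bound_together(b, "X", "john"))   # True: same phase slot
print(bound_together(b, "X", "book"))   # False: different phases
```

The fixed number of phase slots is also where the approach's limits show up: only as many simultaneous bindings can be kept apart as there are distinguishable phases in a cycle, which is one reason full unification needs additional mechanisms.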