Logical Learning Through a Hybrid Neural Network with Auxiliary Inputs
The human reasoning process is seldom a one-way process from input to output. Instead, it often involves systematic deduction that rules out other possible outcomes as a self-checking mechanism. In this paper, we describe the design of a hybrid neural network for logical learning that mimics human reasoning through the introduction of an auxiliary input, namely the indicators, which act as hints suggesting logical outcomes. We generate these indicators by mining the hidden information buried in the original training data for direct or indirect suggestions. We used the MNIST data to demonstrate the design and use of these indicators in a convolutional neural network, and trained a series of such hybrid neural networks with variations of the indicators. Our results show that these hybrid neural networks are very robust in generating logical outcomes, with inherently higher prediction accuracy than the direct use of the original input and output in apparent models. Such improved predictability with reassured logical confidence is obtained by exhausting all possible indicators to rule out all illogical outcomes, which is not available in the apparent models. Our logical learning process can effectively cope with the unknown unknowns by fully exploiting all existing knowledge available for learning. The design and implementation of the hints, namely the indicators, thus become an essential part of artificial intelligence for logical learning. We also introduce an ongoing application of this hybrid neural network in an autonomous grasping robot, namely as_DeepClaw, aimed at learning an optimized grasping pose through logical learning.
Comment: 11 pages, 9 figures, 4 tables
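The core idea of the abstract, feeding an auxiliary "indicator" input alongside the original data, can be sketched with a minimal two-branch forward pass. This is an illustration, not the paper's implementation: the even/odd indicator, the branch sizes, and the random weights are all assumptions chosen for clarity (the paper derives its indicators from hidden structure in the training data, and its image branch is a full CNN rather than a single dense layer).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_indicator(labels):
    """Hypothetical indicator: a 2-dim binary hint per example
    encoding whether the digit label is even or odd. This is a
    stand-in for the paper's data-derived indicators."""
    ind = np.zeros((len(labels), 2))
    ind[np.arange(len(labels)), labels % 2] = 1.0
    return ind

def hybrid_forward(x_img, x_ind, w_img, w_ind, w_head):
    """Process image features and the auxiliary indicator in
    separate branches, then concatenate before the output head."""
    h_img = np.tanh(x_img @ w_img)   # image branch (stand-in for a CNN)
    h_ind = np.tanh(x_ind @ w_ind)   # auxiliary indicator branch
    h = np.concatenate([h_img, h_ind], axis=1)
    return h @ w_head                # class logits

# Shapes only; weights are random for illustration, not trained.
x_img = rng.normal(size=(4, 784))    # 4 flattened MNIST-like images
labels = np.array([3, 8, 1, 6])
x_ind = make_indicator(labels)

w_img = rng.normal(size=(784, 32)) * 0.01
w_ind = rng.normal(size=(2, 8))
w_head = rng.normal(size=(40, 10))   # 32 + 8 hidden units -> 10 classes

logits = hybrid_forward(x_img, x_ind, w_img, w_ind, w_head)
print(logits.shape)  # (4, 10)
```

At inference time, the "exhaustion of all possible indicators" described above would correspond to evaluating the network under each candidate indicator value and discarding predictions that are inconsistent with the hint.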
Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning and automated reasoning, and list some of the challenges for the area of neural-symbolic computation to achieve the promise of effective integration of robust learning and expressive reasoning under uncertainty.
The Universe is a Strange Place
This is a broad and in places unconventional overview of the strengths and shortcomings of our standard models of fundamental physics and of cosmology. The emphasis is on ideas that have accessible experimental consequences. It becomes clear that the frontiers of these subjects share much common ground.
Comment: 12 pages; Keynote talk at SpacePartII, Washington D.C., Dec. 2003; to be published in the Proceedings
The Knowledge Level in Cognitive Architectures: Current Limitations and Possible Developments
In this paper we identify and characterize two problematic aspects affecting the representational level of cognitive architectures (CAs), namely the limited size and the homogeneous typology of the encoded and processed knowledge.
We argue that these aspects constitute not only a technological problem that, in our opinion, should be addressed in order to build artificial agents able to exhibit intelligent behaviours in general scenarios, but also an epistemological one, since they limit the plausibility of comparing CAs' knowledge representation and processing mechanisms with those employed by humans in their everyday activities. In the final part of the paper we explore further directions of research that aim to address these current limitations and future challenges.