867 research outputs found
Automata theoretic aspects of temporal behaviour and computability in logical neural networks
A review of the field of artificial intelligence and its possible applications to NASA objectives final report
Artificial intelligence - control, data gathering, and data analyzing systems design
The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence
Intelligent systems based on first-order logic on the one hand, and on
artificial neural networks (also called connectionist systems) on the other,
differ substantially. It would be very desirable to combine the robust neural
networking machinery with symbolic knowledge representation and reasoning
paradigms like logic programming in such a way that the strengths of either
paradigm will be retained. Current state-of-the-art research, however, fails by
far to achieve this ultimate goal. As one of the main obstacles to be overcome
we identify the question of how symbolic knowledge can be encoded by means of
connectionist systems: satisfactory answers to this will naturally lead the way
to knowledge extraction algorithms and to integrated neural-symbolic systems.
Comment: In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear. 12 pages
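One well-known way to encode a logic program in a connectionist system, in the spirit the abstract describes, is to realize the program's immediate-consequence operator T_P as a two-layer threshold network and iterate it to a fixed point. The following is a minimal illustrative sketch, not taken from the paper; the example program, atom names, and weight scheme are invented for illustration.

```python
# Sketch: encoding the immediate-consequence operator T_P of a propositional
# logic program as a two-layer threshold network (neural-symbolic "core" idea).
import numpy as np

# Hypothetical program:  a <- b, c.   b.   c <- b.
atoms = ["a", "b", "c"]
clauses = [("a", ["b", "c"]), ("b", []), ("c", ["b"])]

n = len(atoms)
idx = {atom: i for i, atom in enumerate(atoms)}

# Hidden layer: one unit per clause, firing iff every body atom is active.
W_in = np.zeros((len(clauses), n))
thresh_h = np.zeros(len(clauses))
for j, (_, body) in enumerate(clauses):
    for b in body:
        W_in[j, idx[b]] = 1.0
    thresh_h[j] = len(body) - 0.5  # facts (empty body) always fire

# Output layer: an atom becomes true iff some clause with that head fired.
W_out = np.zeros((n, len(clauses)))
for j, (head, _) in enumerate(clauses):
    W_out[idx[head], j] = 1.0

def tp_step(x):
    """One application of T_P, computed by the threshold network."""
    h = (W_in @ x > thresh_h).astype(float)
    return (W_out @ h > 0.5).astype(float)

# Iterate from the empty interpretation up to the least fixed point.
x = np.zeros(n)
for _ in range(n + 1):
    x = tp_step(x)
print({atom: bool(x[idx[atom]]) for atom in atoms})
# → {'a': True, 'b': True, 'c': True}
```

Knowledge extraction, as mentioned in the abstract, would run in the opposite direction: reading a symbolic program back off trained weights.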
Perceptron theory can predict the accuracy of neural networks
Multilayer neural networks set the current state of
the art for many technical classification problems. But these
networks are still, essentially, black boxes in terms of analyzing
them and predicting their performance. Here, we develop a
statistical theory for the one-layer perceptron and show that
it can predict performances of a surprisingly large variety of
neural networks with different architectures. A general theory
of classification with perceptrons is developed by generalizing
an existing theory for analyzing reservoir computing models
and connectionist models for symbolic reasoning known as
vector symbolic architectures. Our statistical theory offers three
formulas leveraging the signal statistics with increasing detail.
The formulas are analytically intractable, but can be evaluated
numerically. The description level that captures maximum details
requires stochastic sampling methods. Depending on the network
model, the simpler formulas already yield high prediction accuracy.
The quality of the theory predictions is assessed in three
experimental settings, a memorization task for echo state networks
(ESNs) from reservoir computing literature, a collection of
classification datasets for shallow randomly connected networks,
and the ImageNet dataset for deep convolutional neural networks.
We find that the second description level of the perceptron theory
can predict the performance of types of ESNs that could not
be described previously. Furthermore, the theory can predict the
performance of deep multilayer neural networks when applied to their
output layer. While other methods for predicting neural network
performance commonly require training an estimator model,
the proposed theory requires only the first two moments of
the distribution of the postsynaptic sums in the output neurons.
Moreover, the perceptron theory compares favorably to other
methods that do not rely on training an estimator model
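The core idea of predicting accuracy from the first two moments of the postsynaptic sums can be illustrated with a Gaussian approximation. This is a minimal sketch, not the paper's actual formulas: the moment values are synthetic, and for more than two classes the pairwise comparisons are treated as independent, which is a further assumption.

```python
# Sketch: two-moment (Gaussian) prediction of classification accuracy from
# the statistics of postsynaptic sums in the output units.
import numpy as np
from math import erf, sqrt

def predicted_accuracy(mu_c, var_c, mu_w, var_w, n_classes=2):
    """Gaussian two-moment prediction: probability that the correct unit's
    sum exceeds a wrong unit's sum, raised to the number of competitors.
    Exact for two classes; an independence approximation beyond that."""
    z = (mu_c - mu_w) / sqrt(var_c + var_w)
    p_pair = 0.5 * (1.0 + erf(z / sqrt(2)))
    return p_pair ** (n_classes - 1)

# Synthetic postsynaptic sums for a binary readout (invented parameters).
rng = np.random.default_rng(0)
n = 200_000
s_correct = rng.normal(1.0, 1.5, size=n)  # correct output unit
s_wrong = rng.normal(0.0, 1.5, size=n)    # competing output unit

# Prediction uses only the first two moments; compare to empirical accuracy.
pred = predicted_accuracy(s_correct.mean(), s_correct.var(),
                          s_wrong.mean(), s_wrong.var())
emp = (s_correct > s_wrong).mean()
print(f"predicted {pred:.4f}  empirical {emp:.4f}")
```

In the binary Gaussian case the prediction and the empirical accuracy agree closely; the abstract's higher description levels refine this kind of estimate with more detailed signal statistics.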
…