Location and stability of the high-gain equilibria of nonlinear neural networks
The author analyzes the number, location, and stability behavior of the equilibria of arbitrary nonlinear neural networks without resorting to energy arguments based on assumptions of symmetric interactions or no self-interactions. The class of networks studied consists of very general continuous-time continuous-state (CTCS) networks that contain the standard Hopfield network as a special case. The emphasis is on the high-gain case, where the slopes of the sigmoidal nonlinearities become arbitrarily large.
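The high-gain behaviour described can be illustrated with a minimal simulation (an illustrative sketch only, with arbitrary two-neuron sizes and coupling, not the author's analysis): as the sigmoid gain grows, the stable equilibria of a CTCS Hopfield-type network migrate from the origin toward the saturation levels of the nonlinearity.

```python
import numpy as np

def simulate(gain, steps=5000, dt=0.01):
    """Euler-integrate a 2-neuron CTCS network du/dt = -u + W @ tanh(gain * u)."""
    W = np.array([[0.0, 1.0],
                  [1.0, 0.0]])      # symmetric excitatory coupling, no self-interaction
    u = np.array([0.1, 0.2])        # small positive initial state
    for _ in range(steps):
        u = u + dt * (-u + W @ np.tanh(gain * u))
    return u

low  = simulate(gain=0.5)   # low gain: the origin is the only equilibrium
high = simulate(gain=50.0)  # high gain: the state saturates near +1
```

At gain 0.5 the loop gain is below one, so the trajectory decays to the origin; at gain 50 the sigmoid is nearly a step function and the state settles close to the saturated corner (1, 1), consistent with equilibria accumulating near the vertices in the high-gain limit.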
On the application of neural networks to symbol systems
For many years, two alternative approaches to building intelligent systems, symbolic AI and neural networks, have each demonstrated specific advantages while also revealing specific weaknesses; in recent years a number of researchers have therefore sought methods of combining the two into a unified methodology which embodies the benefits of each while attenuating the disadvantages.
This work sets out to identify the key ideas from each discipline and combine them
into an architecture which would be practically scalable for very large network applications.
The architecture is based on a relational database structure and forms the environment for an
investigation into the necessary properties of a symbol encoding which will permit the single-presentation learning of patterns and associations, the development of categories and features leading to robust generalisation, and the seamless integration of a range of memory persistencies from short to long term.
It is argued that if, as proposed by many proponents of symbolic AI, the symbol encoding
must be causally related to its syntactic meaning, then it must also be mutable as the network
learns and grows, adapting to the growing complexity of the relationships in which it is
instantiated. Furthermore, it is argued that in order to create an efficient and coherent memory
structure, the symbolic encoding itself must have an underlying structure which is not accessible
symbolically; this structure would provide the framework permitting structurally sensitive processes
to act upon symbols without explicit reference to their content. Such a structure must dictate
how new symbols are created during normal operation.
The network implementation proposed is based on K-from-N codes, which are shown
to possess a number of desirable qualities and are well matched to the requirements of the symbol
encoding. Several networks are developed and analysed to exploit these codes, based around
a recurrent version of the non-holographic associative memory of Willshaw et al. The simplest network is shown to have properties similar to those of a Hopfield network, but with greater storage capacity, though at the cost of a lower signal-to-noise ratio.
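The Willshaw-style binary associative memory that these networks build on can be sketched as follows (a toy autoassociative version with hypothetical sizes N, K, M and a simple equality threshold; it is not the recurrent networks developed in the thesis): weights are binary and clipped-Hebbian, and recall from a partial K-from-N cue fires every unit whose dendritic sum matches the number of active cue bits.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 64, 4, 5   # hypothetical sizes: N units, K active bits per pattern, M patterns

# Generate M random K-from-N codewords, each as the index set of its K active units.
patterns = [rng.choice(N, size=K, replace=False) for _ in range(M)]

# Willshaw (clipped Hebbian) learning: a binary weight is switched on whenever
# two units are ever active together in a stored pattern.
W = np.zeros((N, N), dtype=np.uint8)
for p in patterns:
    W[np.ix_(p, p)] = 1

# Recall from a partial cue: present K-1 bits of a stored pattern and fire
# every unit whose dendritic sum equals the number of active cue bits.
cue = np.zeros(N, dtype=np.uint8)
cue[patterns[0][:-1]] = 1                  # drop one bit of pattern 0
sums = W @ cue
recalled = set(np.flatnonzero(sums == cue.sum()))
```

Every unit of the stored pattern reaches the threshold by construction, so the memory completes the missing bit; with sparse coding (K much smaller than N) spurious units rarely do.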
Subsequent network additions break each K-from-N pattern into L subsets, each using
D-from-N coding, creating cyclic patterns of period L. This step increases the capacity still further, though at the cost of a lower signal-to-noise ratio. The use of the network to associate pairs of input patterns with any given output pattern, an architectural requirement, is verified.
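The subset construction can be made concrete with a small sketch (purely illustrative; the sizes and the simple contiguous partition are assumptions, not the thesis's networks): the K active bits of a K-from-N codeword are partitioned into L groups of D = K/L bits, each group a D-from-N codeword, and cycling through the groups yields a pattern of period L.

```python
def split_k_from_n(active_bits, L):
    """Partition the K active bits of a K-from-N codeword into L
    D-from-N subsets (D = K // L), to be presented cyclically with period L."""
    K = len(active_bits)
    assert K % L == 0, "K must divide evenly into L subsets"
    D = K // L
    return [active_bits[i * D:(i + 1) * D] for i in range(L)]

# Example: a 6-from-N codeword split into L = 3 subsets of D = 2 bits each;
# cycling through `subsets` repeats with period 3.
subsets = split_k_from_n([3, 11, 20, 27, 41, 58], L=3)
```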
The use of complex synaptic junctions is investigated as a means to increase storage
capacity, to address the stability-plasticity dilemma and to implement the hierarchical aspects
of the symbol encoding defined in the architecture. A wide range of options is developed which
allow a number of key global parameters to be traded off. One scheme is analysed and simulated.
A final section examines some of the elements that need to be added to our current understanding
of neural network-based reasoning systems to make general purpose intelligent systems
possible. It is argued that the sections of this work represent pieces of the whole in this
regard and that their integration will provide a sound basis for making such systems a reality.