4 research outputs found

    Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Stöckel A, Jenzen C, Thies M, Rückert U. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware. Frontiers in Computational Neuroscience. 2017;11:71.

    Large-scale neuromorphic hardware platforms, specialized computer systems for the energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
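
    The benchmark's network is built on the Willshaw-style binary associative memory: pattern pairs are stored by clipped Hebbian learning in a binary weight matrix, and recall thresholds the dendritic sums of the output neurons. The following minimal, non-spiking Python sketch illustrates only these store/recall mechanics; the function names and parameters are illustrative assumptions, not the paper's API or its spiking implementation.

        import numpy as np

        def store(pattern_pairs, m, n):
            # Clipped Hebbian learning: W[i, j] becomes 1 as soon as input
            # bit i and output bit j are active together in any stored pair.
            W = np.zeros((m, n), dtype=np.uint8)
            for x, y in pattern_pairs:
                W |= np.outer(x, y).astype(np.uint8)
            return W

        def recall(W, x, k_out):
            # Recall by thresholding the dendritic sums; a simple
            # winner-take-all keeps the k_out neurons with the largest sums.
            sums = x.astype(np.int32) @ W
            y = np.zeros(W.shape[1], dtype=np.uint8)
            y[np.argsort(sums)[-k_out:]] = 1
            return y

        def sparse_pattern(length, k, rng):
            # Random binary vector with exactly k ones.
            p = np.zeros(length, dtype=np.uint8)
            p[rng.choice(length, size=k, replace=False)] = 1
            return p

        rng = np.random.default_rng(0)
        m = n = 64  # input/output sizes (illustrative)
        k = 4       # active bits per pattern
        pairs = [(sparse_pattern(m, k, rng), sparse_pattern(n, k, rng))
                 for _ in range(10)]
        W = store(pairs, m, n)
        x0, y0 = pairs[0]
        print(np.array_equal(recall(W, x0, k), y0))  # True while the load is low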

    On the application of neural networks to symbol systems.

    While for many years two alternative approaches to building intelligent systems, symbolic AI and neural networks, have each demonstrated specific advantages and also revealed specific weaknesses, in recent years a number of researchers have sought methods of combining the two into a unified methodology which embodies the benefits of each while attenuating the disadvantages. This work sets out to identify the key ideas from each discipline and combine them into an architecture which would be practically scalable for very large network applications. The architecture is based on a relational database structure and forms the environment for an investigation into the necessary properties of a symbol encoding which will permit the single-presentation learning of patterns and associations, the development of categories and features leading to robust generalisation, and the seamless integration of a range of memory persistencies from short to long term.

    It is argued that if, as proposed by many proponents of symbolic AI, the symbol encoding must be causally related to its syntactic meaning, then it must also be mutable as the network learns and grows, adapting to the growing complexity of the relationships in which it is instantiated. Furthermore, it is argued that in order to create an efficient and coherent memory structure, the symbolic encoding itself must have an underlying structure which is not accessible symbolically; this structure would provide the framework permitting structurally sensitive processes to act upon symbols without explicit reference to their content. Such a structure must dictate how new symbols are created during normal operation.

    The network implementation proposed is based on K-from-N codes, which are shown to possess a number of desirable qualities and are well matched to the requirements of the symbol encoding. Several networks are developed and analysed to exploit these codes, based around a recurrent version of the non-holographic associative memory of Willshaw et al. The simplest network is shown to have properties similar to those of a Hopfield network, but the storage capacity is shown to be greater, though at a cost of lower signal-to-noise ratio. Subsequent network additions break each K-from-N pattern into L subsets, each using D-from-N coding, creating cyclic patterns of period L. This step increases the capacity still further, but at a cost of lower signal-to-noise ratio. The use of the network in associating pairs of input patterns with any given output pattern, an architectural requirement, is verified.

    The use of complex synaptic junctions is investigated as a means to increase storage capacity, to address the stability-plasticity dilemma and to implement the hierarchical aspects of the symbol encoding defined in the architecture. A wide range of options is developed which allow a number of key global parameters to be traded off. One scheme is analysed and simulated. A final section examines some of the elements that need to be added to our current understanding of neural network-based reasoning systems to make general-purpose intelligent systems possible. It is argued that the sections of this work represent pieces of the whole in this regard and that their integration will provide a sound basis for making such systems a reality.
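
    The coding scheme described above can be made concrete in a few lines. A K-from-N codeword is a length-N binary vector with exactly K ones; splitting its active bits into L disjoint D-from-N sub-patterns (with D = K/L) yields a cyclic sequence of period L. This Python sketch is written against that description; the function names are assumptions of mine, not the thesis's.

        import numpy as np

        def k_from_n_codeword(n, k, rng):
            # Random K-from-N codeword: length-n binary vector, exactly k ones.
            word = np.zeros(n, dtype=np.uint8)
            word[rng.choice(n, size=k, replace=False)] = 1
            return word

        def split_into_cycle(word, l):
            # Split the k active bits into l disjoint D-from-N sub-patterns
            # (d = k / l); presenting them in order repeats with period l.
            active = np.flatnonzero(word)
            d, rem = divmod(len(active), l)
            assert rem == 0, "k must be divisible by l"
            phases = []
            for i in range(l):
                sub = np.zeros_like(word)
                sub[active[i * d:(i + 1) * d]] = 1
                phases.append(sub)
            return phases

        rng = np.random.default_rng(1)
        w = k_from_n_codeword(n=16, k=4, rng=rng)
        for phase in split_into_cycle(w, l=2):
            print(phase)  # each phase is a 2-from-16 sub-pattern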

    Design space exploration of associative memories using spiking neurons with respect to neuromorphic hardware implementations

    Stöckel A. Design space exploration of associative memories using spiking neurons with respect to neuromorphic hardware implementations. Bielefeld: Universität Bielefeld; 2016.

    Artificial neural networks are well-established models for key functions of biological brains, such as low-level sensory processing and memory. In particular, networks of artificial spiking neurons emulate the time dynamics, high parallelisation and asynchronicity of their biological counterparts. Large-scale hardware simulators for such networks, so-called _neuromorphic_ computers, are being developed as part of the Human Brain Project, with the ultimate goal of gaining insights regarding the neural foundations of cognitive processes. In this thesis, we focus on one key cognitive function of biological brains, associative memory. We implement the well-understood Willshaw model for artificial spiking neural networks, thoroughly explore the design space of the implementation, provide fast design space exploration software, and evaluate our implementation in software simulation as well as on neuromorphic hardware. We thereby provide an approach to manually or automatically infer viable parameters for an associative memory on different hardware and software platforms. The performance of the associative memory was found to vary significantly between individual neuromorphic hardware platforms and numerical simulations. The network is thus a suitable benchmark for neuromorphic systems.
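
    A standard starting point for such a design space exploration is the textbook analysis of the Willshaw model: after storing M random pairs with k_in of m input bits and k_out of n output bits active, each synapse is set with probability p1 = 1 - (1 - k_in*k_out/(m*n))^M, and a non-stored output neuron fires spuriously with probability of about p1^k_in. The Python sketch below applies this classical estimate under its usual independence assumptions; it illustrates the kind of parameter sweep involved, and is not the thesis's actual tooling.

        def willshaw_false_positive_rate(m, n, k_in, k_out, n_stored):
            # Probability that any given synapse has been set after storing
            # n_stored random sparse pattern pairs (clipped Hebbian learning).
            p1 = 1.0 - (1.0 - (k_in / m) * (k_out / n)) ** n_stored
            # A spurious output neuron fires when all k_in of its active
            # inputs happen to address set synapses.
            return p1 ** k_in

        # Sweep the number of stored patterns for one candidate configuration.
        for M in (100, 500, 1000, 2000):
            err = willshaw_false_positive_rate(m=256, n=256, k_in=8, k_out=8,
                                               n_stored=M)
            print(f"M={M:5d}  spurious-one probability ~ {err:.3e}")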

    An analysis of learning in weightless neural systems

    This thesis brings together two strands of neural networks research, weightless systems and statistical learning theory, in an attempt to better understand the learning and generalisation abilities of a class of pattern classifying machines. The machines under consideration are n-tuple classifiers. While their analysis falls outside the domain of more widespread neural networks methods, the method has found considerable application since its first publication in 1959. The larger class of learning systems to which the n-tuple classifier belongs is known as the set of weightless or RAM-based systems, because they store all their modifiable information in the nodes rather than as weights on the connections.

    The analytical tools used are those of statistical learning theory. Learning methods and machines are considered in terms of a formal learning problem which allows the precise definition of terms such as learning and generalisation (in this context). Results relating the empirical error of the machine on the training set, the number of training examples and the complexity of the machine (as measured by the Vapnik-Chervonenkis dimension) to the generalisation error are derived. In the thesis this theoretical framework is applied for the first time to weightless systems in general and to n-tuple classifiers in particular. Novel theoretical results are used to inspire the design of related learning machines, and empirical tests are used to assess the power of these new machines. Data-independent theoretical results are also compared with data-dependent results to explain the apparent anomalies in the n-tuple classifier's behaviour. The thesis takes an original approach to the study of weightless networks, and one which gives new insights into their strengths as learning machines. It also allows a new family of learning machines to be introduced and a method for improving generalisation to be applied.
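
    The n-tuple classifier itself is compact to state in code: each class owns a bank of RAM nodes, each node is addressed by a fixed random n-bit tuple of the input, training writes to the addressed locations, and recall counts how many nodes respond. The WiSARD-style Python sketch below shows the structure only; the class and method names are assumptions of mine, not the thesis's experimental code.

        import numpy as np

        class NTupleClassifier:
            # Weightless (RAM-based) classifier: all learned information is
            # stored in node lookup tables, not in connection weights.

            def __init__(self, input_bits, tuple_size, classes, rng):
                perm = rng.permutation(input_bits)
                # Fixed random disjoint tuples covering the input.
                self.tuples = [perm[i:i + tuple_size]
                               for i in range(0, input_bits, tuple_size)]
                # One set of seen addresses per (class, RAM node).
                self.rams = {c: [set() for _ in self.tuples] for c in classes}

            def _addresses(self, x):
                return [tuple(x[idx]) for idx in self.tuples]

            def train(self, x, label):
                for ram, addr in zip(self.rams[label], self._addresses(x)):
                    ram.add(addr)

            def score(self, x):
                # Number of responding RAM nodes per class.
                addrs = self._addresses(x)
                return {c: sum(a in ram for ram, a in zip(rams, addrs))
                        for c, rams in self.rams.items()}

        rng = np.random.default_rng(2)
        clf = NTupleClassifier(input_bits=64, tuple_size=4,
                               classes=["a", "b"], rng=rng)
        xa = (rng.random(64) < 0.2).astype(np.uint8)
        xb = (rng.random(64) < 0.8).astype(np.uint8)
        clf.train(xa, "a")
        clf.train(xb, "b")
        scores = clf.score(xa)
        print(max(scores, key=scores.get))  # -> "a"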