
    SYNAPSE-1: A High-Speed General Purpose Parallel Neurocomputer System

    This paper describes the general purpose neurocomputer SYNAPSE-1, developed jointly by Siemens Munich and the University of Mannheim. The system contains one of the most powerful processors available for neural algorithms, the neuro signal processor MA16. The prototype system executes a test algorithm 8,000 times as fast as a Sparc-2 workstation. This processing speed has been achieved with a system architecture that is optimally adapted to the general structure of neural algorithms: a systolic array of MA16 processors embedded in a multiprocessor system of general purpose microprocessors.
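
    The "general structure of neural algorithms" that such a systolic array targets is essentially dense weighted sums. The following minimal NumPy sketch shows only this core kernel; the sizes and the tanh nonlinearity are illustrative assumptions, not SYNAPSE-1's MA16 code.

    ```python
    import numpy as np

    def layer_forward(weights, inputs, bias):
        # Dense weighted sum followed by a nonlinearity: the operation a
        # systolic processor array accelerates, while the surrounding
        # control flow stays on general purpose microprocessors.
        return np.tanh(weights @ inputs + bias)

    # Hypothetical sizes, for illustration only.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((16, 64))   # 16 neurons, 64 inputs each
    x = rng.standard_normal(64)
    b = np.zeros(16)
    print(layer_forward(W, x, b).shape)  # (16,)
    ```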

    NeuroFlow: A General Purpose Spiking Neural Network Simulation Platform using Customizable Processors

    © 2016 Cheung, Schultz and Luk. NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation and deliver optimized performance, for example by tuning the degree of parallelism. The compilation process supports PyNN, a simulator-independent neural network description language, for configuring the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, as well as the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and achieves real-time performance for 400,000 neurons. Using one FPGA, NeuroFlow runs up to 33.6 times faster than an 8-core processor and 2.83 times faster than GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
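
    For reference, the Izhikevich model mentioned above can be written in a few lines. The sketch below is a plain NumPy Euler update using the standard published parameters for regular-spiking neurons; it only illustrates the equations NeuroFlow evaluates, not how its FPGA pipeline implements them.

    ```python
    import numpy as np

    def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        # One Euler step of the Izhikevich neuron model.
        fired = v >= 30.0                      # spike threshold (mV)
        v = np.where(fired, c, v)              # reset membrane potential
        u = np.where(fired, u + d, u)          # reset recovery variable
        dv = 0.04 * v**2 + 5.0 * v + 140.0 - u + I
        du = a * (b * v - u)
        return v + dt * dv, u + dt * du, fired

    v = np.full(1000, -65.0)                   # 1,000 neurons at rest
    u = 0.2 * v
    v, u, spikes = izhikevich_step(v, u, I=np.full(1000, 10.0))
    ```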

    Innovation and application of ANN in Europe demonstrated by Kohonen maps

    One of the most important contributions to neural networks comes from Kohonen, Helsinki/Espoo, Finland, who had the idea of self-organizing maps in 1981. He verified his idea with an algorithm that many applications make use of. The impetus for this idea came from biology, a field where Europeans have always been very active at several research laboratories. The challenge was to model the self-organization found in the brain. Today one goal is the development of more sophisticated neurons which model biological neurons more exactly; these should yield better-performing neural nets built from a few complex neurons instead of many simple ones. A lot of application concepts arise from this idea: Kohonen himself applied it to speech recognition, but at that time the project did not get much beyond recognizing the numerals one to ten. A more promising application for self-organizing maps is process control and process monitoring. Several proposals have been made concerning parameter classification of semiconductor technologies, design of integrated circuits, and control of chemical processes. Self-organizing maps have also been applied to robotics, and the neural concept has been introduced into electric power systems. At Dortmund we are working on a system which monitors the quality and reliability of gears and electrical motors in equipment installed in coal mines. The results are promising and the probability of applying the system in the field is very high. A special feature of the system is that linguistic rules embedded in a fuzzy controller analyze the data of the self-organizing map with regard to the life expectancy of the gears. It seems that the fuzzy technique will help introduce neural network technology in a tandem mode. These technologies, together with genetic algorithms, are starting to form the attractive field of computational intelligence.
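
    For readers unfamiliar with the algorithm behind these maps, the sketch below is a minimal NumPy self-organizing map training loop: each input pulls the best-matching unit and its grid neighbours toward it while the neighbourhood and learning rate shrink. The parameter values and the linear decay schedule are illustrative assumptions, not Kohonen's original implementation or the Dortmund monitoring system.

    ```python
    import numpy as np

    def som_train(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0):
        h, w = grid
        rng = np.random.default_rng(0)
        weights = rng.random((h, w, data.shape[1]))
        ys, xs = np.mgrid[0:h, 0:w]
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in data:
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)
                sigma = sigma0 * (1.0 - frac) + 1e-3
                # Best-matching unit for this input.
                dists = np.linalg.norm(weights - x, axis=2)
                bi, bj = np.unravel_index(dists.argmin(), dists.shape)
                # Gaussian neighbourhood on the map grid.
                g = np.exp(-((ys - bi) ** 2 + (xs - bj) ** 2) / (2 * sigma**2))
                weights += lr * g[..., None] * (x - weights)
                step += 1
        return weights

    codebook = som_train(np.random.default_rng(1).random((200, 4)))
    ```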

    Introduzione al Neural Computing

    The purpose of these notes is to present some basic notions of Neural Computing, to briefly describe some neural network models, and to illustrate the main fields of application, with particular attention to Linguistics. A brief description of how neural networks can be implemented is also given, and the hardware and software architecture of a neurocomputer is examined in detail; in connection with this, some research projects that make use of neural networks are presented. Neural Computing is the attempt to define computational models that, albeit in a very simplified way, simulate the functionality of the biological brain. The analogy with the biological brain is low-level and is based on architectural and functional considerations. The neuron is in fact considered the basic element of the system. Within this basic element, the nucleus constitutes the processing element, while the dendrites and the axon are, respectively, the input channels and the output channel. The axon transmits sequences of impulses, while connections between neurons take place through synapses, whose synaptic strength determines their efficacy. Connections can be either excitatory or inhibitory. The synapses represent the memory units of the system, and learning translates into a modification of synaptic strength. The architecture of neural systems, on the other hand, is based on the definition of the formal neuron (see Figure 1), characterized by input and output channels, by weights that simulate the synapses and their efficacy, and by mathematical functions that model its behavior. From a functional point of view, the characteristics of the biological brain that are partly found in neural systems are the high interconnectivity of simple, specialized elements, the high degree of parallelism, and redundancy. Neural networks therefore represent computational models that derive some of their salient properties from theories of the central nervous system. Although feedback between Neuroscience and Neural Computing is undoubtedly useful, it is not strictly necessary insofar as neural networks are viewed as abstract computational models rather than as models of the biological brain and of the mind.
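
    As a concrete illustration of the formal neuron described above (weighted input channels, a summation, and a mathematical function modelling the response), here is a minimal sketch; the sigmoid choice and the numeric values are assumptions for illustration only, not taken from the paper.

    ```python
    import math

    def formal_neuron(inputs, weights, bias=0.0):
        # Weighted input channels (the synapses) are summed and an
        # activation function models the cell's response.
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1.0 / (1.0 + math.exp(-s))      # sigmoid activation

    print(formal_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.4]))
    ```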

    Intrinsically Evolvable Artificial Neural Networks

    Dedicated hardware implementations of neural networks promise faster, lower-power operation than software implementations executing on processors. Unfortunately, most custom hardware implementations do not support intrinsic training of these networks on-chip. The training is typically done using offline software simulations, and the resulting network is synthesized and targeted to the hardware offline. The FPGA design presented here facilitates on-chip intrinsic training of artificial neural networks. Block-based neural networks (BbNN), the type of artificial neural network implemented here, are grid-based networks of neuron blocks. These networks are trained using genetic algorithms to simultaneously optimize the network structure and the internal synaptic parameters. The design supports online structure and parameter updates, and is an intrinsically evolvable BbNN platform supporting functional-level hardware evolution. Functional-level evolvable hardware (EHW) uses evolutionary algorithms to evolve the interconnections and internal parameters of functional modules in reconfigurable computing systems such as FPGAs. Functional modules can be any hardware modules, such as multipliers, adders, and trigonometric functions; in the implementation presented, the functional module is a neuron block. The designed platform is suitable for applications in dynamic environments, and can be adapted and retrained online. The online training capability is demonstrated using a case study, and a performance characterization model for RC implementations of BbNNs is also presented.
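
    To make the evolutionary training loop concrete, the following is a generic genetic-algorithm sketch over a flat genome (structure bits and synaptic parameters flattened together): keep the fittest genomes, then refill the population with mutated copies. The operators, hyperparameters, and toy fitness function are assumptions for illustration, not the paper's on-chip BbNN evolution.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def evolve(fitness, genome_len=32, pop_size=40, generations=100,
               elite=4, mut_rate=0.1, mut_scale=0.2):
        pop = rng.standard_normal((pop_size, genome_len))
        for _ in range(generations):
            scores = np.array([fitness(g) for g in pop])
            order = np.argsort(scores)[::-1]              # best first
            parents = pop[order[:elite]]                  # elitist selection
            children = parents[rng.integers(0, elite, pop_size - elite)].copy()
            mask = rng.random(children.shape) < mut_rate  # sparse mutation
            children += mask * rng.normal(0.0, mut_scale, children.shape)
            pop = np.vstack([parents, children])
        return pop[np.argmax([fitness(g) for g in pop])]

    # Toy fitness: prefer genomes close to an arbitrary target vector.
    target = rng.standard_normal(32)
    best = evolve(lambda g: -np.linalg.norm(g - target))
    ```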

    A Decade of Neural Networks: Practical Applications and Prospects

    The Jet Propulsion Laboratory Neural Network Workshop, sponsored by NASA and DOD, brings together sponsoring agencies, active researchers, and the user community to formulate a vision for the next decade of neural network research and application prospects. While the speed and computing power of microprocessors continue to grow at an ever-increasing pace, the demand to intelligently and adaptively deal with the complex, fuzzy, and often ill-defined world around us remains to a large extent unaddressed. Powerful, highly parallel computing paradigms such as neural networks promise to have a major impact in addressing these needs. Papers in the workshop proceedings highlight the benefits of neural networks in real-world applications compared to conventional computing techniques. Topics include fault diagnosis, pattern recognition, and multiparameter optimization.

    International Journal of Electronics & Communication Technology

    Neural networks are a new method of programming computers. They are exceptionally good at performing pattern recognition and other tasks that are very difficult to program using conventional techniques. Programs that employ neural nets are also capable of learning on their own and adapting to changing conditions. Neural nets may be the future of computing. A good way to understand them is with a puzzle that neural nets can be used to solve. Suppose that you are given 500 characters of code that you know to be C, C++, Java, or Python; now construct a program that identifies the code's language. One solution is to construct a neural net that learns to identify these languages. According to a simplified account, the human brain consists of about ten billion neurons, and a neuron is, on average, connected to several thousand other neurons. By way of these connections, neurons both send and receive varying quantities of energy. One very important feature of neurons is that they do not react immediately to the reception of energy; instead, they sum their received energies and send their own quantities of energy to other neurons only when this sum has reached a certain critical threshold. The brain learns by adjusting the number and strength of these connections. The brain's network of neurons forms a massively parallel information processing system. This contrasts with conventional computers, in which a single processor executes a single series of instructions.
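
    The summation-and-threshold behaviour described above can be captured in a few lines. The sketch below is a single threshold unit; the weights and threshold are illustrative values, not part of the original abstract.

    ```python
    def threshold_neuron(inputs, weights, threshold):
        # Sum the weighted inputs and fire (output 1) only when the sum
        # reaches the critical threshold, as described in the abstract.
        total = sum(w * x for w, x in zip(weights, inputs))
        return 1 if total >= threshold else 0

    # Toy use: fire when at least two of three binary inputs are active.
    print(threshold_neuron([1, 1, 0], [1.0, 1.0, 1.0], threshold=2.0))  # 1
    ```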

    On the Reliability Assessment of Artificial Neural Networks Running on AI-Oriented MPSoCs

    Nowadays, the use of electronic devices running applications based on artificial neural networks (ANNs) is spreading in our everyday life. Due to their outstanding computational capabilities, ANNs have become appealing solutions for safety-critical systems as well. They are frequently considered intrinsically robust and fault tolerant because they are brain-inspired, redundant computing models. However, when ANNs are deployed on resource-constrained hardware devices, a single physical fault may compromise the activity of multiple neurons. Therefore, it is crucial to assess the reliability of the entire neural computing system, including both the software and the hardware components. This article systematically addresses reliability concerns for ANNs running on multiprocessor systems-on-chip (MPSoCs). It presents a methodology that assigns resilience scores to individual neurons and, based on these scores, schedules the workload of an ANN on the target MPSoC so that critical neurons are evenly distributed among the available processing elements. This reliability-oriented methodology exploits an integer linear programming solver to find the optimal solution. Experimental results are given for three different convolutional neural networks trained on MNIST, SVHN, and CIFAR-10. We carried out a comprehensive assessment on an open-source, artificial-intelligence-oriented RISC-V MPSoC. The results show the reliability improvements of the proposed methodology over traditional scheduling.
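
    As an illustration of how an ILP solver can spread high-criticality neurons across processing elements, here is a minimal sketch using the open-source PuLP package. The formulation (minimizing the peak per-PE criticality), the scores, and the number of processing elements are assumptions for illustration, not the paper's exact model.

    ```python
    from pulp import LpProblem, LpVariable, LpMinimize, lpSum

    scores = [0.9, 0.7, 0.4, 0.8, 0.2, 0.6]   # hypothetical criticality per neuron
    n_pe = 2                                   # processing elements on the MPSoC

    prob = LpProblem("neuron_scheduling", LpMinimize)
    x = LpVariable.dicts("x", (range(len(scores)), range(n_pe)), cat="Binary")
    peak = LpVariable("peak_criticality", lowBound=0)

    prob += peak                                         # objective: minimise the peak load
    for i in range(len(scores)):                         # each neuron on exactly one PE
        prob += lpSum(x[i][p] for p in range(n_pe)) == 1
    for p in range(n_pe):                                # peak bounds every PE's criticality
        prob += lpSum(scores[i] * x[i][p] for i in range(len(scores))) <= peak

    prob.solve()
    assignment = {i: next(p for p in range(n_pe) if x[i][p].value() > 0.5)
                  for i in range(len(scores))}
    print(assignment, peak.value())
    ```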

    Computer vision algorithms on reconfigurable logic arrays
