
    Live Demonstration: Multiplexing AER Asynchronous Channels over LVDS Links with Flow-Control and Clock-Correction for Scalable Neuromorphic Systems

    In this live demonstration we use a serial link for fast asynchronous communication in massively parallel processing platforms connected to a DVS (Dynamic Vision Sensor), for real-time implementation of bio-inspired vision processing on spiking neural networks.
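
    As a rough illustration of the multiplexing and flow-control ideas only (the 3-byte frame layout, field widths, `Event` type, and credit count below are invented for this sketch, not the paper's actual protocol):

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Event:
    address: int  # AER address of the spiking pixel/neuron (here: 16 bits)
    channel: int  # source channel being multiplexed (here: 4 bits)

def mux_frame(ev: Event) -> bytes:
    """Pack one event into a 3-byte frame: [channel | addr_hi | addr_lo]."""
    return bytes([ev.channel & 0x0F, (ev.address >> 8) & 0xFF, ev.address & 0xFF])

class CreditLink:
    """Toy credit-based flow control: the sender may transmit only while the
    receiver has advertised spare buffer slots (credits)."""
    def __init__(self, credits=8):
        self.credits = credits
        self.wire = Queue()  # stands in for the serial LVDS link

    def send(self, ev: Event) -> bool:
        if self.credits == 0:
            return False              # back-pressure: hold the event at the source
        self.credits -= 1
        self.wire.put(mux_frame(ev))
        return True

    def receive(self) -> Event:
        frame = self.wire.get()
        self.credits += 1             # receiver returns a credit once it drains a slot
        return Event(address=(frame[1] << 8) | frame[2], channel=frame[0])

link = CreditLink()
link.send(Event(address=0x2A3, channel=2))
print(link.receive())                 # Event(address=675, channel=2)
```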

    Introduction to a system for implementing neural net connections on SIMD architectures

    Neural networks have attracted much interest recently, and using parallel architectures to simulate them is a natural and necessary application. The SIMD model of parallel computation is chosen because systems of this type can be built with large numbers of processing elements. However, such systems are not naturally suited to generalized communication. A method is proposed that allows an implementation of neural network connections on massively parallel SIMD architectures. The key to this system is an algorithm permitting the formation of arbitrary connections between the neurons. A notable feature is the ability to add new connections quickly. The method also has error-recovery ability and is robust over a variety of network topologies. Simulations of the general connection system, and its implementation on the Connection Machine, indicate that the time and space requirements are proportional to the product of the average number of connections per neuron and the diameter of the interconnection network.
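
    As a loose data-parallel analogue of such a connection system, arbitrary connectivity can be encoded as flat index arrays and evaluated with vectorized gather/scatter operations; NumPy stands in for the SIMD hardware here, and all names and values are illustrative:

```python
import numpy as np

# Arbitrary connectivity as flat index arrays: edge k carries activation
# from neuron src[k] to neuron dst[k] with weight w[k].
n_neurons = 6
src = np.array([0, 0, 1, 2, 4, 5])
dst = np.array([1, 2, 3, 3, 3, 0])
w = np.array([0.5, -1.0, 0.25, 0.75, 1.0, -0.5])

activations = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 1.0])

# One communication step, evaluated in parallel over all edges:
# gather source activations, scale, then scatter-add into destinations.
messages = activations[src] * w       # parallel gather + multiply
inputs = np.zeros(n_neurons)
np.add.at(inputs, dst, messages)      # scatter-add; handles colliding destinations
print(inputs)                         # [-0.5  0.5 -1.   2.5  0.   0. ]
```

    Adding a new connection under this encoding is just appending one entry to each of src, dst, and w, which mirrors the abstract's emphasis on fast connection insertion.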

    Distributed computing methodology for training neural networks in an image-guided diagnostic application

    Distributed computing is a process through which a set of computers connected by a network is used collectively to solve a single problem. In this paper, we propose a distributed computing methodology for training neural networks for the detection of lesions in colonoscopy. Our approach is based on partitioning the training set across multiple processors using a parallel virtual machine. In this way, interconnected computers of varied architectures can be used for the distributed evaluation of the error function and gradient values, and thus for training neural networks with various learning methods. The proposed methodology has large granularity and low synchronization overhead, and has been implemented and tested. Our results indicate that the parallel virtual machine implementation of the training algorithms developed leads to considerable speedup, especially when large network architectures and training sets are used.
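
    PVM itself is rarely used today; the sketch below reproduces the same partition, evaluate, and reduce pattern with Python's multiprocessing, using a toy linear least-squares model in place of the paper's neural networks (all names and parameters are illustrative):

```python
import numpy as np
from multiprocessing import Pool

def partial_error_and_grad(args):
    """Each worker evaluates error and gradient on its slice of the training set."""
    w, x, y = args
    err = x @ w - y
    return 0.5 * np.sum(err ** 2), x.T @ err   # (partial loss, partial gradient)

def distributed_step(w, X, Y, lr=0.1, n_workers=4):
    # Partition the training set across workers, then reduce their results.
    chunks = [(w, x, y) for x, y in zip(np.array_split(X, n_workers),
                                        np.array_split(Y, n_workers))]
    with Pool(n_workers) as pool:
        results = pool.map(partial_error_and_grad, chunks)
    loss = sum(r[0] for r in results)           # reduce: total error
    grad = sum(r[1] for r in results) / len(X)  # reduce: mean gradient
    return w - lr * grad, loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, true_w = rng.normal(size=(1000, 5)), np.arange(5.0)
    Y = X @ true_w
    w = np.zeros(5)
    for _ in range(100):
        w, loss = distributed_step(w, X, Y)
    print(w.round(2))   # converges towards [0. 1. 2. 3. 4.]
```

    Because each worker touches only its own data slice and synchronizes once per step, the scheme has the large granularity and low synchronization the abstract describes.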

    Methods of Parallel-Vertical Data Processing in Neural Networks

    An operational basis of neural networks is identified, and the feasibility of developing parallel-vertical hardware neural networks is substantiated. A parallel-vertical data processing method for neural elements (neural networks), oriented to VLSI implementation, is developed; it reduces the number of interface pins, the bit width of interneuron connections, and hardware costs. Principles for the VLSI implementation of neural networks are also proposed.
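
    A minimal sketch of the vertical (bit-plane) idea: operands are streamed one bit-slice at a time and partial sums are accumulated with the appropriate shifts, so each input lane needs a single wire per cycle rather than a full parallel bus. The layout and bit widths below are illustrative:

```python
import numpy as np

def vertical_dot(inputs, weights, bits=8):
    """Bit-serial (vertical) dot product over the bit planes of `inputs`."""
    acc = 0
    for b in range(bits):
        plane = (inputs >> b) & 1                 # one bit-slice of every input lane
        acc += int((plane * weights).sum()) << b  # partial sum, weighted by bit position
    return acc

x = np.array([13, 7, 200], dtype=np.uint32)
w = np.array([2, -1, 1])
assert vertical_dot(x, w) == int(np.dot(x.astype(int), w))
print(vertical_dot(x, w))  # 219
```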

    Implementation of boolean neural networks on parallel computers

    This paper analyses the parallel implementation, using networks of transputers, of a neural structure belonging to a particular class of neural architectures known as GSN neural networks. These architectures, belonging to the general class of RAM-based networks and composed of digitally specified processing nodes, have been implemented using different processing topologies, and performance in relation to both training and testing efficiency has been evaluated in a practical pattern recognition task.
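
    GSN nodes add goal-seeking behaviour beyond what is shown here, but the underlying RAM-based idea is simple: a node is a lookup table addressed by a binary input tuple. A minimal weightless-discriminator sketch in that spirit (class and parameter names are invented for illustration):

```python
import random

class RAMNode:
    """A RAM-based 'neuron': a lookup table addressed by n binary inputs."""
    def __init__(self, n_inputs):
        self.table = [0] * (2 ** n_inputs)

    def address(self, bits):
        return int("".join(map(str, bits)), 2)

    def train(self, bits):
        self.table[self.address(bits)] = 1

    def fire(self, bits):
        return self.table[self.address(bits)]

class Discriminator:
    """Partitions a binary pattern across several RAM nodes; the response is
    the number of nodes that recognize their sub-tuple."""
    def __init__(self, pattern_len, bits_per_node, seed=0):
        idx = list(range(pattern_len))
        random.Random(seed).shuffle(idx)         # random input-to-node mapping
        self.maps = [idx[i:i + bits_per_node]
                     for i in range(0, pattern_len, bits_per_node)]
        self.nodes = [RAMNode(len(m)) for m in self.maps]

    def train(self, pattern):
        for node, m in zip(self.nodes, self.maps):
            node.train([pattern[i] for i in m])

    def response(self, pattern):
        return sum(node.fire([pattern[i] for i in m])
                   for node, m in zip(self.nodes, self.maps))

d = Discriminator(pattern_len=8, bits_per_node=2)
d.train([1, 0, 1, 1, 0, 0, 1, 0])
print(d.response([1, 0, 1, 1, 0, 0, 1, 0]))   # 4: every node recognizes its tuple
print(d.response([0, 1, 0, 0, 1, 1, 0, 1]))   # 0: no node recognizes the complement
```

    Because each node is an independent table lookup, such networks partition naturally across processors, which is what makes transputer-style topologies a plausible host.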

    A parallel supercomputer implementation of a biological inspired neural network and its use for pattern recognition

    A parallel implementation of a large spiking neural network is proposed and evaluated. The neural network implements the binding-by-synchrony process using the Oscillatory Dynamic Link Matcher (ODLM). Scalability, speed, and performance are compared for two implementations: Message Passing Interface (MPI) and Compute Unified Device Architecture (CUDA), running on clusters of multicore supercomputers and on NVIDIA graphics processing units, respectively. A global spiking list that represents the state of the neural network at each instant is described. This list indexes each neuron that fires during the current simulation step, so that the influence of their spikes is processed simultaneously on all computing units. Our implementation shows good scalability for very large networks. A complex and large spiking neural network has been implemented in parallel with success, paving the way towards real-life applications based on networks of spiking neurons. MPI offers better scalability than CUDA, while the CUDA implementation on a GeForce GTX 285 gives the best cost-to-performance ratio. When running the neural network on the GTX 285, the processing speed is comparable to the MPI implementation on RQCHP's Mammouth parallel computer with 64 nodes (128 cores).
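
    A toy rendering of the global-spiking-list scheme (the leaky integrate-and-fire dynamics and every parameter value below are illustrative; the ODLM's oscillatory dynamics are considerably richer):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
weights = rng.normal(0, 0.1, size=(n, n))   # dense toy connectivity
v = rng.uniform(0.0, 1.0, size=n)           # membrane potentials
threshold, decay, drive = 1.0, 0.95, 0.06

for step in range(100):
    # Global spiking list: indices of every neuron that fires this step.
    spiking = np.flatnonzero(v >= threshold)
    # Every computing unit processes the same list at once: a single gathered
    # matrix slice applies the influence of all spikes to all neurons.
    v += weights[:, spiking].sum(axis=1)
    v[spiking] = 0.0                        # reset neurons that fired
    v = v * decay + drive                   # leak plus constant input current
print(f"{len(spiking)} neurons fired on the last step")
```

    Indexing only the neurons that fired keeps the per-step communication volume proportional to spike activity rather than to the full connection matrix.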

    Chemical structure matching using correlation matrix memories

    This paper describes the application of the Relaxation By Elimination (RBE) method to matching the 3D structure of molecules in chemical databases within the framework of binary correlation matrix memories. The paper illustrates that, when combined with distributed representations, the method maps well onto these networks, allowing high-performance implementation in parallel systems. It outlines the motivation, the neural architecture, and the RBE method, and presents results of matching small molecules against a database of 100,000 models.
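
    A minimal sketch of a binary (Willshaw-style) correlation matrix memory, the substrate RBE operates on; the RBE elimination loop itself is omitted, and all array names and patterns are invented:

```python
import numpy as np

def store(W, key, value):
    """Superimpose one (key, value) pair: binary Hebbian OR of the outer product."""
    return W | np.outer(value, key)

def recall(W, key):
    """Sum the matched columns, then threshold at the number of set key bits."""
    return ((W @ key) >= key.sum()).astype(np.uint8)

n = 8
W = np.zeros((n, n), dtype=np.uint8)
k1 = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=np.uint8)
v1 = np.array([0, 0, 1, 0, 1, 0, 0, 0], dtype=np.uint8)
k2 = np.array([0, 0, 0, 1, 1, 0, 0, 0], dtype=np.uint8)
v2 = np.array([0, 1, 0, 0, 0, 0, 1, 0], dtype=np.uint8)
W = store(store(W, k1, v1), k2, v2)

assert (recall(W, k1) == v1).all() and (recall(W, k2) == v2).all()
print(recall(W, k1))   # recovers v1 even though both pairs share one matrix
```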