10 research outputs found

    An analog CMOS chip set for neural networks with arbitrary topologies


    A Reconfigurable Linear RF Analog Processor for Realizing Microwave Artificial Neural Network

    Owing to the data explosion and the rapid development of artificial intelligence (AI), particularly deep neural networks (DNNs), the ever-increasing demand for large-scale matrix-vector multiplication has become one of the major issues in machine learning (ML). Training and evaluating such neural networks rely on heavy computational resources, resulting in significant system latency and power consumption. To overcome these issues, analog computing using optical interferometric linear processors has recently emerged as a promising approach to accelerating matrix-vector multiplication while lowering power consumption. Radio frequency (RF) electromagnetic waves can offer similar advantages to their optical counterparts by performing analog computation at the speed of light with low power. Furthermore, RF devices bring additional benefits such as lower cost, mature fabrication, and simpler mixed analog-digital design, giving them great potential for realizing affordable, scalable, low-latency, low-power, near-sensor radio frequency neural networks (RFNNs) that could greatly enrich RF signal-processing capability. In this work, we propose and experimentally demonstrate a 2×2 reconfigurable linear RF analog processor that can serve as a matrix multiplier in an artificial neural network (ANN). The proposed device is used to realize a simple 2×2 RFNN for data classification. An 8×8 linear analog processor formed by 28 such devices is also applied in a 4-layer ANN for classification of the Modified National Institute of Standards and Technology (MNIST) dataset. Comment: 11 pages, 16 figures
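    Since the processor's role described above is that of an analog matrix multiplier inside an ANN, the following minimal NumPy sketch shows where such a device would slot into a small fully connected network: every dense layer reduces to the matrix-vector product that the RF hardware is meant to accelerate. The layer widths and the `analog_matvec` stand-in are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def analog_matvec(W, x):
    """Stand-in for the analog processor: in hardware the product W @ x
    would be formed by the reconfigurable RF network; here it is simply
    emulated numerically (purely illustrative)."""
    return W @ x

def relu(v):
    return np.maximum(v, 0.0)

# Hypothetical 4-layer fully connected network for flattened 28x28 MNIST
# images; hidden widths of 8 are chosen only to mirror the 8x8 processor
# size mentioned in the abstract.
rng = np.random.default_rng(0)
sizes = [784, 8, 8, 10]                      # layer widths (illustrative)
weights = [rng.standard_normal((n_out, n_in)) * 0.1
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = relu(analog_matvec(W, x))        # matvec is the offloaded step
    return analog_matvec(weights[-1], x)     # linear output layer

logits = forward(rng.standard_normal(784))   # stand-in for a flattened image
print(logits.shape)                          # (10,)
```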

    Neural networks : analog VLSI implementation and learning algorithms


    Efficient Mapping of Neural Network Models on a Class of Parallel Architectures.

    This dissertation develops a formal and systematic methodology for efficiently mapping several contemporary artificial neural network (ANN) models onto k-ary n-cube parallel architectures (KNCs). We apply the general mapping to several important ANN models, including feedforward ANNs trained with the backpropagation algorithm, radial basis function networks, cascade correlation learning, and adaptive resonance theory networks. Our approach utilizes a parallel task graph representing the concurrent operations of the ANN model during training. The mapping of the ANN is performed in two steps. First, the parallel task graph of the ANN is mapped to a virtual KNC of compatible dimensionality; this involves decomposing each operation into its atomic tasks. Second, the dimensionality of the virtual KNC architecture is recursively reduced through a sequence of transformations until a desired metric is optimized; we refer to this process as folding the virtual architecture. The optimization criteria considered in this dissertation are defined in terms of the iteration time of the algorithm on the folded architecture. If necessary, the mapping scheme may utilize only a subset of the processors of a given KNC architecture when this yields the most efficient simulation. A unique feature of our mapping is that it systematically selects an appropriate degree of parallelism, leading to a highly efficient realization of the ANN model on KNC architectures. A novel feature of our work is its ability to efficiently map unit-allocating ANNs, networks whose structure grows dynamically during training. We present a highly efficient scheme for simulating such networks on existing KNC parallel architectures. We assume an upper bound on the size of the neural network and perform the folding so that the iteration time of the largest network is minimized. We show that our mapping leads to near-optimal simulation of smaller instances of the neural network. In addition, with our mapping no data migration or task rescheduling is needed as the size of the network grows.
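    To make the two-step mapping more concrete, here is a small, hypothetical illustration of the folding idea: a virtual k-ary n-cube is repeatedly collapsed onto a lower-dimensional physical cube, with each physical node hosting the virtual nodes that differ only in the folded coordinates. The function names and the particular fold (dropping the highest dimension) are illustrative assumptions, not the dissertation's actual transformations or optimization criteria.

```python
from itertools import product

def fold_once(coords):
    """Fold the highest dimension of a k-ary n-cube onto a k-ary (n-1)-cube.
    Returns the remaining coordinates (physical node) and the folded
    coordinate (local slot on that node). One illustrative transformation."""
    *rest, last = coords
    return tuple(rest), last

def map_virtual_to_physical(n_virtual, n_physical, k):
    """Repeatedly fold until the virtual k-ary n-cube fits the physical one."""
    assignment = {}
    for coords in product(range(k), repeat=n_virtual):
        node, slots = coords, []
        while len(node) > n_physical:
            node, slot = fold_once(node)
            slots.append(slot)
        assignment.setdefault(node, []).append((coords, tuple(slots)))
    return assignment

# Example: fold a 2-ary 3-cube (8 virtual nodes) onto a 2-ary 2-cube (4 nodes);
# each physical node ends up simulating two virtual nodes.
for phys, hosted in sorted(map_virtual_to_physical(3, 2, 2).items()):
    print(phys, [v for v, _ in hosted])
```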

    Continuous-valued probabilistic neural computation in VLSI


    FEEDFORWARD ARTIFICIAL NEURAL NETWORK DESIGN UTILISING SUBTHRESHOLD MODE CMOS DEVICES

    This thesis reviews various previously reported techniques for simulating artificial neural networks and investigates the design of fully-connected feedforward networks based on MOS transistors operating in the subthreshold mode of conduction, as they are suitable for compact, low-power, implantable pattern recognition systems. The principal objective is to demonstrate that the transfer characteristic of these devices can be fully exploited to design basic processing modules that overcome the linearity-range, weight-resolution, processing-speed, noise, and component-mismatch problems associated with weak-inversion conduction, and so can be used to implement networks that can be trained to perform practical tasks. A new four-quadrant analogue multiplier, one of the most important cells in the design of artificial neural networks, is developed. Analytical as well as simulation results suggest that the new scheme can efficiently emulate both the synaptic and thresholding functions. To complement this thresholding synapse, a novel current-to-voltage converter is also introduced. The characteristics of the well-known sample-and-hold circuit as a weight-memory scheme are derived analytically, and simulation results suggest that a dummy-compensated technique is required to obtain the required minimum of 8 bits of weight resolution. The performance of the combined load and thresholding-synapse arrangement, as well as of an on-chip update/refresh mechanism, is evaluated analytically, and simulation studies on the Exclusive-OR network as a benchmark problem are provided and indicate a useful level of functionality. Experimental results on the Exclusive-OR network and a 'QRS' complex detector based on a 10:6:3 multilayer perceptron are also presented and demonstrate the potential of the proposed design techniques in emulating feedforward neural networks.
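    The four-quadrant multiplication exploited in such designs rests on the exponential transfer characteristic of MOS transistors in weak inversion. As a rough behavioral sketch, the textbook Gilbert-cell approximation (with an assumed slope factor and bias current, not the thesis's specific circuit) shows how the tanh response of a subthreshold differential pair yields both a sigmoid-like thresholding function and, when two such stages are cross-coupled, an approximately linear product of two small differential voltages, which is the synaptic weight-times-activation operation.

```python
import numpy as np

# Behavioral sketch of a weak-inversion (subthreshold) four-quadrant multiplier;
# textbook Gilbert-cell approximation, not the circuit proposed in the thesis.
U_T = 0.025   # thermal voltage kT/q at room temperature [V]
n   = 1.5     # subthreshold slope factor (typical assumed value)
I_B = 1e-9    # tail bias current [A] (illustrative)

def diff_pair(dv):
    """Differential output current of a subthreshold pair: a tanh of the
    input, usable directly as a neuron's thresholding (sigmoid-like) stage."""
    return I_B * np.tanh(dv / (2 * n * U_T))

def four_quadrant_multiply(vx, vy):
    """Gilbert-style multiplier: output ~ vx * vy for small differential inputs."""
    return I_B * np.tanh(vx / (2 * n * U_T)) * np.tanh(vy / (2 * n * U_T))

# In the linear range the response is close to an ideal product, which is what
# lets the cell act as a synapse (weight x activation).
vx, vy = 5e-3, -8e-3                      # small differential voltages [V]
i_out  = four_quadrant_multiply(vx, vy)
ideal  = I_B * (vx * vy) / (2 * n * U_T) ** 2
print(i_out, ideal)                       # nearly equal for small inputs
```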

    Hardware Learning in Analogue VLSI Neural Networks
