
    FEEDFORWARD ARTIFICIAL NEURAL NETWORK DESIGN UTILISING SUBTHRESHOLD MODE CMOS DEVICES

    This thesis reviews various previously reported techniques for simulating artificial neural networks and investigates the design of fully connected feedforward networks based on MOS transistors operating in the subthreshold mode of conduction, as these are suitable for compact, low-power, implantable pattern recognition systems. The principal objective is to demonstrate that the transfer characteristic of the devices can be fully exploited to design basic processing modules which overcome the problems of limited linearity range, weight resolution, processing speed, noise and component mismatch associated with weak inversion conduction, and can therefore be used to implement networks that can be trained to perform practical tasks. A new four-quadrant analogue multiplier, one of the most important cells in the design of artificial neural networks, is developed. Analytical as well as simulation results suggest that the new scheme can efficiently emulate both the synaptic and thresholding functions. To complement this thresholding synapse, a novel current-to-voltage converter is also introduced. The characteristics of the well-known sample-and-hold circuit as a weight memory scheme are analytically derived, and simulation results suggest that a dummy-compensated technique is required to obtain the required minimum weight resolution of 8 bits. The performance of the combined load and thresholding-synapse arrangement, as well as an on-chip update/refresh mechanism, is analytically evaluated. Simulation studies on the Exclusive-OR network, used as a benchmark problem, indicate a useful level of functionality. Experimental results on the Exclusive-OR network and a 'QRS' complex detector based on a 10:6:3 multilayer perceptron are also presented and demonstrate the potential of the proposed design techniques in emulating feedforward neural networks.
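
    For context, a minimal software sketch of the device behaviour the thesis exploits is given here: the exponential weak-inversion drain current and the tanh-shaped transfer characteristic of a subthreshold differential pair, which is what makes such devices attractive as compact synapse/threshold elements. The constants (I0, slope factor, bias current) are illustrative assumptions, not values from the thesis.

        import numpy as np

        VT = 0.0259    # thermal voltage kT/q at ~300 K [V]
        N_SLOPE = 1.5  # subthreshold slope factor (assumed)
        I0 = 1e-15     # process-dependent scale current [A] (assumed)

        def weak_inversion_current(vgs):
            """Drain current in weak inversion (saturation), ignoring the VDS dependence."""
            return I0 * np.exp(vgs / (N_SLOPE * VT))

        def diff_pair_output(delta_vin, ibias=10e-9):
            """Differential output current of a subthreshold differential pair.
            The tanh shape is what lets the pair double as a neuron activation."""
            return ibias * np.tanh(delta_vin / (2 * N_SLOPE * VT))

        for dv in (-0.1, -0.05, 0.0, 0.05, 0.1):
            print(f"dVin = {dv:+.2f} V  ->  dIout = {diff_pair_output(dv) * 1e9:+.2f} nA")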

    The design and implementation of a switched current neural network


    Analogue-to-digital conversion and image enhancement using neuron-MOS technology

    This thesis describes the development of two novel circuits that use a newly developed technology, neuron-MOS, for the purposes of analogue-to-digital conversion and image enhancement. Neuron-MOS has the potential to reduce both the complexity and the number of transistors required for analogue and digital circuits. A reduced-area, low transistor-count analogue-to-digital converter suitable for inclusion in a massively parallel array of identical image processing elements is developed. To support the function of the array, some fundamental image enhancement operations, such as edge enhancement, are examined, exploiting the unique features of neuron-MOS technology.
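
    As an illustration only, the following sketch shows an algorithmic analogue of the kind of per-pixel edge enhancement such an array of identical processing elements might perform; it does not reproduce the neuron-MOS circuit itself, and the kernel and test image are assumptions.

        import numpy as np

        def edge_enhance(img, strength=1.0):
            """Sharpen by subtracting a scaled 4-neighbour Laplacian from each pixel."""
            padded = np.pad(img, 1, mode="edge")
            lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                   padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * img)
            return img - strength * lap

        img = np.array([[0, 0, 0, 0],
                        [0, 9, 9, 0],
                        [0, 9, 9, 0],
                        [0, 0, 0, 0]], dtype=float)
        print(edge_enhance(img))   # edges of the bright block are emphasised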

    Hardware Learning in Analogue VLSI Neural Networks


    Neural-network dedicated processor for solving competitive assignment problems

    A neural-network processor for solving first-order competitive assignment problems consists of a matrix of N x M processing units (PUs), each of which corresponds to the pairing of a first number of elements of R_i with a second number of elements of C_j, wherein limits on the first number are programmed into row control superneurons, and limits on the second number are programmed into column superneurons as MIN and MAX values. The cost (weight) W_ij of each pairing is programmed separately into each PU. For each row and column of PUs, a dedicated constraint superneuron ensures that the number of active neurons within the associated row or column falls within a specified range. Annealing is provided by gradually increasing the PU gain for each row and column, by increasing positive feedback to each PU (which increases its hysteresis), or by combining both of these techniques.
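
    The following is a rough software analogue (not the patented hardware) of the annealed competitive-assignment dynamics described above: each PU carries a cost W[i, j], row and column superneuron terms penalise rows and columns whose count of active units falls outside the programmed MIN/MAX limits, and annealing is emulated by ramping the PU gain. The problem size, costs and MIN/MAX values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        N, M = 4, 4
        W = rng.uniform(0.0, 1.0, size=(N, M))   # pairing costs (illustrative)
        row_min, row_max = 1, 1                  # allowed active units per row (assumed)
        col_min, col_max = 1, 1                  # allowed active units per column (assumed)

        u = rng.normal(0.0, 0.01, size=(N, M))   # internal PU states
        for gain in np.linspace(1.0, 25.0, 200): # annealing: gain ramps upward
            v = 1.0 / (1.0 + np.exp(-gain * u))  # PU activations
            # constraint superneuron feedback: push row/column counts into [MIN, MAX]
            row_err = np.clip(v.sum(1) - row_max, 0, None) - np.clip(row_min - v.sum(1), 0, None)
            col_err = np.clip(v.sum(0) - col_max, 0, None) - np.clip(col_min - v.sum(0), 0, None)
            grad = W + row_err[:, None] + col_err[None, :]   # cost plus constraint terms
            u += 0.1 * (-u - grad)                           # relax towards lower energy

        print(np.round(1.0 / (1.0 + np.exp(-25.0 * u))))     # approximate assignment matrix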

    Improving the Immunity of Hybrid SET/MOS Circuits Using Boltzmann Machine Network

    Rapid progress in the fabrication technology of silicon nanodevices has pushed device dimensions toward the 1-100 nm length scale, which renders the basic working principles of CMOS devices more dependent upon quantum effects and doping fluctuations. When device dimensions are scaled down to a few nanometers, quantum effects such as single-electron tunneling and energy quantization lead to interesting new device characteristics that can be exploited to create extremely compact circuits. The single-electron transistor (SET) is a type of nanoscale electronic device based on quantum tunneling and the Coulomb blockade effect, in which one or more Coulomb islands are sandwiched between two tunnel junctions connected respectively to the drain and source electrodes and are capacitively coupled to one or more gate electrodes. However, both pure SET devices and hybrid SET/MOS circuits face a significant problem: random background charges, which degrade circuit accuracy. To improve their immunity against these charges, we introduce the Boltzmann machine neural network into the circuit; the idea is to improve accuracy by increasing time redundancy. Single-electron circuits show stochastic behavior in their operation because of the probabilistic nature of electron tunneling, and they can therefore be used to implement the stochastic neuron operation of Boltzmann machines. This thesis proposes applications of the Boltzmann machine network to improve the immunity of hybrid SET/MOS circuits against random background charges. A detailed unit neuron block and a complete neuron network model are used to design hybrid SET/MOS circuits. Two applications based on the Boltzmann machine are proposed: (1) a multi-bit A/D converter, and (2) a one-bit full adder. Simulations were performed using the Cadence Spectre simulator with a 180 nm CMOS model and a SET MIB macro model for performance evaluation. It is expected that this approach can be extended to other hybrid SET/MOS circuits.
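
    A minimal sketch of the stochastic Boltzmann machine neuron update that this approach maps onto probabilistic single-electron tunneling is shown below; the weights, network size and annealing schedule are illustrative assumptions, not the circuit-level models used in the thesis.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 8
        W = rng.normal(0.0, 1.0, size=(n, n))
        W = (W + W.T) / 2.0                  # symmetric weights
        np.fill_diagonal(W, 0.0)             # no self-connections
        b = rng.normal(0.0, 0.1, size=n)
        s = rng.integers(0, 2, size=n).astype(float)

        T = 4.0                              # "temperature" (noise level)
        for step in range(2000):
            i = rng.integers(n)
            net = W[i] @ s + b[i]                        # local field on neuron i
            p_on = 1.0 / (1.0 + np.exp(-net / T))        # firing probability
            s[i] = 1.0 if rng.random() < p_on else 0.0   # stochastic (tunneling-like) update
            T = max(0.1, T * 0.999)                      # anneal the temperature

        print("final state:", s.astype(int))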

    MDAC synapse for analog neural networks

    Journal Article
    Efficient weight storage and multiplication are important design challenges which must be addressed in analog neural network implementations. Many schemes that treat storage and multiplication separately have previously been reported for the implementation of synapses. We present a novel synapse circuit that integrates weight storage and multiplication into a single, compact multiplying digital-to-analog converter (MDAC) circuit. The circuit has a small layout area (5400 μm² in a 1.5-μm process) and exhibits good linearity over its entire input range. We have fabricated several synapses and characterized their responses. Average maximum INL and DNL values of 0.2 LSB and 0.4 LSB, respectively, have been measured. We also report on the performance of an analog recurrent neural network which uses these new synapses.
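
    Behaviourally, the MDAC synapse computes the product of an analogue input and a stored digital weight in one block. A minimal sketch of that input-output relation follows; the 8-bit signed weight coding is an assumption for illustration, not a detail taken from the article.

        def mdac_synapse(v_in, weight_code, bits=8):
            """Scale v_in by a signed digital weight in [-2**(bits-1), 2**(bits-1) - 1]."""
            full_scale = 2 ** (bits - 1)
            assert -full_scale <= weight_code < full_scale, "weight code out of range"
            return v_in * weight_code / full_scale

        # Example: a small dot product formed by three MDAC synapses feeding one neuron.
        inputs = [0.5, -0.2, 0.8]
        codes = [64, -128, 32]               # stored 8-bit weight words (illustrative)
        print(sum(mdac_synapse(x, w) for x, w in zip(inputs, codes)))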

    Design and implementation of multipattern generators in analog VLSI

    Journal Article
    In recent years, computational biologists have shown through simulation that small neural networks with fixed connectivity are capable of producing multiple output rhythms in response to transient inputs. It is believed that such networks may play a key role in certain biological behaviors such as dynamic gait control. In this paper, we present a novel method for designing continuous-time recurrent neural networks (CTRNNs) that contain multiple embedded limit cycles, and we show that it is possible to switch the networks between these embedded limit cycles with simple transient inputs. We also describe the design and testing of a fully integrated four-neuron CTRNN chip that is used to implement the neural network pattern generators. We provide two example multipattern generators and show that the measured waveforms from the chip agree well with numerical simulations.
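
    For reference, a minimal numerical sketch of the standard CTRNN equation such pattern generators are built on is given below; the weights, time constants and biases are illustrative, not the values embedded on the four-neuron chip.

        import numpy as np

        def ctrnn_step(y, W, tau, theta, I, dt=0.01):
            """One Euler step of  tau_i dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i."""
            sigma = 1.0 / (1.0 + np.exp(-(y + theta)))
            return y + dt * (-y + W @ sigma + I) / tau

        # Four neurons in an inhibitory ring with self-excitation (assumed weights).
        W = np.array([[  5.0, -10.0,   0.0,   0.0],
                      [  0.0,   5.0, -10.0,   0.0],
                      [  0.0,   0.0,   5.0, -10.0],
                      [-10.0,   0.0,   0.0,   5.0]])
        tau = np.full(4, 0.5)
        theta = np.full(4, -2.5)
        I = np.zeros(4)

        y = np.array([1.0, 0.0, 0.0, 0.0])
        for _ in range(3000):
            y = ctrnn_step(y, W, tau, theta, I)
        print("state after transient:", np.round(y, 2))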