4 research outputs found

    A Prototype CVNS Distributed Neural Network

    Artificial neural networks are widely used in applications such as signal processing, classification, and control. Their practical implementation, however, is challenged by the number of inputs, the storage of the weights, and the realization of the activation function.

    In this work, Continuous Valued Number System (CVNS) distributed neural networks are proposed, which provide the network with a self-scaling property. This property allows the network to cope spontaneously with different numbers of inputs: the proposed CVNS DNN can adjust the dynamic range of the activation function according to the number of inputs, preserving proper functionality.

    In addition, multi-valued CVNS DRAMs are proposed to store the weights as CVNS digits. These memories can store up to 16 levels, equal to 4 bits, on each storage cell, and they use error correction codes to detect and correct errors in the stored values.

    A synapse-neuron module is proposed to decrease the design cost. It contains both the synapse and the neuron along with the relevant components. In these modules, the activation function is realized through analog circuits, which are far more compact than digital look-up tables while remaining quite accurate.

    Furthermore, the redundancy between CVNS digits, together with the distributed structure of the neuron, makes the proposal stable against process variations and reduces the noise-to-signal ratio.
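    As an illustration of the number system involved, the sketch below uses one commonly cited CVNS convention: each continuous-valued digit of a value x in [0, 1) is d_i = (B^i * x) mod B, so the integer part of a digit is the ordinary radix-B digit while its fractional part redundantly predicts the next digit; this overlap is what digit-level error detection and correction can exploit. The radix B = 16 (matching the 16-level cells above), the function names, and the exact digit convention are illustrative assumptions, not the dissertation's design.

        import math

        B = 16  # levels per storage cell (4 bits), as in the multi-valued CVNS DRAM above

        def cvns_digits(x, n, B=B):
            """Continuous-valued digits d_i = (B**i * x) mod B for x in [0, 1).
            Illustrative convention only, not the dissertation's exact one."""
            return [(x * B**i) % B for i in range(1, n + 1)]

        def reconstruct(digits, B=B):
            """Integer part of each continuous digit is the ordinary radix-B digit."""
            return sum(math.floor(d) * B**-i for i, d in enumerate(digits, start=1))

        x = 0.615
        d = cvns_digits(x, 3)
        # Redundancy: the fractional part of d[i] predicts d[i+1] exactly, so a
        # noisy digit can be checked against (and corrected from) its neighbour.
        assert all(abs(B * (d[i] % 1) - d[i + 1]) < 1e-9 for i in range(len(d) - 1))
        print(d, reconstruct(d))  # reconstruction agrees with x to 3 radix-16 digits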

    Mixed-Signal VLSI Implementation of CVNS Artificial Neural Networks

    In this work, a mixed-signal implementation of Continuous Valued Number System (CVNS) neural networks is proposed. The proposed network resolves the limited signal-processing precision of existing mixed-signal neural networks. This is achieved through the CVNS addition, CVNS multiplication, and CVNS sigmoid function evaluation algorithms proposed in this dissertation, which provide accurate results in a low-resolution environment. In addition, an area-efficient, low-sensitivity CVNS Madaline is proposed; it is more robust to input and weight errors than previously developed structures while consuming less area. Furthermore, a new approximation scheme for the hyperbolic tangent activation function is proposed, which enables efficient implementation of digital ASIC neural networks in terms of area, delay, and power consumption.
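    The dissertation's own tanh approximation is not reproduced here; as a generic example of the kind of hardware-friendly scheme meant, the sketch below uses a piecewise-linear approximation whose breakpoints and slopes are powers of two (or small sums of them), so a digital ASIC can evaluate it with shifts and adds instead of a look-up table. The segment boundaries and slopes are illustrative assumptions, and a real design would tune them for the target accuracy.

        def tanh_pwl(x):
            """Piecewise-linear tanh with shift-and-add-friendly constants.
            Illustrative only; accuracy here is within about 0.06 of tanh."""
            s = -1.0 if x < 0 else 1.0
            x = abs(x)
            if x < 0.5:
                y = x                         # tanh(x) ~= x near the origin
            elif x < 1.0:
                y = 0.25 + x / 2              # slope 1/2
            elif x < 2.0:
                y = 0.5625 + 0.1875 * x       # slope 3/16 (x>>3 + x>>4)
            else:
                y = min(1.0, 0.875 + x / 32)  # slope 1/32, saturating at 1
            return s * y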

    Mixed-Signal Neural Network Implementation with Programmable Neuron

    This thesis introduces the implementation of mixed-signal building blocks of an artificial neural network, namely the neuron and the synaptic multiplier. It also investigates the nonlinear dynamic behavior of a single artificial neuron and presents a Distributed Arithmetic (DA)-based Finite Impulse Response (FIR) filter. All of the introduced structures are designed and custom laid out.
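    As background for the DA-based FIR filter, the sketch below shows the standard distributed-arithmetic formulation in Python: the multiply-accumulate over N taps is replaced by a look-up table holding the 2^N possible partial sums of the coefficients, addressed by one bit plane of the delayed inputs at a time and recombined by shift-and-add. Integer taps and unsigned 8-bit inputs are simplifying assumptions; this is the textbook technique, not necessarily the thesis's exact architecture.

        def da_fir(x, h, nbits=8):
            """FIR filtering via distributed arithmetic: a LUT of coefficient
            partial sums replaces the multipliers; one LUT access per bit plane."""
            N = len(h)
            lut = [sum(h[k] for k in range(N) if (a >> k) & 1) for a in range(1 << N)]
            y = []
            for n in range(len(x)):
                acc = 0
                for j in range(nbits):
                    # Address = bit j of each of the N delayed input samples.
                    addr = sum((((x[n - k] if n >= k else 0) >> j) & 1) << k
                               for k in range(N))
                    acc += lut[addr] << j  # shift-and-add recombination
                y.append(acc)
            return y

        h = [1, 2, 3, 2]                   # example integer taps
        x = [5, 0, 255, 17, 42]            # unsigned 8-bit samples
        direct = [sum(h[k] * (x[n - k] if n >= k else 0) for k in range(len(h)))
                  for n in range(len(x))]
        assert da_fir(x, h) == direct      # matches direct convolution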

    A Mixed-Signal Feed-Forward Neural Network Architecture Using A High-Resolution Multiplying D/A Conversion Method

    Artificial Neural Networks (ANNs) are parallel processors capable of learning from a set of sample data using a specific learning rule. Such systems are commonly used in applications where the human brain may surpass conventional computers, such as image processing, speech/character recognition, intelligent control, and robotics. In this thesis, a mixed-signal neural network architecture is proposed that employs a high-resolution Multiplying Digital-to-Analog Converter (MDAC) designed using Delta-Sigma Modulation (DSM). To reduce chip area, multiplexing is used in addition to analog implementation of the arithmetic operations. This work employs a new method for filtering the high bit-rate signals using the neurons' nonlinear transfer function, which already exists in the network. As a result, a configuration of a few MOS transistors replaces the large resistors otherwise required to implement the low-pass filter; this configuration noticeably decreases the chip area and also makes multiplexing feasible for hardware implementation.
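    To make the MDAC idea concrete, the sketch below shows, under simplifying assumptions, how a first-order delta-sigma modulator turns a digital word into a high bit-rate single-bit stream whose average equals the encoded value. Multiplication then reduces to gating the other operand with that stream, and a low-pass filter (a plain average here; in the proposed network, the neuron's own transfer function) recovers the product. The first-order loop and the averaging filter are illustrative choices, not the thesis's circuit.

        def dsm_bits(value, n):
            """First-order delta-sigma modulator: emits a 1-bit stream whose
            mean approaches 'value' (assumed in [0, 1)) as n grows."""
            acc, bits = 0.0, []
            for _ in range(n):
                acc += value               # integrate the input
                bit = 1 if acc >= 1.0 else 0
                acc -= bit                 # feedback subtracts the quantized output
                bits.append(bit)
            return bits

        w, a, n = 0.8125, 0.4, 4096        # digital weight, analog input, stream length
        stream = dsm_bits(w, n)
        # Multiplying DAC: gate 'a' with the bit stream, then low-pass (average).
        product = sum(a * b for b in stream) / n
        assert abs(product - w * a) < 4 * a / n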