    A Software Package for Neural Network Applications Development

    Original Backprop (Version 1.2) is an MS-DOS package of four stand-alone C-language programs that enable users to develop neural network solutions to a variety of practical problems. Original Backprop generates three-layer, feed-forward (series-coupled) networks which map fixed-length input vectors into fixed-length output vectors through an intermediate (hidden) layer of binary threshold units. Version 1.2 can handle up to 200 input vectors at a time, each having up to 128 real-valued components. The first subprogram, TSET, appends a number (up to 16) of classification bits to each input, thus creating a training set of input/output pairs. The second subprogram, BACKPROP, creates a trilayer network to do the prescribed mapping and modifies the weights of its connections incrementally until the training set is learned. The learning algorithm is the 'back-propagating error correction' procedure first described by F. Rosenblatt in 1961. The third subprogram, VIEWNET, lets the trained network be examined, tested, and 'pruned' (by the deletion of unnecessary hidden units). The fourth subprogram, DONET, makes a TSR routine by which the finished product of the neural net design-and-training exercise can be consulted under other MS-DOS applications.
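    As an illustration of the kind of network the package generates, the sketch below builds a three-layer, feed-forward mapping whose hidden layer consists of binary threshold units and applies one incremental error-correction update to the output weights. The layer sizes, random initialization, and delta-rule update are illustrative assumptions, not the package's actual (C-language, MS-DOS) code.

```python
import numpy as np

# Illustrative dimensions echoing the package's stated limits (up to 128
# real-valued input components, up to 16 classification bits); all names
# and sizes here are hypothetical.
N_IN, N_HIDDEN, N_OUT = 128, 32, 16
rng = np.random.default_rng(0)

W_hidden = rng.normal(size=(N_HIDDEN, N_IN))   # input -> hidden weights
b_hidden = rng.normal(size=N_HIDDEN)           # hidden-unit thresholds
W_out = np.zeros((N_OUT, N_HIDDEN))            # hidden -> output weights

def forward(x):
    """Map a fixed-length input vector to a fixed-length output vector
    through an intermediate layer of binary threshold units."""
    h = (W_hidden @ x + b_hidden > 0).astype(float)  # binary threshold layer
    return h, W_out @ h

def train_step(x, target, lr=0.1):
    """One incremental error-correction update of the output weights
    (a generic delta-rule sketch, not the package's exact procedure)."""
    global W_out
    h, y = forward(x)
    W_out += lr * np.outer(target - y, h)

# One input/output pair from a hypothetical training set (in the actual
# package, TSET would append the classification bits to the input).
x = rng.normal(size=N_IN)
target = rng.integers(0, 2, size=N_OUT).astype(float)
train_step(x, target)
```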

    Implementation of Single Layer Perceptron Model using MATLAB

    An ANN consists of hundreds of simple units, artificial neurons or processing elements. Neurons are connected by weights, which constitute the neural structure and are organized in layers. A perceptron is a single-layer artificial neural network that works with continuous or binary inputs. In the modern sense, a perceptron is an algorithm for learning a binary classifier. In an ANN the inputs are applied via a series of weights and the actual outputs are compared with the target outputs. A learning rule is then used to adjust the weights and bias of the network so that the actual outputs move closer to the targets. The perceptron learning rule falls under the category of supervised learning. In this paper, the implementation of a single-layer perceptron model using the perceptron learning rule in MATLAB is discussed.
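    Since the paper's MATLAB code is not reproduced here, the following is a minimal NumPy sketch of the perceptron learning rule the abstract describes; the learning rate, epoch count, and the AND example are assumptions made for illustration.

```python
import numpy as np

def perceptron_train(X, targets, epochs=100, lr=1.0):
    """Single-layer perceptron trained with the perceptron learning rule:
    weights and bias are adjusted only when the actual (thresholded) output
    differs from the target, moving the output toward the target."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            y = 1 if (w @ x + b) > 0 else 0   # binary threshold output
            w += lr * (t - y) * x             # supervised weight update
            b += lr * (t - y)                 # bias update
    return w, b

# Example: learn the logical AND of two binary inputs (linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, targets)
predictions = (X @ w + b > 0).astype(int)     # reproduces the targets
```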

    Capacity of two-layer feedforward neural networks with binary weights

    The lower and upper bounds for the information capacity of two-layer feedforward neural networks with binary interconnections, integer thresholds for the hidden units, and zero threshold for the output unit are obtained in two steps. First, through a constructive approach based on statistical analysis, it is shown that a specifically constructed (N-2L-1) network with N input units, 2L hidden units, and one output unit is capable of implementing, with probability almost one, any dichotomy of O(W/ln W) random samples drawn from some continuous distributions, where W is the total number of weights of the network. This quantity is then used as a lower bound for the information capacity C of all (N-2L-1) networks with binary weights. Second, an upper bound is obtained and shown to be O(W) by a simple counting argument. Therefore, we have Ω(W/ln W) ≤ C ≤ O(W).
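    The "simple counting argument" for the upper bound can be sketched as follows (the paper's argument may differ in detail): a network with W binary weights realizes at most 2^W distinct mappings, while implementing every dichotomy of m sample points requires at least 2^m distinct mappings.

```latex
\[
  2^{m} \le 2^{W}
  \;\Longrightarrow\;
  m \le W
  \;\Longrightarrow\;
  C = O(W),
\]
% which, combined with the constructive lower bound, gives
\[
  \Omega\!\left(\frac{W}{\ln W}\right) \;\le\; C \;\le\; O(W).
\]
```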

    The mutual information of a stochastic binary channel: validity of the Replica Symmetry Ansatz

    We calculate the mutual information (MI) of a two-layered neural network with noiseless, continuous inputs and binary, stochastic outputs under several assumptions on the synaptic efficiencies. The interesting regime corresponds to the limit where the number of both input and output units is large but their ratio is kept fixed at a value α. We first present a solution for the MI using the replica technique with a replica symmetric (RS) ansatz. Then we find an exact solution for this quantity valid in a neighborhood of α = 0. An analysis of this solution shows that the system must have a phase transition at some finite value of α. This transition shows a singularity in the third derivative of the MI. As the RS solution turns out to be infinitely differentiable, it could be regarded as a smooth approximation to the MI. This is checked numerically in the validity domain of the exact solution. Comment: LaTeX, 29 pages, 2 Encapsulated PostScript figures. To appear in Journal of Physics.
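    As a toy illustration of the quantity being computed (not the replica calculation, and only for a single stochastic binary output unit rather than the full two-layer network), the sketch below estimates the mutual information between a Gaussian input and a binary output with P(Y=1 | x) = sigmoid(w·x) by Monte Carlo; the weight vectors and sample size are arbitrary choices.

```python
import numpy as np

def binary_entropy(p):
    """Entropy in bits of a Bernoulli(p) variable, safe near p = 0 or 1."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mi_binary_channel(w, n_samples=100_000, seed=0):
    """Monte Carlo estimate of I(X; Y) for a noiseless continuous input X
    and a stochastic binary output Y with P(Y=1 | x) = sigmoid(w . x),
    using I(X; Y) = H(Y) - E_x[ H(Y | X = x) ]."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_samples, w.size))
    p = 1.0 / (1.0 + np.exp(-X @ w))               # P(Y=1 | x) per sample
    return binary_entropy(p.mean()) - binary_entropy(p).mean()

# A strong channel carries close to 1 bit per output; a weak one much less.
print(mi_binary_channel(np.full(10, 1.0)))
print(mi_binary_channel(np.full(10, 0.05)))
```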