618 research outputs found

    Characterization of a Class of Sigmoid Functions with Applications to Neural Networks

    Sigmoid functions, whose graphs are S-shaped curves, appear in a great variety of contexts, such as the transfer functions in many neural networks. Their ubiquity is no accident; these curves are among the simplest non-linear curves, striking a graceful balance between linear and non-linear behavior.
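    The paper characterizes a whole class of such S-shaped transfer functions; as a minimal illustration, the sketch below implements only the familiar logistic sigmoid, one member of that class rather than the paper's general characterization:

```python
import numpy as np

def logistic_sigmoid(x):
    """Logistic sigmoid sigma(x) = 1 / (1 + exp(-x)): an S-shaped
    transfer function mapping the real line onto (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

# Nearly linear around 0 and saturating for large |x|, which is the
# balance between linear and non-linear behavior the abstract refers to.
x = np.linspace(-6.0, 6.0, 13)
print(np.round(logistic_sigmoid(x), 3))
```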

    Existence and uniqueness results for neural network approximations


    Fast ConvNets Using Group-wise Brain Damage

    We revisit the idea of brain damage, i.e. the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers. The approach uses the fact that many efficient implementations reduce generalized convolutions to matrix multiplications. The suggested brain-damage process prunes the convolutional kernel tensor in a group-wise fashion by adding group-sparsity regularization to the standard training process. After such group-wise pruning, convolutions can be reduced to multiplications of thinned dense matrices, which leads to a speedup. In the comparison on AlexNet, the method achieves very competitive performance.
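    As a hedged sketch of the group-sparsity idea described above: the snippet below assumes a kernel tensor of shape (out_channels, in_channels, kh, kw) and groups each (input channel, spatial offset) entry across all output channels, i.e. one column of the im2col-lowered weight matrix. The grouping and the penalty weight `lam` are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def group_sparsity_penalty(kernel, lam=1e-3):
    """Group-lasso (L2,1) penalty on a conv kernel of shape
    (out_channels, in_channels, kh, kw).

    Each group collects, across all output channels, the weights that
    multiply the same (input channel, spatial offset) entry, i.e. one
    column of the im2col-lowered weight matrix.  Driving a whole group
    to zero lets that column be dropped, thinning the matrix multiply.
    """
    out_ch, in_ch, kh, kw = kernel.shape
    cols = kernel.reshape(out_ch, in_ch * kh * kw)  # one column per group
    group_norms = np.linalg.norm(cols, axis=0)      # L2 norm of each group
    return lam * group_norms.sum()                  # sum of group norms

# Added to the standard training loss so whole groups are pushed towards
# zero during training (hypothetical shapes, for illustration only).
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 3, 3, 3))
print(group_sparsity_penalty(W))
```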

    Continuous Learning in a Hierarchical Multiscale Neural Network

    We reformulate the problem of encoding a multi-scale representation of a sequence in a language model by casting it in a continuous learning framework. We propose a hierarchical multi-scale language model in which short time-scale dependencies are encoded in the hidden state of a lower-level recurrent neural network, while longer time-scale dependencies are encoded in the dynamics of the lower-level network by having a meta-learner update the weights of the lower-level neural network in an online meta-learning fashion. We use elastic weights consolidation as a higher-level mechanism to prevent catastrophic forgetting in our continuous learning framework. Comment: 5 pages, 2 figures, accepted as a short paper at ACL 201
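    The elastic-weights-consolidation term mentioned above penalizes changes to parameters that mattered for earlier data. A minimal sketch of that standard penalty, assuming a diagonal Fisher estimate and using illustrative variable names (not the paper's code):

```python
import numpy as np

def ewc_penalty(theta, theta_old, fisher, lam=1.0):
    """Elastic-weight-consolidation penalty
    (lam / 2) * sum_i fisher_i * (theta_i - theta_old_i)**2,
    where fisher_i estimates how important parameter i was for the
    previously seen data (diagonal Fisher information)."""
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

# Added to the current loss so that meta-learned weight updates do not
# overwrite parameters that earlier parts of the sequence relied on.
theta     = np.array([0.9, -0.2, 1.5])
theta_old = np.array([1.0,  0.0, 1.5])
fisher    = np.array([2.0,  0.1, 5.0])
print(ewc_penalty(theta, theta_old, fisher, lam=0.4))
```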

    Expand-and-Cluster: Exact Parameter Recovery of Neural Networks

    Can we recover the hidden parameters of an Artificial Neural Network (ANN) by probing its input-output mapping? We propose a systematic method, called 'Expand-and-Cluster', that needs only the number of hidden layers and the activation function of the probed ANN to identify all network parameters. In the expansion phase, we train a series of networks of increasing size using the probed data of the ANN as a teacher. Expansion stops when a minimal loss is consistently reached in networks of a given size. In the clustering phase, weight vectors of the expanded students are clustered, which allows structured pruning of superfluous neurons in a principled way. We find that an overparameterization by a factor of four is sufficient to reliably identify the minimal number of neurons and to retrieve the original network parameters in 80% of tasks across a family of 150 toy problems of variable difficulty. Furthermore, shallow and deep teacher networks trained on MNIST data can be identified with less than 5% overhead in the neuron number. Thus, while direct training of a student network with a size identical to that of the teacher is practically impossible because of the highly non-convex loss function, training with mild overparameterization followed by clustering and structured pruning correctly identifies the target network. Comment: Preprint: 14 pages, 6 figures. Appendix: 8 pages, 7 figures
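    As a rough, hedged sketch of the clustering phase described above: the snippet below assumes a single hidden layer whose incoming weight vectors are compared by cosine similarity and greedily merged. The threshold, the sign handling, and the averaging step are illustrative simplifications rather than the paper's exact procedure.

```python
import numpy as np

def cluster_student_neurons(W, threshold=0.95):
    """Greedily merge hidden neurons of an over-parameterized student
    whose incoming weight vectors point in (almost) the same direction.

    W: array of shape (n_hidden, n_inputs), one row per student neuron.
    Returns cluster centroids, read as candidate teacher neurons.
    """
    clusters = []                                    # lists of weight rows
    for w in W:
        d = w / np.linalg.norm(w)
        for members in clusters:
            ref = members[0] / np.linalg.norm(members[0])
            cos = float(d @ ref)
            if abs(cos) >= threshold:                # same direction (up to sign)
                members.append(np.sign(cos) * w)
                break
        else:                                        # no matching cluster found
            clusters.append([w])
    return np.array([np.mean(members, axis=0) for members in clusters])

# Toy example: 6 student neurons that are noisy copies of 2 teacher neurons.
rng = np.random.default_rng(1)
teacher = rng.normal(size=(2, 4))
student = np.repeat(teacher, 3, axis=0) + 0.01 * rng.normal(size=(6, 4))
print(cluster_student_neurons(student).shape)        # typically (2, 4)
```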