4,827 research outputs found

    Knowledge Transfer with Jacobian Matching

    Classical distillation methods transfer representations from a "teacher" neural network to a "student" network by matching their output activations. Recent methods also match the Jacobians, i.e., the gradients of the output activations with respect to the input. However, this involves making some ad hoc decisions, in particular the choice of the loss function. In this paper, we first establish an equivalence between Jacobian matching and distillation with input noise, from which we derive appropriate loss functions for Jacobian matching. We then rely on this analysis to apply Jacobian matching to transfer learning, by establishing an equivalence between a recent transfer learning procedure and distillation. Finally, we show experimentally on standard image datasets that Jacobian-based penalties improve distillation, robustness to noisy inputs, and transfer learning.
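    As an illustration only (not the loss functions derived in the paper), the sketch below combines a soft-target distillation term with a squared-error penalty between the student's and teacher's input-Jacobians of the summed logits. The names student, teacher, temperature, and beta are assumptions for this example, with PyTorch models and inputs.

    import torch
    import torch.nn.functional as F

    def jacobian_matching_loss(student, teacher, x, temperature=4.0, beta=1.0):
        # Illustrative sketch only: distillation on softened outputs plus a
        # penalty matching the gradients of the summed logits w.r.t. the input.
        x = x.clone().detach().requires_grad_(True)
        s_logits = student(x)
        t_logits = teacher(x)

        # Standard soft-target distillation term.
        kd = F.kl_div(
            F.log_softmax(s_logits / temperature, dim=1),
            F.softmax(t_logits / temperature, dim=1).detach(),
            reduction="batchmean",
        ) * temperature ** 2

        # Input-Jacobians of the summed logits; create_graph keeps the student
        # penalty differentiable so it can be backpropagated to its parameters.
        s_jac = torch.autograd.grad(s_logits.sum(), x, create_graph=True)[0]
        t_jac = torch.autograd.grad(t_logits.sum(), x)[0]

        return kd + beta * F.mse_loss(s_jac, t_jac.detach())

    Summing the logits here is a cheap stand-in for a full per-class Jacobian, which would otherwise require one backward pass per output class.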

    Finite-element modeling of a composite bridge deck

    Fiber Reinforced Polymer (FRP) materials are widely used for structural applications, an example being bridge decks. In this study, a finite-element model is developed with the software ANSYS for an 8-inch-thick low-profile FRP bridge deck (Prodeck 8) made of E-glass fiber and polyester resin. The bridge deck is subjected to a patch load at the center, and the finite-element results, in the form of deflections, strains, and equivalent flexural rigidity, are compared with experimental results. A good correlation is found between the finite-element and experimental results. A failure analysis of the Prodeck 8, based on the maximum-stress, maximum-strain, and Tsai-Wu theories, is carried out and first-ply failure is determined. Finally, the Prodeck 8 is evaluated for critical load by performing a buckling analysis.
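    For illustration, a minimal Tsai-Wu first-ply check could look like the sketch below; the ply stresses and the E-glass/polyester strength values are hypothetical placeholders, not the Prodeck 8 properties or the ANSYS model used in the study.

    import math

    def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
        # Tsai-Wu failure index for one orthotropic ply under plane stress.
        # s1, s2, t12: ply stresses in the material axes; Xt/Xc, Yt/Yc: tensile/
        # compressive strengths along and transverse to the fibers; S: shear
        # strength (all positive). Failure is predicted when the index reaches 1.
        F1 = 1.0 / Xt - 1.0 / Xc
        F2 = 1.0 / Yt - 1.0 / Yc
        F11 = 1.0 / (Xt * Xc)
        F22 = 1.0 / (Yt * Yc)
        F66 = 1.0 / S ** 2
        F12 = -0.5 * math.sqrt(F11 * F22)  # common approximation of the interaction term
        return (F1 * s1 + F2 * s2
                + F11 * s1 ** 2 + F22 * s2 ** 2 + F66 * t12 ** 2
                + 2.0 * F12 * s1 * s2)

    # Hypothetical ply stresses and strength values, in MPa.
    print(tsai_wu_index(s1=400.0, s2=20.0, t12=30.0,
                        Xt=800.0, Xc=600.0, Yt=40.0, Yc=120.0, S=60.0))

    In a first-ply failure analysis, the load at which any ply's index first reaches 1 defines the first-ply failure load.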

    Data-free parameter pruning for Deep Neural Networks

    Deep neural networks (NNs) with millions of parameters are at the heart of many state-of-the-art computer vision systems today. However, recent works have shown that much smaller models can achieve similar levels of performance. In this work, we address the problem of pruning parameters in a trained NN model. Instead of removing individual weights one at a time, as done in previous works, we remove one neuron at a time. We show how similar neurons are redundant and propose a systematic way to remove them. Our experiments on pruning the densely connected layers show that we can remove up to 85% of the total parameters in an MNIST-trained network, and about 35% for AlexNet, without significantly affecting performance. Our method can be applied on top of most networks with a fully connected layer to give a smaller network. Comment: BMVC 2015
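    As a simplified illustration (not the paper's exact saliency measure), the sketch below merges the two most similar neurons of one fully connected layer. W_in and W_out are assumed NumPy arrays holding the layer's incoming weights (one row per neuron) and outgoing weights (one column per neuron).

    import numpy as np

    def prune_one_neuron(W_in, W_out):
        # Find the pair of neurons with the most similar incoming weight vectors,
        # drop one of them, and fold its outgoing weights into the survivor so
        # the next layer's pre-activations change as little as possible.
        n = W_in.shape[0]
        diffs = W_in[:, None, :] - W_in[None, :, :]
        dist = np.sum(diffs ** 2, axis=-1)
        dist[np.diag_indices(n)] = np.inf        # ignore self-pairs
        i, j = np.unravel_index(np.argmin(dist), dist.shape)

        W_out = W_out.copy()
        W_out[:, i] += W_out[:, j]               # survivor absorbs j's outgoing weights
        keep = [k for k in range(n) if k != j]
        return W_in[keep], W_out[:, keep]

    Repeating this step removes one neuron at a time and shrinks the layer without using any training data.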