
    Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks

    Much of the focus in the area of knowledge distillation has been on distilling knowledge from a larger teacher network to a smaller student network. However, there has been little research on how the concept of distillation can be leveraged to distill the knowledge encapsulated in the training data itself into a reduced form. In this study, we explore the concept of progressive label distillation, where we leverage a series of teacher-student network pairs to progressively generate distilled training data for learning deep neural networks with greatly reduced input dimensions. To investigate the efficacy of the proposed progressive label distillation approach, we experimented with learning a deep limited-vocabulary speech recognition network on generated 500ms input utterances distilled progressively from 1000ms source training data, and demonstrated a significant increase in test accuracy of almost 78% compared to direct learning.
    Comment: 9 pages
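    The chaining described in the abstract can be sketched concretely. The following is a minimal illustration only, not the authors' implementation: the MLP architecture, the center-crop reduction schedule, and the helper names (make_net, train, center_crop, progressive_label_distillation) are assumptions chosen to show the teacher-student chaining, with PyTorch as an assumed framework.

        # Minimal sketch of progressive label distillation (assumptions noted above).
        import torch
        import torch.nn as nn

        def make_net(input_len, n_classes):
            # Stand-in classifier; the paper's speech network is not reproduced here.
            return nn.Sequential(nn.Linear(input_len, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))

        def train(net, X, Y, epochs=50, lr=1e-3):
            # Fit against (possibly soft) label distributions Y via cross-entropy.
            opt = torch.optim.Adam(net.parameters(), lr=lr)
            for _ in range(epochs):
                opt.zero_grad()
                loss = -(Y * torch.log_softmax(net(X), dim=1)).sum(dim=1).mean()
                loss.backward()
                opt.step()
            return net

        def center_crop(X, new_len):
            # Reduce the input dimension, e.g. 1000ms -> 500ms of samples.
            start = (X.shape[1] - new_len) // 2
            return X[:, start:start + new_len]

        def progressive_label_distillation(X, y, lengths, n_classes):
            # lengths: decreasing input sizes, e.g. [1000, 750, 500] (assumed schedule).
            Y = nn.functional.one_hot(y, n_classes).float()
            for cur_len, next_len in zip(lengths, lengths[1:]):
                teacher = train(make_net(cur_len, n_classes), X, Y)
                with torch.no_grad():
                    Y = torch.softmax(teacher(X), dim=1)  # distilled soft labels
                X = center_crop(X, next_len)              # shorter inputs for the student
            # The final student learns from distilled labels on the shortest inputs.
            return train(make_net(lengths[-1], n_classes), X, Y)

        # Toy usage on random data:
        X = torch.randn(128, 1000)
        y = torch.randint(0, 5, (128,))
        student = progressive_label_distillation(X, y, [1000, 750, 500], n_classes=5)

    Each teacher is trained at one input length and its soft predictions become the labels for the next, shorter-input student; chaining the pairs is what makes the reduction progressive rather than a single 1000ms-to-500ms jump.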

    A priori and a posteriori W^{1,\infty} error analysis of a QC method for complex lattices

    In this paper we prove a priori and a posteriori error estimates for a multiscale numerical method for computing equilibria of multilattices under an external force. The error estimates are derived in the W^{1,\infty} norm in one space dimension. One of the features of our analysis is that we establish an equivalent formulation of the coarse-grained problem which greatly simplifies the derivation of the error bounds (both a priori and a posteriori). We illustrate our error estimates with numerical experiments.
    Comment: 23 pages
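    For reference, the two kinds of estimates differ in what the bound may depend on. A schematic form follows, with C a generic constant, u the exact equilibrium, u_qc the coarse-grained (QC) solution, \varepsilon a smoothness measure of u, and \eta a computable estimator; these are placeholder symbols, not the paper's notation.

        % 1D W^{1,\infty} norm and the generic shape of the two bounds
        \[ \|v\|_{W^{1,\infty}} = \max\{\, \|v\|_{L^\infty},\ \|v'\|_{L^\infty} \,\} \]
        \[ \text{a priori:}\quad \|u - u_{\mathrm{qc}}\|_{W^{1,\infty}} \le C\,\varepsilon(u),
           \qquad
           \text{a posteriori:}\quad \|u - u_{\mathrm{qc}}\|_{W^{1,\infty}} \le C\,\eta(u_{\mathrm{qc}}) \]

    The a priori bound involves the unknown solution and yields convergence rates; the a posteriori bound is computable from u_qc and can drive adaptivity.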