
    Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks

    Much of the focus in the area of knowledge distillation has been on distilling knowledge from a larger teacher network to a smaller student network. However, there has been little research on how the concept of distillation can be leveraged to distill the knowledge encapsulated in the training data itself into a reduced form. In this study, we explore the concept of progressive label distillation, where we leverage a series of teacher-student network pairs to progressively generate distilled training data for learning deep neural networks with greatly reduced input dimensions. To investigate the efficacy of the proposed progressive label distillation approach, we experimented with learning a deep limited-vocabulary speech recognition network on generated 500ms input utterances distilled progressively from 1000ms source training data, and demonstrated a significant increase in test accuracy of almost 78% compared to direct learning.
    Comment: 9 pages
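    The abstract describes a chain of teacher-student pairs in which each teacher soft-labels the training data so that a student with a shorter input window can be trained on those labels, and the student then serves as the next teacher. As a rough illustration only, here is a minimal PyTorch sketch of that idea; the classifier architecture, the center-crop input reduction, the temperature, the epoch count, and the 1000 → 750 → 500 sample schedule are all assumptions for the sketch, not the paper's actual networks or data.

```python
# Minimal sketch of progressive label distillation.
# Assumptions: PyTorch, toy MLP classifiers, center-crop input reduction,
# temperature-scaled KL soft-label loss. Not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_classifier(input_len, num_classes):
    # Hypothetical stand-in for the limited-vocabulary speech network.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(input_len, 256),
        nn.ReLU(),
        nn.Linear(256, num_classes),
    )

def distill_stage(teacher, inputs, crop_len, num_classes, epochs=20, temp=2.0):
    """One teacher->student stage: the teacher soft-labels the current data,
    and a student with a shorter input window learns those soft labels."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(inputs) / temp, dim=1)
    start = (inputs.shape[1] - crop_len) // 2      # center-crop reduction (assumption)
    cropped = inputs[:, start:start + crop_len]
    student = make_classifier(crop_len, num_classes)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        log_probs = F.log_softmax(student(cropped) / temp, dim=1)
        loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
        loss.backward()
        opt.step()
    return student, cropped

# Toy data: 64 "1000-sample" utterances with 10 word classes (illustrative only).
num_classes = 10
x = torch.randn(64, 1000)
y = torch.randint(0, num_classes, (64,))

# Train the first teacher on the original hard labels (single step for brevity).
teacher = make_classifier(1000, num_classes)
opt = torch.optim.Adam(teacher.parameters(), lr=1e-3)
F.cross_entropy(teacher(x), y).backward()
opt.step()

# Progressive schedule: each student becomes the next stage's teacher.
for crop_len in (750, 500):
    teacher, x = distill_stage(teacher, x, crop_len, num_classes)
# `teacher` now accepts the reduced-length (500-sample) inputs.
```

    Reusing each trained student as the next teacher is the "progressive" part of the scheme: the input dimension shrinks one stage at a time rather than in a single jump.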

    Low-mass dilepton production in pp and AA collisions

    We adopt a factorized QCD formalism to describe the transverse momentum distribution of low-mass lepton pairs produced in pp collisions, when the pair transverse momentum $Q_T \gg Q$, with the pair's invariant mass $Q$ as low as $Q \sim \Lambda_{\mathrm{QCD}}$. We extend this formalism to dilepton production in AA collisions by including the nuclear-dependent power correction due to parton multiple scattering.
    Comment: 4 pages, 1 figure. To appear in the conference proceedings for Quark Matter 2009, March 30 - April 4, Knoxville, Tennessee
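    For orientation only, the leading-power collinear factorization that such a formalism builds on in the regime $Q_T \gg Q$ has the generic schematic form below; the paper's actual expressions, scale choices, and the nuclear power-correction terms are not reproduced here.

    \[
    \frac{d\sigma_{AB\to\ell^+\ell^- X}}{dQ^2\, dQ_T^2\, dy}
    \;\approx\;
    \sum_{a,b}\int dx_a\, dx_b\;
    f_{a/A}(x_a,\mu)\, f_{b/B}(x_b,\mu)\,
    \frac{d\hat\sigma_{ab\to\ell^+\ell^- X}}{dQ^2\, dQ_T^2\, dy},
    \qquad Q_T \gg Q .
    \]

    In the AA case, the nuclear-dependent corrections from parton multiple scattering mentioned in the abstract enter as additional contributions suppressed by inverse powers of the hard scale relative to this leading term.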