Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks
Much of the focus in the area of knowledge distillation has been on
distilling knowledge from a larger teacher network to a smaller student
network. However, there has been little research on how the concept of
distillation can be leveraged to distill the knowledge encapsulated in the
training data itself into a reduced form. In this study, we explore the concept
of progressive label distillation, where we leverage a series of
teacher-student network pairs to progressively generate distilled training data
for learning deep neural networks with greatly reduced input dimensions. To
investigate the efficacy of the proposed progressive label distillation
approach, we experimented with learning a deep limited vocabulary speech
recognition network based on generated 500ms input utterances distilled
progressively from 1000ms source training data, and demonstrated a significant
increase in test accuracy of almost 78% compared to direct learning.
Comment: 9 pages
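The abstract only outlines the procedure, so below is a minimal sketch of a progressive teacher-student distillation loop under assumed simplifications: synthetic feature vectors stand in for the utterance data, small fully-connected networks replace the paper's speech recognition architecture, the input is reduced by simple truncation, and all hyperparameters are illustrative; none of these choices come from the paper.

```python
# Minimal sketch of progressive label distillation (illustrative only):
# a chain of teacher-student pairs in which each teacher, trained on longer
# inputs, produces soft labels used to train the next network on shorter inputs.
# Synthetic data, tiny fully-connected nets, and truncation-based input
# reduction are assumptions, not the paper's actual setup.
import torch
import torch.nn as nn


def make_net(in_dim, n_classes=10):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))


def train(net, x, targets, soft, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = nn.KLDivLoss(reduction="batchmean") if soft else nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        out = net(x)
        # Soft targets (teacher probabilities) use KL divergence; hard labels use CE.
        loss = loss_fn(out.log_softmax(dim=1), targets) if soft else loss_fn(out, targets)
        loss.backward()
        opt.step()
    return net


torch.manual_seed(0)
x_stage = torch.randn(1000, 200)          # stand-in for 1000 ms utterance features
targets = torch.randint(0, 10, (1000,))   # hard class labels for the first teacher
soft = False

# Each stage halves the input length (e.g. 1000 ms -> 500 ms); extra intermediate
# stages can be added by extending this list.
dims = [200, 100]
for in_dim, out_dim in zip(dims[:-1], dims[1:]):
    teacher = train(make_net(in_dim), x_stage, targets, soft)
    with torch.no_grad():
        targets = teacher(x_stage).softmax(dim=1)  # distilled (soft) labels
    x_stage = x_stage[:, :out_dim]                 # reduce input dimension for the student
    soft = True

student = train(make_net(dims[-1]), x_stage, targets, soft)
print("final student input dimension:", dims[-1])
```

In this form each intermediate network acts as the student of the previous stage and the teacher of the next, which is how a series of teacher-student pairs can progressively shrink the input while passing label information along the chain.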
Low-mass dilepton production in p+p and p+A collisions
We adopt a factorized QCD formalism to describe the transverse momentum
distribution of low-mass lepton pairs produced in p+p collisions, when the
pair transverse momentum Q_T >> Q, with the pair's invariant mass Q as low as
Q ~ Λ_QCD. We extend this formalism to dilepton production in p+A collisions
by including the nuclear-dependent power
correction due to parton multiple scattering.
Comment: 4 pages, 1 figure. To appear in the conference proceedings for Quark Matter 2009, March 30 - April 4, Knoxville, Tennessee
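The abstract does not give the formalism itself; as a rough point of reference, the leading-power collinear factorization that such large-Q_T descriptions build on can be sketched as follows (schematic only, with the paper's resummation details and the nuclear power correction omitted):

```latex
% Schematic leading-power collinear factorization for lepton-pair production at
% large pair transverse momentum Q_T (not the paper's exact expression).
\begin{equation}
\frac{d\sigma_{AB\to\ell^+\ell^- X}}{dQ^2\,dQ_T^2\,dy}
  = \sum_{a,b}\int dx_a\, dx_b\;
    f_{a/A}(x_a,\mu)\, f_{b/B}(x_b,\mu)\,
    \frac{d\hat\sigma_{ab\to\ell^+\ell^- X}}{dQ^2\,dQ_T^2\,dy}\big(x_a,x_b;Q,Q_T,y,\mu\big)
    \;+\; \mathcal{O}\!\left(\frac{\Lambda_{\rm QCD}^{2}}{Q_T^{2}}\right),
\end{equation}
```

where f_{a/A} and f_{b/B} are parton distribution functions and dσ̂ is the short-distance partonic cross section; the nuclear-dependent power correction mentioned in the abstract would enter through the power-suppressed terms, which in p+A collisions are typically enhanced by the nuclear size.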