    Augmented Superfield Approach to Gauge-invariant Massive 2-Form Theory

    We discuss the complete sets of the off-shell nilpotent (i.e. s^2_{(a)b} = 0) and absolutely anticommuting (i.e. s_b s_{ab} + s_{ab} s_b = 0) Becchi-Rouet-Stora-Tyutin (BRST) (s_b) and anti-BRST (s_{ab}) symmetries for the (3+1)-dimensional (4D) gauge-invariant massive 2-form theory within the framework of the augmented superfield approach to the BRST formalism. In this formalism, we obtain the coupled (but equivalent) Lagrangian densities which respect both the BRST and anti-BRST symmetries on the constrained hypersurface defined by the Curci-Ferrari type conditions. The absolute anticommutativity of the (anti-)BRST transformations (and of the corresponding generators) is ensured by the existence of the Curci-Ferrari type conditions, which emerge very naturally in this formalism. Furthermore, the gauge-invariant restriction plays a decisive role in deriving the proper (anti-)BRST transformations for the St{\"u}ckelberg-like vector field. Comment: LaTeX file, 22 pages, no figures, version to appear in Eur. Phys. J. C (2017)
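    The two algebraic properties named in the abstract can be displayed together; here s_b and s_{ab} denote the BRST and anti-BRST transformations, and the braces are the standard anticommutator shorthand:

    ```latex
    \[
    s_b^2 = 0, \qquad s_{ab}^2 = 0, \qquad
    \{ s_b, s_{ab} \} \equiv s_b\, s_{ab} + s_{ab}\, s_b = 0 .
    \]
    ```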

    Secost: Sequential co-supervision for large scale weakly labeled audio event detection

    Weakly supervised learning algorithms are critical for scaling audio event detection to several hundreds of sound categories. Such learning models should not only disambiguate sound events efficiently with minimal class-specific annotation but also be robust to label noise, which is more apparent with weak labels than with strong annotations. In this work, we propose a new framework for designing learning models with weak supervision by bridging ideas from sequential learning and knowledge distillation. We refer to the proposed methodology as SeCoST (pronounced "Sequest") -- Sequential Co-supervision for training generations of Students. SeCoST incrementally builds a cascade of student-teacher pairs via a novel knowledge transfer method. Our evaluations on AudioSet (the largest weakly labeled dataset available) show that SeCoST achieves a mean average precision of 0.383, outperforming the prior state of the art by a considerable margin. Comment: Accepted at IEEE ICASSP 2020
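    The co-supervision idea described above -- training each new student generation against both the noisy weak labels and the previous generation's soft predictions -- can be sketched as a blended multi-label objective. This is a minimal illustration, not the paper's actual loss: the function name, the equal-weight blend via `alpha`, and the use of plain binary cross-entropy are all assumptions.

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def co_supervision_loss(student_logits, weak_labels, teacher_probs, alpha=0.5):
        """Hypothetical blended objective for one student generation.

        `weak_labels`   : clip-level 0/1 weak labels (possibly noisy).
        `teacher_probs` : soft predictions from the previous generation's model.
        `alpha`         : assumed mixing weight between the two supervision signals.
        """
        eps = 1e-7
        p = np.clip(sigmoid(student_logits), eps, 1.0 - eps)
        # Per-class binary cross-entropy against the (noisy) weak labels.
        label_loss = -np.mean(
            weak_labels * np.log(p) + (1.0 - weak_labels) * np.log(1.0 - p)
        )
        # Same loss against the teacher's soft targets (knowledge transfer step).
        distill_loss = -np.mean(
            teacher_probs * np.log(p) + (1.0 - teacher_probs) * np.log(1.0 - p)
        )
        return alpha * label_loss + (1.0 - alpha) * distill_loss
    ```

    In a cascade, the student trained with this loss would become the teacher for the next generation, so the soft targets are regenerated at each step.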