
    Primitive Vassiliev Invariants and Factorization in Chern-Simons Perturbation Theory

    The general structure of the perturbative expansion of the vacuum expectation value of a Wilson line operator in Chern-Simons gauge field theory is analyzed. The expansion is organized according to the independent group structures that appear at each order. It is shown that the analysis is greatly simplified if the group factors are chosen in a certain way that we call canonical. This enables us to show that the logarithm of a polynomial knot invariant can be written in terms of primitive Vassiliev invariants only. (Comment: 15 pages, LaTeX, 2 figures)
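    As a rough illustration of the structure the abstract describes, a schematic form of the expansion and of the factorization statement is given below; the notation is generic, not necessarily the paper's:

        \langle W_R(C) \rangle \;=\; \sum_{i \ge 0} \sum_{j=1}^{d_i} \alpha_{ij}(C)\, r_{ij}(G,R)\, x^{i},
        \qquad
        \log \langle W_R(C) \rangle \;=\; \sum_{i \ge 0} \sum_{j} \beta_{ij}(C)\, \hat{r}_{ij}(G,R)\, x^{i},

    where x is the expansion parameter, the \alpha_{ij} are Vassiliev invariants of order i multiplying the independent group factors r_{ij} at that order, and, once the group factors are chosen canonically, only primitive Vassiliev invariants \beta_{ij} survive in the logarithm.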

    More on softly broken N=2 QCD

    We extend previous work on the soft breaking of N=2 supersymmetric QCD. We present the formalism for the breaking due to a dilaton spurion for a general gauge group and obtain the exact effective potential. We obtain some general features of the vacuum structure in the pure SU(N) Yang-Mills theory, and we also derive a general mass formula for this class of theories; in particular, we present explicit results for the mass spectrum in the SU(2) case. Finally, we analyze the vacuum structure of the SU(2) theory with one massless hypermultiplet. This theory presents dyon condensation and a first-order phase transition in the supersymmetry-breaking parameter driven by non-mutually local BPS states. This could be a hint of Argyres-Douglas-like phases in non-supersymmetric gauge theories. (Comment: 35 pages, 9 Postscript figures)
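    A heavily schematic sketch of the dilaton-spurion mechanism mentioned above (symbols and normalizations are illustrative, not the paper's exact expressions): the dynamical scale is tied to the scalar of an N=2 vector-multiplet spurion, and supersymmetry is broken softly by freezing the spurion's auxiliary components, after which the vacuum follows from the resulting effective potential.

        \Lambda \;\sim\; \mu\, e^{s}, \qquad s \;\longrightarrow\; S = \big(s;\ F_0, D_0\big) \ \text{(spurion vector multiplet)},
        \qquad
        V_{\rm eff}(a,\bar{a}) \;=\; V\big(a,\bar{a};\, F_0, D_0\big)\Big|_{F_0,\, D_0\ \text{frozen}},

    with dyon condensation and the first-order transition read off from the minima of V_eff as the supersymmetry-breaking parameter is varied.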

    Compression-aware Training of Deep Networks

    In recent years, great progress has been made in a variety of application domains thanks to the development of increasingly deeper neural networks. Unfortunately, the huge number of units in these networks makes them expensive both computationally and memory-wise. To overcome this, several compression strategies have been proposed, exploiting the fact that deep networks are over-parametrized. These methods, however, typically start from a network that has been trained in a standard manner, without considering such a future compression. In this paper, we propose to explicitly account for compression in the training process. To this end, we introduce a regularizer that encourages the parameter matrix of each layer to have low rank during training. We show that accounting for compression during training allows us to learn models that are much more compact than, yet at least as effective as, those obtained with state-of-the-art compression techniques. (Comment: Accepted at NIPS 2017)
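    A minimal PyTorch sketch of the idea, assuming a nuclear-norm penalty as the low-rank regularizer on each layer's weight matrix; the paper's exact regularizer, architecture, and hyperparameters may differ:

# Compression-aware training sketch: a nuclear-norm penalty (sum of singular
# values, a convex surrogate for low rank) is added to the task loss for each
# 2-D weight matrix. Model, lambda_rank, and layer sizes are assumed values.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
lambda_rank = 1e-4  # strength of the low-rank regularizer (assumed value)

def nuclear_norm_penalty(module):
    """Sum of singular values of every 2-D weight matrix in the module."""
    penalty = 0.0
    for p in module.parameters():
        if p.dim() == 2:  # fully connected weights; biases are skipped
            penalty = penalty + torch.linalg.matrix_norm(p, ord='nuc')
    return penalty

def train_step(x, y):
    optimizer.zero_grad()
    loss = criterion(model(x), y) + lambda_rank * nuclear_norm_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, each weight matrix tends to have rapidly decaying singular
# values, so it can be truncated (e.g. via torch.linalg.svd) into two thin
# factors with little loss in accuracy, yielding a smaller model.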