
    Coverings and Truncations of Graded Selfinjective Algebras

    Let $\Lambda$ be a graded self-injective algebra. We describe its smash product $\Lambda \# k\mathbb{Z}^*$ with the group $\mathbb{Z}$, its Beilinson algebra, and their relationship. Starting with $\Lambda$, we construct algebras of finite global dimension, called $\tau$-slice algebras; we show that their trivial extensions are all isomorphic and that their repetitive algebras are the same $\Lambda \# k\mathbb{Z}^*$. There exist $\tau$-mutations, similar to the BGP reflections, for the $\tau$-slice algebras. We also recover Iyama's absolute $n$-complete algebra as a truncation of the Koszul dual of a certain self-injective algebra.
    Comment: Manuscript revised, introduction and abstract rewritten
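    For orientation, the smash product used here is usually described as follows (a standard sketch via Cohen-Montgomery duality, not spelled out in this abstract): for a $\mathbb{Z}$-graded algebra $\Lambda$,
    \[
    \Lambda \# k\mathbb{Z}^* \;=\; \bigoplus_{i \in \mathbb{Z}} \Lambda\, p_i,
    \qquad (a\, p_i)(b\, p_j) \;=\; a\, b_{i-j}\, p_j,
    \]
    where $\{p_i\}$ is the basis of $k\mathbb{Z}^*$ dual to $\mathbb{Z}$ and $b_{i-j}$ is the degree $(i-j)$ component of $b$.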

    On $n$-translation algebras

    Motivated by Iyama's higher representation theory, we introduce $n$-translation quivers and $n$-translation algebras. The classical $\mathbb{Z}Q$ construction of the translation quiver is generalized to construct an $(n+1)$-translation quiver from an $n$-translation quiver, using trivial extension and smash product. We prove that the quadratic duals of $n$-translation algebras have $(n-1)$-almost split sequences in the category of their projective modules. We also present a non-Koszul $1$-translation algebra whose trivial extension is a $2$-translation algebra, which also provides a class of examples of $(3, m-1)$-Koszul algebras (and also a class of $(m-1, 3)$-Koszul algebras) for all $m \ge 2$.
    Comment: The paper is revised according to the referees' suggestions and comments. The definitions of $n$-translation quiver and admissibility are rewritten, and the results related to these definitions are revised. The results concerning $n$-almost split sequences are revised. Section 7 is removed and Section 6 is split into three sections. The mistakes and typos pointed out are corrected.
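    For reference, the classical $\mathbb{Z}Q$ construction being generalized here is usually stated as follows (a sketch of the standard definition, not taken from the paper): for a quiver $Q$,
    \[
    (\mathbb{Z}Q)_0 = \mathbb{Z} \times Q_0, \qquad
    (\mathbb{Z}Q)_1 = \{(n,\alpha)\colon (n,x) \to (n,y)\} \cup \{(n,\alpha')\colon (n,y) \to (n+1,x)\},
    \]
    with one pair of arrows for each arrow $\alpha\colon x \to y$ of $Q$, and translation $\tau(n,x) = (n-1,x)$.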

    Knowledge Distillation with Adversarial Samples Supporting Decision Boundary

    Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network to improve the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, we provide a new perspective based on the decision boundary, which is one of the most important components of a classifier. The generalization performance of a classifier is closely related to the adequacy of its decision boundary, so a good classifier bears a good decision boundary. Therefore, transferring information closely related to the decision boundary can be a good approach to knowledge distillation. To realize this goal, we utilize an adversarial attack to discover samples supporting a decision boundary. Based on this idea, to transfer more accurate information about the decision boundary, the proposed algorithm trains a student classifier on the adversarial samples supporting the decision boundary. Experiments show that the proposed method indeed improves knowledge distillation and achieves state-of-the-art performance.
    Comment: Accepted to AAAI 2019
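    A minimal sketch of the idea in PyTorch, not the authors' code (the attack step size, step count, and the plain softened-logit distillation loss are all assumptions): an iterative gradient attack pushes each input toward the teacher's decision boundary against the strongest competing class, and the student is then matched to the teacher on those boundary-supporting samples.

    import torch
    import torch.nn.functional as F

    def boundary_supporting_samples(teacher, x, y, steps=10, eps=0.02):
        # Perturb x toward the teacher's decision boundary by ascending the
        # gap between the strongest competing logit and the true-class logit.
        x_adv = x.clone().detach().requires_grad_(True)
        for _ in range(steps):
            logits = teacher(x_adv)
            top2 = logits.topk(2, dim=1).indices
            # Target class: the strongest competitor of the true label.
            target = torch.where(top2[:, 0] == y, top2[:, 1], top2[:, 0])
            gap = logits.gather(1, target[:, None]) - logits.gather(1, y[:, None])
            grad = torch.autograd.grad(gap.sum(), x_adv)[0]
            x_adv = (x_adv + eps * grad.sign()).detach().requires_grad_(True)
        return x_adv.detach()

    def distill_on_boundary_samples(student, teacher, x, y, T=4.0):
        # Softened-logit matching (vanilla KD loss) on the boundary samples.
        x_adv = boundary_supporting_samples(teacher, x, y)
        with torch.no_grad():
            t_logits = teacher(x_adv)
        s_logits = student(x_adv)
        return F.kl_div(F.log_softmax(s_logits / T, dim=1),
                        F.softmax(t_logits / T, dim=1),
                        reduction="batchmean") * T * T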

    Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons

    An activation boundary of a neuron refers to the separating hyperplane that determines whether the neuron is activated or deactivated. It has long been considered in neural networks that the activations of neurons, rather than their exact output values, play the most important role in forming classification-friendly partitions of the hidden feature space. However, as far as we know, this aspect of neural networks has not been considered in the literature on knowledge transfer. In this paper, we propose a knowledge transfer method via distillation of the activation boundaries formed by hidden neurons. For the distillation, we propose an activation transfer loss that attains its minimum when the boundaries generated by the student coincide with those of the teacher. Since the activation transfer loss is not differentiable, we design a piecewise differentiable loss that approximates it. By the proposed method, the student learns the separating boundary between the activation region and the deactivation region formed by each neuron in the teacher. Through experiments on various aspects of knowledge transfer, it is verified that the proposed method outperforms the current state of the art.
    Comment: Accepted to AAAI 2019
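    One plausible shape for such a piecewise differentiable surrogate, sketched in PyTorch (the margin, the squared-hinge form, and the pre-activation alignment are assumptions, not taken from the abstract): where a teacher neuron is active, the student's pre-activation is pushed above a positive margin, and where it is inactive, below the negative margin, so the loss vanishes exactly when the student reproduces the teacher's activation pattern with a margin.

    import torch

    def activation_boundary_loss(s_pre, t_pre, margin=1.0):
        # s_pre, t_pre: pre-activation tensors of student and teacher with
        # matching shapes (e.g. after a connector aligning channel widths).
        active = (t_pre > 0).float()  # teacher's binary activation pattern
        loss = (active * torch.relu(margin - s_pre) ** 2
                + (1.0 - active) * torch.relu(margin + s_pre) ** 2)
        return loss.mean()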