18,702 research outputs found
Coverings and Truncations of Graded Selfinjective Algebras
Let \Lambda be a graded self-injective algebra. We describe its smash
product \Lambda \# k\mathbb{Z}^* with the group \mathbb{Z}, its Beilinson
algebra and their relationship. Starting with \Lambda, we construct algebras
with finite global dimension, called \tau-slice algebras; we show that their
trivial extensions are all isomorphic, and their repetitive algebras are the
same \Lambda \# k\mathbb{Z}^*. There exist \tau-mutations similar to the BGP
reflections for the \tau-slice algebras. We also recover Iyama's absolute
n-complete algebra as a truncation of the Koszul dual of a certain
self-injective algebra.
Comment: Manuscript revised, introduction and abstract rewritten
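For orientation, the smash product admits a concrete matrix-style description (a standard sketch for \mathbb{Z}-graded algebras; the indexing convention below is an assumption for illustration, not taken from the paper):

```latex
% For a \mathbb{Z}-graded algebra \Lambda = \bigoplus_{i \in \mathbb{Z}} \Lambda_i,
% the smash product with the group \mathbb{Z} can be pictured as a
% "matrix algebra" whose (i,j) component is the graded piece \Lambda_{j-i}:
\[
  (\Lambda \# k\mathbb{Z}^*)_{i,j} = \Lambda_{j-i}, \qquad i, j \in \mathbb{Z},
\]
% with matrix-style multiplication
\[
  (ab)_{i,k} = \sum_{j \in \mathbb{Z}} a_{i,j}\, b_{j,k},
\]
% using the graded products \Lambda_{j-i} \times \Lambda_{k-j} \to \Lambda_{k-i}.
```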
On n-translation algebras
Motivated by Iyama's higher representation theory, we introduce
n-translation quivers and n-translation algebras. The classical construction
of the translation quiver \mathbb{Z}Q is generalized to construct an
(n+1)-translation quiver from an n-translation quiver, using trivial
extension and smash product. We prove that the quadratic duals of
n-translation algebras have n-almost split sequences in the
category of their projective modules. We also present a non-Koszul
n-translation algebra whose trivial extension is an (n+1)-translation
algebra, thus also providing a class of examples of -Koszul algebras (and
also a class of -Koszul algebras) for all n.
Comment: The paper is revised according to the referees' suggestions and
comments. The definitions of n-translation quiver and admissibility are
rewritten, and the results related to these definitions are revised. The
results concerning n-almost split sequences are revised. Section 7 is
removed and Section 6 is split into 3 sections. The mistakes and typos
pointed out are corrected.
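As background, the classical translation quiver construction that the abstract generalizes can be recalled as follows (a standard definition, stated here for orientation; it is not part of the paper's new material):

```latex
% For a quiver Q with vertex set Q_0 and arrow set Q_1, the translation
% quiver \mathbb{Z}Q has:
%   vertices:  (n, x) for n \in \mathbb{Z}, x \in Q_0;
%   arrows:    for each \alpha : x \to y in Q_1 and each n \in \mathbb{Z},
%              (n, \alpha)  : (n, x) \to (n, y)   and
%              (n, \alpha') : (n, y) \to (n+1, x);
% and the translation map
\[
  \tau(n, x) = (n - 1,\, x).
\]
```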
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
Many recent works on knowledge distillation have provided ways to transfer
the knowledge of a trained network for improving the learning process of a new
one, but finding a good technique for knowledge distillation is still an open
problem. In this paper, we provide a new perspective based on the decision
boundary, which is one of the most important components of a classifier. The
generalization performance of a classifier is closely related to the adequacy
of its decision boundary, so a good classifier bears a good decision boundary.
Therefore, transferring information closely related to the decision boundary
is a promising approach to knowledge distillation. To realize this goal, we
utilize an adversarial attack to discover samples supporting a decision
boundary. Based on this idea, to transfer more accurate information about the
decision boundary, the proposed algorithm trains a student classifier based on
the adversarial samples supporting the decision boundary. Experiments show that
the proposed method indeed improves knowledge distillation and achieves
state-of-the-art performance.
Comment: Accepted to AAAI 2019
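To make the idea concrete, the sketch below finds a sample near the decision boundary of a toy linear classifier by iteratively nudging an input toward another class, a simplified stand-in for the adversarial attack the abstract describes. The model, function name, and step sizes are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Toy 2-class linear classifier: logits = x @ W + b.
# find_boundary_sample moves a correctly classified input toward the
# decision boundary of a chosen target class with small gradient steps.
def find_boundary_sample(x, W, b, base, target, step=0.1, iters=100):
    x = x.astype(float).copy()
    for _ in range(iters):
        logits = x @ W + b
        if logits[target] >= logits[base]:   # crossed the boundary:
            return x                         # x now "supports" it
        # gradient of (target logit - base logit) w.r.t. x is constant
        # for a linear model: the difference of weight columns
        g = W[:, target] - W[:, base]
        x += step * g / (np.linalg.norm(g) + 1e-12)
    return x

# Usage: a point deep inside class 0 is pushed until class 1's logit
# catches up, yielding a sample close to the decision boundary.
W = np.array([[1.0, -1.0], [0.5, 0.5]])   # 2 features, 2 classes
b = np.zeros(2)
x0 = np.array([2.0, 0.0])                 # classified as class 0
bs = find_boundary_sample(x0, W, b, base=0, target=1)
margin = (bs @ W + b)[1] - (bs @ W + b)[0]
```

Because the step size is small, the returned sample sits just past the boundary, which is what makes it informative about the boundary's location.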
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
An activation boundary for a neuron refers to a separating hyperplane that
determines whether the neuron is activated or deactivated. It has long been
considered in neural networks that the activations of neurons, rather than
their exact output values, play the most important role in forming
classification-friendly partitions of the hidden feature space. However, as far
as we know, this aspect of neural networks has not been considered in the
literature of knowledge transfer. In this paper, we propose a knowledge
transfer method via distillation of activation boundaries formed by hidden
neurons. For the distillation, we propose an activation transfer loss that has
the minimum value when the boundaries generated by the student coincide with
those by the teacher. Since the activation transfer loss is not differentiable,
we design a piecewise differentiable loss approximating the activation transfer
loss. By the proposed method, the student learns a separating boundary between
the activation region and the deactivation region formed by each neuron in the
teacher.
Through the experiments in various aspects of knowledge transfer, it is
verified that the proposed method outperforms the current state-of-the-art.
Comment: Accepted to AAAI 2019
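The sketch below shows one plausible hinge-style, piecewise differentiable loss in the spirit of the abstract: the student is penalized only when its pre-activation disagrees with the teacher's activation sign by more than a margin. The margin value and exact form are illustrative assumptions, not necessarily the paper's loss.

```python
import numpy as np

# Piecewise differentiable activation-boundary transfer loss (sketch).
# teacher_pre, student_pre: pre-activation responses of the same shape.
def activation_transfer_loss(teacher_pre, student_pre, mu=1.0):
    rho = (teacher_pre > 0).astype(float)   # teacher's on/off pattern
    # teacher neuron on  -> push the student response above +mu
    # teacher neuron off -> push the student response below -mu
    on_term = rho * np.maximum(mu - student_pre, 0.0) ** 2
    off_term = (1.0 - rho) * np.maximum(mu + student_pre, 0.0) ** 2
    return float(np.mean(on_term + off_term))

# Usage: a student matching the teacher's activation pattern (with
# margin) incurs zero loss; an opposite pattern is penalized.
t = np.array([2.0, -1.0, 0.5, -3.0])
s_good = np.array([1.5, -2.0, 1.2, -1.1])   # same on/off pattern
s_bad = np.array([-1.5, 2.0, -1.2, 1.1])    # opposite pattern
loss_good = activation_transfer_loss(t, s_good)
loss_bad = activation_transfer_loss(t, s_bad)
```

Only the sign of the teacher response enters the loss, which reflects the abstract's point that activation boundaries, not exact output values, are what is transferred.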