
    Representations of Hopf-Ore extensions of group algebras and pointed Hopf algebras of rank one

    In this paper, we study the representation theory of Hopf-Ore extensions of group algebras and pointed Hopf algebras of rank one over an arbitrary field $k$. Let $H=kG(\chi,a,\delta)$ be a Hopf-Ore extension of $kG$ and $H'$ a rank one quotient Hopf algebra of $H$, where $k$ is a field, $G$ is a group, $a$ is a central element of $G$, and $\chi$ is a $k$-valued character for $G$ with $\chi(a)\neq 1$. We first show that the simple weight modules over $H$ and $H'$ are finite dimensional. Then we describe the structures of all simple weight modules over $H$ and $H'$, and classify them. We also consider the decomposition of the tensor product of two simple weight modules over $H'$ into the direct sum of indecomposable modules. Furthermore, we describe the structures of finite dimensional indecomposable weight modules over $H$ and $H'$, and classify them. Finally, when $\chi(a)$ is a primitive $n$-th root of unity for some $n>2$, we determine all finite dimensional indecomposable projective objects in the category of weight modules over $H'$.

    Comment: arXiv admin note: substantial text overlap with arXiv:1206.394
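
    For readers decoding the notation, here is a minimal sketch of the defining relations, assuming Panov's standard conventions for Hopf-Ore extensions (the abstract itself does not spell them out). As an algebra, $H=kG(\chi,a,\delta)$ is the Ore extension $kG[x;\tau,\delta]$ with $\tau(g)=\chi(g)g$, so that
    \[
        x g = \chi(g)\, g x + \delta(g) \quad \text{for all } g \in G,
    \]
    and the Hopf structure is determined on the generator $x$ by
    \[
        \Delta(x) = x \otimes a + 1 \otimes x, \qquad \varepsilon(x) = 0, \qquad S(x) = -x a^{-1}.
    \]
    Under these conventions, a weight module is one on which the group algebra $kG$ acts diagonally through characters of $G$, with $x$ shifting weight spaces by the character $\chi$.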

    Information Scrambling in Quantum Neural Networks

    The quantum neural network is one of the promising applications for near-term noisy intermediate-scale quantum computers. A quantum neural network distills the information from the input wave function into the output qubits. In this Letter, we show that this process can also be viewed from the opposite direction: the quantum information in the output qubits is scrambled into the input. This observation motivates us to use the tripartite information, a quantity recently developed to characterize information scrambling, to diagnose the training dynamics of quantum neural networks. We empirically find a strong correlation between the dynamical behavior of the tripartite information and the loss function in the training process, from which we identify two stages in the training of randomly initialized networks. In the early stage, the network performance improves rapidly and the tripartite information increases linearly with a universal slope, meaning that the neural network becomes less scrambled than the random unitary. In the latter stage, the network performance improves slowly while the tripartite information decreases. We present evidence that the network constructs local correlations in the early stage and learns large-scale structures in the latter stage. We believe this two-stage training dynamics is universal and applies to a wide range of problems. Our work builds a bridge between two research subjects, quantum neural networks and information scrambling, and opens up a new perspective on understanding quantum neural networks.
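
    For reference, the tripartite information used as the scrambling diagnostic here is presumably the standard quantity from the scrambling literature (Hosur, Qi, Roberts, and Yoshida); the abstract does not write it out, so the following is a sketch under that assumption. Viewing the network unitary as a state via channel-state duality, with $A$ a subsystem of the input and $C$, $D$ a partition of the output,
    \[
        I_3(A:C:D) = I(A:C) + I(A:D) - I(A:CD), \qquad I(X:Y) = S_X + S_Y - S_{XY},
    \]
    where $S$ denotes the von Neumann entropy. A random unitary drives $I_3$ strongly negative, so the linear increase reported for the early stage corresponds to the network becoming less scrambled than its random initialization.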