
    Torpid Mixing of Markov Chains for the Six-vertex Model on Z^2

    In this paper, we study the mixing time of two widely used Markov chain algorithms for the six-vertex model, Glauber dynamics and the directed-loop algorithm, on the square lattice Z^2. We prove, for the first time, that on finite regions of the square lattice these Markov chains are torpidly mixing under parameter settings in the ferroelectric phase and the anti-ferroelectric phase.
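    The paper's chains act on six-vertex arrow configurations, but the Glauber (heat-bath) update rule they analyze has a standard single-site form. As a minimal illustration of that rule only, here is a heat-bath Glauber step on an Ising grid; the Ising model, the function name, and all parameters are stand-ins, not the paper's chain:

    ```python
    import numpy as np

    def glauber_step(spins, beta, rng):
        """One Glauber (heat-bath) update on a periodic Ising grid.

        Illustrative sketch only: the six-vertex chain updates arrow
        configurations, but the heat-bath form of the update is the same.
        """
        n = spins.shape[0]
        i, j = rng.integers(0, n, size=2)  # pick a site uniformly at random
        # local field from the four neighbours (periodic boundary conditions)
        h = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
             + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # heat-bath probability
        spins[i, j] = 1 if rng.random() < p_up else -1
        return spins

    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(16, 16))
    for _ in range(10_000):
        glauber_step(spins, beta=0.3, rng=rng)
    ```

    Torpid mixing means the number of such steps needed to approach the stationary distribution grows exponentially in the region size, which is what the paper establishes in the ferroelectric and anti-ferroelectric phases.
    
    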

    Strain-induced pseudomagnetic field and quantum oscillations in kagome crystals

    A kagome lattice is composed of corner-sharing triangles arranged on a honeycomb lattice such that each honeycomb bond hosts a kagome site while each kagome triangle encloses a honeycomb site. This close relation implies that the two lattices share common features. We predict here that a kagome crystal, like graphene on the honeycomb lattice, reacts to elastic strain in a unique way: the bulk electronic states in the vicinity of the Dirac points are reorganized by the strain-induced pseudomagnetic field into flat Landau levels, while the degenerate edge states of the undeformed crystal become separated in energy. When the strain is tuned continuously, the resulting sweeping pseudomagnetic field gives rise to quantum oscillations in both the density of states (DOS) and the electric conductivity.
    Comment: 8 pages, 5 figures
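    For Dirac points, a (pseudo)magnetic field $B_s$ reorganizes the linear spectrum into the relativistic Landau ladder; this is the standard graphene-style result, with $v_F$ the Fermi velocity at the kagome Dirac cones (a textbook formula quoted here for context, not taken from the abstract):

    $$E_n = \operatorname{sgn}(n)\, v_F \sqrt{2\hbar e B_s |n|}, \qquad n = 0, \pm 1, \pm 2, \dots$$

    The $\sqrt{B_s}$ scaling means that continuously tuning the strain sweeps the levels through the Fermi energy, which is what produces the DOS and conductivity oscillations described above.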

    Table-to-text Generation by Structure-aware Seq2seq Learning

    Table-to-text generation aims to generate a description for a factual table, which can be viewed as a set of field-value records. To encode both the content and the structure of a table, we propose a novel structure-aware seq2seq architecture consisting of a field-gating encoder and a description generator with dual attention. In the encoding phase, we update the cell memory of the LSTM unit by a field gate and its corresponding field value in order to incorporate field information into the table representation. In the decoding phase, a dual attention mechanism that combines word-level and field-level attention is proposed to model the semantic relevance between the generated description and the table. We conduct experiments on the \texttt{WIKIBIO} dataset, which contains over 700k biographies and corresponding infoboxes from Wikipedia. The attention visualizations and case studies show that our model is capable of generating coherent and informative descriptions based on a comprehensive understanding of both the content and the structure of a table. Automatic evaluations also show our model outperforms the baselines by a large margin. Code for this work is available at https://github.com/tyliupku/wiki2bio.
    Comment: Accepted by AAAI201
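    The field-gating idea described above can be sketched as one LSTM step with an extra gate that injects a field embedding into the cell memory. This is a minimal NumPy sketch of that mechanism, assuming toy dimensions; the weight names (`W`, `U`, `F`, `b`) and the exact gate layout are our own illustrative choices, not the paper's implementation:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def field_gated_lstm_step(x, z, h_prev, c_prev, W, U, F, b):
        """One step of a field-gated LSTM cell (illustrative sketch).

        x: word embedding, z: field embedding for the same table cell,
        h_prev / c_prev: previous hidden and cell state.
        W, U project x and h_prev to the four standard gates plus a
        field gate; F projects the field embedding z.
        """
        d = h_prev.shape[0]
        gates = W @ x + U @ h_prev + b       # (5*d,) gate pre-activations
        i = sigmoid(gates[0:d])              # input gate
        f = sigmoid(gates[d:2*d])            # forget gate
        o = sigmoid(gates[2*d:3*d])          # output gate
        g = np.tanh(gates[3*d:4*d])          # candidate cell content
        l = sigmoid(gates[4*d:5*d])          # field gate
        # standard LSTM memory update plus a field term: the field gate
        # controls how much field information enters the cell memory
        c = f * c_prev + i * g + l * np.tanh(F @ z)
        h = o * np.tanh(c)
        return h, c

    # toy dimensions: word/field embeddings and hidden state of size 8
    rng = np.random.default_rng(1)
    d = 8
    W = rng.normal(scale=0.1, size=(5 * d, d))
    U = rng.normal(scale=0.1, size=(5 * d, d))
    F = rng.normal(scale=0.1, size=(d, d))
    b = np.zeros(5 * d)
    h, c = np.zeros(d), np.zeros(d)
    for _ in range(3):  # encode three field-value cells of a table
        x, z = rng.normal(size=d), rng.normal(size=d)
        h, c = field_gated_lstm_step(x, z, h, c, W, U, F, b)
    ```

    The design point is that the field gate plays the same role for structure that the input gate plays for content: both decide, per dimension, what is written into the cell memory.
    
    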