
    Magnetic Frustration and Iron-Vacancy Ordering in Iron-Chalcogenide

    We show that the magnetic and vacancy orders in the 122 ($A_{1-y}$Fe$_{2-x}$Se$_2$) iron chalcogenides can be naturally derived from the $J_1$-$J_2$-$J_3$ model, with $J_1$ the ferromagnetic (FM) nearest-neighbor exchange coupling and $J_2$, $J_3$ the antiferromagnetic (AFM) next-nearest and third-nearest-neighbor ones, respectively, previously proposed to describe the magnetism in the 11 (FeTe/Se) systems. In the 11 systems, the magnetic exchange couplings are strongly frustrated in the ordered bi-collinear antiferromagnetic state, so the magnetic transition temperature is low. In the 122 systems, the formation of iron-vacancy order reduces the magnetic frustration and significantly increases both the magnetic transition temperature and the ordered magnetic moment. The 245 iron-vacancy pattern ($\sqrt{5}\times\sqrt{5}$) observed in experiments corresponds to the maximum reduction of magnetic frustration, so the iron-vacancy ordering may be electronically driven. We explore other possible vacancy patterns and the magnetic orders associated with them, and we calculate the spin-wave excitations and their novel features as a test of our model.
    Comment: Figures are modified and more discussion is added
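
    As a rough, back-of-the-envelope illustration of the frustration argument (not taken from the paper), the sketch below sums the classical exchange energy of an Ising caricature of the bi-collinear state on a small periodic square lattice. The lattice size, the coupling values, and the Ising (collinear, unit-length spin) simplification are all assumptions made here for illustration.

```python
import numpy as np

def classical_energy(spins, J1, J2, J3):
    """Classical exchange energy per site, E = sum over bonds of J * S_i * S_j,
    for +/-1 spins on an L x L square lattice with periodic boundaries.
    J1 couples nearest neighbors (J1 < 0 is ferromagnetic), J2 the diagonal
    next-nearest neighbors, and J3 the third-nearest neighbors (two sites
    apart along an axis)."""
    L = spins.shape[0]
    E = 0.0
    for dx, dy, J in [(1, 0, J1), (0, 1, J1),    # nearest-neighbor bonds
                      (1, 1, J2), (1, -1, J2),   # next-nearest (diagonal) bonds
                      (2, 0, J3), (0, 2, J3)]:   # third-nearest bonds
        E += J * np.sum(spins * np.roll(np.roll(spins, dx, 0), dy, 1))
    return E / L**2

# Bi-collinear (double-stripe) caricature: spins repeat + + - - along the
# lattice axes and stay aligned along one diagonal direction.
L = 8
x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
bicollinear = np.where(((x + y) // 2) % 2 == 0, 1.0, -1.0)

# Hypothetical couplings: ferromagnetic J1 < 0, antiferromagnetic J2, J3 > 0.
print(classical_energy(bicollinear, J1=-1.0, J2=0.8, J3=0.6))  # -> -2 * J3
```

    In this toy pattern the $J_1$ and $J_2$ bond sums cancel exactly, leaving only $J_3$ to lower the energy; that near-cancellation is one concrete way to see why the bi-collinear state is heavily frustrated, and why removing sites through a vacancy order can relieve the frustration.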

    Simple Recurrent Units for Highly Parallelizable Recurrence

    Common recurrent neural architectures scale poorly because their state computations are inherently difficult to parallelize. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence and a highly parallelizable implementation, and it comes with careful initialization to facilitate the training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves a 5--9x speed-up over cuDNN-optimized LSTM on classification and question-answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average improvement of 0.7 BLEU over the Transformer model on translation by incorporating SRU into the architecture.
    Comment: EMNLP
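
    To make the parallelization argument concrete, here is a minimal NumPy sketch of a single SRU layer (a simplified reading of the recurrence described in the paper; the function and parameter names are illustrative and this is not the authors' released implementation). All matrix multiplications depend only on the input sequence, so they can be computed for every time step at once; what remains sequential is only a cheap element-wise update.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_layer(x, W, Wf, bf, Wr, br):
    """Single SRU layer over an input sequence x of shape (T, d)."""
    # Parallelizable part: no dependence on the recurrent state, so these
    # three products can be batched across all T time steps in one call.
    x_tilde = x @ W               # candidate values
    f = sigmoid(x @ Wf + bf)      # forget gates
    r = sigmoid(x @ Wr + br)      # reset / highway gates

    # Sequential part: element-wise only, no matrix multiply in the loop.
    c = np.zeros(x.shape[1])
    h = np.empty_like(x)
    for t in range(x.shape[0]):
        c = f[t] * c + (1.0 - f[t]) * x_tilde[t]          # light recurrence
        h[t] = r[t] * np.tanh(c) + (1.0 - r[t]) * x[t]    # highway connection
    return h

# Tiny usage example with random (untrained) parameters.
rng = np.random.default_rng(0)
T, d = 5, 4
x = rng.standard_normal((T, d))
W, Wf, Wr = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
bf = br = np.zeros(d)
print(sru_layer(x, W, Wf, bf, Wr, br).shape)  # (5, 4)
```

    The design choice the sketch highlights is that the recurrent state never enters a matrix product, so the expensive work is state-independent and batchable, while the gating and highway connection retain the expressiveness that the abstract credits for SRU's accuracy.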