
    The inertia of weighted unicyclic graphs

    Let $G_w$ be a weighted graph. The \textit{inertia} of $G_w$ is the triple $In(G_w)=\big(i_+(G_w),\,i_-(G_w),\,i_0(G_w)\big)$, where $i_+(G_w)$, $i_-(G_w)$ and $i_0(G_w)$ are the numbers of positive, negative and zero eigenvalues of the adjacency matrix $A(G_w)$ of $G_w$, counted with their multiplicities. $i_+(G_w)$ and $i_-(G_w)$ are called the \textit{positive} and \textit{negative index of inertia} of $G_w$, respectively. In this paper we present lower bounds for the positive and negative indices of weighted unicyclic graphs of order $n$ with fixed girth, and we characterize all weighted unicyclic graphs attaining these bounds. Moreover, we characterize the weighted unicyclic graphs of order $n$ with two positive, two negative, and at least $n-6$ zero eigenvalues, respectively. Comment: 23 pages, 8 figures
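    The definition translates directly into a computation: form the symmetric weighted adjacency matrix, compute its eigenvalues, and count signs. A minimal NumPy sketch follows; the zero tolerance tol is our own choice, not taken from the paper.

        import numpy as np

        def inertia(A, tol=1e-9):
            """Return the inertia triple (i_+, i_-, i_0) of a weighted graph.

            A   : symmetric weighted adjacency matrix
            tol : eigenvalues with |lambda| <= tol are counted as zero
            """
            eigvals = np.linalg.eigvalsh(A)       # real eigenvalues of a symmetric matrix
            i_pos = int(np.sum(eigvals > tol))
            i_neg = int(np.sum(eigvals < -tol))
            i_zero = len(eigvals) - i_pos - i_neg
            return i_pos, i_neg, i_zero

        # Example: a weighted 3-cycle (a unicyclic graph with girth 3)
        A = np.array([[0.0, 1.5, 2.0],
                      [1.5, 0.0, 0.5],
                      [2.0, 0.5, 0.0]])
        print(inertia(A))   # (1, 2, 0) for this weighting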

    Memory-augmented Neural Machine Translation

    Neural machine translation (NMT) has achieved notable success in recent years; however, it is also widely recognized that this approach has difficulty handling infrequent words and word pairs. This paper presents a novel memory-augmented NMT (M-NMT) architecture, which stores knowledge about how words (usually infrequently encountered ones) should be translated in a memory and then uses that memory to assist the neural model. We use this memory mechanism to combine the knowledge learned by a conventional statistical machine translation system with the rules learned by an NMT system, and we also propose a solution for out-of-vocabulary (OOV) words based on this framework. Our experiments on two Chinese-English translation tasks demonstrated that the M-NMT architecture outperformed the NMT baseline by 9.0 and 2.7 BLEU points, respectively. Additionally, we found that this architecture yields a much more effective OOV treatment than competitive methods.
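    The abstract does not specify how the memory output and the neural distribution are fused; below is a minimal sketch of one common scheme, linear interpolation of the decoder's softmax output with a memory-derived distribution. The names augment_with_memory and memory_table and the weight lam are illustrative assumptions, not the paper's API.

        import numpy as np

        def augment_with_memory(nmt_probs, source_word, memory_table, vocab_index, lam=0.3):
            """Blend a neural translation distribution with memory-derived probabilities.

            nmt_probs    : np.ndarray of shape (V,), softmax output of the NMT decoder
            source_word  : source-side word currently being translated
            memory_table : dict mapping a source word to {target_word: probability},
                           e.g. extracted from an SMT phrase table (assumption)
            vocab_index  : dict mapping target words to vocabulary indices
            lam          : interpolation weight given to the memory distribution
            """
            mem_probs = np.zeros_like(nmt_probs)
            for target_word, p in memory_table.get(source_word, {}).items():
                idx = vocab_index.get(target_word)
                if idx is not None:
                    mem_probs[idx] = p
            if mem_probs.sum() > 0:
                mem_probs /= mem_probs.sum()         # renormalize the memory entries
                return (1 - lam) * nmt_probs + lam * mem_probs
            return nmt_probs                         # no memory entry: fall back to NMT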

    Flexible and Creative Chinese Poetry Generation Using Neural Memory

    It has been shown that Chinese poems can be successfully generated by sequence-to-sequence neural models, particularly with the attention mechanism. A potential problem of this approach, however, is that neural models can only learn abstract rules, while poem generation is a highly creative process that involves not only rules but also innovations, for which purely statistical models are in principle not appropriate. This work proposes a memory-augmented neural model for Chinese poem generation, in which the neural model and the augmented memory work together to balance the requirements of linguistic conformity and aesthetic innovation, leading to innovative generations that remain rule-compliant. In addition, we find that the memory mechanism provides interesting flexibility that can be used to generate poems in different styles.

    Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Sparse matrix reconstruction has wide applications, such as DOA estimation and STAP; however, its performance is usually limited by the grid-mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which overcomes the grid-mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, the proposed method has higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
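    The first-order Taylor expansion presumably linearizes the dictionary around the grid points, $A(\theta)\approx A(\theta_0)+\dot A(\theta_0)\Delta\theta$, so that off-grid sources become jointly sparse over the pair of dictionaries $[A(\theta_0),\,\dot A(\theta_0)]$. The Joint-2D-SL0 algorithm itself is not detailed in this abstract; as background, here is a minimal sketch of the basic one-dimensional SL0 idea it builds on: approximate the $\ell_0$ norm with a smoothed Gaussian surrogate and alternate gradient steps with projection onto the constraint set. All parameter values below are illustrative assumptions.

        import numpy as np

        def sl0(A, y, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner_iters=3):
            """Basic smoothed-l0 (SL0) recovery of a sparse x with A @ x = y.

            Maximizes F_sigma(x) = sum_i exp(-x_i**2 / (2 * sigma**2)) over
            {x : A @ x = y} for a decreasing sequence of sigma values.
            """
            A_pinv = np.linalg.pinv(A)
            x = A_pinv @ y                        # minimum-norm feasible start
            sigma = 2.0 * np.max(np.abs(x))
            while sigma > sigma_min:
                for _ in range(inner_iters):
                    delta = x * np.exp(-x**2 / (2 * sigma**2))
                    x = x - mu * delta            # gradient step on F_sigma
                    x = x - A_pinv @ (A @ x - y)  # project back onto A @ x = y
                sigma *= sigma_decay
            return x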

    The naturalness in the BLMSSM and B-LSSM

    In order to interpret the Higgs mass and its decays more naturally, we introduce the BLMSSM and B-LSSM. In both models, right-handed neutrino superfields are introduced to better explain the neutrino mass problem. In addition, other superfields are considered to make these models more natural than the MSSM. In this paper, the method of $\chi^2$ analysis is adopted in the BLMSSM and B-LSSM to calculate the Higgs mass, Higgs decays and muon $g-2$. With fine-tuning in the ranges $0.67\%-2.5\%$ and $0.67\%-5\%$, we obtain reasonable theoretical values that are in accordance with the experimental results in the BLMSSM and B-LSSM, respectively. Meanwhile, the best-fitted benchmark points in the BLMSSM and B-LSSM are acquired at the minima $(\chi^{BL}_{min})^2 = 2.34736$ and $(\chi^{B-L}_{min})^2 = 2.47754$, respectively.
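    The abstract does not list the observables entering the fit, but the $\chi^2$ method it refers to is standard: sum the squared, uncertainty-normalized deviations of the model predictions from the measurements. A minimal sketch with entirely hypothetical numbers standing in for the Higgs mass, one signal strength, and the muon $g-2$ contribution (not real data, not the paper's inputs):

        import numpy as np

        def chi_squared(theory, experiment, sigma):
            """chi^2 = sum_i ((theory_i - experiment_i) / sigma_i)**2."""
            theory, experiment, sigma = map(np.asarray, (theory, experiment, sigma))
            return float(np.sum(((theory - experiment) / sigma) ** 2))

        # Hypothetical illustration: model predictions vs. measured values
        theory     = [125.3, 1.05, 2.51e-9]    # Higgs mass [GeV], a signal strength, Delta a_mu
        experiment = [125.25, 1.02, 2.49e-9]   # illustrative central values
        sigma      = [0.17, 0.08, 0.48e-9]     # illustrative 1-sigma uncertainties
        print(chi_squared(theory, experiment, sigma))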