The inertia of weighted unicyclic graphs
Let $G$ be a weighted graph. The \textit{inertia} of $G$ is the triple
$In(G) = (i_+(G), i_-(G), i_0(G))$, where $i_+(G)$, $i_-(G)$ and $i_0(G)$
are the numbers of positive, negative and zero eigenvalues of the adjacency
matrix of $G$, counted with their multiplicities; $i_+(G)$ and $i_-(G)$ are
called the \textit{positive} and \textit{negative index of inertia} of $G$,
respectively. In this paper we present a lower bound for the positive and
negative indices of inertia of weighted unicyclic graphs of order $n$ with
fixed girth and characterize all weighted unicyclic graphs attaining this
lower bound. Moreover, we characterize the weighted unicyclic graphs of order
$n$ with two positive, two negative and at least a prescribed number of zero
eigenvalues, respectively.
Comment: 23 pages, 8 figures
Memory-augmented Neural Machine Translation
Neural machine translation (NMT) has achieved notable success in recent
years; however, it is also widely recognized that this approach has limitations
with handling infrequent words and word pairs. This paper presents a novel
memory-augmented NMT (M-NMT) architecture, which stores knowledge about how
words (usually infrequently encountered ones) should be translated in a memory
and then utilizes it to assist the neural model. We use this memory mechanism
to combine the knowledge learned from a conventional statistical machine
translation system and the rules learned by an NMT system, and also propose a
solution for out-of-vocabulary (OOV) words based on this framework. Our
experiments on two Chinese-English translation tasks demonstrated that the
M-NMT architecture outperformed the NMT baseline in BLEU score on the two
tasks. Additionally, we found this architecture
resulted in a much more effective OOV treatment compared to competitive
methods.
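As a toy sketch of the idea (an illustrative interpolation scheme, not the paper's exact M-NMT architecture; the vocabulary, memory entries and mixing weight `lam` are all invented for the example), a word-translation memory can be merged with a model's output distribution like so:

```python
import numpy as np

# Hypothetical target vocabulary and a tiny source->target translation memory.
vocab = ["<unk>", "cat", "dog", "rare_word_translation"]
memory = {"rare_source_word": "rare_word_translation"}

def combine(nmt_probs, source_word, lam=0.5):
    """Interpolate the NMT output distribution with a one-hot
    distribution read from the translation memory."""
    target = memory.get(source_word)
    if target is None:
        return nmt_probs                # no memory entry: trust the model
    mem_probs = np.zeros(len(vocab))
    mem_probs[vocab.index(target)] = 1.0
    return (1 - lam) * nmt_probs + lam * mem_probs

# A model that dumps most mass on <unk> for a rare word is corrected
# toward the memorized translation:
nmt = np.array([0.7, 0.1, 0.1, 0.1])
print(combine(nmt, "rare_source_word"))
```

The interpolation keeps the result a valid probability distribution, and falling back to the raw model distribution when the memory has no entry is what makes the memory a non-intrusive add-on.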
Flexible and Creative Chinese Poetry Generation Using Neural Memory
It has been shown that Chinese poems can be successfully generated by
sequence-to-sequence neural models, particularly with the attention mechanism.
A potential problem of this approach, however, is that neural models can only
learn abstract rules, while poem generation is a highly creative process that
involves not only rules but also innovations for which pure statistical models
are not appropriate in principle. This work proposes a memory-augmented neural
model for Chinese poem generation, where the neural model and the augmented
memory work together to balance the requirements of linguistic accordance and
aesthetic innovation, leading to innovative generations that are still
rule-compliant. In addition, it is found that the memory mechanism provides
interesting flexibility that can be used to generate poems with different
styles.
Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction
Sparse matrix reconstruction has wide applications such as DOA estimation and STAP; however, its performance is usually limited by the grid-mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which overcomes the grid-mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, the proposed method has higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
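For background, the smoothed-L0 (SL0) recovery scheme that Joint-2D-SL0 builds on can be sketched in its basic vector form. This is the classical SL0 baseline rather than the paper's joint 2-D variant, and the parameters and test problem below are illustrative assumptions:

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Basic smoothed-L0 sparse recovery for y = A @ x: maximize the
    smooth surrogate sum(exp(-x_i**2 / (2 sigma**2))) of the L0 'norm'
    on the affine set {x : A @ x = y}, annealing sigma downward."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                       # minimum-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # gradient step on the smoothed surrogate
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # project back onto the constraint A @ x = y
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decrease          # sharpen toward the true L0
    return x

# Recover a 2-sparse vector of length 20 from 12 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
x_hat = sl0(A, A @ x_true)
```

Annealing sigma from large to small is what lets the method follow the easy, smooth problem down to a sharp L0-like objective without getting trapped early in a poor local maximum.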
The naturalness in the BLMSSM and B-LSSM
In order to interpret the Higgs mass and its decays more naturally, we
introduce the BLMSSM and B-LSSM. In both models, right-handed neutrino
superfields are introduced to better explain the neutrino mass problem. In
addition, other superfields are considered to make these models more
natural than the MSSM. In this paper, a naturalness analysis is
adopted in the BLMSSM and B-LSSM to calculate the Higgs mass, Higgs decays and
the muon anomalous magnetic moment. With fine-tuning in an acceptable region,
we can obtain reasonable theoretical values that are in accordance with the
experimental results in the BLMSSM and B-LSSM, respectively. Meanwhile, the
best-fitted benchmark points in the BLMSSM and B-LSSM are acquired at
minimal fine-tuning, respectively.