
    Neural System Combination for Machine Translation

    Neural machine translation (NMT) has emerged as a new approach to machine translation and generates much more fluent results than statistical machine translation (SMT). However, SMT usually surpasses NMT in translation adequacy. Combining the advantages of both NMT and SMT is therefore a promising direction. In this paper, we propose a neural system combination framework leveraging multi-source NMT, which takes the outputs of NMT and SMT systems as input and produces the final translation. Extensive experiments on the Chinese-to-English translation task show that our model achieves a significant improvement of 5.3 BLEU points over the best single system output and 3.4 BLEU points over state-of-the-art traditional system combination methods.
    Comment: Accepted as a short paper by ACL-2017
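
    To make the multi-source design concrete, here is a minimal sketch in PyTorch (an illustration under stated assumptions, not the authors' architecture): two encoders read the NMT hypothesis and the SMT hypothesis, and at every decoding step the decoder attends over the concatenation of both encodings. The module layout, hidden sizes, and the GRU/attention choices are all assumptions made for this sketch.

    import torch
    import torch.nn as nn

    class MultiSourceCombiner(nn.Module):
        def __init__(self, vocab_size, d_model=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            # One encoder per input system (NMT hypothesis, SMT hypothesis).
            self.enc_nmt = nn.GRU(d_model, d_model, batch_first=True)
            self.enc_smt = nn.GRU(d_model, d_model, batch_first=True)
            # Joint attention over the concatenated encodings of both systems.
            self.attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
            self.dec = nn.GRUCell(2 * d_model, d_model)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, nmt_ids, smt_ids, prev_ids):
            h_nmt, _ = self.enc_nmt(self.embed(nmt_ids))  # (B, T_nmt, D)
            h_smt, _ = self.enc_smt(self.embed(smt_ids))  # (B, T_smt, D)
            memory = torch.cat([h_nmt, h_smt], dim=1)     # both sources as one memory
            state = memory.mean(dim=1)                    # crude decoder init
            logits = []
            for t in range(prev_ids.size(1)):
                query = state.unsqueeze(1)                # (B, 1, D)
                ctx, _ = self.attn(query, memory, memory) # attend over both sources
                step_in = torch.cat([self.embed(prev_ids[:, t]), ctx.squeeze(1)], dim=-1)
                state = self.dec(step_in, state)
                logits.append(self.out(state))
            return torch.stack(logits, dim=1)             # (B, T_prev, vocab)

    # Toy forward pass: batch of 2, vocabulary of 100 tokens.
    model = MultiSourceCombiner(vocab_size=100)
    nmt_hyp = torch.randint(0, 100, (2, 7))
    smt_hyp = torch.randint(0, 100, (2, 9))
    prev_tok = torch.randint(0, 100, (2, 5))
    print(model(nmt_hyp, smt_hyp, prev_tok).shape)        # torch.Size([2, 5, 100])

    In the full framework one would presumably also encode the original source sentence as an additional input; the sketch keeps only the two system outputs for brevity.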

    Subgraph Contrastive Link Representation Learning

    Graph representation learning (GRL) has emerged as a powerful technique for solving graph analytics tasks. It can effectively convert discrete graph data into a low-dimensional space where graph structure and graph properties are maximally preserved. While there is a rich literature on node and whole-graph representation learning, GRL for links is comparatively less studied and less well understood. One common practice in previous work is to generate a link representation by directly aggregating the representations of its incident nodes, which fails to capture effective link features. Moreover, common GRL methods usually rely on full-graph training and thus suffer from poor scalability and high resource consumption on large-scale graphs. In this paper, we design Subgraph Contrastive Link Representation Learning (SCLRL) -- a self-supervised link embedding framework that exploits the strong correlation between central links and their neighborhood subgraphs to characterize links. We extract "link-centric induced subgraphs" as input, with subgraph-level contrastive discrimination as the pretext task, and learn intrinsic and structural link features via subgraph mini-batch training. Extensive experiments on five datasets demonstrate that, compared with existing link representation learning methods, SCLRL offers significant performance advantages on benchmark datasets and prominent efficiency advantages in training speed and memory consumption on large-scale graphs.
    Comment: 8 pages, 4 figures
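
    A minimal sketch of the SCLRL recipe as described in the abstract (an illustration, not the released code): extract the induced subgraph around a link's two endpoints, build two randomly augmented views, pool node features into subgraph embeddings, and contrast matching views with an InfoNCE-style loss. The toy encoder here (feature dropout plus mean-pooling) stands in for the GNN a real system would use; all function names and the networkx/PyTorch setup are assumptions.

    import networkx as nx
    import torch
    import torch.nn.functional as F

    def link_subgraph(g, u, v, hops=1):
        # Induced subgraph on the union of the k-hop neighborhoods of u and v.
        nodes = set(nx.single_source_shortest_path_length(g, u, cutoff=hops))
        nodes |= set(nx.single_source_shortest_path_length(g, v, cutoff=hops))
        return g.subgraph(nodes)

    def encode(sub, feats, drop=0.2):
        # Toy view encoder: random feature dropout (the augmentation) followed
        # by mean-pooling. A real encoder would be a GNN over the subgraph.
        x = torch.stack([feats[n] for n in sub.nodes()])
        mask = (torch.rand(x.size(0), 1) > drop).float()
        return F.normalize((x * mask).mean(dim=0), dim=0)

    def info_nce(z1, z2, temp=0.5):
        # Subgraph-level contrastive discrimination: the two views of the same
        # link are positives; every other subgraph in the batch is a negative.
        logits = (z1 @ z2.t()) / temp                 # (B, B) cosine similarities
        labels = torch.arange(z1.size(0))
        return F.cross_entropy(logits, labels)

    # Toy mini-batch over a random graph.
    g = nx.erdos_renyi_graph(30, 0.15, seed=0)
    feats = {n: torch.randn(16) for n in g.nodes()}
    links = list(g.edges())[:8]                       # mini-batch of central links
    z1 = torch.stack([encode(link_subgraph(g, u, v), feats) for u, v in links])
    z2 = torch.stack([encode(link_subgraph(g, u, v), feats) for u, v in links])
    print(info_nce(z1, z2))                           # scalar loss to minimize

    Because each training step touches only small induced subgraphs rather than the whole graph, mini-batch cost stays bounded regardless of the full graph's size, which is where the claimed speed and memory advantages would come from.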