Source side pre-ordering using recurrent neural networks for English-Myanmar machine translation
Word reordering remains one of the challenging problems in machine translation when translating between language pairs with different word orders, e.g. English and Myanmar. Without reordering between these languages, a source sentence may be translated directly with a similar word order, and the translation may not be meaningful. Myanmar is a subject-object-verb (SOV) language, so effective reordering is essential for translation. In this paper, we apply a pre-ordering approach using recurrent neural networks to pre-order the words of the source English sentence into the target Myanmar word order. This neural pre-ordering model is automatically derived from parallel word-aligned data, with syntactic and lexical features based on dependency parse trees of the source sentences. It can generate arbitrary, possibly non-local permutations over the sentence and can be combined into English-Myanmar machine translation. We exploit the model to reorder English sentences into Myanmar-like word order as a preprocessing stage for machine translation, obtaining quality improvements comparable to a baseline rule-based pre-ordering approach on the Asian Language Treebank (ALT) corpus.
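As a minimal sketch of the pre-ordering step (the function name and the example permutation are illustrative assumptions, not the paper's model output), pre-ordering amounts to applying a predicted permutation to the source tokens before they are fed to the translation system:

```python
def apply_preorder(tokens, permutation):
    """Reorder source tokens into a target-like order given a predicted permutation.

    `permutation[j]` gives the source index of the token that should appear
    at target position j.
    """
    assert sorted(permutation) == list(range(len(tokens))), "not a valid permutation"
    return [tokens[i] for i in permutation]

# English SVO -> SOV-like order (hand-picked permutation for illustration)
print(apply_preorder(["I", "eat", "rice"], [0, 2, 1]))  # ['I', 'rice', 'eat']
```

In the paper the permutation itself comes from a recurrent neural network trained on word-aligned parallel data; this sketch only shows how such a permutation, once predicted, is applied.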
Text Coherence Analysis Based on Deep Neural Network
In this paper, we propose a novel deep coherence model (DCM) using a convolutional neural network architecture to capture text coherence. The text coherence problem is investigated from a new perspective: learning sentence distributional representations and modeling text coherence simultaneously. In particular, the model captures the interactions between sentences by computing the similarities of their distributional representations. Further, it can be easily trained in an end-to-end fashion. The proposed model is evaluated on a standard sentence ordering task. The experimental results demonstrate its effectiveness and promise in coherence assessment, outperforming the state-of-the-art by a wide margin.
Comment: 4 pages, 2 figures, CIKM 201
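The similarity-based interaction idea can be sketched with plain cosine similarity over precomputed sentence vectors (the DCM learns these representations with a CNN end to end; the function names and vectors here are illustrative assumptions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def adjacent_similarities(sentence_vecs):
    """Similarity of each adjacent sentence pair -- one simple coherence signal."""
    return [cosine(u, v) for u, v in zip(sentence_vecs, sentence_vecs[1:])]

# Two nearly identical sentences followed by an unrelated one
print(adjacent_similarities([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))  # [1.0, 0.0]
```

A coherent text would tend to produce higher adjacent-pair similarities; the actual model replaces this fixed score with learned interactions.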
Order-Preserving Abstractive Summarization for Spoken Content Based on Connectionist Temporal Classification
Connectionist temporal classification (CTC) is a powerful approach to sequence-to-sequence learning and has been widely used in speech recognition. A central idea of CTC is the addition of a "blank" label during training. With this mechanism, CTC eliminates the need for segment alignment and hence has been applied to various sequence-to-sequence learning problems. In this work, we apply CTC to abstractive summarization of spoken content. The "blank" label in this case implies that the corresponding input data are less important or noisy and can therefore be ignored. This approach was shown to outperform existing methods in terms of ROUGE scores on the Chinese Gigaword and MATBN corpora. It also has the nice property that the ordering of words or characters in the input documents is better preserved in the generated summaries.
Comment: Accepted by Interspeech 201
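The role of the "blank" label can be seen in CTC's standard collapsing rule: an output path is reduced by merging consecutive repeated labels and then dropping blanks, so frames labeled blank simply vanish from the result. A minimal sketch (using "-" as the blank symbol; identifiers are illustrative):

```python
def ctc_collapse(labels, blank="-"):
    """Collapse a CTC output path: merge repeated labels, then drop blanks."""
    out = []
    prev = None
    for label in labels:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out

# Blanks both separate genuine repeats and mark ignorable positions
print(ctc_collapse(list("aa--ab-b")))  # ['a', 'a', 'b', 'b']
```

In the summarization setting described above, input frames the model tags as blank correspond to content judged unimportant or noisy, which is why they can be dropped from the generated summary.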
Energy-Efficient Inference Accelerator for Memory-Augmented Neural Networks on an FPGA
Memory-augmented neural networks (MANNs) are designed for question-answering
tasks. It is difficult to run a MANN effectively on accelerators designed for
other neural networks (NNs), in particular on mobile devices, because MANNs
require recurrent data paths and various types of operations related to
external memory access. We implement an accelerator for MANNs on a
field-programmable gate array (FPGA) based on a data flow architecture.
Inference times are also reduced by inference thresholding, which is a
data-based maximum inner-product search specialized for natural language tasks.
Measurements on the bAbI data show that the energy efficiency of the accelerator (FLOPS/kJ) was higher than that of an NVIDIA TITAN V GPU by a factor of about 125, increasing to 140 with inference thresholding.
Comment: Accepted to DATE 201
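The inference-thresholding idea, a maximum inner-product search that stops early once a candidate is confident enough, can be sketched as follows (this is a simplified stand-in for the paper's data-based method; the function name and threshold semantics are assumptions):

```python
def mips_with_threshold(query, keys, threshold):
    """Maximum inner-product search with an early exit.

    Scans candidate key vectors, tracking the best inner product with the
    query, and stops as soon as a score clears `threshold`.
    """
    best_idx, best_score = -1, float("-inf")
    for i, key in enumerate(keys):
        score = sum(q * k for q, k in zip(query, key))
        if score > best_score:
            best_idx, best_score = i, score
        if best_score >= threshold:
            break  # confident enough; skip the remaining candidates
    return best_idx, best_score

keys = [[1.0, 0.0], [0.0, 1.0], [2.0, 0.0]]
print(mips_with_threshold([1.0, 0.0], keys, 1.5))  # (2, 2.0)
```

Skipping the remaining candidates once the threshold is met is what reduces inference time; the accelerator derives its threshold from the data rather than taking it as a fixed constant.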