    Memory-enhanced Decoder for Neural Machine Translation

    We propose to enhance the RNN decoder in a neural machine translation (NMT) system with external memory, as a natural but powerful extension of the state in the decoding RNN. This memory-enhanced RNN decoder is called MemDec. At each time step during decoding, MemDec reads from and writes to this memory once, both with content-based addressing. Unlike the unbounded memory used in previous work (RNNsearch) to store the representation of the source sentence, the memory in MemDec is a matrix of pre-determined size, designed to better capture the information important to the decoding process at each time step. Our empirical study on Chinese-English translation shows that it improves over Groundhog by 4.8 BLEU and over Moses by 5.3 BLEU, yielding the best performance achieved with the same training set. Comment: 11 pages
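
    The abstract fixes the interface (one content-based read and one content-based write per step, over a matrix of pre-determined size) but not the update equations. The numpy sketch below shows one standard way such a memory can be addressed; the slot count n, width d, and the erase/add write parametrization are assumptions in the NTM style, not details taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def content_address(memory, key):
    # Cosine similarity between the key and each of the n memory slots,
    # turned into attention weights over the slots.
    sims = memory @ key / (np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    return softmax(sims)                      # shape (n,)

def mem_read(memory, key):
    # Read: convex combination of the memory rows under the addressing weights.
    w = content_address(memory, key)
    return w @ memory                         # shape (d,)

def mem_write(memory, key, erase, add):
    # Write: erase then add, both gated by the same addressing weights.
    w = content_address(memory, key)[:, None]        # shape (n, 1)
    memory = memory * (1.0 - w * erase[None, :])     # erase vector in (0, 1)^d
    return memory + w * add[None, :]                 # add vector in R^d

# Toy usage: a memory of n=8 slots, each d=16 wide, queried once per step.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 16))
k = rng.standard_normal(16)                  # key, e.g. derived from the decoder state
r = mem_read(M, k)                           # read vector fed back into the decoder
M = mem_write(M, k, erase=rng.uniform(size=16), add=rng.standard_normal(16))
```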

    Deep Neural Machine Translation with Linear Associative Unit

    Deep Neural Networks (DNNs) have provably enhanced the state-of-the-art in Neural Machine Translation (NMT) with their capability to model complex functions and capture complex linguistic structures. However, NMT systems with deep architectures in their encoder or decoder RNNs often suffer from severe gradient diffusion due to the non-linear recurrent activations, which makes optimization much more difficult. To address this problem we propose novel linear associative units (LAUs) to reduce the gradient propagation length inside the recurrent unit. Unlike conventional approaches (the LSTM unit and GRU), LAUs use linear associative connections between the input and output of the recurrent unit, which allow unimpeded information flow in both the spatial and temporal directions. The model is quite simple, yet surprisingly effective. Our empirical study on Chinese-English translation shows that, with a proper configuration, our model improves over Groundhog by 11.7 BLEU and surpasses the best reported results in the same setting. On the WMT14 English-German task and the larger WMT14 English-French task, our model achieves results comparable to the state of the art. Comment: 10 pages, ACL 2017
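
    The abstract describes the mechanism only at a high level, so the sketch below is a hypothetical cell, not the paper's equations: it illustrates how a gated linear connection lets the input reach the output of the recurrent unit without passing through a tanh, which is what shortens the gradient path. All weight shapes and the gating scheme here are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LAUCellSketch:
    """Illustrative recurrent cell with a gated linear input-to-output path.

    Not the paper's exact equations: the point is that `lin` reaches the
    output without a squashing non-linearity, shortening the gradient path
    the abstract describes.
    """

    def __init__(self, d, rng):
        self.Wh = 0.1 * rng.standard_normal((d, d))       # recurrent weights
        self.Wx = 0.1 * rng.standard_normal((d, d))       # input weights
        self.Wg = 0.1 * rng.standard_normal((d, 2 * d))   # gate weights

    def step(self, h_prev, x):
        g = sigmoid(self.Wg @ np.concatenate([h_prev, x]))  # mixing gate
        h_tilde = np.tanh(self.Wh @ h_prev + self.Wx @ x)   # non-linear path
        lin = self.Wx @ x                                   # linear associative path
        return g * h_tilde + (1.0 - g) * lin                # gated blend of the two

# Toy usage: unroll the cell over a short sequence.
rng = np.random.default_rng(0)
cell = LAUCellSketch(d=8, rng=rng)
h = np.zeros(8)
for x in rng.standard_normal((5, 8)):
    h = cell.step(h, x)
```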

    Human Capital and Risky Asset Allocation

    Much research has examined the relation between investors' human capital and their financial asset allocation. While some studies showed that the value of human capital should be taken into consideration when deciding the composition of an investment portfolio, most argued otherwise. In this paper, we selected the monthly returns of 9 industry ETFs from June 2007 to July 2011, used the present value of total future income as an estimate of human capital, and relied on the mean-variance optimal asset allocation framework to re-examine whether human capital affects investors' optimal financial portfolios. Based on our tests, we found a significant connection between human capital and risky asset allocation, which resulted in significant changes to the weights allocated to the risky assets in a mean-variance optimal portfolio.
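
    As a rough illustration of the framework, the sketch below treats human capital as a fixed, non-tradable position and solves for the financial weights that make the total portfolio mean-variance optimal. The inputs (gamma, alpha, beta_hc) and the adjustment formula are textbook assumptions, not the paper's estimates or data.

```python
import numpy as np

def mv_weights(mu, cov, gamma):
    # Unconstrained mean-variance solution: w = (1/gamma) * cov^{-1} mu.
    return np.linalg.solve(cov, mu) / gamma

def financial_weights_with_hc(mu, cov, gamma, alpha, beta_hc):
    """Adjust financial weights for a non-tradable human-capital stake.

    alpha   : share of total wealth held as human capital (PV of future income)
    beta_hc : exposures of human-capital returns to the risky assets
              (hypothetical inputs; they would be estimated from income data)
    Human capital already supplies alpha * beta_hc of the target exposure,
    so the financial account must deliver the remainder.
    """
    w_total = mv_weights(mu, cov, gamma)
    return (w_total - alpha * beta_hc) / (1.0 - alpha)

# Toy usage with 3 risky assets and risk aversion gamma = 4.
mu = np.array([0.06, 0.08, 0.05])        # expected excess returns (assumed)
cov = np.diag([0.04, 0.09, 0.02])        # simplistic diagonal covariance
beta_hc = np.array([0.2, 0.1, 0.3])      # human capital's asset exposures
w_no_hc = mv_weights(mu, cov, gamma=4.0)
w_hc = financial_weights_with_hc(mu, cov, gamma=4.0, alpha=0.5, beta_hc=beta_hc)
```

    Comparing w_no_hc and w_hc shows the paper's qualitative finding in miniature: once human capital is counted as part of total wealth, the weights on the risky financial assets shift away from the assets human capital is already exposed to.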