25 research outputs found

    Asymmetry of Processing Trade in China: Theory and Empirics


    Context-Aware Self-Attention Networks

    Self-attention models have shown their flexibility in parallel computation and their effectiveness in modeling both long- and short-term dependencies. However, they calculate the dependencies between representations without considering contextual information, which has proven useful for modeling dependencies among neural representations in various natural language tasks. In this work, we focus on improving self-attention networks by capturing the richness of context. To maintain the simplicity and flexibility of self-attention networks, we propose to contextualize the transformations of the query and key layers, which are used to calculate the relevance between elements. Specifically, we leverage the internal representations that embed both global and deep contexts, thus avoiding reliance on external resources. Experimental results on the WMT14 English-German and WMT17 Chinese-English translation tasks demonstrate the effectiveness and universality of the proposed methods. Furthermore, we conducted extensive analyses to quantify how the context vectors participate in the self-attention model.
    Comment: AAAI 201
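    The core idea of the abstract can be sketched in a few lines: condition the query and key projections on a context vector derived from the input itself. The following is a minimal numpy illustration, assuming the context is the mean of the input states and that it enters the query/key transforms additively; all parameter names and the exact conditioning scheme are illustrative, not the paper's implementation.

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def context_aware_attention(X, Wq, Wk, Wv, Uq, Uk):
        """Self-attention whose query/key projections are conditioned on a
        global context vector c (here: the mean of the input states).
        Shapes: X is (n, d); all weight matrices are (d, d)."""
        c = X.mean(axis=0)                      # global context summary
        Q = X @ Wq + c @ Uq                     # contextualized queries
        K = X @ Wk + c @ Uk                     # contextualized keys
        V = X @ Wv                              # values are unchanged
        scores = softmax(Q @ K.T / np.sqrt(X.shape[1]))
        return scores @ V

    rng = np.random.default_rng(0)
    n, d = 5, 8
    X = rng.standard_normal((n, d))
    Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(5)]
    out = context_aware_attention(X, *Ws)
    print(out.shape)  # (5, 8)
    ```

    Because the context vector is computed from internal representations, no external resources are needed, matching the constraint the abstract emphasizes.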

    Dynamic Layer Aggregation for Neural Machine Translation with Routing-by-Agreement

    With the promising progress of deep neural networks, layer aggregation has been used to fuse information across layers in various fields, such as computer vision and machine translation. However, most previous methods combine layers in a static fashion, in that their aggregation strategy is independent of the specific hidden states. Inspired by recent progress on capsule networks, in this paper we propose to use routing-by-agreement strategies to aggregate layers dynamically. Specifically, the algorithm learns the probability of a part (individual layer representations) being assigned to a whole (aggregated representations) in an iterative way and combines the parts accordingly. We implement our algorithm on top of the state-of-the-art neural machine translation model TRANSFORMER and conduct experiments on the widely used WMT14 English-German and WMT17 Chinese-English translation datasets. Experimental results across language pairs show that the proposed approach consistently outperforms the strong baseline model and a representative static aggregation model.
    Comment: AAAI 201
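    The routing-by-agreement loop described above can be sketched as follows: coupling weights over the layer "parts" are refined iteratively by how much each part agrees with the current aggregated "whole". This is a toy numpy sketch of capsule-style dynamic routing for one position, not the paper's exact formulation; the squash function and iteration count are standard capsule-network choices used here for illustration.

    ```python
    import numpy as np

    def softmax(x, axis=0):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def route_layers(layers, iters=3):
        """Dynamically aggregate a stack of layer representations.
        layers: (L, d) array of per-layer hidden states for one position.
        Returns the aggregated (d,) vector. Unlike static aggregation,
        the coupling weights depend on the hidden states themselves."""
        L, d = layers.shape
        b = np.zeros(L)                        # routing logits
        for _ in range(iters):
            c = softmax(b)                     # coupling coefficients
            s = c @ layers                     # candidate "whole"
            v = s / (1.0 + np.linalg.norm(s))  # squash (capsule-style)
            b = b + layers @ v                 # update logits by agreement
        return v

    rng = np.random.default_rng(0)
    stack = rng.standard_normal((6, 16))       # e.g. 6 decoder layers
    agg = route_layers(stack)
    ```

    A static aggregation model would instead use fixed (input-independent) weights in place of the iteratively refined coefficients `c`.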

    Achieving green environment targets in the world’s top 10 emitter countries: the role of green innovations and renewable electricity production

    The rapid pace of industrialisation and economic development in recent decades is not without its environmental consequences. Electricity production, though an important determinant of economic development, has remained understudied in the existing literature, and only a few models of the electricity production-environmental degradation nexus are available. As a first attempt, this study examines the impact of renewable and non-renewable electricity generation and eco-innovations on CO2 emissions in the world's top emitting countries under the umbrella of the Environmental Kuznets Curve (E.K.C.) hypothesis. Second-generation panel data techniques, i.e., the C.I.P.S. and Bai and Carrion-i-Silvestre (2009) unit root tests, the Westerlund and Edgerton (2008) and Banerjee and Carrion-i-Silvestre (2017) cointegration techniques, and the Cross-Sectionally Augmented Distributed Lag Model for short- and long-run coefficient estimation, have been employed in the study. It is found that renewable electricity production and eco-innovations have negative effects, whereas non-renewable electricity production has a positive effect, on CO2 emissions. Moreover, the estimation validated the E.K.C. hypothesis in these countries. It is recommended that fossil fuel dependency in the electricity sector be reduced by devising policies directed towards green electricity measures. More investment in green innovations to achieve a green environment and sustainable growth is also recommended by the study.
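    The E.K.C. hypothesis tested above implies an inverted-U relationship: emissions rise with income up to a turning point, then fall. The logic of the test can be shown with a toy OLS fit on synthetic data; the coefficients, the renewables control, and the simple least-squares fit are all illustrative, not the paper's second-generation panel estimators.

    ```python
    import numpy as np

    # Synthetic panel-like data: emissions follow an inverted U in income
    # and fall with renewable electricity share (coefficients invented).
    rng = np.random.default_rng(1)
    income = rng.uniform(1, 10, 200)
    renewables = rng.uniform(0, 1, 200)
    co2 = (2.0 + 1.5 * income - 0.12 * income**2
           - 0.8 * renewables + rng.normal(0, 0.1, 200))

    # Quadratic E.K.C. specification: co2 = b0 + b1*y + b2*y^2 + b3*ren
    X = np.column_stack([np.ones_like(income), income, income**2, renewables])
    beta, *_ = np.linalg.lstsq(X, co2, rcond=None)
    b0, b1, b2, b3 = beta

    # E.K.C. holds if b1 > 0 and b2 < 0; the turning point is -b1 / (2*b2).
    ekc_holds = b1 > 0 and b2 < 0
    turning_point = -b1 / (2 * b2)
    ```

    In the study's setting, a negative coefficient on renewable electricity (here `b3`) corresponds to the finding that green electricity measures reduce emissions.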

    Is ChatGPT A Good Translator? Yes With GPT-4 As The Engine

    This report provides a preliminary evaluation of ChatGPT for machine translation, including translation prompts, multilingual translation, and translation robustness. We adopt the prompts advised by ChatGPT to trigger its translation ability and find that the candidate prompts generally work well, with minor performance differences. By evaluating on a number of benchmark test sets, we find that ChatGPT performs competitively with commercial translation products (e.g., Google Translate) on high-resource European languages but lags behind significantly on low-resource or distant languages. As for translation robustness, ChatGPT does not perform as well as the commercial systems on biomedical abstracts or Reddit comments but exhibits good results on spoken language. Further, we explore an interesting strategy named pivot prompting for distant languages, which asks ChatGPT to translate the source sentence into a high-resource pivot language before translating it into the target language, improving the translation performance noticeably. With the launch of the GPT-4 engine, the translation performance of ChatGPT is significantly boosted, becoming comparable to commercial translation products, even for distant languages. Human analysis of Google Translate and ChatGPT suggests that ChatGPT with GPT-3.5 tends to generate more hallucinations and mistranslation errors, while ChatGPT with GPT-4 makes the fewest errors. In other words, ChatGPT has already become a good translator. Please refer to our GitHub project for more details: https://github.com/wxjiao/Is-ChatGPT-A-Good-Translator
    Comment: Analyzed/compared the outputs between ChatGPT and Google Translate; both automatic and human evaluation
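    Pivot prompting as described above is a two-step prompting strategy. A minimal sketch follows; the prompt wording, the `call_model` parameter, and the stub model are all hypothetical stand-ins for a real chat-model API call, not the paper's exact prompts.

    ```python
    def pivot_translate(text, src, tgt, call_model, pivot="English"):
        """Pivot prompting: ask the model to translate the source sentence
        into a high-resource pivot language first, then from the pivot
        language into the target language. `call_model` is a hypothetical
        callable standing in for a chat-model API request."""
        prompt_1 = f"Translate this sentence from {src} to {pivot}: {text}"
        mid = call_model(prompt_1)
        prompt_2 = f"Translate this sentence from {pivot} to {tgt}: {mid}"
        return call_model(prompt_2)

    # Demo with a stub "model" that records the prompts it receives,
    # showing the two-step structure; a real call would hit the API.
    log = []
    def stub_model(prompt):
        log.append(prompt)
        return f"<translated:{prompt[-10:]}>"

    out = pivot_translate("Bonjour", "French", "Japanese", stub_model)
    ```

    The design rationale from the abstract: the model translates both into and out of the pivot language better than directly between two distant languages, so two easier hops beat one hard one.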