13 research outputs found

    Heterogeneous Encoders Scaling In The Transformer For Neural Machine Translation

    Although the Transformer is currently the best-performing architecture in the homogeneous configuration (self-attention only) in Neural Machine Translation, many state-of-the-art models in Natural Language Processing combine several different Deep Learning approaches. However, these models often combine only a couple of techniques, and it is unclear why some methods are chosen over others. In this work, we investigate the effectiveness of integrating an increasing number of heterogeneous methods. Based on a simple combination strategy and performance-driven synergy criteria, we designed the Multi-Encoder Transformer, which consists of up to five diverse encoders. Results show that our approach can improve translation quality across a variety of languages and dataset sizes, and it is particularly effective in low-resource languages, where we observed a maximum increase of 7.16 BLEU compared to the single-encoder model.
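The abstract describes combining up to five heterogeneous encoders via a "simple combination strategy". A minimal sketch of that idea, assuming toy stand-ins for three encoder families (self-attention, convolutional, recurrent) and a simple averaging combination; the actual encoders and combination rule in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention_encoder(x):
    # toy stand-in for a Transformer self-attention encoder
    scores = x @ x.T / np.sqrt(x.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def conv_encoder(x, k=3):
    # toy convolutional encoder: mean over a sliding window of positions
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([xp[i:i + k].mean(axis=0) for i in range(x.shape[0])])

def recurrent_encoder(x, alpha=0.5):
    # toy recurrent encoder: exponential moving average over positions
    h = np.zeros_like(x)
    state = np.zeros(x.shape[-1])
    for t in range(x.shape[0]):
        state = alpha * state + (1 - alpha) * x[t]
        h[t] = state
    return h

def multi_encoder(x, encoders):
    # simple combination strategy (assumed here): average the outputs
    # of the heterogeneous encoders into one memory for the decoder
    return np.mean([enc(x) for enc in encoders], axis=0)

x = rng.normal(size=(6, 8))  # 6 source tokens, embedding dim 8
memory = multi_encoder(
    x, [self_attention_encoder, conv_encoder, recurrent_encoder]
)
print(memory.shape)  # (6, 8): same shape as any single encoder's output
```

Because each encoder preserves the (tokens, dim) shape, averaging keeps the decoder interface identical to the single-encoder case, which is what makes adding further encoder types straightforward.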

    Generating Diverse Translation by Manipulating Multi-Head Attention

    The Transformer model has been widely used on machine translation tasks and has obtained state-of-the-art results. In this paper, we report an interesting phenomenon in its encoder-decoder multi-head attention: different attention heads of the final decoder layer align to different word translation candidates. We empirically verify this discovery and propose a method to generate diverse translations by manipulating heads. Furthermore, we make use of these diverse translations with the back-translation technique for better data augmentation. Experiment results show that our method generates diverse translations without a severe drop in translation quality. Experiments also show that back-translation with these diverse translations brings significant performance improvements on translation tasks. An auxiliary experiment on a conversation response generation task demonstrates the effect of diversity as well. Comment: Accepted by AAAI 202
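The observation above is that each head of the final decoder layer's cross-attention can align to a different source-side candidate, so keeping one head at a time yields diverse outputs. A minimal sketch of that manipulation, with a toy single-query cross-attention and hypothetical candidate words (the paper's actual decoding procedure and model are assumed, not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

H, S, d = 4, 5, 8  # heads, source length, per-head dimension
query = rng.normal(size=(H, d))    # one decoder query per head
keys = rng.normal(size=(H, S, d))  # one set of encoder keys per head
# hypothetical translation candidates, one per source position
candidates = ["cat", "kitten", "feline", "tabby", "moggy"]

def head_attention(h):
    # softmax-normalized cross-attention weights for a single head
    scores = keys[h] @ query[h] / np.sqrt(d)
    w = np.exp(scores - scores.max())
    return w / w.sum()

# standard decoding effectively pools all heads
avg = np.mean([head_attention(h) for h in range(H)], axis=0)
print("all heads ->", candidates[int(avg.argmax())])

# manipulated decoding: keep only one head at a time, so each head's
# preferred alignment can surface a different candidate
for h in range(H):
    print(f"head {h} only ->", candidates[int(head_attention(h).argmax())])
```

Each single-head pass is one decoding variant; collecting the variants gives the diverse translations that the abstract then feeds into back-translation for data augmentation.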

    Dual contextual module for neural machine translation


    Plant disease detection using leaf images and an involutional neural network

    The human population and domestic animals rely heavily on agriculture for their food and livelihood, and agriculture is an important contributor to the national economy of many countries. Plant diseases lead to a significant reduction in agricultural yield, posing a threat to global food security, so it is crucial to detect them in a timely manner to prevent economic losses. Expert diagnosis and pathogen analysis are widely used for the detection of diseases in plants; however, both rely on the real-time investigation experience of experts and are prone to errors. In this work, an image analysis-based method is proposed for detecting and classifying plant diseases using an involution neural network and a self-attention-based model. This method uses digital images of plant leaves and identifies diseases on the basis of image features. Different diseases affect leaf characteristics in different ways; therefore, their visual patterns are highly useful in disease recognition. For rigorous evaluation of the method, leaf images of different crops, including apple, grape, peach, cherry, corn, pepper, potato, and strawberry, are taken from the publicly available PlantVillage dataset to train the developed model. The experiments are not performed separately for different crops; instead, a single model is trained to work for multiple crops. The experimental results demonstrate that the proposed method performs well, with an average classification accuracy of approximately 98.73% (κ = 98.04) across 8 different crops with 23 classes. The results are also compared with those of several existing methods, and the proposed method outperforms the other methods considered in this work.
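The core building block named above is the involution operation: unlike convolution, which shares one learned kernel across all spatial positions, involution generates the kernel from the input at each position and shares it across channels. A minimal single-layer sketch; the `w_gen` kernel-generating weights, tensor sizes, and single-group setup are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

def involution2d(x, w_gen, k=3):
    """Toy 2-D involution: at each spatial position the k*k kernel is
    *generated* from that position's feature vector (via w_gen) and
    shared across all channels, the inverse of convolution's sharing."""
    c, h, w = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            # generate a position-specific k*k kernel from x[:, i, j]
            kernel = (w_gen @ x[:, i, j]).reshape(k, k)
            patch = xp[:, i:i + k, j:j + k]  # (c, k, k) neighborhood
            out[:, i, j] = (patch * kernel).sum(axis=(1, 2))
    return out

c, h, w, k = 4, 6, 6, 3
x = rng.normal(size=(c, h, w))       # toy stand-in for leaf-image features
w_gen = rng.normal(size=(k * k, c))  # hypothetical kernel-generating weights
y = involution2d(x, w_gen, k)
print(y.shape)  # (4, 6, 6): spatial size and channels preserved
```

Because the kernel adapts to the local content while staying channel-agnostic, involution layers can capture position-specific leaf-lesion patterns with fewer parameters than convolution, which is the motivation for pairing them with self-attention in the classifier described above.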