
    Adam Deep Learning with SOM for Human Sentiment Classification

    Nowadays, with the improvement in communication through social network services, a massive amount of data is being generated from users' perceptions, emotions, posts, comments, reactions, etc., and extracting significant information, such as sentiment, from these massive data has become a complex and convoluted task. On the other hand, traditional Natural Language Processing (NLP) approaches are less feasible to apply; therefore, this research work proposes an approach that integrates unsupervised machine learning (Self-Organizing Map), dimensionality reduction (Principal Component Analysis), and computational classification (Adam Deep Learning) to overcome the problem. Moreover, for further clarification, a comparative study between various well-known approaches and the proposed approach was conducted. The proposed approach was also applied to social network data sets of different sizes to verify its superior efficiency and feasibility, particularly in the case of Big Data. Overall, the experiments and their analysis suggest that the proposed approach is very promising.
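    The abstract describes a three-stage pipeline (PCA for dimensionality reduction, a Self-Organizing Map for unsupervised structure, and a neural classifier trained with the Adam optimizer). Below is a minimal sketch of how such a pipeline could be wired together; the libraries (scikit-learn, MiniSom, Keras), dimensions, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical PCA -> SOM -> Adam-trained classifier pipeline.
# Requires: numpy, scikit-learn, minisom, tensorflow.
import numpy as np
from sklearn.decomposition import PCA
from minisom import MiniSom
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))      # stand-in for vectorized posts/comments
y = rng.integers(0, 2, size=1000)     # stand-in binary sentiment labels

# 1) Dimensionality reduction with PCA.
X_pca = PCA(n_components=50).fit_transform(X)

# 2) Unsupervised structure with a Self-Organizing Map; the winning-node
#    coordinates are appended as extra features.
som = MiniSom(10, 10, X_pca.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X_pca, 1000)
som_feats = np.array([som.winner(x) for x in X_pca], dtype=float)
X_feats = np.hstack([X_pca, som_feats])

# 3) Feed-forward classifier trained with the Adam optimizer.
model = keras.Sequential([
    keras.layers.Input(shape=(X_feats.shape[1],)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_feats, y, epochs=5, batch_size=32, verbose=0)
```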

    Learning Robust Representations of Text

    Deep neural networks have achieved remarkable results across many language processing tasks; however, these methods are highly sensitive to noise and adversarial attacks. We present a regularization-based method for limiting network sensitivity to its inputs, inspired by ideas from computer vision, thus learning models that are more robust. Empirical evaluation over a range of sentiment datasets with a convolutional neural network shows that, compared to a baseline model and the dropout method, our method achieves superior performance over noisy inputs and out-of-domain data.
    Comment: 5 pages with 2 pages of references, 2 tables, 1 figure
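    One common way to implement such an input-sensitivity penalty for text is to add the norm of the loss gradient with respect to the word embeddings to the training objective. The sketch below shows that idea on a toy convolutional sentiment classifier; the architecture, the penalty weight, and the data are illustrative assumptions, not the paper's exact setup.

```python
# Gradient-norm (input-sensitivity) regularization sketch. Requires: torch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNClassifier(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, n_filters=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, emb):                       # takes embeddings, not token ids
        h = F.relu(self.conv(emb.transpose(1, 2)))
        h = h.max(dim=2).values                   # max-over-time pooling
        return self.fc(h)

model = CNNClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1                                         # weight of the sensitivity penalty

tokens = torch.randint(0, 5000, (8, 40))          # stand-in batch of token ids
labels = torch.randint(0, 2, (8,))

emb = model.emb(tokens)
logits = model(emb)
task_loss = F.cross_entropy(logits, labels)

# Sensitivity penalty: squared L2 norm of d(task_loss)/d(embeddings).
grads = torch.autograd.grad(task_loss, emb, create_graph=True)[0]
penalty = grads.pow(2).sum()

loss = task_loss + lam * penalty
opt.zero_grad()
loss.backward()
opt.step()
```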

    Simple Recurrent Units for Highly Parallelizable Recurrence

    Common recurrent neural architectures scale poorly due to the intrinsic difficulty of parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence and enable highly parallelized implementation, and it comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.
    Comment: EMNLP
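    The key to SRU's parallelizability is that its gates depend only on the current input, so all matrix multiplications can be batched across time and only cheap elementwise operations remain sequential. The NumPy sketch below loosely follows the recurrence described in the paper (forget gate, internal state, highway-style output); dimensions and initialization are illustrative, and this naive per-step loop deliberately omits the batched, fused implementation that gives the actual speed-up.

```python
# Toy SRU-style recurrence. Requires: numpy.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sru_forward(x_seq, W, Wf, bf, Wr, br):
    """x_seq: (T, d); W, Wf, Wr: (d, d); bf, br: (d,). Assumes d_in == d_out
    so the highway connection can use x_t directly."""
    d = W.shape[0]
    c = np.zeros(d)
    hs = []
    for x_t in x_seq:
        x_tilde = W @ x_t                    # candidate state (no recurrent matmul)
        f_t = sigmoid(Wf @ x_t + bf)         # forget gate, depends only on x_t
        c = f_t * c + (1.0 - f_t) * x_tilde  # lightweight elementwise recurrence
        r_t = sigmoid(Wr @ x_t + br)         # reset/highway gate
        h_t = r_t * c + (1.0 - r_t) * x_t    # highway connection to the input
        hs.append(h_t)
    return np.stack(hs)

# Toy usage: a random 10-step sequence of 8-dimensional inputs.
rng = np.random.default_rng(0)
T, d = 10, 8
params = [rng.normal(scale=0.1, size=s) for s in [(d, d), (d, d), (d,), (d, d), (d,)]]
h = sru_forward(rng.normal(size=(T, d)), *params)
print(h.shape)  # (10, 8)
```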

    How to Fine-Tune BERT for Text Classification?

    Language model pre-training has proven to be useful in learning universal language representations. As a state-of-the-art pre-trained language model, BERT (Bidirectional Encoder Representations from Transformers) has achieved amazing results in many language understanding tasks. In this paper, we conduct exhaustive experiments to investigate different fine-tuning methods of BERT on text classification tasks and provide a general solution for BERT fine-tuning. Finally, the proposed solution obtains new state-of-the-art results on eight widely-studied text classification datasets.
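    For context, the basic fine-tuning loop the paper builds on looks roughly as follows; this sketch uses the Hugging Face `transformers` API as a stand-in, and the checkpoint, learning rate, and data are illustrative assumptions. The paper's additional strategies (e.g. layer-wise learning rates, further in-domain pre-training) are not shown.

```python
# Minimal BERT fine-tuning sketch for binary text classification.
# Requires: torch, transformers.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

texts = ["great movie, loved it", "terrible plot and acting"]   # stand-in data
labels = torch.tensor([1, 0])

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
batch = tokenizer(texts, padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
outputs = model(**batch, labels=labels)   # returns loss and logits
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```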