3,572 research outputs found

    Movie Reviews Sentiment Analysis Using BERT

    Sentiment analysis (SA), or opinion mining, is the analysis of emotions and opinions expressed in text. It is one of the active research areas in Natural Language Processing (NLP). Various approaches have been proposed in the literature to address the problem. These techniques devise complex and sophisticated frameworks to attain optimal accuracy, focusing mostly on polarity (binary) classification. In this paper, we fine-tune BERT in a simple but robust approach for movie review sentiment analysis that provides better accuracy than state-of-the-art (SOTA) methods. We start by conducting sentiment classification for every review, followed by computing the overall sentiment polarity across all reviews. Both polarity classification and fine-grained classification (multi-scale sentiment distribution) are implemented and tested on benchmark datasets. To adapt BERT optimally for sentiment classification, we concatenate it with a Bidirectional LSTM (BiLSTM) layer. We also implemented and evaluated accuracy-improvement techniques, including the Synthetic Minority Over-sampling Technique (SMOTE) and NLP Augmenter (NLPAUG), to improve the model's prediction of the multi-scale sentiment distribution. We found that NLPAUG improved accuracy, whereas SMOTE did not work well. Lastly, a heuristic algorithm is applied to compute the overall polarity of the predicted reviews from the model's output vector. We call our model BERT+BiLSTM-SA, where SA stands for Sentiment Analysis. Our best-performing approach uses BERT and BiLSTM for binary, three-class, and four-class sentiment classification, and adds SMOTE augmentation to BERT and BiLSTM for five-class sentiment classification. Our approach performs on par with SOTA techniques in both settings. For example, on binary classification we obtain 97.67% accuracy, while the best-performing SOTA model, NB-weighted-BON+dvcosine, has 97.40% accuracy on the popular IMDb dataset. The baseline, Entailment as Few-Shot Learners (EFL), is outperformed on this task by 1.30%. For five-class classification on SST-5, the best SOTA model, RoBERTa+large+Self-explaining, has 55.5% accuracy, while we obtain 59.48%, outperforming the baseline for this task, BERT-large, by 3.6%.
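    The abstract describes the architecture only in prose; as a hedged illustration, the sketch below shows one plausible way to place a BiLSTM layer on top of BERT's token-level hidden states for review classification using PyTorch and Hugging Face `transformers`. The `bert-base-uncased` checkpoint, the hidden size, and the choice of the final BiLSTM time step as the review representation are assumptions, not the authors' exact BERT+BiLSTM-SA configuration.

```python
# Hypothetical sketch of a BERT+BiLSTM sentiment classifier (illustrative only).
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertBiLstmClassifier(nn.Module):
    def __init__(self, num_classes: int = 2, lstm_hidden: int = 128):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")  # assumed checkpoint
        # Bidirectional LSTM over BERT's token-level hidden states.
        self.bilstm = nn.LSTM(
            input_size=self.bert.config.hidden_size,
            hidden_size=lstm_hidden,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) token representations from BERT.
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(hidden)
        # Use the last BiLSTM time step as the review representation (an assumption).
        return self.classifier(lstm_out[:, -1, :])

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["A wonderful, moving film.", "Dull and overlong."],
                  padding=True, truncation=True, return_tensors="pt")
logits = BertBiLstmClassifier(num_classes=2)(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([2, 2])
```

    In the multi-scale settings described above, `num_classes` would be 3, 4, or 5, with SMOTE or NLPAUG applied to the training data as the abstract describes.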

    Robust Energy Consumption Prediction with a Missing Value-Resilient Metaheuristic-based Neural Network in Mobile App Development

    Energy consumption is a fundamental concern in mobile application development, with substantial significance for both developers and end-users, and it is a critical determinant in a consumer's decision to purchase a smartphone. From a sustainability perspective, it is imperative to explore approaches for mitigating the energy consumption of mobile devices, given the profound environmental impact of the billions of smartphones in use worldwide. Despite the existence of various energy-efficient programming practices within Android, the dominant mobile ecosystem, there remains a need for documented machine learning-based energy prediction algorithms tailored explicitly to mobile app development. Hence, the main objective of this research is to propose a novel neural network-based framework, enhanced by a metaheuristic approach, to achieve robust energy prediction in the context of mobile app development. The metaheuristic approach plays a crucial role not only in identifying suitable learning algorithms and their corresponding parameters but also in determining the optimal number of layers and the number of neurons within each layer. To the best of our knowledge, prior studies have not employed any metaheuristic algorithm to address all of these hyperparameters simultaneously. Moreover, because certain aspects of a mobile phone cannot always be accessed, the data set may contain missing values, which the proposed framework can handle. In addition, we carried out an algorithm selection study, employing 13 metaheuristic algorithms, to identify the best algorithm in terms of accuracy and resilience to missing values. Comprehensive experiments demonstrate that our proposed approach yields significant results for energy consumption prediction.
    Comment: The paper has been submitted to a related journal.
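    As a loose illustration of the idea rather than the paper's framework (which compares 13 metaheuristics), the sketch below runs a minimal (1+1)-style evolutionary search over network depth, width, and learning rate, with simple mean imputation standing in for the missing-value handling; the search ranges, the scikit-learn regressor, and the toy data are all assumptions.

```python
# Hypothetical (1+1)-style evolutionary hyperparameter search (illustrative only).
import random
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

def random_candidate():
    return {"layers": random.randint(1, 4),
            "neurons": random.choice([16, 32, 64, 128]),
            "lr": 10 ** random.uniform(-4, -2)}

def mutate(cand):
    child = dict(cand)
    key = random.choice(list(child))
    if key == "layers":
        child["layers"] = max(1, child["layers"] + random.choice([-1, 1]))
    elif key == "neurons":
        child["neurons"] = random.choice([16, 32, 64, 128])
    else:
        child["lr"] = 10 ** random.uniform(-4, -2)
    return child

def fitness(cand, X, y):
    model = MLPRegressor(hidden_layer_sizes=(cand["neurons"],) * cand["layers"],
                         learning_rate_init=cand["lr"], max_iter=500)
    # Higher is better: negative mean absolute error under 3-fold cross-validation.
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_absolute_error").mean()

# Toy data with ~10% missing entries, mean-imputed before the search.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[rng.random(X.shape) < 0.1] = np.nan
y = rng.normal(size=200)
X = SimpleImputer(strategy="mean").fit_transform(X)

best = random_candidate()
best_fit = fitness(best, X, y)
for _ in range(20):                       # a handful of (1+1)-EA iterations
    cand = mutate(best)
    f = fitness(cand, X, y)
    if f > best_fit:
        best, best_fit = cand, f
print(best, best_fit)
```

    A population-based metaheuristic (e.g. a genetic algorithm or particle swarm optimiser) would replace the single-candidate loop above, but the encoding of depth, width, and learning parameters into a candidate is the same.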

    Smart Cage Active Contours and their application to brain image segmentation

    In this work we present a new segmentation method named Smart Cage Active Contours (SCAC) that combines a parametrized active contour framework named Cage Active Contours (CAC), based on affine transformations, with Active Shape Models (ASM). Our method effectively restricts the shapes the evolving contours can take without the need for the training images to be manually landmarked. We apply our method to segment the caudate nuclei, a subcortical structure, in magnetic resonance brain images of a set of 40 subjects, with promising results.
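    A rough sketch of the cage parametrization, assuming mean value coordinates and a toy square cage (the authors' CAC/SCAC implementation details may differ): each contour point is written as an affine combination of a few cage vertices, so the segmentation energy is minimised over the cage positions rather than over every contour point, and the ASM component would further restrict how those cage points may move.

```python
# Illustrative cage parametrization of a contour (not the SCAC implementation).
import numpy as np

def mean_value_coords(p, cage):
    """Mean value coordinates of a point p strictly inside the cage polygon."""
    d = cage - p
    r = np.linalg.norm(d, axis=1)
    m = len(cage)
    ang = np.array([np.arccos(np.clip(d[i] @ d[(i + 1) % m] / (r[i] * r[(i + 1) % m]), -1.0, 1.0))
                    for i in range(m)])
    t = np.tan(ang / 2.0)
    w = (np.roll(t, 1) + t) / r        # tan(a_{i-1}/2) + tan(a_i/2), scaled by 1/|v_i - p|
    return w / w.sum()                 # weights sum to 1: an affine combination

cage = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)        # square cage (CCW)
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
contour = np.stack([2 + np.cos(theta), 2 + np.sin(theta)], axis=1)    # circle inside the cage

W = np.array([mean_value_coords(p, cage) for p in contour])           # (50, 4) weight matrix
assert np.allclose(W @ cage, contour)   # the coordinates reproduce the original contour

# Moving the four cage vertices deforms all 50 contour points at once.
stretched_cage = cage * np.array([1.5, 1.0])
deformed_contour = W @ stretched_cage   # the circle becomes an axis-aligned ellipse
print(deformed_contour.shape)
```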

    Meta-Prior: Meta learning for Adaptive Inverse Problem Solvers

    Deep neural networks have become a foundational tool for addressing imaging inverse problems. They are typically trained for a specific task, using a supervised loss to learn a mapping from the observations to the image to be recovered. However, real-world imaging challenges often lack ground truth data, rendering traditional supervised approaches ineffective. Moreover, for each new imaging task, a new model needs to be trained from scratch, wasting time and resources. To overcome these limitations, we introduce a novel approach based on meta-learning. Our method trains a meta-model on a diverse set of imaging tasks, which allows the model to be efficiently fine-tuned for a specific task in only a few fine-tuning steps. We show that the proposed method extends to the unsupervised setting, where no ground truth data is available. In its bilevel formulation, the outer level uses a supervised loss that evaluates how well the fine-tuned model performs, while the inner loss can be either supervised or unsupervised, relying only on the measurement operator. This allows the meta-model to leverage a few ground truth samples for each task while generalizing to new imaging tasks. We show that, in simple settings, this approach recovers the Bayes-optimal estimator, illustrating its soundness. We also demonstrate our method's effectiveness on various tasks, including image processing and magnetic resonance imaging.
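    The bilevel formulation can be illustrated with a first-order toy sketch (a simplification under stated assumptions, not the authors' method or code): the inner loop adapts a copy of the meta-model with an unsupervised measurement-consistency loss that uses only the operator A, while the outer loop updates the meta-parameters with a supervised loss on held-out ground-truth samples. The toy linear operator, the network, and the first-order gradient copy are all assumptions.

```python
# Hypothetical first-order meta-learning loop for toy inverse problems (illustrative only).
import copy
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))  # toy reconstructor
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sample_task():
    """Toy inverse problem y = A x + noise with a random measurement operator A."""
    A = torch.eye(32) + 0.1 * torch.randn(32, 32)
    x = torch.randn(16, 32)                       # ground-truth signals
    y = x @ A.T + 0.01 * torch.randn(16, 32)      # measurements
    return A, x, y

for step in range(100):
    A, x, y = sample_task()
    y_sup, y_qry, x_qry = y[:8], y[8:], x[8:]     # support / query split

    # Inner loop: adapt a copy of the meta-model with the unsupervised
    # measurement-consistency loss ||A f(y) - y||^2 (no ground truth needed).
    adapted = copy.deepcopy(net)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=1e-2)
    for _ in range(3):
        inner_loss = ((adapted(y_sup) @ A.T - y_sup) ** 2).mean()
        inner_opt.zero_grad(); inner_loss.backward(); inner_opt.step()

    # Outer loop: supervised loss of the adapted model on query data,
    # with a first-order (FOMAML-style) meta-update of the shared weights.
    adapted.zero_grad()
    outer_loss = ((adapted(y_qry) - x_qry) ** 2).mean()
    outer_loss.backward()
    for p, q in zip(net.parameters(), adapted.parameters()):
        p.grad = q.grad.clone()
    meta_opt.step()
    meta_opt.zero_grad()
```

    The paper's exact inner loss may differ (it can also be supervised); the point of the sketch is only the two nested optimisation levels and the fact that the inner level touches nothing but the measurements and the operator.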