
    Opinion mining using combinational approach for different domains

    The growing use of the web produces a large amount of information about products, and people rely on online reviews to make decisions. Opinion mining is a broad research area in which different types of reviews are analyzed, and several open issues remain. Domain adaptation is an emerging one: labeling data for every domain is a time-consuming and costly task, so a model is needed that is trained on one domain and applied to another, reducing both cost and time. This problem, called domain adaptation, is addressed in this paper. Source-domain data is trained using a maximum entropy classifier combined with a clustering technique, and the model trained on the source domain is then applied to the target domain for labeling. Results show moderate accuracy under 5-fold cross-validation and for combinations of source domains on the Blitzer et al. (2007) multi-domain product dataset.
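A minimal sketch of this style of domain-adaptation pipeline, assuming scikit-learn: LogisticRegression stands in for the maximum entropy classifier (the two coincide for this kind of text classification) and KMeans for the clustering step. The toy reviews, the TF-IDF feature choice, and the role given to clustering are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch of source-to-target domain adaptation with a maximum
# entropy (logistic regression) classifier; data and features are toy.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression  # maximum entropy model
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score

# Hypothetical labeled source-domain reviews (e.g. books) ...
source_texts = [
    "great read, loved it", "a wonderful story", "highly recommend this book",
    "couldn't put it down", "beautiful prose throughout",
    "boring and badly written", "waste of money", "the plot made no sense",
    "dull characters", "terrible ending",
]
source_labels = [1] * 5 + [0] * 5

# ... and unlabeled target-domain reviews (e.g. kitchen appliances).
target_texts = ["the blender broke after a week", "works perfectly every day"]

# Shared feature space across both domains.
vectorizer = TfidfVectorizer()
X_source = vectorizer.fit_transform(source_texts)
X_target = vectorizer.transform(target_texts)

# Train the maximum entropy classifier on the labeled source domain.
maxent = LogisticRegression(max_iter=1000).fit(X_source, source_labels)

# Apply the source-trained model to label the target domain.
target_labels = maxent.predict(X_target)

# Clustering can group similar target reviews before or after labeling;
# this is one plausible role for the clustering step in the abstract.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_target)

# 5-fold cross-validation on the source domain, as in the evaluation.
scores = cross_val_score(maxent, X_source, source_labels, cv=5)
```

Note that cross_val_score here measures source-domain accuracy only; judging the quality of the predicted target labels would require held-out target annotations.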

    MASKER: Masked Keyword Regularization for Reliable Text Classification

    Pre-trained language models have achieved state-of-the-art accuracies on various text classification tasks, e.g., sentiment analysis, natural language inference, and semantic textual similarity. However, the reliability of fine-tuned text classifiers is an often overlooked performance criterion. For instance, one may desire a model that can detect out-of-distribution (OOD) samples (drawn far from the training distribution) or be robust against domain shifts. We claim that one central obstacle to reliability is the model's over-reliance on a limited number of keywords, instead of looking at the whole context. In particular, we find that (a) OOD samples often contain in-distribution keywords, while (b) cross-domain samples may not always contain keywords; over-relying on keywords can be problematic in both cases. In light of this observation, we propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), that facilitates context-based prediction. MASKER regularizes the model to reconstruct the keywords from the rest of the words and to make low-confidence predictions without enough context. When applied to various pre-trained language models (e.g., BERT, RoBERTa, and ALBERT), we demonstrate that MASKER improves OOD detection and cross-domain generalization without degrading classification accuracy. Code is available at https://github.com/alinlab/MASKER.
    Comment: AAAI 2021. First two authors contributed equally.
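The abstract describes two regularizers: reconstructing masked keywords from their context, and forcing low-confidence predictions when only keywords are visible. Below is a hedged PyTorch sketch of that combined loss, not the authors' released code (see the linked repository for that); the keyword_mask input, the two auxiliary heads, and the weights lam_recon and lam_ent are illustrative assumptions.

```python
# Hedged sketch of a MASKER-style training loss; names and weights are
# assumptions, and keyword selection is assumed done elsewhere.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
hidden = encoder.config.hidden_size
classifier = nn.Linear(hidden, 2)                     # classification head
recon_head = nn.Linear(hidden, tokenizer.vocab_size)  # keyword reconstruction head

def masker_loss(input_ids, attention_mask, labels, keyword_mask,
                lam_recon=0.1, lam_ent=0.1):
    """keyword_mask: bool tensor [batch, seq] marking keyword positions
    (assumed precomputed, e.g. from attention scores or frequency)."""
    mask_id = tokenizer.mask_token_id

    # 1) Ordinary classification on the intact input.
    h = encoder(input_ids, attention_mask=attention_mask).last_hidden_state
    logits = classifier(h[:, 0])                      # [CLS] representation
    loss_cls = F.cross_entropy(logits, labels)

    # 2) Masked keyword reconstruction: hide the keywords and predict them
    # back from the remaining context.
    ctx_ids = input_ids.masked_fill(keyword_mask, mask_id)
    h_ctx = encoder(ctx_ids, attention_mask=attention_mask).last_hidden_state
    recon_logits = recon_head(h_ctx[keyword_mask])
    loss_recon = F.cross_entropy(recon_logits, input_ids[keyword_mask])

    # 3) Entropy regularization: with only keywords visible, push the
    # classifier toward low-confidence (near-uniform) predictions.
    # (A full implementation would keep special tokens like [CLS] unmasked.)
    kw_ids = input_ids.masked_fill(~keyword_mask, mask_id)
    h_kw = encoder(kw_ids, attention_mask=attention_mask).last_hidden_state
    probs = classifier(h_kw[:, 0]).softmax(dim=-1)
    loss_ent = (probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()

    return loss_cls + lam_recon * loss_recon + lam_ent * loss_ent
```

Minimizing loss_ent (the negative entropy) drives keyword-only predictions toward uniform, which is what lets OOD inputs that share in-distribution keywords surface as low-confidence rather than being classified overconfidently.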