2 research outputs found

    AI Unveiled Personalities: Profiling Optimistic and Pessimistic Attitudes in Hindi Dataset using Transformer-based Models

    Both optimism and pessimism are intricately intertwined with an individual's inherent personality traits, and people of all personality types can exhibit a wide range of attitudes and behaviours, including varying levels of optimism and pessimism. This paper undertakes a comprehensive analysis of optimistic and pessimistic tendencies present within Hindi textual data, employing transformer-based models. The research represents a pioneering effort to define and establish an interaction between the personality and attitude chakras within the realm of human psychology. Introducing an innovative "Chakra" system to illustrate complex interrelationships within human psychology, this work aligns the Myers-Briggs Type Indicator (MBTI) personality traits with optimistic and pessimistic attitudes, enriching our understanding of emotional projection in text. The study employs meticulously fine-tuned transformer models (mBERT, XLM-RoBERTa, IndicBERT, mDeBERTa, and a novel Stacked mDeBERTa) trained on the new Hindi dataset ‘मनोभाव’ (pronounced Manobhav). Remarkably, the proposed Stacked mDeBERTa model outperforms the others, recording an accuracy of 0.7785 along with elevated precision, recall, and F1 score values. Notably, its ROC AUC score of 0.7226 underlines its robustness in distinguishing between positive and negative emotional attitudes. The comparative analysis highlights the superiority of the Stacked mDeBERTa model in effectively capturing emotional attitudes in Hindi text.
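    The fine-tuning setup the abstract describes can be illustrated with a short sketch using the Hugging Face Transformers library. This is a minimal sketch only: it assumes the public microsoft/mdeberta-v3-base checkpoint, substitutes two placeholder Hindi sentences for the मनोभाव dataset (which is not reproduced here), and shows a plain binary optimistic/pessimistic fine-tuning run rather than the paper's stacked mDeBERTa variant.

    ```python
    # Hedged sketch: binary optimism/pessimism fine-tuning of mDeBERTa on Hindi text.
    # The Manobhav data and the stacked architecture are NOT reproduced; the
    # sentences below are placeholders (1 = optimistic, 0 = pessimistic).
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL = "microsoft/mdeberta-v3-base"  # multilingual DeBERTa-v3 checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

    # Toy stand-ins for the Manobhav examples.
    train = Dataset.from_dict({
        "text": ["कल का दिन बेहतर होगा", "अब कुछ भी ठीक नहीं हो सकता"],
        "label": [1, 0],
    })

    def tokenize(batch):
        # Pad/truncate to a fixed length so the default collator can batch tensors.
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    train = train.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="mdeberta-manobhav",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=train,
    )
    trainer.train()  # fine-tunes the full model plus a 2-way classification head
    ```

    The stacked variant reported in the paper would build on this baseline; its exact architecture is not specified in the abstract, so it is not sketched here.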

    Sentiment Analysis Using XLM-R Transformer and Zero-shot Transfer Learning on Resource-poor Indian Language

    Sentiment analysis on social media relies on comprehending natural language and on robust machine learning techniques that learn multiple layers of representations or features of the data and produce state-of-the-art prediction results. Cultural miscellanies, geographically limited trending hash-tags, access to native-language keyboards, and conversational comfort in the native language compound the linguistic challenges of sentiment analysis. This research evaluates the performance of cross-lingual contextual word embeddings and zero-shot transfer learning in projecting predictions from resource-rich English to the resource-poor Hindi language. The cross-lingual XLM-RoBERTa classification model is trained and fine-tuned on the English-language SemEval 2017 Task 4A benchmark dataset, and zero-shot transfer learning is then used to evaluate the classification model on two Hindi sentence-level sentiment analysis datasets, namely the IITP-Movie and IITP-Product review datasets. The proposed model compares favorably to state-of-the-art approaches, offers an effective solution to sentence-level (tweet-level) sentiment analysis in a resource-poor scenario, and achieves an average accuracy of 60.93 across the two Hindi datasets.
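    The zero-shot transfer recipe, fine-tune XLM-RoBERTa on English sentiment labels and then evaluate the unchanged model on Hindi sentences, can be sketched as follows. Toy examples stand in for the SemEval-2017 Task 4A training data and the IITP-Movie/IITP-Product evaluation sets, the xlm-roberta-base checkpoint and a three-class label scheme are assumptions, and the paper's hyperparameters are not reproduced.

    ```python
    # Hedged sketch: zero-shot cross-lingual sentiment transfer with XLM-RoBERTa.
    # English training data and Hindi evaluation data below are toy placeholders.
    import numpy as np
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    MODEL = "xlm-roberta-base"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=128)

    # English training examples: 0 = negative, 1 = neutral, 2 = positive.
    train_en = Dataset.from_dict({
        "text": ["I loved this movie", "It was okay", "Terrible product"],
        "label": [2, 1, 0],
    }).map(tokenize, batched=True)

    # Hindi evaluation examples, never seen during training (the zero-shot setting).
    test_hi = Dataset.from_dict({
        "text": ["फिल्म बहुत अच्छी थी", "उत्पाद बेकार निकला"],
        "label": [2, 0],
    }).map(tokenize, batched=True)

    def accuracy(eval_pred):
        logits, labels = eval_pred
        return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="xlmr-zeroshot",
                               num_train_epochs=3,
                               per_device_train_batch_size=16),
        train_dataset=train_en,
        compute_metrics=accuracy,
    )
    trainer.train()                      # English-only fine-tuning
    print(trainer.evaluate(test_hi))     # direct zero-shot evaluation on Hindi
    ```

    Because XLM-RoBERTa shares one multilingual subword vocabulary and embedding space across languages, a classifier head trained only on English can be applied to Hindi inputs without any Hindi labels, which is the zero-shot projection the abstract evaluates.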