19 research outputs found

    Evaluating large language models' ability to understand metaphor and sarcasm using a screening test for Asperger syndrome

    Metaphors and sarcasm are precious fruits of our highly evolved social communication skills. Children with Asperger syndrome, however, are known to have difficulty comprehending sarcasm even when their verbal IQ is sufficient for understanding metaphors. For this reason, a screening test that scores the ability to understand metaphor and sarcasm has been used to differentiate Asperger syndrome from conditions with similar outward behaviours (e.g., attention-deficit/hyperactivity disorder). This study uses that standardized test to examine how well recent large language models (LLMs) understand nuanced human communication. The results reveal that, whereas metaphor comprehension improved as the number of model parameters increased, no comparable improvement was observed for sarcasm. This suggests that a different approach is needed to give LLMs the capacity to grasp sarcasm, which in humans has been associated with the amygdala, a brain region central to emotional learning.
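    A rough sketch of how such a screening-style evaluation might be scripted is given below; the two test items and the query_llm stub are hypothetical placeholders, not the standardized instrument or the authors' protocol.

```python
from collections import Counter

# Hypothetical items in the style of a metaphor/sarcasm screening test:
# (utterance, answer options, index of the intended reading, item type)
ITEMS = [
    ('"What a lovely day for a picnic!" (said during a downpour)',
     ["The speaker is pleased with the weather",
      "The speaker is displeased with the weather"], 1, "sarcasm"),
    ('"Her words were music to my ears."',
     ["The words literally sounded like music",
      "The words were very pleasant to hear"], 1, "metaphor"),
]

def query_llm(prompt: str) -> str:
    """Stub standing in for a real model call (API or local LLM)."""
    return "1"

def accuracy_by_type(items):
    correct, total = Counter(), Counter()
    for utterance, options, gold, kind in items:
        prompt = (f"Utterance: {utterance}\nWhich interpretation is intended?\n"
                  + "\n".join(f"{i}) {opt}" for i, opt in enumerate(options))
                  + "\nAnswer with the option number only.")
        answer = query_llm(prompt).strip()
        total[kind] += 1
        correct[kind] += int(answer.startswith(str(gold)))
    # Separate scores make any metaphor/sarcasm gap visible.
    return {kind: correct[kind] / total[kind] for kind in total}

print(accuracy_by_type(ITEMS))
```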

    An Evaluation of State-of-the-Art Large Language Models for Sarcasm Detection

    Sarcasm, as defined by Merriam-Webster, is the use of words by someone who means the opposite of what they say. In sentiment analysis within Natural Language Processing, correctly identifying sarcasm is necessary for understanding people's true opinions. Because the use of sarcasm is often context-dependent, previous research has relied on models such as Support Vector Machines (SVM) and Long Short-Term Memory (LSTM) networks to identify sarcasm from contextual information. Recent innovations in NLP offer further possibilities for detecting sarcasm. In "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", Devlin et al. (2018) introduced a new language representation model with higher precision in interpreting contextualized language. CASCADE, proposed by Hazarika et al. (2018), is a context-driven model that produces good results for sarcasm detection. This study analyzes a Reddit corpus using these two state-of-the-art models and evaluates their performance against baseline models to find the best approach to sarcasm detection.
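    As a hedged illustration of the BERT side of such a comparison (not the study's exact setup), the sketch below wires a generic bert-base-uncased sequence classifier to sentence-pair input so that a Reddit reply can be judged in the context of its parent comment; it would still need fine-tuning on a labelled sarcasm corpus before its scores mean anything.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Generic BERT checkpoint with a fresh 2-way head (not a trained sarcasm model).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

def sarcasm_logits(parent: str, reply: str) -> torch.Tensor:
    # Sentence-pair input lets the reply be scored in the context of its parent comment.
    enc = tokenizer(parent, reply, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**enc).logits  # shape (1, 2): [not sarcastic, sarcastic]

probs = torch.softmax(
    sarcasm_logits("Great weather today.", "Yeah, I love getting soaked."), dim=-1
)
print(probs)  # untrained head, so these probabilities are not yet meaningful
```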

    Travellers’ perspectives on historic squares and railway stations in Italian heritage cities revealed through sentiment analysis

    This study undertakes sentiment analysis of online reviews of public exterior spaces – historic squares and railway stations – in popular destinations in Italy, with the aim of offering new perspectives on community engagement in urban design analysis. The experience of walking through urban spaces in Italian heritage cities is evaluated against indicators of place quality and connectivity, i.e., aesthetic perception, social interaction, body mobility, facilities and amenities, sense of safety, and destination loyalty. Such analysis can reshape the way we interpret the thoughts and emotions of wider communities so that these are included in local place-focused development strategies.
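    One simple way to approximate this kind of indicator-level sentiment analysis is sketched below; the reviews, the indicator keyword lists, and the use of an off-the-shelf English sentiment pipeline are illustrative assumptions rather than the study's method.

```python
from transformers import pipeline

# Off-the-shelf English sentiment classifier (DistilBERT fine-tuned on SST-2 by default).
sentiment = pipeline("sentiment-analysis")

# Hypothetical indicator keyword lists and reviews.
INDICATORS = {
    "aesthetic perception": ["beautiful", "view", "architecture", "ugly"],
    "sense of safety": ["safe", "dangerous", "dark", "lit"],
    "facilities and amenities": ["toilet", "bench", "signage", "ticket"],
}
reviews = [
    "The square is beautiful at sunset, wonderful architecture.",
    "The station felt dangerous at night and was poorly lit.",
    "Could not find a single working toilet near the platforms.",
]

scores = {name: [] for name in INDICATORS}
for text in reviews:
    result = sentiment(text)[0]
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    for name, keywords in INDICATORS.items():
        if any(word in text.lower() for word in keywords):
            scores[name].append(signed)

for name, values in scores.items():
    if values:
        print(name, round(sum(values) / len(values), 3))
```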

    Sentiment analysis of COVID-19 cases in Greece using Twitter data.

    Syndromic surveillance using Internet data has been used to track and forecast epidemics for the last two decades, drawing on sources ranging from social media to search engine records. More recently, studies have addressed how the World Wide Web can serve as a valuable source for analysing public reactions to outbreaks and revealing the emotional and sentiment impact of certain events, notably pandemics. Objective: The objective of this research is to evaluate the capability of Twitter messages (tweets) to estimate the sentiment impact of COVID-19 in Greece in real time, as related to reported cases. Methods: 153,528 tweets from 18,730 Twitter users, totalling 2,840,024 words over exactly one year, were examined against two sentiment lexicons: one in English translated into Greek (using the Vader library) and one in Greek. The sentiment rankings in these lexicons were then used to track i) the positive and negative impact of COVID-19, ii) six types of sentiment: Surprise, Disgust, Anger, Happiness, Fear and Sadness, and iii) the correlations between real COVID-19 cases and sentiments, and between sentiments and the volume of data. Results: Surprise (25.32%) and, secondly, Disgust (19.88%) were found to be the prevailing sentiments towards COVID-19. The correlation coefficient (R2) for the Vader lexicon was −0.07454 with respect to cases and −0.70668 with respect to tweets, while the Greek lexicon yielded 0.167387 and −0.93095 respectively, all at a significance level of p < 0.01. The evidence shows that sentiment does not correlate with the spread of COVID-19, possibly because interest in COVID-19 declined after a certain time.
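    The core of such an analysis can be reproduced in a few lines; the sketch below scores hypothetical daily tweet buckets with the VADER analyser and correlates mean daily sentiment with case counts, which only gestures at the study's setup (the Greek lexicon, the translation step, and the per-emotion scores are omitted).

```python
import numpy as np
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# Hypothetical daily buckets: date -> (tweets, reported cases).
daily = {
    "2020-11-01": (["Cases are rising again, this is scary.",
                    "Stay safe everyone!"], 1200),
    "2020-11-02": (["Another lockdown? Unbelievable.",
                    "Hospitals are doing great work."], 1500),
    "2020-11-03": (["Finally some good news about the vaccine."], 1100),
}

dates = sorted(daily)
mean_sentiment = np.array([
    np.mean([analyzer.polarity_scores(t)["compound"] for t in daily[d][0]])
    for d in dates
])
cases = np.array([daily[d][1] for d in dates], dtype=float)

# Pearson correlation between daily mean sentiment and reported cases.
r = np.corrcoef(mean_sentiment, cases)[0, 1]
print(f"correlation(sentiment, cases) = {r:.3f}")
```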

    A Quantum Probability Driven Framework for Joint Multi-Modal Sarcasm, Sentiment and Emotion Analysis

    Sarcasm, sentiment, and emotion are three typical kinds of spontaneous affective responses of humans to external events, and they are tightly intertwined with one another. Such events may be expressed in multiple modalities (e.g., linguistic, visual and acoustic), for instance in multi-modal conversations. Joint analysis of humans' multi-modal sarcasm, sentiment, and emotion is an important yet challenging topic, as it is a complex cognitive process involving both cross-modality interaction and cross-affection correlation. From the perspective of probability theory, cross-affection correlation also means that the judgments on sarcasm, sentiment, and emotion are incompatible. This phenomenon cannot be sufficiently modelled by classical probability theory because of its assumption of compatibility, nor do existing approaches take it into consideration. In view of the recent success of quantum probability (QP) in modelling human cognition, particularly contextual, incompatible decision making, we take the first step towards introducing QP into joint multi-modal sarcasm, sentiment, and emotion analysis. Specifically, we propose a QUantum probabIlity driven multi-modal sarcasm, sEntiment and emoTion analysis framework, termed QUIET. Extensive experiments on two datasets show the effectiveness and advantages of QUIET in comparison with a wide range of state-of-the-art baselines, as well as the great potential of QP in multi-affect analysis.
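    The notion of incompatibility that motivates the framework can be illustrated with a small toy computation (a two-dimensional illustration of quantum probability only, not the QUIET model): for non-commuting projectors, the probability of answering "yes" to two judgments depends on the order in which they are posed, which classical joint probabilities cannot express.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                  # initial "mental state"
P_sarcastic = np.outer(ket0, ket0)           # projector: "the utterance is sarcastic"
plus = np.array([1.0, 1.0]) / np.sqrt(2)
P_positive = np.outer(plus, plus)            # projector: "the sentiment is positive"

def p_yes_then_yes(state, first, second):
    """P(first = yes, then second = yes) under projective measurement with collapse."""
    v = first @ state
    p_first = float(v @ v)
    if p_first == 0.0:
        return 0.0
    v = v / np.sqrt(p_first)                 # collapse onto the "yes" subspace
    w = second @ v
    return p_first * float(w @ w)

print(p_yes_then_yes(ket0, P_sarcastic, P_positive))  # 0.5
print(p_yes_then_yes(ket0, P_positive, P_sarcastic))  # 0.25 -> the order matters
```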

    Transformer based contextualization of pre-trained word embeddings for irony detection in Twitter

    Human communication in natural language, especially on social media, is influenced by the use of figurative language such as irony. Several recent shared tasks have explored irony detection in Twitter using computational approaches. This paper describes a model for irony detection based on the contextualization of pre-trained Twitter word embeddings by means of the Transformer architecture. The approach rests on the same powerful architecture as BERT but, unlike BERT, it allows us to use in-domain embeddings. We performed an extensive evaluation on two corpora, one for English and another for Spanish. Our system ranked first on the Spanish corpus and, to our knowledge, achieved the second-best result on the English corpus. These results support the correctness and adequacy of our proposal. We also studied and interpreted how the multi-head self-attention mechanisms specialize in detecting irony by considering the polarity and relevance of individual words and even the relationships among words. This analysis is a first step towards understanding how the multi-head self-attention mechanisms of the Transformer architecture address the irony detection problem.
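    A minimal sketch of the general architecture described here, a Transformer encoder over pre-trained in-domain embeddings feeding a binary irony classifier, is given below; the embedding matrix, vocabulary size, and hyper-parameters are placeholders rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class IronyClassifier(nn.Module):
    def __init__(self, embedding_weights: torch.Tensor, n_heads: int = 8,
                 n_layers: int = 2, dropout: float = 0.1):
        super().__init__()
        vocab_size, dim = embedding_weights.shape
        self.embed = nn.Embedding.from_pretrained(embedding_weights, freeze=False)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.classifier = nn.Linear(dim, 2)  # ironic vs. not ironic

    def forward(self, token_ids, padding_mask):
        x = self.embed(token_ids)
        x = self.encoder(x, src_key_padding_mask=padding_mask)
        # Mean-pool over the non-padded positions before classification.
        x = x.masked_fill(padding_mask.unsqueeze(-1), 0.0).sum(dim=1)
        x = x / (~padding_mask).sum(dim=1, keepdim=True).clamp(min=1)
        return self.classifier(x)

# Usage with random placeholder embeddings for a 10,000-word Twitter vocabulary.
weights = torch.randn(10_000, 128)
model = IronyClassifier(weights)
ids = torch.randint(0, 10_000, (4, 30))
mask = torch.zeros(4, 30, dtype=torch.bool)  # True would mark padding positions
logits = model(ids, mask)                     # shape (4, 2)
```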

    Efficient pneumonia detection using Vision Transformers on chest X-rays

    Pneumonia is a widespread and acute respiratory infection that affects people of all ages. Early detection and treatment are essential for avoiding complications and improving clinical outcomes; effective detection methods can reduce mortality, improve healthcare efficiency, and contribute to the global battle against a disease that has afflicted humanity for centuries. Detecting pneumonia is therefore not only a medical necessity but also a humanitarian imperative and a technological frontier. Chest X-rays are a frequently used imaging modality for diagnosing pneumonia. This paper examines in detail a pneumonia detection method built on the Vision Transformer (ViT) architecture, evaluated on a public chest X-ray dataset available on Kaggle. To capture global context and spatial relationships in chest X-ray images, the proposed framework deploys a ViT model, which integrates self-attention mechanisms with the transformer architecture. In our experiments, the proposed Vision Transformer-based framework achieves an accuracy of 97.61%, a sensitivity of 95%, and a specificity of 98% in detecting pneumonia from chest X-rays. The ViT model is well suited to capturing global context, comprehending spatial relationships, and processing images at different resolutions. The framework establishes its efficacy as a robust pneumonia detection solution by surpassing convolutional neural network (CNN) based architectures.
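    The following sketch shows the general shape of such a ViT-based classifier using a torchvision ViT-B/16 backbone with a two-class head; the backbone choice, preprocessing, and dummy input are assumptions, not the paper's exact training configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# ImageNet-pretrained ViT-B/16 with a fresh two-class head (normal vs. pneumonia).
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

# Chest X-rays are greyscale, so replicate to three channels and resize to 224x224.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Forward pass on a dummy batch; in practice the model would be fine-tuned on the
# labelled chest X-ray dataset with a cross-entropy loss before evaluation.
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # shape (1, 2)
    probs = torch.softmax(logits, dim=-1)
print(probs)
```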

    Automatic irony detection in Spanish (Detecció automàtica de la ironia en espanyol)

    Final Degree Project in Linguistics. Facultat de Filologia, Universitat de Barcelona, academic year 2021-2022. Supervisor: Mariona Taulé. The automatic detection of figurative language is attracting growing interest in computational linguistics. This study, framed within the fields of Natural Language Processing (NLP) and Machine Learning, documents the process of building an automatic irony classifier for Spanish texts based on models such as a Support Vector Machine (SVM) and multilingual BERT. The best results were obtained with the SVM, with an F1 score of 0.85. As for the theoretical framework, it draws on the definitions of irony and sarcasm offered by authors such as Reyes (1994) and Ruiz Gurillo (2012) and on the observations of Kerbrat-Orecchioni (1981) and Reus Boyd-Swan (2009), which suggest the presence of several linguistic markers of ironic tone.
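    A minimal sketch of the kind of SVM baseline the thesis describes, a TF-IDF representation fed to a linear SVM, is shown below; the toy Spanish sentences and labels are placeholders, not the thesis corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy Spanish examples; 1 = ironic, 0 = not ironic.
texts = [
    "Qué maravilla, otro lunes de lluvia y atascos.",
    "Me encanta cuando el tren llega con dos horas de retraso.",
    "El concierto de anoche fue espectacular.",
    "Hoy hace un día soleado y agradable.",
]
labels = [1, 1, 0, 0]

# Word and bigram TF-IDF features feeding a linear SVM classifier.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LinearSVC(),
)
classifier.fit(texts, labels)
print(classifier.predict(["Genial, me he vuelto a dejar las llaves dentro."]))
```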