
    Irony Detection in Twitter: The Role of Affective Content

    Irony has been proven to be pervasive in social media, posing a challenge to sentiment analysis systems. It is a creative linguistic phenomenon in which affect-related aspects play a key role. In this work, we address the problem of detecting irony in tweets, casting it as a classification problem. We propose a novel model that explores the use of affective features based on a wide range of lexical resources available for English, reflecting different facets of affect. Classification experiments over different corpora show that affective information helps in distinguishing between ironic and non-ironic tweets. Our model outperforms the state of the art in almost all cases.

    The research work of Delia Irazú Hernández Farías was funded by the National Council for Science and Technology (CONACyT Mexico, Grant No. 218109/313683, CVU-369616). The work of Viviana Patti was partially carried out at the Universitat Politècnica de València within the framework of a fellowship of the University of Turin co-funded by Fondazione CRT (World Wide Style Program 2). The work of Paolo Rosso has been partially funded by the SomEMBED TIN2015-71147-C2-1-P MINECO research project and by the Generalitat Valenciana under the grant ALMAMATER (PrometeoII/2014/030).

    Hernández-Farías, D. I.; Patti, V.; Rosso, P. (2016). Irony Detection in Twitter: The Role of Affective Content. ACM Transactions on Internet Technology 16(3), 19:1-19:24. https://doi.org/10.1145/2930663
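    To make the approach described in this abstract concrete, here is a minimal sketch of a lexicon-based affective-feature pipeline for irony classification. The lexicon file, its format, the toy tweets, and the specific feature statistics are invented for illustration; the paper itself draws on several published affective resources and a richer feature set not reproduced here.

```python
# Illustrative sketch: affective features from a lexicon feeding a tweet classifier.
# "affective_lexicon.csv", the toy tweets, and the three summary features are
# hypothetical placeholders, not the authors' actual resources or feature set.
import csv
from sklearn.linear_model import LogisticRegression

def load_lexicon(path):
    """Load a word -> valence score mapping from a two-column CSV (word,score)."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row[0]: float(row[1]) for row in csv.reader(f)}

def affective_features(tweet, lexicon):
    """Summarise a tweet's affective content with simple lexicon statistics."""
    scores = [lexicon[t] for t in tweet.lower().split() if t in lexicon]
    if not scores:
        return [0.0, 0.0, 0.0]
    mean_valence = sum(scores) / len(scores)
    valence_range = max(scores) - min(scores)          # affective "contrast" in the tweet
    negative_ratio = sum(s < 0 for s in scores) / len(scores)
    return [mean_valence, valence_range, negative_ratio]

lexicon = load_lexicon("affective_lexicon.csv")         # hypothetical resource
tweets = ["I just love being ignored #monday", "Great weather for a picnic today"]
labels = [1, 0]                                         # 1 = ironic, 0 = non-ironic

X = [affective_features(t, lexicon) for t in tweets]
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```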

    When Correction Turns Positive: Processing Corrective Prosody in Dutch

    Current research on spoken language does not provide a consistent picture as to whether prosody, the melody and rhythm of speech, conveys a specific meaning. Perception studies show that English listeners assign meaning to prosodic patterns and, for instance, associate some accents with contrast, whereas the findings for Dutch listeners are less consistent. In two ERP studies we tested how Dutch listeners process words carrying two types of accents, which either provided new information (new-information accents) or corrected information (corrective accents), both in single sentences (experiment 1) and after corrective and new-information questions (experiment 2). In both experiments corrective accents elicited a sustained positivity compared to new-information accents, which started earlier in context than in single sentences. The positivity was not modulated by the nature of the preceding question, suggesting that the underlying neural mechanism likely reflects the construction of an interpretation of the accented word, either by identifying an alternative in context or by inferring one when no context is present. Our experimental results provide strong evidence for inferential processes related to prosodic contours in Dutch.
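    For readers unfamiliar with how a sustained positivity like this is typically visualised, the sketch below computes a corrective-minus-new difference wave with MNE-Python. The epoch file name and condition labels are hypothetical, and this is an illustration of the general technique, not the authors' analysis pipeline.

```python
# Illustrative difference-wave computation with MNE-Python (not the authors' pipeline).
# "dutch_prosody-epo.fif" and the condition names are hypothetical placeholders.
import mne

epochs = mne.read_epochs("dutch_prosody-epo.fif")     # epochs time-locked to the accented word
evoked_corr = epochs["corrective"].average()          # corrective accents
evoked_new = epochs["new_information"].average()      # new-information accents

# Corrective minus new-information: a sustained positivity appears as a
# positive-going deflection in this difference wave.
diff = mne.combine_evoked([evoked_corr, evoked_new], weights=[1, -1])
diff.plot_joint(title="Corrective - New information")
```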

    Attention allocation in a language with post-focal prominences

    Accentuation influences selective attention and the depth of semantic processing during online speech comprehension. We investigated the processing of semantically congruent and incongruent words in a language that presents cues to prosodic prominences in the region of the utterance occurring after the focussed information (the post-focal region). This language is Italian, in particular the variety spoken in Bari. In this variety, questions have a compressed post-focal accent, whereas statements have a low-level pitch in this position. Using event-related potentials, we investigated the processing of congruent and incongruent target words in two prosodic realizations (focussed with accentuation, post-focal realization) and two sentence modalities (statement, question). Results indicate an N400 congruence effect that was modulated by position (focal, post-focal) and modality (statement, question): processing was deeper for questions in narrow focus than in post-focal position, while statements showed similar effects in both positions.
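    As a rough illustration of how an N400 congruence effect in a congruence-by-position-by-modality design like this might be summarised, the sketch below extracts mean amplitudes in a typical N400 window per condition. The epoch file, condition labels, time window, and channel choice are all assumptions made for the example, not the study's actual parameters.

```python
# Illustrative mean-amplitude summary for a congruence x position x modality design
# (MNE-Python); file name, condition labels, window, and channels are hypothetical.
import mne

epochs = mne.read_epochs("bari_italian-epo.fif")
conditions = [
    "congruent/focal/statement", "incongruent/focal/statement",
    "congruent/postfocal/statement", "incongruent/postfocal/statement",
    "congruent/focal/question", "incongruent/focal/question",
    "congruent/postfocal/question", "incongruent/postfocal/question",
]

for cond in conditions:
    # Mean amplitude in a typical N400 window over centro-parietal channels.
    evoked = epochs[cond].average().pick(["Cz", "Pz", "CPz"])
    window = evoked.copy().crop(tmin=0.3, tmax=0.5).data   # volts, shape (n_channels, n_times)
    print(f"{cond}: {window.mean() * 1e6:.2f} µV")
```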

    Human beta-defensin 3 is up-regulated in cutaneous leprosy type 1 reactions.

    BACKGROUND: Leprosy, a chronic granulomatous disease affecting the skin and nerves, is caused by Mycobacterium leprae (M. leprae). The type of leprosy developed depends upon the host immune response. Type 1 reactions (T1Rs), which complicate borderline and lepromatous leprosy, are due to an increase in cell-mediated immunity and manifest as nerve damage and skin inflammation. Owing to the increase in inflammation in the skin of patients with T1Rs, we sought to investigate the activation of the innate immune system during reactionary events. Specifically, we investigated the expression levels of human beta-defensins (hBDs) 2 and 3 in the skin of patients with T1Rs, in keratinocytes, and in macrophages stimulated with M. leprae and corticosteroids. RESULTS: Skin biopsies from twenty-three patients with T1Rs were found to have higher transcript levels of hBD3 than biopsies from fifteen leprosy patients without T1Rs, as measured by qPCR. Moreover, we observed that keratinocytes, but not macrophages, up-regulated hBD2 and hBD3 in response to M. leprae stimulation in vitro. Corticosteroid treatment of patients with T1Rs caused a suppression of hBD2 and hBD3 in skin biopsies, as measured by qPCR. In vitro, corticosteroids suppressed M. leprae-dependent induction of hBD2 and hBD3 in keratinocytes. CONCLUSIONS: This study demonstrates that hBD3 is induced in leprosy Type 1 reactions and suppressed by corticosteroids. Furthermore, our findings demonstrate that keratinocytes are responsive to M. leprae and lend support to additional studies on keratinocyte innate immunity in leprosy and T1Rs. TRIAL REGISTRATION: Controlled-Trials.com ISRCTN31894035.
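    Transcript-level comparisons of the kind reported here are commonly derived from qPCR Ct values with the 2^(-ΔΔCt) method. The sketch below shows that arithmetic on made-up Ct values; the gene and reference names and the numbers are purely illustrative and are not the study's data or analysis.

```python
# Relative expression by the 2^(-ddCt) method, on made-up Ct values (not study data).
# hBD3 is the gene of interest; GAPDH stands in for a hypothetical housekeeping reference.
def fold_change(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    d_ct_sample = ct_gene_sample - ct_ref_sample    # normalise to the reference gene
    d_ct_control = ct_gene_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control               # compare T1R skin to non-reactional skin
    return 2 ** (-dd_ct)

# Hypothetical Ct values (lower Ct = more transcript): ~5.7-fold up-regulation of hBD3.
print(fold_change(ct_gene_sample=24.0, ct_ref_sample=18.0,
                  ct_gene_control=27.0, ct_ref_control=18.5))
```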

    Introduction to the special issue on Language in Social Media: Exploiting discourse and other contextual information

    Social media content is changing the way people interact with each other and share information, personal messages, and opinions about situations, objects, and past experiences. Most social media texts are short online conversational posts or comments that, on their own, do not contain enough information for natural language processing (NLP) tools; they are, however, often accompanied by non-linguistic contextual information, including meta-data (e.g., the user's profile, the social network of the user, and their interactions with other users). Exploiting such different types of context and their interactions makes the automatic processing of social media texts a challenging research task. Indeed, simply applying traditional text mining tools is clearly sub-optimal, as, typically, these tools take into account neither the interactive dimension nor the particular nature of this data, which shares properties with both spoken and written language. This special issue contributes to a deeper understanding of the role of these interactions in processing social media data from a new perspective on discourse interpretation. This introduction first provides the necessary background to understand what context is from both the linguistic and computational linguistic perspectives, then presents the most recent context-based approaches to NLP for social media. We conclude with an overview of the papers accepted in this special issue, highlighting what we believe are the future directions in processing social media texts.
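    As a concrete illustration of combining linguistic content with the non-linguistic context mentioned above, the sketch below joins bag-of-words text features with simple user meta-data in a single scikit-learn pipeline. The field names, toy posts, and labels are invented for the example; real systems would exploit much richer context (user profile, social network, interaction history).

```python
# Illustrative sketch: combining textual and non-linguistic (meta-data) features.
# Field names and toy posts are invented; they do not come from the special issue.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

posts = pd.DataFrame({
    "text": ["great, another meeting", "loved the keynote, very clear"],
    "followers": [120, 4500],            # meta-data: author's follower count
    "is_reply": [1, 0],                  # meta-data: conversational position
})
labels = [1, 0]                          # e.g. 1 = negative/ironic, 0 = not

features = ColumnTransformer([
    ("text", TfidfVectorizer(), "text"),                  # linguistic features
    ("meta", "passthrough", ["followers", "is_reply"]),   # non-linguistic context
])
model = Pipeline([("features", features), ("clf", LogisticRegression())])
model.fit(posts, labels)
print(model.predict(posts))
```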