
    Irony Detection in Twitter: The Role of Affective Content

    © ACM 2016. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in ACM Transactions on Internet Technology, Vol. 16. http://dx.doi.org/10.1145/2930663

    Irony has been proven to be pervasive in social media, posing a challenge to sentiment analysis systems. It is a creative linguistic phenomenon where affect-related aspects play a key role. In this work, we address the problem of detecting irony in tweets, casting it as a classification problem. We propose a novel model that explores the use of affective features based on a wide range of lexical resources available for English, reflecting different facets of affect. Classification experiments over different corpora show that affective information helps in distinguishing between ironic and non-ironic tweets. Our model outperforms the state of the art in almost all cases.

    The National Council for Science and Technology (CONACyT Mexico) funded the research work of Delia Irazú Hernández Farías (Grant No. 218109/313683 CVU-369616). The work of Viviana Patti was partially carried out at the Universitat Politècnica de València within the framework of a fellowship of the University of Turin co-funded by Fondazione CRT (World Wide Style Program 2). The work of Paolo Rosso has been partially funded by the SomEMBED TIN2015-71147-C2-1-P MINECO research project and by the Generalitat Valenciana under the grant ALMAMATER (PrometeoII/2014/030).

    Hernández-Farías, D. I.; Patti, V.; Rosso, P. (2016). Irony Detection in Twitter: The Role of Affective Content. ACM Transactions on Internet Technology 16(3):19:1-19:24. https://doi.org/10.1145/2930663
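The abstract above casts irony detection as classification over affective features. A minimal sketch of that idea follows; the toy lexicons and the decision rule are illustrative assumptions standing in for the paper's model and its many lexical resources (sentiment, emotion, and psycholinguistic lexica), not the authors' implementation:

```python
# Toy affective lexicons; real systems draw on resources such as
# SentiWordNet, ANEW, or LIWC (all hypothetical stand-ins here).
POSITIVE = {"love", "great", "wonderful", "perfect"}
NEGATIVE = {"hate", "awful", "terrible", "worst"}

def affective_features(tweet):
    """Extract simple affect-based features from a tweet."""
    tokens = [t.strip("!?.,#") for t in tweet.lower().split()]
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return {
        "pos_words": pos,
        "neg_words": neg,
        "polarity_clash": int(pos > 0 and neg > 0),  # affective incongruity
        "exclamations": tweet.count("!"),
    }

def looks_ironic(tweet):
    """Toy decision rule: a clash of positive and negative affect, or
    positive affect with emphatic punctuation, hints at irony."""
    f = affective_features(tweet)
    return bool(f["polarity_clash"] or (f["pos_words"] and f["exclamations"]))

print(looks_ironic("I just love waiting in line for hours!"))  # True
print(looks_ironic("The movie was wonderful."))                # False
```

In the paper these features feed a trained classifier rather than a hand-written rule; the sketch only shows why affective signals can separate ironic from non-ironic tweets.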

    A machine-learning approach to negation and speculation detection for sentiment analysis

    Recognizing negative and speculative information is highly relevant for sentiment analysis. This paper presents a machine-learning approach to automatically detecting this kind of information in the review domain. The resulting system works in two steps: in the first pass, negation/speculation cues are identified, and in the second pass the full scope of these cues is determined. The system is trained and evaluated on the Simon Fraser University Review corpus, which is extensively used in opinion mining. The results show that the proposed method outperforms the baseline by as much as roughly 20% in negation cue detection and around 13% in scope recognition, both in terms of F1. For speculation, the performance obtained in the cue prediction phase is close to that of a human rater carrying out the same task. In scope detection, the results are also promising and represent a substantial improvement over the baseline (up by roughly 10%). A detailed error analysis is also provided. The extrinsic evaluation shows that the correct identification of cues and scopes is vital for the task of sentiment analysis.

    Maite Taboada was supported by the Natural Sciences and Engineering Research Council of Canada (Discovery Grant 261104-2008). This work was partly funded by the Spanish Ministry of Education and Science (TIN2009-14057-C03-03 Project) and the Andalusian Ministry of Economy, Innovation and Science (TIC 07629 and TIC 07684 Projects).
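The two-step pipeline the abstract describes can be sketched as follows. The cue list and the "scope runs rightward to the next punctuation mark" rule are simplifying assumptions; the paper trains machine-learning classifiers for both steps rather than using hand-written rules:

```python
# Step 1 input: a tokenized sentence. Step 2 resolves each cue's scope.
NEGATION_CUES = {"not", "no", "never", "n't", "without"}
PUNCTUATION = {".", ",", ";", "!", "?"}

def detect_cues(tokens):
    """Step 1: return indices of tokens acting as negation cues."""
    return [i for i, tok in enumerate(tokens) if tok.lower() in NEGATION_CUES]

def resolve_scope(tokens, cue_index):
    """Step 2: scope extends from the cue to the next punctuation mark."""
    scope = []
    for i in range(cue_index + 1, len(tokens)):
        if tokens[i] in PUNCTUATION:
            break
        scope.append(i)
    return scope

tokens = "The plot was not convincing at all , but the acting was great .".split()
for cue in detect_cues(tokens):
    print(tokens[cue], "->", [tokens[i] for i in resolve_scope(tokens, cue)])
# not -> ['convincing', 'at', 'all']
```

A sentiment system can then flip or discount the polarity of tokens that fall inside a resolved scope, which is what makes the extrinsic evaluation in the paper meaningful.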

    Irony and Sarcasm Detection in Twitter: The Role of Affective Content

    Thesis by compendium. Social media platforms, like Twitter, offer a face-saving ability that allows users to express themselves employing figurative language devices such as irony to achieve different communication purposes. Dealing with this kind of content represents a big challenge for computational linguistics. Irony is closely associated with the indirect expression of feelings, emotions and evaluations. Interest in detecting the presence of irony in social media texts has grown significantly in recent years. In this thesis, we introduce the problem of detecting irony in social media from a computational linguistics perspective. We propose to address this task by focusing, in particular, on the role of affective information in detecting the presence of this figurative language device. Attempting to take advantage of the subjective intrinsic value enclosed in ironic expressions, we present a novel model, called emotIDM, for detecting irony relying on a wide range of affective features. To characterise an ironic utterance, we used an extensive set of resources covering different facets of affect, from sentiment to finer-grained emotions. Results show that emotIDM has a competitive performance across the experiments carried out, validating the effectiveness of the proposed approach. Another objective of the thesis is to investigate the differences between tweets labeled with #irony and #sarcasm. Our aim is to contribute to a less investigated topic in computational linguistics: the separation between irony and sarcasm in social media, again with a special focus on affective features. We also studied a less explored hashtag: #not. We find data-driven arguments for differences among tweets containing these hashtags, suggesting that they are used to refer to different figurative language devices.
    We identify promising features based on affect-related phenomena for discriminating among different kinds of figurative language devices. We also analyse the role of polarity reversal in tweets containing ironic hashtags, observing that the impact of this phenomenon varies: in tweets labeled with #sarcasm there is often a full reversal, whereas in those tagged with #irony the polarity is attenuated. We analyse the impact of irony and sarcasm on sentiment analysis, observing a drop in the performance of NLP systems developed for this task when irony is present. We therefore explored the use of our findings in irony detection for the development of an irony-aware sentiment analysis system, assuming that the identification of ironic content could help improve the correct identification of sentiment polarity. To this aim, we incorporated emotIDM into a pipeline for determining the polarity of a given Twitter message. We compared our results with the state of the art established by the "SemEval-2015 Task 11" shared task, demonstrating the relevance of considering affective information together with features alerting to the presence of irony when performing sentiment analysis of figurative language in this kind of social media text. To summarize, we demonstrated the usefulness of exploiting different facets of affective information for dealing with the presence of irony in Twitter.

    Hernández Farías, D. I. (2017). Irony and Sarcasm Detection in Twitter: The Role of Affective Content [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90544
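The irony-aware sentiment pipeline the thesis describes (irony detector first, polarity decision second) can be sketched as below. The lexicon, the hashtag-based stand-in detector, and the full-reversal rule are illustrative assumptions, not emotIDM or the thesis's actual system:

```python
# Toy polarity lexicon (hypothetical stand-in for full sentiment resources).
POSITIVE = {"love", "great", "brilliant", "wonderful"}
NEGATIVE = {"hate", "awful", "terrible", "boring"}

def base_polarity(tweet):
    """Naive lexicon-based polarity score: positives minus negatives."""
    tokens = [t.strip("!?.,#").lower() for t in tweet.split()]
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def is_ironic(tweet):
    """Stand-in irony detector: here just the #irony/#sarcasm hashtags;
    the thesis plugs in emotIDM at this stage."""
    lowered = tweet.lower()
    return "#irony" in lowered or "#sarcasm" in lowered

def irony_aware_polarity(tweet):
    score = base_polarity(tweet)
    if is_ironic(tweet):
        return -score  # full reversal, as often observed for #sarcasm
    return score

print(irony_aware_polarity("Brilliant, another Monday. I love it #sarcasm"))  # -2
print(irony_aware_polarity("I love this wonderful city"))                     # 2
```

Per the thesis's own observation, a finer-grained system would attenuate rather than fully reverse polarity for #irony; the single reversal rule here is kept deliberately simple.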

    Linguistic-based Patterns for Figurative Language Processing: The Case of Humor Recognition and Irony Detection

    Figurative language represents one of the most difficult tasks in natural language processing. Unlike literal language, figurative language makes use of linguistic devices such as irony, humor, sarcasm, metaphor, and analogy, among others, to communicate indirect meanings that most of the time cannot be interpreted in terms of syntactic or semantic information alone. Rather, figurative language reflects patterns of thought that acquire their full meaning in communicative and social contexts, which makes both its linguistic representation and its computational processing highly complex tasks. In this context, this doctoral thesis addresses the problem of figurative language processing by means of linguistic patterns. In particular, our efforts focus on building a system capable of automatically detecting instances of humor and irony in texts extracted from social media. Our main hypothesis rests on the premise that language reflects patterns of conceptualization; that is, by studying language, we study those patterns. Therefore, by analyzing these two domains of figurative language, we aim to provide arguments about how people conceive them and, above all, how that conception causes both humor and irony to be verbalized in a particular way across various social media. In this context, one of our main interests is to demonstrate how knowledge drawn from the analysis of different levels of linguistic study can represent a set of patterns relevant to automatically identifying figurative uses of language. It is worth noting that, contrary to most approaches that have focused on the study of figurative language, our research does not seek to provide arguments based solely on prototypical examples, but rather on texts whose characteristics…

    Reyes Pérez, A. (2012). Linguistic-based Patterns for Figurative Language Processing: The Case of Humor Recognition and Irony Detection [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/16692

    A decade of in-text citation analysis based on natural language processing and machine learning techniques: an overview of empirical studies

    In-text citation analysis is one of the most frequently used methods in research evaluation. We are seeing significant growth in citation analysis through bibliometric metadata, primarily due to the availability of citation databases such as the Web of Science, Scopus, Google Scholar, Microsoft Academic, and Dimensions. Due to better access to full-text publication corpora in recent years, information scientists have gone far beyond traditional bibliometrics by tapping into advancements in full-text data processing techniques to measure the impact of scientific publications in contextual terms. This has led to technical developments in citation classification, citation sentiment analysis, citation summarisation, and citation-based recommendation. This article aims to narratively review the studies on these developments. Its primary focus is on publications that have used natural language processing and machine learning techniques to analyse citations.

    Automatic extraction of agendas for action from news coverage of violent conflict

    Words can make people act. Indeed, a simple phrase 'Will you, please, open the window?' can cause a person to do so. However, does this still hold if the request is communicated indirectly via mass media and addresses a large group of people? Different disciplines have approached this problem from different angles, showing that there is indeed a connection between what is being called for in the media and what people do. This dissertation, an interdisciplinary work, bridges different perspectives on the problem and explains how collective mobilisation happens, using the novel term 'agenda for action'. It also shows how agendas for action can be extracted from text in an automated fashion using computational linguistics and machine learning. To demonstrate the potential of agendas for action, an analysis of The NYT and The Guardian coverage of the chemical weapons crises in Syria in 2013 is performed. Katsiaryna Stalpouskaya has always been interested in applied and computational linguistics. Pursuing this interest, she joined the FP7 EU-INFOCORE project in 2014, where she was responsible for automated content analysis. Katsiaryna's work on the project resulted in a PhD thesis, which she successfully defended at Ludwig-Maximilians-Universität München in 2019. Currently, she is working as a product owner in the field of text and data analysis.