12 research outputs found

    Cross-Domain Polarity Models to Evaluate User eXperience in E-learning

    [EN] Virtual learning environments are growing in importance as e-learning becomes increasingly demanded by universities and students all over the world. This paper investigates how to automatically evaluate User eXperience in this domain using sentiment analysis techniques. For this purpose, a corpus was built with the opinions given by a total of 583 users (107 English speakers and 476 Spanish speakers) about three learning management systems in different courses. All the collected opinions were manually labeled with polarity information (positive, negative, or neutral) by three human annotators, at both the whole-opinion and sentence levels. We applied our state-of-the-art sentiment analysis models, trained on a corpus from a different semantic domain (a Twitter corpus), to study the use of cross-domain models for this task. Cross-domain models based on deep neural networks (convolutional neural networks, transformer encoders, and attentional BLSTM models) were tested. To contrast our results, three commercial systems for the same task (MeaningCloud, Microsoft Text Analytics, and Google Cloud) were also tested. The results obtained are very promising and encourage further research on applying sentiment analysis tools to User eXperience evaluation. This is a pioneering idea to provide a better and more accurate understanding of human needs in the interaction with virtual learning environments, and a step towards the development of automatic tools that capture user feedback for designing virtual learning environments centered on users' emotions, beliefs, preferences, perceptions, responses, behaviors, and accomplishments occurring before, during, and after the interaction.

    Partially supported by the Spanish MINECO and FEDER funds under Project TIN2017-85854-C4-2-R. Work of J.A. Gonzalez is financed under Grant PAID-01-17.

    Sanchis-Font, R.; Castro-Bleda, MJ.; González-Barba, JÁ.; Pla Santamaría, F.; Hurtado Oliver, LF. (2021). Cross-Domain Polarity Models to Evaluate User eXperience in E-learning. Neural Processing Letters 53:3199-3215. https://doi.org/10.1007/s11063-020-10260-5
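    The cross-domain setting above (training a polarity model on Twitter data, then evaluating it on e-learning opinions) can be made concrete with a deliberately tiny sketch. This is not the paper's CNN, transformer-encoder, or attentional-BLSTM models; it is a hypothetical bag-of-words scorer, shown only to illustrate the train-on-source, predict-on-target protocol:

```python
from collections import Counter

def train_polarity_counts(examples):
    """Count word occurrences per polarity label (a toy stand-in for
    the paper's neural models)."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict_polarity(counts, text):
    """Score each label by summed word counts over the input words."""
    scores = {label: sum(bag[w] for w in text.lower().split())
              for label, bag in counts.items()}
    return max(sorted(scores), key=lambda label: scores[label])

# "Source domain": Twitter-style opinions with polarity labels.
model = train_polarity_counts([
    ("great course love it", "positive"),
    ("awful platform hate the interface", "negative"),
])
# "Target domain": an e-learning opinion, classified without retraining.
print(predict_polarity(model, "love this great course"))  # -> positive
```

    Swapping the counting model for any of the neural architectures named in the abstract leaves the protocol unchanged: the model never sees target-domain labels during training.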

    Neural language model based training data augmentation for weakly supervised early rumor detection

    The scarcity and class imbalance of training data are known issues in current rumor detection tasks. We propose a straightforward and general-purpose data augmentation technique which is beneficial to early rumor detection relying on event propagation patterns. The key idea is to exploit massive unlabeled event datasets on social media to augment limited labeled rumor source tweets. This work is based on rumor spreading patterns revealed by recent rumor studies and on the semantic relatedness between labeled and unlabeled data. A state-of-the-art neural language model (NLM) and large credibility-focused Twitter corpora are employed to learn context-sensitive representations of rumor tweets. Six different real-world events based on three publicly available rumor datasets are employed in our experiments to provide a comparative evaluation of the effectiveness of the method. The results show that our method can expand the size of an existing rumor dataset by nearly 200%, and the corresponding social context (i.e., conversational threads) by 100%, with reasonable quality. Preliminary experiments with a state-of-the-art deep learning-based rumor detection model show that augmented data can alleviate the over-fitting and class imbalance caused by limited training data and can help to train complex neural networks (NNs). With augmented data, the performance of rumor detection is improved by 12.1% in terms of F-score. Our experiments also indicate that augmented training data can help to generalize rumor detection models to unseen rumors.
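    The semantic-relatedness step can be sketched as follows. This is a simplified, hypothetical version: real NLM representations (e.g. contextual sentence embeddings) replace the toy two-dimensional vectors, and the threshold would be tuned. Here, cosine similarity simply decides which unlabeled tweets inherit a weak label from their nearest labeled rumor tweet:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def augment_by_similarity(labeled, unlabeled, threshold=0.9):
    """Give each unlabeled tweet the label of its most similar labeled
    tweet, but only when the similarity clears the threshold."""
    augmented = []
    for vec, text in unlabeled:
        best_vec, best_label = max(labeled, key=lambda lv: cosine(lv[0], vec))
        if cosine(best_vec, vec) >= threshold:
            augmented.append((text, best_label))
    return augmented

# Toy 2-d "embeddings"; real NLM vectors have hundreds of dimensions.
labeled = [([1.0, 0.1], "rumour")]
unlabeled = [([0.9, 0.15], "BREAKING: unverified claim spreading"),
             ([0.0, 1.0], "good morning everyone")]
print(augment_by_similarity(labeled, unlabeled))  # one weakly labeled pair
```

    The off-topic tweet falls below the threshold and is discarded, which is how the method keeps the augmented set at "reasonable quality".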

    Attention-based Approaches for Text Analytics in Social Media and Automatic Summarization

    [EN] Nowadays, society has access, and the possibility to contribute, to large amounts of the content present on the internet, such as social networks, online newspapers, forums, blogs, or multimedia content platforms. These platforms have had, during the last years, an overwhelming impact on the daily life of individuals and organizations, becoming the predominant ways of sharing, discussing, and analyzing online content. It is therefore of great interest to work with these platforms, from different points of view, under the umbrella of Natural Language Processing. In this thesis, we focus on two broad areas within this field, applied to the analysis of online content: text analytics in social media and automatic summarization. Neural networks are also a central topic of this thesis: all the experimentation has been performed using deep learning approaches, mainly based on attention mechanisms. Besides, we mostly work with the Spanish language, as it is an underexplored language of great interest for the research projects we participated in. On the one hand, for text analytics in social media, we focus on affective analysis tasks, including sentiment analysis and emotion detection, along with irony analysis. In this regard, an approach based on Transformer Encoders is presented, which contextualizes word embeddings pretrained on Spanish tweets to address sentiment analysis and irony detection tasks. We also propose the use of evaluation metrics as loss functions to train neural networks, reducing the impact of class imbalance in multi-class and multi-label emotion detection tasks. Additionally, a specialization of BERT for both the Spanish language and the Twitter domain is presented, which takes inter-sentence coherence in Twitter conversation flows into account. The performance of all these approaches has been tested on different corpora, from several reference evaluation benchmarks, showing very competitive results in all the tasks addressed.

    On the other hand, we focus on the extractive summarization of news articles and TV talk shows. Regarding news articles, a theoretical framework for extractive summarization, based on siamese hierarchical networks with attention mechanisms, is presented, together with two instantiations of this framework: Siamese Hierarchical Attention Networks and Siamese Hierarchical Transformer Encoders. These systems were evaluated on the CNN/DailyMail and NewsRoom corpora, obtaining competitive results in comparison to other contemporary extractive approaches. Concerning the TV talk shows, we propose a text summarization task consisting of summarizing the transcribed interventions of the speakers, on a given topic, in the Spanish TV program "La Noche en 24 Horas". In addition, a corpus of news articles, collected from several Spanish online newspapers, is proposed in order to study the domain transferability of siamese hierarchical approaches between news articles and interventions of debate participants. This approach shows better results than other extractive techniques, along with very promising domain transferability.

    González Barba, JÁ. (2021). Attention-based Approaches for Text Analytics in Social Media and Automatic Summarization [Doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/172245
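    One of the techniques reported above, using an evaluation metric as a loss function, can be sketched with a "soft" F1 computed from probabilities rather than thresholded decisions, so the score stays smooth in the predictions and can be minimised directly. The function below is a generic single-class formulation in plain Python, not the thesis's exact multi-label implementation:

```python
def soft_f1_loss(probs, targets, eps=1e-8):
    """1 - soft F1: counts of true/false positives and false negatives
    are computed from probabilities instead of thresholded decisions,
    so the loss varies smoothly with the predictions."""
    tp = sum(p * t for p, t in zip(probs, targets))
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))
    fn = sum((1 - p) * t for p, t in zip(probs, targets))
    return 1.0 - (2 * tp) / (2 * tp + fp + fn + eps)

print(soft_f1_loss([1.0, 0.0], [1, 0]))  # near 0: confident, correct
print(soft_f1_loss([0.5, 0.5], [1, 0]))  # about 0.5: uninformative
```

    Because every class contributes to the loss through its own precision/recall terms rather than through raw example counts, minority classes are not drowned out the way they are with plain cross-entropy on imbalanced data.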

    Data augmentation for named entity recognition in the German legal domain

    Named Entity Recognition over texts from the legal domain aims to recognize legal entities such as references to legal norms or court decisions. This task is commonly approached with supervised deep learning techniques that require large amounts of training data. However, especially for low-resource languages and specific domains, such training data is often scarce. In this work, we focus on the German legal domain because it is of interest to the Canarėno project, which deals with information extraction from, and analysis of, legal norms. The objective of the work presented in this thesis is the implementation, evaluation, and comparison of different data augmentation techniques that can be used to expand the available data and thereby improve model performance. Through experiments on different dataset fractions, we show that Mention Replacement and Synonym Replacement can effectively enhance the performance of both recurrent and transformer-based NER models in low-resource environments.
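    Mention Replacement, one of the two techniques evaluated above, swaps each entity mention for another mention of the same type drawn from a pool collected over the training set. The sketch below assumes BIO-tagged tokens; the example sentence and the pool contents are made up for illustration:

```python
import random

def mention_replacement(tokens, tags, mention_pool, seed=0):
    """Swap every BIO-tagged entity mention for a random mention of the
    same type from the pool; surrounding tokens are kept unchanged."""
    rng = random.Random(seed)
    out_tokens, out_tags, i = [], [], 0
    while i < len(tokens):
        if tags[i].startswith("B-"):
            etype = tags[i][2:]
            i += 1
            while i < len(tags) and tags[i] == "I-" + etype:
                i += 1  # skip the rest of the original mention
            new = rng.choice(mention_pool[etype])
            out_tokens += new
            out_tags += ["B-" + etype] + ["I-" + etype] * (len(new) - 1)
        else:
            out_tokens.append(tokens[i])
            out_tags.append(tags[i])
            i += 1
    return out_tokens, out_tags

# Hypothetical example: one legal-norm mention swapped for another.
pool = {"NORM": [["Art.", "5", "GG"]]}
toks, tags = mention_replacement(["Siehe", "§", "3", "BGB"],
                                 ["O", "B-NORM", "I-NORM", "I-NORM"], pool)
print(toks)  # ['Siehe', 'Art.', '5', 'GG']
```

    The tag sequence is regenerated to match the length of the substituted mention, so the augmented sentence remains a valid NER training example.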

    Data Augmentation For Text Classification Tasks

    Thanks to increases in computing power and the growing availability of large datasets, neural networks have achieved state-of-the-art results in many natural language processing (NLP) and computer vision (CV) tasks. These models require a large number of training examples that are balanced between classes, but in many application areas they rely on training sets that are either small or imbalanced, or both. To address this, data augmentation has become standard practice in CV. This research is motivated by the observation that, relative to CV, data augmentation is underused and understudied in NLP. Three methods of data augmentation are implemented and tested: synonym replacement, backtranslation, and contextual augmentation. Tests are conducted with two models: a Recurrent Neural Network (RNN) and Bidirectional Encoder Representations from Transformers (BERT). To develop learning curves and study the ability of augmentation methods to rebalance datasets, each of three binary classification datasets is made artificially small and artificially imbalanced. The results show that these augmentation methods can offer accuracy improvements of over 1% to models with a baseline accuracy as high as 92%. On the two largest datasets, the accuracy of BERT is usually improved by either synonym replacement or backtranslation, while the accuracy of the RNN is usually improved by all three augmentation techniques. The augmentation techniques tend to yield the largest accuracy boost when the datasets are smallest or most imbalanced; the performance benefits appear to converge to 0% as the dataset becomes larger. The optimal augmentation distance, the extent to which augmented training examples tend to deviate from their original form, decreases as datasets become more balanced. The results show that data augmentation is a powerful method of improving performance when training on datasets with fewer than 10,000 training examples. The accuracy increases that it offers are reduced by recent advancements in transfer learning schemes, but they are certainly not eliminated.
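    Of the three methods tested, synonym replacement is the simplest to sketch. The original technique typically draws synonyms from a lexical resource such as WordNet; the toy lexicon here is a stand-in:

```python
def synonym_replacement(text, synonyms, n=1):
    """Replace up to n words that have an entry in the synonym lexicon,
    scanning left to right."""
    words = text.split()
    replaced = 0
    for i, word in enumerate(words):
        if replaced == n:
            break
        if word.lower() in synonyms:
            words[i] = synonyms[word.lower()][0]
            replaced += 1
    return " ".join(words)

print(synonym_replacement("a good movie", {"good": ["great"]}))  # a great movie
```

    The parameter n controls the "augmentation distance" mentioned above: the more words are swapped, the further the augmented example drifts from its original form.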

    Context-Aware Message-Level Rumour Detection with Weak Supervision

    Social media has become the main source of all sorts of information, beyond a communication medium. Its intrinsic nature allows a continuous and massive flow of misinformation to make a severe impact worldwide. In particular, rumours emerge unexpectedly and spread quickly, and it is challenging to track down their origins and stop their propagation. An ideal solution is to identify rumour-mongering messages as early as possible, a task commonly referred to as "Early Rumour Detection (ERD)". This dissertation researches ERD on social media by exploiting weak supervision and contextual information. Weak supervision is a branch of machine learning in which noisy and less precise sources (e.g. data patterns) are leveraged to compensate for limited high-quality labelled data (Ratner et al., 2017), reducing the cost and increasing the efficiency of hand-labelling large-scale data. This thesis aims to study whether identifying rumours before they go viral is possible and to develop an architecture for ERD at the individual post level. To this end, it first explores major bottlenecks of current ERD. It also uncovers a research gap between system design and real-world application, which has received less attention from the ERD research community. One bottleneck is limited labelled data: weakly supervised methods to augment limited labelled training data for ERD are introduced. The other bottleneck is the enormous amount of noisy data: a framework unifying burst detection based on temporal signals and burst summarisation is investigated to identify potential rumours (i.e. the input to rumour detection models) by filtering out uninformative messages. Finally, a novel method which jointly learns rumour sources and their contexts (i.e. conversational threads) for ERD is proposed, together with an extensive evaluation setting for ERD systems.
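    The burst-detection component described above (identifying potential rumours from temporal signals) can be approximated with a crude sliding-window volume check. This is an illustrative stand-in, not the framework proposed in the dissertation; the window size and ratio are arbitrary:

```python
from collections import Counter

def detect_bursts(timestamps, window=60, ratio=1.5):
    """Bucket message timestamps (seconds) into fixed windows and flag
    windows whose volume is at least `ratio` times the mean volume of
    the occupied windows; flagged windows are candidate rumour bursts."""
    buckets = Counter(t // window for t in timestamps)
    if not buckets:
        return []
    mean = sum(buckets.values()) / len(buckets)
    return sorted(w * window for w, count in buckets.items()
                  if count >= ratio * mean)

# Six messages in one minute against a quiet background: one burst.
print(detect_bursts([0, 10, 70, 120, 121, 122, 123, 124, 125]))  # [120]
```

    Only the messages inside flagged windows would be passed on to the (separate) rumour detection model, which is how uninformative background chatter gets filtered out.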

    Deep learning for clinical texts in low-data regimes

    Electronic health records contain a wealth of valuable information for improving healthcare. There are, however, challenges associated with clinical text that prevent computers from maximising the utility of such information. While deep learning (DL) has emerged as a practical paradigm for dealing with the complexities of natural language, applying this class of machine learning algorithms to clinical text raises several research questions. First, we tackled the problem of data sparsity by looking into the task of adverse event detection. As these events are rare, examples thereof are lacking. To compensate for data scarcity, we leveraged large pre-trained language models (LMs) in combination with formally represented medical knowledge. We demonstrated that such a combination exhibits remarkable generalisation abilities despite the low availability of data. Second, we focused on the omnipresence of short forms in clinical texts. This typically leads to out-of-vocabulary problems, which motivates unlocking the underlying words. The novelty of our approach lies in its capacity to learn how to automatically expand short forms without resorting to external resources. Third, we investigated data augmentation to address the issue of data scarcity at its core. To the best of our knowledge, we were among the first to investigate population-based augmentation for scheduling text data augmentation. Interestingly, little improvement was seen in fine-tuning large pre-trained LMs with the augmented data. We suggest that, as the LMs proved able to cope well with small datasets, the need for data augmentation was made redundant. We conclude that DL approaches to clinical text mining should be developed by fine-tuning large LMs. One area where such models may struggle is the use of clinical short forms; our method for automating their expansion fixes this issue. Together, these two approaches provide a blueprint for successfully developing DL approaches to clinical text mining in low-data regimes.
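    The thesis's short-form expansion model is learned without external resources; as a purely illustrative contrast, even a simple letter-subsequence heuristic conveys what "unlocking the underlying words" means. The candidate list and the ranking rule below are hypothetical, not the thesis's method:

```python
def is_subsequence(short, word):
    """True if the letters of `short` appear in `word` in order."""
    it = iter(word.lower())
    return all(ch in it for ch in short.lower())

def expand_short_form(short, candidates):
    """Pick the shortest candidate expansion containing the short form
    as a letter subsequence; None if no candidate matches."""
    matches = [c for c in candidates if is_subsequence(short, c)]
    return min(matches, key=len) if matches else None

# "pt" could stand for "patient" or "physiotherapy"; the heuristic
# prefers the shorter match. A learned model would use context instead.
print(expand_short_form("pt", ["patient", "physiotherapy"]))  # patient
```

    Expanding short forms this way maps out-of-vocabulary tokens back onto words the language model already knows, which is the motivation stated in the abstract.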