159 research outputs found
Annotating speaker stance in discourse: the Brexit Blog Corpus
The aim of this study is to explore the possibility of identifying speaker stance in discourse, to provide an analytical resource for it, and to evaluate the level of agreement across speakers. We also explore to what extent language users agree about the kinds of stances expressed in natural language use, or whether their interpretations diverge. In order to perform this task, a comprehensive cognitive-functional framework of ten stance categories was developed based on previous work on speaker stance in the literature. A corpus of opinionated texts was compiled, the Brexit Blog Corpus (BBC). An analytical protocol and interface (Active Learning and Visual Analytics) for the annotations was set up, and the data were independently annotated by two annotators. The annotation procedure, the annotation agreements and the co-occurrence of more than one stance in the utterances are described and discussed. The careful, analytical annotation process has returned satisfactory inter- and intra-annotation agreement scores, resulting in a gold standard corpus, the final version of the BBC.
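The inter- and intra-annotation agreement mentioned above is typically quantified with chance-corrected measures such as Cohen's kappa; the sketch below illustrates the computation on invented labels, not the actual BBC annotations.

```python
# Minimal sketch: chance-corrected agreement between two annotators.
# The labels below are illustrative stance tags, not actual BBC annotations.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["uncertainty", "prediction", "necessity", "contrariety", "prediction"]
annotator_b = ["uncertainty", "prediction", "necessity", "necessity",   "prediction"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0.0 = chance level
```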
Learning Interpretable Style Embeddings via Prompting LLMs
Style representation learning builds content-independent representations of author style in text. Stylometry, the analysis of style in text, is often performed by expert forensic linguists, and no large dataset of stylometric annotations exists for training. Current style representation learning uses neural methods to disentangle style from content to create style vectors; however, these approaches result in uninterpretable representations, complicating their usage in downstream applications like authorship attribution, where auditing and explainability are critical. In this work, we use prompting to perform stylometry on a large number of texts to create a synthetic dataset and train human-interpretable style representations we call LISA embeddings. We release our synthetic stylometry dataset and our interpretable style models as resources.
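As a rough illustration of the prompting step described in the abstract, the sketch below queries an LLM about a fixed list of style attributes and collects the answers into an interpretable vector; the `ask_llm` function and the attribute list are placeholders, not the paper's actual prompts or LISA attribute set.

```python
# Sketch: derive an interpretable style vector by prompting an LLM with
# yes/no style questions. `ask_llm` is a placeholder for any chat/completions
# client; the attributes are illustrative, not the paper's actual LISA set.
STYLE_ATTRIBUTES = [
    "use formal vocabulary",
    "use frequent punctuation",
    "write long, complex sentences",
    "use first-person pronouns",
]

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def lisa_style_vector(text: str) -> list[float]:
    vector = []
    for attribute in STYLE_ATTRIBUTES:
        prompt = (
            f"Does the author of the following passage {attribute}? "
            f"Answer 'yes' or 'no'.\n\nPassage: {text}"
        )
        answer = ask_llm(prompt).strip().lower()
        vector.append(1.0 if answer.startswith("yes") else 0.0)
    return vector  # one interpretable dimension per style attribute
```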
Detection of stance and sentiment modifiers in political blogs
The automatic detection of seven types of modifiers was studied: Certainty, Uncertainty, Hypotheticality, Prediction, Recommendation, Concession/Contrast and Source. A classifier aimed at detecting local cue words that signal the categories was the most successful method for five of the categories. For Prediction and Hypotheticality, however, better results were obtained with a classifier trained on tokens and bigrams present in the entire sentence. Unsupervised cluster features were shown to be useful for the categories Source and Uncertainty when a subset of the available training data was used. However, when all of the 2,095 sentences that had been actively selected and manually annotated were used as training data, the cluster features had a very limited effect. Some of the classification errors made by the models could be avoided by extending the training data set, while other error types would require additional features and feature representations, as well as the incorporation of pragmatic knowledge.
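For intuition, a sentence-level classifier of the kind described above can be built on token and bigram features; this is an illustrative scikit-learn sketch with invented sentences, not the study's actual model or data.

```python
# Illustrative sketch: sentence-level classifier on token and bigram features,
# approximating the modifier-detection setup. Example sentences are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "This will certainly happen.",          # Certainty
    "Perhaps the vote could be delayed.",   # Uncertainty
    "If the UK leaves, trade may suffer.",  # Hypotheticality
    "The government should reconsider.",    # Recommendation
]
labels = ["certainty", "uncertainty", "hypotheticality", "recommendation"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # tokens and bigrams
    LogisticRegression(max_iter=1000),
)
model.fit(sentences, labels)
print(model.predict(["The deal might possibly collapse."]))
```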
Visually-Enabled Active Deep Learning for (Geo) Text and Image Classification: A Review
This paper investigates recent research on active learning for (geo) text and image classification, with an emphasis on methods that combine visual analytics and/or deep learning. Deep learning has attracted substantial attention across many domains of science and practice, because it can find intricate patterns in big data; but successful application of the methods requires a large set of labeled data. Active learning, which has the potential to address the data labeling challenge, has already had success in geospatial applications such as trajectory classification from movement data and (geo) text and image classification. This review is intended to be particularly relevant for extension of these methods to GIScience, to support work in domains such as geographic information retrieval from text and image repositories, interpretation of spatial language, and related geo-semantics challenges. Specifically, to provide a structure for leveraging recent advances, we group the relevant work into five categories: active learning, visual analytics, active learning with visual analytics, active deep learning, plus GIScience and Remote Sensing (RS) using active learning and active deep learning. Each category is exemplified by recent influential work. Based on this framing and our systematic review of key research, we then discuss some of the main challenges of integrating active learning with visual analytics and deep learning, and point out research opportunities from technical and application perspectives, with emphasis on application-based opportunities that address big data with geospatial components.
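The core loop referenced throughout such work is uncertainty sampling: train on the labeled pool, score the unlabeled pool, and route the least-confident items to a human annotator. A minimal, library-agnostic sketch (assuming a classifier with `fit`/`predict_proba` and a `query_human` labeling step, e.g., through a visual-analytics interface):

```python
# Minimal uncertainty-sampling loop. `model` is any classifier exposing
# fit/predict_proba; `query_human` stands in for the manual labeling step
# (e.g., through a visual-analytics interface).
import numpy as np

def active_learning_loop(model, X_labeled, y_labeled, X_unlabeled,
                         query_human, rounds=5, batch_size=10):
    for _ in range(rounds):
        model.fit(X_labeled, y_labeled)
        proba = model.predict_proba(X_unlabeled)
        uncertainty = 1.0 - proba.max(axis=1)          # least-confident sampling
        query_idx = np.argsort(uncertainty)[-batch_size:]
        new_labels = query_human(X_unlabeled[query_idx])
        X_labeled = np.vstack([X_labeled, X_unlabeled[query_idx]])
        y_labeled = np.concatenate([y_labeled, new_labels])
        X_unlabeled = np.delete(X_unlabeled, query_idx, axis=0)
    return model, X_labeled, y_labeled
```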
Evaluating stance-annotated sentences from political blogs regarding the Brexit: a quantitative analysis
This paper offers a formally driven quantitative analysis of stance-annotated sentences in the Brexit Blog Corpus (BBC). Our goal is to identify features that determine the formal profiles of six stance categories (contrariety, hypotheticality, necessity, prediction, source of knowledge and uncertainty) in a subset of the BBC. The study has two parts: firstly, it examines a large number of formal linguistic features, such as punctuation, words and grammatical categories that occur in the sentences, in order to describe the specific characteristics of each category; secondly, it compares characteristics across the entire data set in order to determine stance similarities in the data. We show that among the six stance categories in the corpus, contrariety and necessity are the most discriminative ones, with the former using longer sentences, more conjunctions, more repetitions and shorter forms than the sentences expressing other stances. Necessity has longer lexical forms but shorter sentences, which are syntactically more complex. We show that stance in our data set is expressed in sentences of around 21 words. The sentences consist mainly of alphabetical characters forming a varied vocabulary without special forms, such as digits or special characters.
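The formal features discussed above (sentence length, word length, conjunctions, digits and special characters) are straightforward to compute; the sketch below illustrates this kind of profiling with a small hand-picked conjunction list, not the paper's exact feature set.

```python
# Illustrative profiling of formal sentence features of the kind analysed in
# the paper: length, mean word length, conjunction count, digit/special chars.
import re

CONJUNCTIONS = {"and", "but", "or", "because", "although", "if", "while"}  # illustrative list

def formal_profile(sentence: str) -> dict:
    tokens = re.findall(r"[A-Za-z]+", sentence)
    return {
        "n_words": len(tokens),
        "mean_word_length": sum(map(len, tokens)) / max(len(tokens), 1),
        "n_conjunctions": sum(t.lower() in CONJUNCTIONS for t in tokens),
        "has_digits": bool(re.search(r"\d", sentence)),
        "has_special_chars": bool(re.search(r"[^\w\s.,;:!?'\"-]", sentence)),
    }

print(formal_profile("If the deal fails, prices will rise and trade will slow."))
```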
Mining arguments in scientific abstracts: Application to argumentative quality assessment
Argument mining consists in the automatic identification of argumentative structures in natural language, a task that has been recognized as particularly challenging in the scientific domain. In this work we propose SciARG, a new annotation scheme, and apply it to the identification of argumentative units and relations in abstracts in two scientific disciplines: computational linguistics and biomedicine, which allows us to assess the applicability of our scheme to different knowledge fields. We use our annotated corpus to train and evaluate argument mining models in various experimental settings, including single and multi-task learning. We investigate the possibility of leveraging existing annotations, including discourse relations and rhetorical roles of sentences, to improve the performance of argument mining models. In particular, we explore the potential offered by a sequential transfer-learning approach in which supplementary training tasks are used to fine-tune pre-trained parameter-rich language models. Finally, we analyze the practical usability of the automatically-extracted components and relations for the prediction of argumentative quality dimensions of scientific abstracts. Funding: Agencia Nacional de Investigación e Innovación; Ministerio de Economía, Industria y Competitividad (España).
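The sequential transfer-learning setup described above amounts to fine-tuning a pre-trained encoder on a supplementary task before re-fitting it on the target argument-mining task. A hedged sketch of that two-stage pattern with Hugging Face Transformers follows; the model name, label counts and checkpoint paths are illustrative assumptions, and the training loops themselves are omitted.

```python
# Sketch of the sequential transfer-learning idea: fine-tune an encoder on a
# supplementary task, save it, then reload it with a fresh classification head
# for the target argument-mining task. Model name and label counts are
# placeholders; the training loops themselves are omitted.
from transformers import AutoModelForSequenceClassification

base = "allenai/scibert_scivocab_uncased"          # any pre-trained encoder

# Stage 1: head for the supplementary task (e.g., 5 rhetorical roles).
aux_model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=5)
# ... fine-tune aux_model on the supplementary task, then:
aux_model.save_pretrained("aux_checkpoint")

# Stage 2: reload the fine-tuned encoder with a new head for the target task
# (e.g., 3 argumentative-unit types); mismatched head sizes are re-initialised.
arg_model = AutoModelForSequenceClassification.from_pretrained(
    "aux_checkpoint", num_labels=3, ignore_mismatched_sizes=True)
# ... fine-tune arg_model on the annotated argument-mining corpus.
```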
Large Language Models and Control Mechanisms Improve Text Readability of Biomedical Abstracts
Biomedical literature often uses complex language and inaccessible professional terminologies. That is why simplification plays an important role in improving public health literacy. Applying Natural Language Processing (NLP) models to automate such tasks allows for quick and direct accessibility for lay readers. In this work, we investigate the ability of state-of-the-art large language models (LLMs) on the task of biomedical abstract simplification, using the publicly available dataset for plain language adaptation of biomedical abstracts (PLABA). The methods applied include domain fine-tuning and prompt-based learning (PBL) on: 1) Encoder-decoder models (T5, SciFive, and BART), 2) Decoder-only GPT models (GPT-3.5 and GPT-4) from OpenAI and BioGPT, and 3) Control-token mechanisms on BART-based models. We used a range of automatic evaluation metrics, including BLEU, ROUGE, SARI, and BERTscore, and also conducted human evaluations. BART-Large with Control Token (BART-L-w-CT) mechanisms reported the highest SARI score of 46.54, and T5-base reported the highest BERTscore of 72.62. In human evaluation, BART-L-w-CTs achieved a better simplicity score than T5-Base (2.9 vs. 2.2), while T5-Base achieved a better meaning preservation score than BART-L-w-CTs (3.1 vs. 2.6). We also categorised the system outputs with examples, hoping this will shed some light on future research on this task. Our code, fine-tuned models, and data splits are available at https://github.com/HECTA-UoM/PLABA-MU
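For reference, automatic metrics such as SARI and BERTScore can be computed with the Hugging Face `evaluate` library; the source, prediction and reference sentences below are invented examples, not PLABA data.

```python
# Illustrative computation of SARI and BERTScore for a simplification system.
# The source/prediction/reference sentences are invented examples.
import evaluate

sources = ["Myocardial infarction is precipitated by coronary occlusion."]
predictions = ["A heart attack happens when a heart artery gets blocked."]
references = [["A heart attack is caused by a blocked heart artery."]]

sari = evaluate.load("sari")
print(sari.compute(sources=sources, predictions=predictions, references=references))

bertscore = evaluate.load("bertscore")
print(bertscore.compute(predictions=predictions,
                        references=[r[0] for r in references], lang="en"))
```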
Visual Analytics for the Exploratory Analysis and Labeling of Cultural Data
Cultural data can come in various forms and modalities, such as text traditions, artworks, music, crafted objects, or even as intangible heritage such as biographies of people, performing arts, cultural customs and rites.
The assignment of metadata to such cultural heritage objects is an important task that people working in galleries, libraries, archives, and museums (GLAM) do on a daily basis.
These rich metadata collections are used to categorize, structure, and study collections, but can also be used to apply computational methods.
Such computational methods are the focus of Computational and Digital Humanities projects and research.
For the longest time, the digital humanities community has focused on textual corpora, including text mining and other natural language processing techniques.
Some disciplines of the humanities, such as art history and archaeology, however, have a long history of using visualizations.
In recent years, the digital humanities community has started to shift the focus to include other modalities, such as audio-visual data.
In turn, methods in machine learning and computer vision have been proposed for the specificities of such corpora.
Over the last decade, the visualization community has engaged in several collaborations with the digital humanities, often with a focus on exploratory or comparative analysis of the data at hand.
This includes both methods and systems that support classical Close Reading of the material and Distant Reading methods that give an overview of larger collections, as well as methods in between, such as Meso Reading.
Furthermore, a wider application of machine learning methods can be observed on cultural heritage collections.
However, these methods are rarely applied together with visualizations to allow for further perspectives on the collections in a visual analytics or human-in-the-loop setting.
Visual analytics can help in the decision-making process by guiding domain experts through the collection of interest.
However, state-of-the-art supervised machine learning methods are often not applicable to the collection of interest due to missing ground truth.
One form of ground truth is class labels, e.g., labels for the entities depicted in an image collection, assigned to the individual images.
Labeling all objects in a collection is an arduous task when performed manually, because cultural heritage collections contain a wide variety of different objects with plenty of details.
A problem that arises with collections curated in different institutions is that a specific standard is not always followed, so the vocabularies used can drift apart from one another, making it difficult to combine the data from these institutions for large-scale analysis.
This thesis presents a series of projects that combine machine learning methods with interactive visualizations for the exploratory analysis and labeling of cultural data.
First, we define cultural data with regard to heritage and contemporary data, then we look at the state-of-the-art of existing visualization, computer vision, and visual analytics methods and projects focusing on cultural data collections.
After this, we present the problems addressed in this thesis and their solutions, starting with a series of visualizations to explore different facets of rap lyrics and rap artists with a focus on text reuse.
Next, we engage in a more complex case of text reuse, the collation of medieval vernacular text editions.
For this, a human-in-the-loop process is presented that applies word embeddings and interactive visualizations to perform textual alignments on under-resourced languages, supported by labeling of the relations between lines and the relations between words (a minimal sketch of such an embedding-based alignment follows this abstract).
We then switch the focus from textual data to another modality of cultural data by presenting a Virtual Museum that combines interactive visualizations and computer vision in order to explore a collection of artworks.
With the lessons learned from the previous projects, we engage in the labeling and analysis of medieval illuminated manuscripts and so combine some of the machine learning methods and visualizations that were used for textual data with computer vision methods.
Finally, we give reflections on the interdisciplinary projects and the lessons learned, before we discuss existing challenges when working with cultural heritage data from the computer science perspective, to outline potential research directions for machine learning and visual analytics of cultural heritage data.
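A minimal sketch of the embedding-based line alignment mentioned above, under the assumption that word vectors trained on the corpus are available; `load_word_vectors` is a placeholder and the greedy alignment rule is illustrative, not the thesis's actual method.

```python
# Sketch: align lines of two text versions by cosine similarity of averaged
# word vectors. `load_word_vectors` is a placeholder for any embedding model
# trained on the (under-resourced) language; the alignment rule is greedy.
import numpy as np

def load_word_vectors() -> dict[str, np.ndarray]:
    raise NotImplementedError("plug in embeddings trained on your corpus")

def line_vector(line: str, vectors: dict[str, np.ndarray], dim: int = 100) -> np.ndarray:
    # `dim` should match the dimensionality of the embedding model.
    words = [vectors[w] for w in line.lower().split() if w in vectors]
    return np.mean(words, axis=0) if words else np.zeros(dim)

def align_lines(version_a: list[str], version_b: list[str]) -> list[tuple[int, int]]:
    vectors = load_word_vectors()
    a = np.array([line_vector(l, vectors) for l in version_a])
    b = np.array([line_vector(l, vectors) for l in version_b])
    a /= np.linalg.norm(a, axis=1, keepdims=True) + 1e-9
    b /= np.linalg.norm(b, axis=1, keepdims=True) + 1e-9
    sim = a @ b.T                                   # cosine similarity matrix
    return [(i, int(sim[i].argmax())) for i in range(len(version_a))]
```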
Sentiment Analysis for Fake News Detection
In recent years, we have witnessed a rise in fake news, i.e., provably false pieces of information created with the intention of deception. The dissemination of this type of news poses a serious threat to cohesion and social well-being, since it fosters political polarization and distrust of people with respect to their leaders. The huge amount of news that is disseminated through social media makes manual verification unfeasible, which has promoted the design and implementation of automatic systems for fake news detection. The creators of fake news use various stylistic tricks to promote the success of their creations, one of them being to excite the sentiments of the recipients. This has led to sentiment analysis, the part of text analytics in charge of determining the polarity and strength of sentiments expressed in a text, being used in fake news detection approaches, either as a basis of the system or as a complementary element. In this article, we study the different uses of sentiment analysis in the detection of fake news, with a discussion of the most relevant elements and shortcomings, and the requirements that should be met in the near future, such as multilingualism, explainability, mitigation of biases, or treatment of multimedia elements. Funding: This work has been funded by FEDER/Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación through the ANSWERASAP project (TIN2017-85160-C2-1-R); and by Xunta de Galicia through a Competitive Reference Group grant (ED431C 2020/11). CITIC, as Research Center of the Galician University System, is funded by the Consellería de Educación, Universidade e Formación Profesional of the Xunta de Galicia through the European Regional Development Fund (ERDF/FEDER) with 80%, the Galicia ERDF 2014-20 Operational Programme, and the remaining 20% from the Secretaría Xeral de Universidades (ref. ED431G 2019/01). David Vilares is also supported by a 2020 Leonardo Grant for Researchers and Cultural Creators from the BBVA Foundation. Carlos Gómez-Rodríguez has also received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant No. 714150).
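As a concrete illustration of sentiment analysis used as a complementary element, the sketch below appends a sentiment polarity score to bag-of-words features of a simple classifier; it uses NLTK's VADER and invented headlines, not any system or data from the article.

```python
# Illustrative sketch: sentiment polarity as an extra feature alongside
# bag-of-words features in a fake-news classifier. Headlines are invented.
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

headlines = [
    "Government publishes quarterly economic report",
    "SHOCKING! Politician caught in outrageous scandal, citizens furious",
    "Central bank keeps interest rate unchanged",
    "You won't believe this disgusting secret they are hiding from you",
]
labels = [0, 1, 0, 1]  # 0 = reliable, 1 = fake (invented for illustration)

sia = SentimentIntensityAnalyzer()  # requires nltk.download("vader_lexicon")
tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(headlines).toarray()
X_sent = np.array([[sia.polarity_scores(h)["compound"]] for h in headlines])
X = np.hstack([X_text, X_sent])  # textual features + sentiment strength

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))
```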
- …