21 research outputs found
SuryaKiran at MEDIQA-Sum 2023: Leveraging LoRA for Clinical Dialogue Summarization
Finetuning large language models helps improve results for domain-specific use cases. End-to-end finetuning of large language models is time- and resource-intensive, and storing the finetuned version of the model imposes high storage requirements. Parameter-Efficient Fine-Tuning (PEFT) methods address the time and resource challenges by keeping the large language model as a fixed base and adding additional layers, which the PEFT methods finetune. This paper presents evaluation results for one such PEFT method, Low-Rank Adaptation (LoRA), applied to clinical dialogue summarization. The evaluation results show that LoRA performs on par with end-to-end finetuning of a large language model. The paper presents the evaluations done for solving both Subtask A and Subtask B of ImageCLEFmedical (https://www.imageclef.org/2023/medical).
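The low-rank update that LoRA applies can be sketched as follows (a minimal NumPy illustration of the technique, not the paper's implementation; the matrix names, sizes, and rank are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, rank = 64, 64, 4  # illustrative sizes; rank << d_in

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, init to zero

def lora_forward(x):
    # Frozen path plus low-rank update: (W + B @ A) @ x,
    # but only A and B (rank * (d_in + d_out) parameters) are trained.
    return W @ x + B @ (A @ x)

x = rng.normal(size=(d_in,))
# With B initialised to zero, the adapted layer matches the frozen one.
assert np.allclose(lora_forward(x), W @ x)
```

Because only A and B are stored per task, the storage cost per finetuned variant drops from d_out * d_in to rank * (d_in + d_out) parameters.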
A Quantitative Review on Language Model Efficiency Research
Language models (LMs) are being scaled up and becoming increasingly powerful. Improving their efficiency is one of the core research topics in neural information processing systems. Tay et al. (2022) provided a comprehensive overview of efficient Transformers, which have become an indispensable staple in the field of NLP. However, in their section "On Evaluation", they left open the question of "which fundamental efficient Transformer one should consider," answering that it is "still a mystery" because "many research papers select their own benchmarks." Unfortunately, there was no quantitative analysis of how these Transformers perform on any common benchmark. Moreover, state space models (SSMs) have demonstrated their ability to model long-range sequences with non-attention mechanisms, and they were not discussed in the prior review. This article presents a meta-analysis of the results from a set of papers on efficient Transformers as well as those on SSMs. It provides a quantitative review of LM efficiency research and gives suggestions for future research. Comment: 29 pages, 24 tables
Integrating Deep Contextualized Word Embeddings into Text Summarization Systems
This thesis applies deep learning techniques to one of the hardest problems in natural language processing: automatic text summarization. Given a body of text, the goal is to generate a summary that distils and compresses the information of the entire source text. Early approaches tried to capture the meaning of a text through human-written rules. After this rule-based symbolic era, statistical approaches took over. In recent years, deep learning has positively impacted every area of natural language processing, including automatic summarization. In this work, pointer-generator models [See et al., 2017] are used in combination with pre-trained deep contextualized word embeddings [Peters et al., 2018]. The approach is evaluated on the two largest summarization datasets currently available: the CNN/Daily Mail dataset and the Newsroom dataset. The CNN/Daily Mail dataset was derived from the question-answering dataset published by DeepMind [Hermann et al., 2015] by concatenating the highlight sentences of each news article to form multi-sentence summaries. The Newsroom dataset [Grusky et al., 2018] is, instead, the first dataset explicitly built for automatic summarization. It comprises one million article-summary pairs with varying degrees of extractiveness/abstractiveness and varying compression ratios. The approach is evaluated on the test sets using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) metric. It yields a substantial performance gain on the Newsroom dataset, achieving state-of-the-art ROUGE-1 and competitive ROUGE-2 and ROUGE-L scores.
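At its core, the ROUGE-1 recall used in evaluations like the one above is the fraction of reference unigrams also present in the generated summary (a minimal Python sketch of the idea, not the official ROUGE toolkit; real ROUGE adds stemming and other preprocessing):

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams covered by the candidate, with clipping."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum(min(n, cand_counts[w]) for w, n in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

print(rouge1_recall("the cat sat on the mat", "the cat lay on the mat"))  # 5 of 6 reference tokens covered
```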
State-of-the-art generalisation research in NLP: a taxonomy and review
The ability to generalise well is one of the primary desiderata of natural
language processing (NLP). Yet, what `good generalisation' entails and how it
should be evaluated is not well understood, nor are there any common standards
to evaluate it. In this paper, we aim to lay the groundwork to improve both of
these issues. We present a taxonomy for characterising and understanding
generalisation research in NLP, we use that taxonomy to present a comprehensive
map of published generalisation studies, and we make recommendations for which
areas might deserve attention in the future. Our taxonomy is based on an
extensive literature review of generalisation research, and contains five axes
along which studies can differ: their main motivation, the type of
generalisation they aim to solve, the type of data shift they consider, the
source by which this data shift is obtained, and the locus of the shift within
the modelling pipeline. We use our taxonomy to classify over 400 previous
papers that test generalisation, for a total of more than 600 individual
experiments. Considering the results of this review, we present an in-depth
analysis of the current state of generalisation research in NLP, and make
recommendations for the future. Along with this paper, we release a webpage
where the results of our review can be dynamically explored, and which we
intend to update as new NLP generalisation studies are published. With this
work, we aim to make steps towards making state-of-the-art generalisation
testing the new status quo in NLP. Comment: 35 pages of content + 53 pages of references
Measuring associational thinking through word embeddings
[EN] The development of a model to quantify semantic similarity and relatedness between words has been the major focus of many studies in various fields, e.g. psychology, linguistics, and natural language processing. Unlike the measures proposed by most previous research, this article aims to automatically estimate the strength of association between words that may or may not be semantically related. We demonstrate that the performance of the model depends not only on the combination of independently constructed word embeddings (namely, corpus- and network-based embeddings) but also on the way these word vectors interact. The research concludes that the weighted average of the cosine-similarity coefficients derived from independent word embeddings in a double vector space tends to yield high correlations with human judgements. Moreover, we demonstrate that evaluating word associations through a measure that relies not only on the rank ordering of word pairs but also on the strength of associations can reveal findings that go unnoticed by traditional measures such as Spearman's and Pearson's correlation coefficients.
Financial support for this research has been provided by the Spanish Ministry of Science, Innovation and Universities [grant number RTC 2017-6389-5], the Spanish "Agencia Estatal de Investigación" [grant number PID2020-112827GB-I00 / AEI / 10.13039/501100011033], and the European Union's Horizon 2020 research and innovation program [grant number 101017861: project SMARTLAGOON]. Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Periñán-Pascual, C. (2022). Measuring associational thinking through word embeddings. Artificial Intelligence Review, 55(3):2065-2102. https://doi.org/10.1007/s10462-021-10056-6
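The combination described above, a weighted average of cosine similarities from two independent embedding spaces, can be sketched as follows (a minimal NumPy illustration; the toy vectors and the weight alpha are assumptions, not the article's actual embeddings or tuned parameters):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association_strength(corpus_vecs, network_vecs, w1, w2, alpha=0.5):
    """Weighted average of cosine similarities from two embedding spaces."""
    sim_corpus = cosine(corpus_vecs[w1], corpus_vecs[w2])
    sim_network = cosine(network_vecs[w1], network_vecs[w2])
    return alpha * sim_corpus + (1 - alpha) * sim_network

# Toy vectors standing in for corpus- and network-based embeddings.
corpus = {"dog": np.array([1.0, 0.2]), "cat": np.array([0.9, 0.3])}
network = {"dog": np.array([0.5, 0.5]), "cat": np.array([0.4, 0.6])}
score = association_strength(corpus, network, "dog", "cat")
assert 0.0 <= score <= 1.0
```

With alpha=1.0 the measure falls back to the corpus-based space alone; the article's contribution lies in how the two spaces are weighted and combined.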
Neural Graph Transfer Learning in Natural Language Processing Tasks
Natural language is essential in our daily lives, as we rely on language to communicate and exchange information. A fundamental goal of natural language processing (NLP) is to let machines understand natural language, to help or replace human experts in mining knowledge and completing tasks. Many NLP tasks deal with sequential data; for example, a sentence is treated as a sequence of words. Very recently, deep learning-based language models (e.g., BERT \citep{devlin2018bert}) achieved significant improvements on many existing tasks, including text classification and natural language inference. However, not all tasks can be formulated with sequence models. Graph-structured data is also fundamental in NLP, appearing in entity linking, entity classification, relation extraction, abstract meaning representation, and knowledge graphs \citep{santoro2017simple,hamilton2017representation,kipf2016semi}. In such scenarios, BERT-based pretrained models may not be suitable. The Graph Convolutional Network (GCN) \citep{kipf2016semi} is a deep neural network model designed for graphs; it has shown great potential in text classification, link prediction, question answering, and more. This dissertation presents novel graph models for NLP tasks, including text classification, prerequisite chain learning, and coreference resolution. We focus on different perspectives of graph convolutional network modelling: for text classification, a novel graph construction method is proposed that makes the predictions interpretable; for prerequisite chain learning, we propose multiple aggregation functions that utilise neighbours for better information exchange; for coreference resolution, we study how graph pretraining can help when labelled data is limited. Moreover, an important branch is applying pretrained language models to these tasks.
This dissertation therefore also focuses on transfer learning methods that generalise pretrained models to other domains, including medical, cross-lingual, and web data. Finally, we propose a new task, unsupervised cross-domain prerequisite chain learning, and study novel graph-based methods to transfer knowledge over graphs.
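The graph convolution at the heart of these models propagates node features through a degree-normalised adjacency matrix, as in Kipf & Welling's GCN. A minimal NumPy sketch of one such layer, on an assumed toy graph rather than any dataset from the dissertation:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)    # aggregate neighbours, then ReLU

# Toy graph: 3 nodes in a path (0-1-2), 2-dim features, 2 hidden units.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3, 2)          # simple one-hot-style node features
W = np.full((2, 2), 0.5)  # illustrative weight matrix
out = gcn_layer(A, H, W)
assert out.shape == (3, 2)
```

Each output row mixes a node's own features with those of its neighbours, which is what lets the dissertation's models exchange information along graph edges.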
Report Linking: Information Extraction for Building Topical Knowledge Bases
Human language artifacts represent a plentiful source of rich, unstructured information created by reporters, scientists, and analysts. In this thesis we provide approaches for adding structure: extracting and linking entities, events, and relationships from a collection of documents about a common topic. We pursue this linking at two levels of abstraction. At the document level, we propose models for aligning the entities and events described in coherent and related discourses: these models are useful for deduplicating repeated claims, finding implicit arguments to events, and measuring semantic overlap between documents. Then, at a higher level of abstraction, we construct knowledge graphs containing salient entities and relations linked to supporting documents: these graphs can be augmented with facts and summaries to give users a structured understanding of the information in a large collection.
Automatic Image Captioning with Style
This thesis connects two core topics in machine learning: vision and language. The problem of choice is image caption generation: automatically constructing natural language descriptions of image content. Previous research into image caption generation has focused on generating purely descriptive captions; I focus on generating visually relevant captions with a distinct linguistic style. Captions with style have the potential to ease communication and add a new layer of personalisation.
First, I consider naming variations in image captions and propose a method for predicting context-dependent names that takes into account visual and linguistic information. This method makes use of a large-scale image caption dataset, which I also use to explore naming conventions and report naming conventions for hundreds of animal classes. Next, I propose the SentiCap model, which relies on recent advances in artificial neural networks to generate visually relevant image captions with positive or negative sentiment. To balance descriptiveness and sentiment, the SentiCap model dynamically switches between two recurrent neural networks, one tuned for descriptive words and one for sentiment words. As the first published model for generating captions with sentiment, SentiCap has influenced a number of subsequent works. I then investigate the sub-task of modelling styled sentences without images. The specific task chosen is sentence simplification: rewriting news article sentences to make them easier to understand. For this task I design a neural sequence-to-sequence model that can work with limited training data, using novel adaptations for word copying and sharing word embeddings. Finally, I present SemStyle, a system for generating visually relevant image captions in the style of an arbitrary text corpus. A shared term space allows a neural network for vision and content planning to communicate with a network for styled language generation. SemStyle achieves competitive results in human and automatic evaluations of descriptiveness and style.
As a whole, this thesis presents two complete systems for styled caption generation that are the first of their kind and demonstrate, for the first time, that automatic style transfer for image captions is achievable. Contributions also include novel ideas for object naming and sentence simplification. This thesis opens up inquiries into highly personalised image captions; large-scale visually grounded concept naming; and, more generally, styled text generation with content control.