PDFTriage: Question Answering over Long, Structured Documents
Large Language Models (LLMs) struggle with document question answering (QA)
when a document cannot fit within the LLM's limited context length. To overcome
this issue, most existing works focus on retrieving the relevant context from
the document and representing it as plain text. However, documents such as
PDFs, web pages, and presentations are
naturally structured with different pages, tables, sections, and so on.
Representing such structured documents as plain text is incongruous with the
user's mental model of these richly structured documents. When a system has
to query the document for context, this incongruity is brought to the fore, and
seemingly trivial questions can trip up the QA system. To bridge this
fundamental gap in handling structured documents, we propose an approach called
PDFTriage that enables models to retrieve the context based on either structure
or content. Our experiments demonstrate the effectiveness of the proposed
PDFTriage-augmented models across several classes of questions where existing
retrieval-augmented LLMs fail. To facilitate further research on this
fundamental problem, we release our benchmark dataset consisting of 900+
human-generated questions over 80 structured documents from 10 different
categories of question types for document QA.
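A minimal sketch of the structure-aware retrieval idea described above, assuming a simple document schema; the class and method names are illustrative and not the authors' PDFTriage implementation:

```python
# Illustrative sketch: retrieving context by structure or by content from a
# structured document, as an LLM-callable interface (names are assumptions).
from dataclasses import dataclass, field


@dataclass
class Element:
    kind: str        # e.g. "section", "table", "page"
    identifier: str  # e.g. a section title or "Table 2"
    text: str


@dataclass
class StructuredDocument:
    elements: list[Element] = field(default_factory=list)

    def fetch_by_structure(self, kind: str, identifier: str) -> str:
        """Return the text of a specific structural element (e.g. a named table)."""
        for el in self.elements:
            if el.kind == kind and el.identifier == identifier:
                return el.text
        return ""

    def fetch_by_content(self, query: str) -> str:
        """Naive keyword match over element text; a real system could use embeddings."""
        hits = [el.text for el in self.elements if query.lower() in el.text.lower()]
        return "\n\n".join(hits)


# Usage: the model first sees the document outline, then issues a targeted fetch.
doc = StructuredDocument([Element("table", "Table 2", "metric | value\nF1 | 0.82")])
print(doc.fetch_by_structure("table", "Table 2"))
```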
Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback
A key technology for the development of large language models (LLMs) involves
instruction tuning that helps align the models' responses with human
expectations to realize impressive learning abilities. The two major approaches to
instruction tuning are supervised fine-tuning (SFT) and reinforcement
learning from human feedback (RLHF), which are currently applied to produce the
best commercial LLMs (e.g., ChatGPT). To improve the accessibility of LLMs for
research and development efforts, various instruction-tuned open-source LLMs
have also been introduced recently, e.g., Alpaca, Vicuna, to name a few.
However, existing open-source LLMs have only been instruction-tuned for English
and a few popular languages, thus limiting their impact on and accessibility for
many other languages in the world. Among the few very recent works exploring
instruction tuning for LLMs in multiple languages, SFT has been used as the
only approach. This has left a
significant gap for fine-tuned LLMs based on RLHF in diverse languages and
raised important questions on how RLHF can boost the performance of
multilingual instruction tuning. To overcome this issue, we present Okapi, the
first system with instruction-tuned LLMs based on RLHF for multiple languages.
Okapi introduces instruction and response-ranked data in 26 diverse languages
to facilitate the experiments and development of future multilingual LLM
research. We also present benchmark datasets to enable the evaluation of
generative LLMs in multiple languages. Our experiments demonstrate the
advantages of RLHF for multilingual instruction tuning over SFT for different base
models and datasets. Our framework and resources are released at
https://github.com/nlp-uoregon/Okapi
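As a rough illustration of the response-ranking signal that RLHF-style pipelines rely on, the sketch below scores candidate responses with a scalar reward model; the checkpoint name is a placeholder and this is not an official Okapi component:

```python
# Hedged sketch: ranking candidate responses with a scalar reward model, the kind
# of signal used in the RLHF stage of instruction tuning. The checkpoint name is
# a placeholder, not a released Okapi model.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

reward_name = "your-org/multilingual-reward-model"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(reward_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name, num_labels=1)

def score(prompt: str, response: str) -> float:
    """Return a scalar reward for a prompt/response pair."""
    inputs = tokenizer(prompt, response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return reward_model(**inputs).logits.squeeze().item()

candidates = ["Respuesta A ...", "Respuesta B ..."]
ranked = sorted(candidates, key=lambda r: score("¿Qué es un LLM?", r), reverse=True)
```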
Fairness-Aware Graph Neural Networks: A Survey
Graph Neural Networks (GNNs) have become increasingly important due to their
representational power and state-of-the-art predictive performance on many
fundamental learning tasks. Despite this success, GNNs suffer from fairness
issues that arise as a result of the underlying graph data and the fundamental
aggregation mechanism that lies at the heart of the large class of GNN models.
In this article, we examine and categorize fairness techniques for improving
the fairness of GNNs. Previous work on fair GNN models and techniques is
discussed in terms of whether it focuses on improving fairness during a
preprocessing step, during training, or in a post-processing phase.
Furthermore, we discuss how such techniques can be used together whenever
appropriate, and highlight their advantages and underlying intuition. We also
introduce an intuitive taxonomy for fairness evaluation metrics including
graph-level fairness, neighborhood-level fairness, embedding-level fairness,
and prediction-level fairness metrics. In addition, graph datasets that are
useful for benchmarking the fairness of GNN models are summarized succinctly.
Finally, we highlight key open problems and challenges that remain to be
addressed.
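As a concrete example of a prediction-level fairness metric from the taxonomy above, the following sketch computes the statistical parity difference over a GNN's node predictions; the variable names and data are illustrative:

```python
# Minimal sketch: statistical parity difference between two sensitive groups,
# computed over binary node predictions from a trained GNN (data is illustrative).
import numpy as np

def statistical_parity_difference(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between sensitive groups 0 and 1."""
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return float(abs(rate_0 - rate_1))

y_pred = np.array([1, 0, 1, 1, 0, 1])      # predictions for six nodes
sensitive = np.array([0, 0, 0, 1, 1, 1])   # sensitive group label per node
print(statistical_parity_difference(y_pred, sensitive))  # 0.0 indicates parity
```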
CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages
The driving factors behind the development of large language models (LLMs)
with impressive learning capabilities are their colossal model sizes and
extensive training datasets. Along with the progress in natural language
processing, LLMs have been frequently made accessible to the public to foster
deeper investigation and applications. However, when it comes to training
datasets for these LLMs, especially the recent state-of-the-art models, they
are often not fully disclosed. Creating training data for high-performing LLMs
involves extensive cleaning and deduplication to ensure the necessary level of
quality. The lack of transparency for training data has thus hampered research
on attributing and addressing hallucination and bias issues in LLMs, hindering
replication efforts and further advancements in the community. These challenges
become even more pronounced in multilingual learning scenarios, where the
available multilingual text datasets are often inadequately collected and
cleaned. Consequently, there is a lack of open-source and readily usable
datasets to effectively train LLMs in multiple languages. To overcome this
issue, we present CulturaX, a substantial multilingual dataset with 6.3
trillion tokens in 167 languages, tailored for LLM development. Our dataset
undergoes meticulous cleaning and deduplication through a rigorous pipeline of
multiple stages to accomplish the best quality for model training, including
language identification, URL-based filtering, metric-based cleaning, document
refinement, and data deduplication. CulturaX is fully released to the public in
HuggingFace to facilitate research and advancements in multilingual LLMs:
https://huggingface.co/datasets/uonlp/CulturaX
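A minimal sketch of loading the released dataset from the Hugging Face Hub in streaming mode; the language configuration name and the text field are assumptions based on typical Hub datasets:

```python
# Sketch: streaming a single-language slice of CulturaX from the Hugging Face Hub.
# The "en" config and the "text" field name are assumptions, not verified here.
from datasets import load_dataset

dataset = load_dataset("uonlp/CulturaX", "en", split="train", streaming=True)
for i, example in enumerate(dataset):
    print(example["text"][:200])
    if i >= 2:
        break
```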
Self-Debiasing Large Language Models: Zero-Shot Recognition and Reduction of Stereotypes
Large language models (LLMs) have shown remarkable advances in language
generation and understanding but are also prone to exhibiting harmful social
biases. While recognition of these behaviors has generated an abundance of bias
mitigation techniques, most require modifications to the training data, model
parameters, or decoding strategy, which may be infeasible without access to a
trainable model. In this work, we leverage the zero-shot capabilities of LLMs
to reduce stereotyping in a technique we introduce as zero-shot self-debiasing.
With two approaches, self-debiasing via explanation and self-debiasing via
reprompting, we show that self-debiasing can significantly reduce the degree of
stereotyping across nine different social groups while relying only on the LLM
itself and a simple prompt, with explanations correctly identifying invalid
assumptions and reprompting delivering the greatest reductions in bias. We hope
this work opens inquiry into other zero-shot techniques for bias mitigation.
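A minimal sketch of the reprompting variant, assuming any text-completion function stands in for the LLM call; the prompt wording is illustrative, not the paper's exact template:

```python
# Illustrative sketch of zero-shot self-debiasing via reprompting: answer once,
# then ask the model to re-answer without relying on stereotypes.
# `generate` is any prompt-to-text LLM call; the wording below is an assumption.
from typing import Callable

def self_debias_reprompt(generate: Callable[[str], str], question: str) -> str:
    first_answer = generate(question)
    reprompt = (
        f"Question: {question}\n"
        f"Previous answer: {first_answer}\n"
        "Remove any reliance on stereotypes about social groups and answer again."
    )
    return generate(reprompt)
```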
Bias and Fairness in Large Language Models: A Survey
Rapid advancements of large language models (LLMs) have enabled the
processing, understanding, and generation of human-like text, with increasing
integration into systems that touch our social sphere. Despite this success,
these models can learn, perpetuate, and amplify harmful social biases. In this
paper, we present a comprehensive survey of bias evaluation and mitigation
techniques for LLMs. We first consolidate, formalize, and expand notions of
social bias and fairness in natural language processing, defining distinct
facets of harm and introducing several desiderata to operationalize fairness
for LLMs. We then unify the literature by proposing three intuitive taxonomies,
two for bias evaluation, namely metrics and datasets, and one for mitigation.
Our first taxonomy of metrics for bias evaluation disambiguates the
relationship between metrics and evaluation datasets, and organizes metrics by
the different levels at which they operate in a model: embeddings,
probabilities, and generated text. Our second taxonomy of datasets for bias
evaluation categorizes datasets by their structure as counterfactual inputs or
prompts, and identifies the targeted harms and social groups; we also release a
consolidation of publicly-available datasets for improved access. Our third
taxonomy of techniques for bias mitigation classifies methods by their
intervention during pre-processing, in-training, intra-processing, and
post-processing, with granular subcategories that elucidate research trends.
Finally, we identify open problems and challenges for future work. Synthesizing
a wide range of recent research, we aim to provide a clear guide of the
existing literature that empowers researchers and practitioners to better
understand and prevent the propagation of bias in LLMs.
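To make the probability-based level of the metrics taxonomy concrete, the sketch below compares a causal LM's log-likelihood on counterfactual sentences that differ only in a social-group term; the model and sentence pair are illustrative:

```python
# Hedged sketch: a probability-based bias check that compares (approximate)
# sentence log-likelihoods for a counterfactual pair. Model choice is illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_log_likelihood(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item() * ids.size(1)       # approximate total log-likelihood

pair = ("The doctor said she would be late.", "The doctor said he would be late.")
print([round(sentence_log_likelihood(s), 2) for s in pair])
```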
Associations between Th1-mediated diseases and omalizumab: A post-marketing study of the World Health Organization pharmacovigilance database (VigiBase (R))
NLP and deep learning methods for curbing the spread of misinformation in India
The current fight against COVID-19 is not only about its prevention and cure but also about mitigating the negative impact of the misinformation surrounding it. The pervasiveness of social media and access to smartphones have propelled the spread of misinformation on such a large scale that it is considered one of the main threats to our society by the World Economic Forum. This ‘Infodemic’ has caused widespread rumors, fueled practices that can jeopardize one’s health, and has even resulted in hate violence in certain parts of the world. We built an engine that can match incoming text, which may contain correct or incorrect information, against a known repository of misinformation. By matching texts on embeddings generated using BERT, we evaluated paraphrased texts to see whether they matched texts previously labeled as misinformation. Further, we augmented an existing data corpus of texts by tagging each piece of misinformation with one or more impact categories. If we can predict the particular ramifications of a given type of misinformation, we may be able to take specific actions to avert its consequences.
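A minimal sketch of the embedding-matching idea, assuming a sentence-embedding model and a cosine-similarity threshold; the paper describes BERT-based embeddings, so the specific library, model name, and threshold here are illustrative substitutions:

```python
# Illustrative sketch: matching incoming text against a repository of known
# misinformation via sentence embeddings and cosine similarity.
# Model name and threshold are assumptions, not the paper's exact setup.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
known_misinformation = [
    "Drinking hot water cures the virus.",
    "The outbreak was caused by 5G towers.",
]
known_embeddings = model.encode(known_misinformation, convert_to_tensor=True)

def matches_misinformation(text: str, threshold: float = 0.7) -> bool:
    """Return True if the text is close to any known misinformation item."""
    query = model.encode(text, convert_to_tensor=True)
    return bool(util.cos_sim(query, known_embeddings).max() >= threshold)

print(matches_misinformation("Hot water will cure you of the virus."))
```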