LLM-in-the-loop: Leveraging Large Language Model for Thematic Analysis
Thematic analysis (TA) has been widely used for analyzing qualitative data in
many disciplines and fields. To ensure reliable analysis, the same piece of
data is typically assigned to at least two human coders. Moreover, to produce
meaningful and useful analysis, human coders develop and deepen their data
interpretation and coding over multiple iterations, making TA labor-intensive
and time-consuming. Recently, research in the emerging field of large language
models (LLMs) has shown that LLMs can replicate human-like behavior in various
tasks; in particular, LLMs outperform crowd workers on text-annotation tasks,
suggesting an opportunity to apply LLMs to TA. We propose a human-LLM
collaboration framework (i.e., LLM-in-the-loop) to conduct TA with in-context
learning (ICL). This framework provides prompts that frame discussions with an
LLM (e.g., GPT-3.5) to generate the final codebook for TA.
We demonstrate the utility of this framework using survey datasets on aspects
of the music listening experience and on password manager usage. Results of the
two case studies show that the proposed framework yields coding quality similar
to that of human coders while reducing TA's labor and time demands.
Comment: EMNLP 2023 Findings
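The abstract does not include code, but the in-context-learning step it
describes can be sketched roughly as below: a few human-coded examples are
placed in the prompt, and the LLM is asked to propose a thematic code for a new
survey response. The prompt wording, example data, and helper name are
assumptions for illustration, not the authors' actual protocol.

```python
# Hypothetical sketch of the ICL coding step; not the authors' released code.
# Assumes the official `openai` Python client and an OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# A few human-coded examples serve as the in-context demonstrations.
EXAMPLES = [
    ("I listen to lo-fi playlists while studying.", "music as focus aid"),
    ("Podcasts keep me company on my commute.", "listening to pass time"),
]

def propose_code(response_text: str) -> str:
    """Ask the LLM to assign a thematic code to one survey response."""
    demos = "\n".join(f'Response: "{r}"\nCode: {c}' for r, c in EXAMPLES)
    prompt = (
        "You are assisting with thematic analysis of survey data.\n"
        f"{demos}\n"
        f'Response: "{response_text}"\nCode:'
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the coding deterministic across runs
    )
    return reply.choices[0].message.content.strip()

# In the framework, human coders would review, merge, and refine the proposed
# codes over several iterations to arrive at the final codebook.
print(propose_code("I put on upbeat songs to get through chores."))
```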
Is Explanation the Cure? Misinformation Mitigation in the Short Term and Long Term
With advancements in natural language processing (NLP) models, automatic
explanation generation has been proposed to mitigate misinformation on social
media platforms, complementing the warning labels attached to identified fake news.
While many researchers have focused on generating good explanations, whether
these explanations actually help humans combat fake news remains under-explored. In this
study, we compare the effectiveness of a warning label and the state-of-the-art
counterfactual explanations generated by GPT-4 in debunking misinformation. In
a two-wave online human-subject study, participants (N = 215) were randomly
assigned to a control group, in which false content was shown without any
intervention; a warning-tag group, in which the false claims were labeled; or an
explanation group, in which the false content was accompanied by GPT-4-generated
explanations. Our results show that both interventions significantly
decreased participants' self-reported belief in fake claims, and to an
equivalent degree in both the short term and the long term. We discuss the implications of our
findings and directions for future NLP-based misinformation debunking
strategies.
Comment: EMNLP 2023 Findings
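The study's prompts are not reproduced in the abstract, so the sketch below is
only a loose illustration of the explanation condition: it asks GPT-4 to write
a counterfactual explanation of why a claim is false, given fact-check
evidence. The prompt wording, function name, and sample claim are invented for
the example.

```python
# Hypothetical sketch of the explanation condition; the prompt wording below
# is an assumption, not the study's published materials.
from openai import OpenAI

client = OpenAI()

def counterfactual_explanation(claim: str, evidence: str) -> str:
    """Ask GPT-4 what would need to be true for the claim to hold, and why it is not."""
    prompt = (
        "The following claim has been fact-checked as false.\n"
        f"Claim: {claim}\n"
        f"Evidence: {evidence}\n"
        "Write a brief counterfactual explanation: state what would have to be "
        "true for the claim to be accurate, and why the evidence rules that out."
    )
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()

# Invented claim, for illustration only.
print(counterfactual_explanation(
    claim="Drinking hot water cures the flu.",
    evidence="No clinical trial has found that water temperature affects influenza recovery.",
))
```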
All the wiser: Fake news intervention using user reading preferences
National Research Foundation (NRF) Singapore under International Research Centres in Singapore Funding Initiative