Aspect-Controlled Neural Argument Generation
We rely on arguments in our daily lives to deliver our opinions and base them
on evidence, making them more convincing in turn. However, finding and
formulating arguments can be challenging. In this work, we train a language
model for argument generation that can be controlled on a fine-grained level to
generate sentence-level arguments for a given topic, stance, and aspect. We
define argument aspect detection as a necessary method to enable this
fine-grained control and crowdsource a dataset with 5,032 arguments annotated
with aspects. Our evaluation shows that our generation model is able to
generate high-quality, aspect-specific arguments. Moreover, these arguments can
be used to improve the performance of stance detection models via data
augmentation and to generate counter-arguments. We publish all datasets and
code to fine-tune the language model.
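The fine-grained control described above is typically implemented by conditioning the model on serialized control codes. A minimal sketch of the idea in Python (all token names here are hypothetical; the paper's actual input format may differ):

```python
def build_control_prefix(topic: str, stance: str, aspect: str) -> str:
    """Serialize control codes into a conditioning prefix. A language
    model fine-tuned on such prefixes learns to continue them with a
    sentence-level argument matching the topic, stance, and aspect."""
    return f"[TOPIC] {topic} [STANCE] {stance} [ASPECT] {aspect} [ARGUMENT]"

# The fine-tuned model would be asked to continue this prefix:
prefix = build_control_prefix("nuclear energy", "con", "waste disposal")
print(prefix)
```

At generation time the model's continuation after `[ARGUMENT]` is taken as the aspect-specific argument.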
Computational Persuasion using Chatbots based on Crowdsourced Argument Graphs & Concerns
As computing becomes involved in every sphere of life, so too is persuasion
a target for applying computer-based solutions. Conversational agents, also
known as chatbots, are versatile tools that have the potential to be used
as agents in dialogical argumentation systems, where the chatbot acts as the
persuader and the human agent as the persuadee, thereby offering a
cost-effective and scalable alternative to in-person consultations.
To allow the user to type his or her argument as free-text input (as opposed
to selecting arguments from a menu), the chatbot needs to be able to (1)
'understand' the concern the user is raising in their argument and (2)
give an appropriate counterargument that addresses the user's concern.
In this thesis I describe (1) how to acquire arguments for the construction
of the chatbot's knowledge base with the help of crowdsourcing, (2) how to
automatically identify the concerns that arguments address, and (3) how to
construct the chatbot's knowledge base in the form of an argument graph that
can be used during persuasive dialogues with users.
I evaluated my methods in four case studies that covered several domains
(physical activity, meat consumption, UK University Fees and COVID-19
vaccination). In each case study I implemented a chatbot that engaged in
argumentative dialogues with participants and measured the participants' change
of stance before and after engaging in a chat with the bot. In all four case
studies the chatbot showed statistically significant success in persuading
people either to consider changing their behaviour or to change their stance.
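The concern-indexed knowledge base described in steps (2) and (3) can be pictured as a small lookup structure. The sketch below is only an illustration of that retrieval step (names and fallback text are hypothetical, not the thesis's implementation):

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentGraph:
    # Maps a concern label to a counterargument that attacks
    # user arguments raising that concern.
    counters: dict = field(default_factory=dict)

    def add_counter(self, concern: str, counterargument: str) -> None:
        self.counters[concern] = counterargument

    def respond(self, detected_concern: str) -> str:
        # Once the user's concern has been identified, retrieve the
        # attacking argument; fall back to a clarifying question.
        return self.counters.get(
            detected_concern, "Could you say more about your concern?")

graph = ArgumentGraph()
graph.add_counter("cost", "Walking to work is free and saves transit fares.")
print(graph.respond("cost"))
```

In the actual systems, the concern label would come from an automatic concern-identification model rather than being given directly.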
Revisiting the Role of Similarity and Dissimilarity in Best Counter Argument Retrieval
This paper studies the task of best counter-argument retrieval given an input
argument. Following the definition that the best counter-argument addresses the
same aspects as the input argument while having the opposite stance, we aim to
develop an efficient and effective model for scoring counter-arguments based on
similarity and dissimilarity metrics. We first conduct an experimental study on
the effectiveness of available scoring methods, including traditional
Learning-To-Rank (LTR) and recent neural scoring models. We then propose
Bipolar-encoder, a novel BERT-based model to learn an optimal representation
for simultaneous similarity and dissimilarity. Experimental results show that
our proposed method achieves an accuracy@1 of 49.04%, outperforming other
baselines by a large margin. When combined with an appropriate caching
technique, Bipolar-encoder is comparably efficient at prediction time.
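The retrieval criterion above (same aspects, opposite stance) can be expressed as a simple score over two vector views of each argument. This is only a sketch of the intuition with hand-made toy vectors, not the authors' Bipolar-encoder:

```python
import math

def cosine(u, v):
    """Plain cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def counter_score(query, candidate):
    """Reward topical similarity, penalize stance similarity.
    query/candidate hold hypothetical 'topic' and 'stance' embeddings."""
    return (cosine(query["topic"], candidate["topic"])
            - cosine(query["stance"], candidate["stance"]))

query = {"topic": [1.0, 0.0], "stance": [1.0, 0.0]}
on_topic_opposed = {"topic": [1.0, 0.0], "stance": [-1.0, 0.0]}
off_topic_agreeing = {"topic": [0.0, 1.0], "stance": [1.0, 0.0]}

# The on-topic, opposite-stance candidate scores highest.
print(counter_score(query, on_topic_opposed))    # 2.0
print(counter_score(query, off_topic_agreeing))  # -1.0
```

A learned model replaces the hand-made vectors with representations trained so that this combined objective ranks the best counter-argument first.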
Cognitive load of critical thinking strategies
Critical thinking is important for today's life, where individuals daily face unlimited amounts of information, complex problems, and rapid technological and social changes. Therefore, critical thinking should be the focus of general education and educators' efforts (Angeli & Valanides, 2009; Oliver & Utermohlen, 1995). Rather than passively agreeing or disagreeing with a line of reasoning, critical thinkers use analytical skills to comprehend and evaluate its merits, considering strengths and weaknesses. Critical thinkers also analyze arguments, recognizing the essentiality of asking for reasons and considering alternative views, and develop their own point of view (Paul, 1990). Kuhn and Udell (2007) emphasize that the ability to participate in sound argument is central to critical thinking and is essential to skilled decision making.
Nussbaum and Schraw (2007) emphasized that effective argumentation includes not only considering counterarguments but also evaluating, weighing, and combining the arguments and counterarguments into support for a final conclusion. Nussbaum and Schraw called this process argument-counterargument integration. The authors identified three strategies that could be used to construct an integrative argument in the context of writing reflective essays: the refutation, weighing, and design claim strategies. They also developed a graphic organizer called the argumentation vee diagram (AVD) for helping students write reflective essays.
This study focuses on the weighing and design claim strategies. In the weighing strategy, an arguer can argue that the weight of reasons and evidence on one side of the issue is stronger than that on the other side. In the design claim strategy, a reasoner tends to form her opinion or conclusion by supporting one side of an argument (adopting its advantages) while eliminating or reducing the disadvantages of the counterargument side. Drawing on other definitions of argumentation, I define argumentation in this study as a reasoning tool of evaluation through giving reasons and evidence for one's own positions and evaluating counterarguments of different ideas for different views.
In cognitive psychology, cognitive load theory (CLT) seems to provide a promising framework for studying and increasing our knowledge about cognitive functioning and learning activities. CLT contributes to education and learning by using human cognitive architecture to understand the design of instruction. It assumes limited working memory resources when information is being processed (Sweller & Chandler, 1994; Sweller, Van Merriënboer & Paas, 1998; Van Merriënboer & Sweller, 2005).
The present research study addressed two research questions: (1) What is the cognitive load imposed by two different argument-counterargument integration strategies (weighing and constructing a design claim)? (2) What is the impact of using the AVDs on the amount of cognitive load, compared to using a less diagrammatic structure (a linear list)?
It is hypothesized that the weighing strategy would impose greater cognitive load, as measured by a mental effort rating scale and time, than the design claim strategy. As proposed by Nussbaum (2008), when using the weighing strategy a larger number of disparate (non-integrative) elements must be coordinated and maintained in working memory. It is also hypothesized that the AVDs would reduce cognitive load compared to a linear list, by helping individuals better connect, organize, and remember information (various arguments) (Rulea, Baldwin & Schell, 2008), thereby freeing up processing capacity for essential cognitive processing (Stull & Mayer, 2007).
The experimental design of the study consisted of four experimental groups that used the strategies and two control groups. I tested the hypotheses of the study using a randomized 2x3 factorial ANOVA design (two strategy prompts x AVD and non-AVD), with a control group included in each factor. Need for cognition (NFC), a construct reflecting the tendency to enjoy and engage in effortful cognitive processing (Petty & Cacioppo, 1986), was measured and used as an indication of participants' tendency to put forth cognitive effort.
Thinking and argument-counterargument integration processes took place through an electronic discussion board (WebCampus), considering an analysis question about a grading issue: "Should students be graded on class participation?" I chose that analysis question because it represents an issue that is meaningful and important for college students, in that they can relate to it and engage easily in thinking about it.
The results for the first research question pointed to a significant relationship between the complexity of an essay, as measured by the complexity of weighing refutations, and cognitive load, as measured by time and a cognitive load scale. Weighing refutations also involved more mental effort than design claims, even when controlling for the complexity of the arguments. The results also revealed a significant interaction effect for NFC.
The results for the second research question were non-significant. The linear list used by the control group was as productive as the AVDs: there was no difference between the control and experimental groups in the amount of cognitive load they reported, in terms of mental effort and time spent on the thinking and integration process. Measuring the cognitive load of different argument-counterargument integration strategies will help inform instructional efforts on how best to teach these strategies, help design effective instructional techniques for teaching critical thinking, and provide theoretical insight into the cognitive processes involved in using these strategies.
Analysis of Discourse Structure and Logical Structure in Argumentative Writing
Doctoral thesis (Information Sciences), Tohoku University
Exploring the Potential of Large Language Models in Computational Argumentation
Computational argumentation has become an essential tool in various fields,
including artificial intelligence, law, and public policy. It is an emerging
research field in natural language processing (NLP) that attracts increasing
attention. Research on computational argumentation mainly involves two types of
tasks: argument mining and argument generation. As large language models (LLMs)
have demonstrated strong abilities in understanding context and generating
natural language, it is worthwhile to evaluate the performance of LLMs on
various computational argumentation tasks. This work aims to embark on an
assessment of LLMs, such as ChatGPT, Flan models and LLaMA2 models, under
zero-shot and few-shot settings within the realm of computational
argumentation. We organize existing tasks into six main classes and standardise
the format of 14 open-sourced datasets. In addition, we present a new benchmark
dataset on counter-speech generation that aims to holistically evaluate the
end-to-end performance of LLMs on argument mining and argument generation.
Extensive experiments show that LLMs exhibit commendable performance across
most of these datasets, demonstrating their capabilities in the field of
argumentation. We also highlight the limitations in evaluating computational
argumentation and provide suggestions for future research directions in this
field.
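Zero- and few-shot evaluation of this kind usually reduces to assembling a task prompt from an instruction, optional demonstrations, and a query. A minimal, model-agnostic sketch (the field names and template are assumptions, not the benchmark's actual format):

```python
def build_prompt(instruction: str, examples: list, query: str) -> str:
    """With examples=[] this is a zero-shot prompt; otherwise few-shot,
    with one Input/Output pair per demonstration."""
    blocks = [instruction]
    blocks += [f"Input: {x}\nOutput: {y}" for x, y in examples]
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)

# One-shot stance classification prompt (toy example):
print(build_prompt(
    "Classify the stance of the argument as 'pro' or 'con'.",
    [("Nuclear power emits little CO2. (topic: nuclear energy)", "pro")],
    "Reactor accidents have catastrophic consequences. (topic: nuclear energy)"))
```

The same template serves all task classes by swapping the instruction and the demonstration pairs, which is what makes a standardised dataset format useful.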
Effects of Teaching Argument to First-Year Community-College Students Using a Structural and Dialectical Approach
The purpose of this study was to measure to what extent an experimental method of teaching argument, incorporating elements from both Toulmin's (2004) structural approach and Walton's (2013) dialectical approach, affects first-year college students' ability to write strong arguments. This experimental instruction used critical questioning as a strategy for building a strong argument, incorporating alternative viewpoints, and creating a dialogue between claims and counterclaims, backed logically by verifiable evidence from reliable sources.
Using the Analytic Scoring Rubric of Argumentative Writing (ASRAW; Stapleton & Wu, 2015), which includes the argument elements of claims, data, counterclaim, counterclaim data, rebuttal claim, and rebuttal data, the efficacy of the experimental instruction method was evaluated by collecting and scoring students' pre- and post-outlines of arguments on topics involving controversial issues and students' argument research-paper outlines. Scores on these three sets of outlines in each class included in the study (Spring 2020, n=20; Fall 2020, n=23) were compared to investigate the efficacy of using the experimental instructional approach. The rubric analysis was based on outlines that incorporate the basic elements of a strong argument as defined above, both before and after this instructional method was employed.
The instruction was designed to develop students' understanding of bias in the context of building an argument by helping students learn to explore and integrate alternative viewpoints, to reflect on their own assumptions, to discover bias in sources, and ultimately to build strong arguments from reliable sources that take more than one perspective into account. The instruction consisted of an interactive lecture and pair and group work on a controversial issue in class.
This study took place at a medium-sized community college in an 'extended' 6-unit composition course designed for students needing more support than a traditional 3- or 4-unit first-year English Composition course. The student population of this community college and of this course was very diverse and representative of Northern California's demographics, with many students being first- or second-generation immigrants, from economically disadvantaged backgrounds, the first in their family to attend college, or a combination.
Overall, based on the paired-sample t tests, the differences for the pre- and post-outline pair and for the pre-outline and research-paper-outline pair, on total scores and on the counter-argument-and-evidence and rebuttal-and-evidence scores, were statistically significant for both the Spring and Fall 2020 classes, with these exceptions: the post-outline and research-paper-outline pair in Fall 2020 for total and counter-argument-and-evidence scores, and the pre- and post-outline and post-outline and research-paper-outline pairs for rebuttal and rebuttal-evidence scores. Effect sizes, as measured by Cohen's d, for the statistically significant pairs were all large, ranging from 0.80 to 1.26, except for counter-argument and counter-argument evidence for the pre- and post-outlines of the Spring 2020 class, which were both medium, ranging from 0.58 to 0.65.