Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification
Implicit discourse relation classification is highly challenging due to the
lack of connectives as strong linguistic cues, which motivates the use of
annotated implicit connectives to improve recognition. We propose a feature
imitation framework in which an implicit relation network is driven to learn
from another neural network with access to connectives, and is thus encouraged
to extract similarly salient features for accurate classification. We develop
an adversarial model to enable an adaptive imitation scheme through competition
between the implicit network and a rival feature discriminator. Our method
effectively transfers the discriminability of connectives to the implicit
features and achieves state-of-the-art performance on the PDTB benchmark.
Comment: To appear in ACL201
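The adversarial imitation scheme described in this abstract can be sketched as a minimal GAN-style pair of objectives. This is a toy illustration under assumed conventions, not the authors' implementation: the scores would in practice come from a neural discriminator over encoder features, and all function names here are hypothetical.

```python
import math

def sigmoid(x):
    # Squash a real-valued discriminator score into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def discriminator_loss(score_connective, score_implicit):
    # The rival feature discriminator tries to tell features from the
    # connective-augmented network (label 1) apart from features produced
    # by the implicit network (label 0): standard binary cross-entropy.
    return -(math.log(sigmoid(score_connective))
             + math.log(1.0 - sigmoid(score_implicit)))

def imitation_loss(score_implicit):
    # The implicit network is trained to fool the discriminator, i.e. to
    # make its features look like connective-informed ones (score -> 1).
    return -math.log(sigmoid(score_implicit))
```

Training would alternate between minimizing these two losses, so the implicit network's features gradually become as discriminative as those extracted with access to connectives.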
Automatic Article Commenting: the Task and Dataset
Comments on online articles provide extended views and improve user
engagement. Automatically generating comments thus becomes a valuable
functionality for online forums, intelligent chatbots, etc. This paper
proposes the new task of automatic article commenting and introduces a
large-scale Chinese dataset with millions of real comments and a
human-annotated subset characterizing the comments' varying quality.
Incorporating human bias about comment quality, we further develop automatic
metrics that generalize a broad set of popular reference-based metrics and
exhibit greatly improved correlations with human evaluations.
Comment: ACL2018; with supplements; dataset link available in the paper
MacGyver: Are Large Language Models Creative Problem Solvers?
We explore the creative problem-solving capabilities of modern LLMs in a
novel constrained setting. To this end, we create MACGYVER, an automatically
generated dataset consisting of over 1,600 real-world problems deliberately
designed to trigger innovative usage of objects and necessitate out-of-the-box
thinking. We then present our collection to both LLMs and humans to compare and
contrast their problem-solving abilities. MACGYVER is challenging for both
groups, but in unique and complementary ways. For instance, humans excel in
tasks they are familiar with but struggle with domain-specific knowledge,
leading to a higher variance. In contrast, LLMs, exposed to a variety of
specialized knowledge, attempt broader problems but fail by proposing
physically infeasible actions. Finally, we provide a detailed error analysis
of LLMs and demonstrate the potential of enhancing their problem-solving
ability with novel prompting techniques such as iterative step-wise
reflection and divergent-convergent thinking.
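The iterative step-wise reflection technique mentioned above can be sketched as a simple propose-critique-revise loop. This is a hedged illustration, not the paper's prompting protocol: `llm` stands in for any chat-model call, and the prompt wording and stopping condition are invented for the sketch.

```python
def stepwise_reflection(problem, llm, max_rounds=3):
    """Ask for a plan, then repeatedly have the model critique each
    step's physical feasibility and revise the plan (toy sketch)."""
    plan = llm(f"Propose a step-by-step solution to: {problem}")
    for _ in range(max_rounds):
        critique = llm(
            "Review each step below for physical feasibility and "
            f"point out any infeasible action:\n{plan}"
        )
        if "no issues" in critique.lower():
            break  # stop once the self-critique finds nothing to fix
        plan = llm(
            f"Revise the plan to fix these issues:\n{critique}\n\n"
            f"Plan:\n{plan}"
        )
    return plan
```

The same loop structure accommodates divergent-convergent variants by first sampling several candidate plans (divergent) and then asking the model to merge or select among them (convergent).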
This work (1) introduces a fresh arena for intelligent agents focusing on
intricate aspects of physical reasoning, planning, and unconventional thinking,
which supplements the existing spectrum of machine intelligence; and (2)
provides insight into the constrained problem-solving capabilities of both
humans and AI.
Comment: NAACL 202