Grounding Dynamic Spatial Relations for Embodied (Robot) Interaction
This paper presents a computational model of the processing of dynamic
spatial relations occurring in an embodied robotic interaction setup. A
complete system is introduced that allows autonomous robots to produce and
interpret dynamic spatial phrases (in English) given an environment of moving
objects. The model unites two separate research strands: computational
cognitive semantics and commonsense spatial representation and reasoning,
and demonstrates, for the first time, an integration of the two.
Comment: in: Pham, D.-N. and Park, S.-B., editors, PRICAI 2014: Trends in
Artificial Intelligence, volume 8862 of Lecture Notes in Computer Science,
pages 958-971. Springer
CAR: Conceptualization-Augmented Reasoner for Zero-Shot Commonsense Question Answering
The task of zero-shot commonsense question answering evaluates models on
their capacity to reason about general scenarios beyond those presented in
specific datasets. Existing approaches for tackling this task leverage external
knowledge from CommonSense Knowledge Bases (CSKBs) by pretraining the model on
synthetic QA pairs constructed from CSKBs. In these approaches, negative
examples (distractors) are formulated by randomly sampling from CSKBs using
fairly primitive keyword constraints. However, two bottlenecks limit these
approaches: the inherent incompleteness of CSKBs limits the semantic coverage
of synthetic QA pairs, and the lack of human annotations makes the sampled
negative examples potentially uninformative and contradictory. To tackle these
limitations above, we propose Conceptualization-Augmented Reasoner (CAR), a
zero-shot commonsense question-answering framework that fully leverages the
power of conceptualization. Specifically, CAR abstracts a commonsense knowledge
triple to many higher-level instances, which increases the coverage of the
CSKB and expands the ground-truth answer space, reducing the likelihood of selecting
false-negative distractors. Extensive experiments demonstrate that CAR more
robustly generalizes to answering questions about zero-shot commonsense
scenarios than existing methods, including large language models such as
GPT-3.5 and ChatGPT. Our code, data, and model checkpoints are available at
https://github.com/HKUST-KnowComp/CAR
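The core idea above, abstracting a triple's instances to higher-level concepts so that near-miss distractors are not mistaken for negatives, can be illustrated with a toy sketch. This is not the authors' implementation; the concept map and the filtering logic are invented for the example.

```python
# Illustrative sketch (not CAR's actual code): conceptualization expands a
# triple's ground-truth answer space, so candidates that abstract to an
# already-true concept are rejected as false-negative distractors.

CONCEPT_OF = {  # hypothetical instance -> higher-level concept map
    "coffee": "beverage",
    "tea": "beverage",
    "juice": "beverage",
    "laptop": "device",
}

def expand_answers(answers):
    """Add the concept of each answer, widening the true-answer space."""
    expanded = set(answers)
    for a in answers:
        if a in CONCEPT_OF:
            expanded.add(CONCEPT_OF[a])
    return expanded

def filter_distractors(candidates, answers):
    """Drop candidates that are, or abstract to, a known true answer."""
    truth = expand_answers(answers)
    return [c for c in candidates
            if c not in truth and CONCEPT_OF.get(c) not in truth]

# If 'coffee' is a true answer, 'tea' abstracts to the same concept
# 'beverage' and is rejected; 'laptop' survives as a safe distractor.
print(filter_distractors(["tea", "laptop"], ["coffee"]))  # ['laptop']
```

The point is that random sampling alone would happily keep "tea" as a distractor, producing exactly the contradictory negatives the abstract describes.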
Gold: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection
Commonsense Knowledge Graphs (CSKGs) are crucial for commonsense reasoning,
yet constructing them through human annotations can be costly. As a result,
various automatic methods have been proposed to construct CSKGs with larger
semantic coverage. However, these unsupervised approaches introduce spurious
noise that can lower the quality of the resulting CSKG, which cannot be tackled
easily by existing denoising algorithms due to the unique characteristics of
nodes and structures in CSKGs. To address this issue, we propose Gold (Global
and Local-aware Denoising), a denoising framework for CSKGs that incorporates
entity semantic information, global rules, and local structural information
from the CSKG. Experimental results demonstrate that Gold outperforms all
baseline methods in noise detection tasks on synthetic noisy CSKG benchmarks.
Furthermore, we show that denoising a real-world CSKG is effective and even
benefits the downstream zero-shot commonsense question-answering task.
Comment: Accepted to EMNLP findings 202
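The combination of global rules with local structural information can be sketched with a toy noise scorer. This is an assumed scoring scheme for illustration only, not Gold's model; the graph, node types, rules, and weights are all invented.

```python
# Toy sketch (not the paper's method): score each CSKG triple by mixing a
# global rule signal with a local structural signal, and treat low-scoring
# triples as likely noise.

def local_score(head, tail, graph):
    """Jaccard overlap of the two nodes' neighborhoods: noisy edges tend
    to connect nodes that share few neighbors."""
    nh, nt = graph.get(head, set()), graph.get(tail, set())
    union = nh | nt
    return len(nh & nt) / len(union) if union else 0.0

def global_score(relation, head_type, tail_type, rules):
    """1.0 if a global rule licenses this relation between these types."""
    return 1.0 if (head_type, relation, tail_type) in rules else 0.0

def noise_score(triple, types, graph, rules, w=0.5):
    head, rel, tail = triple
    s = (w * global_score(rel, types[head], types[tail], rules)
         + (1 - w) * local_score(head, tail, graph))
    return 1.0 - s  # higher means more likely noise

graph = {
    "rain": {"wet ground", "umbrella"},
    "wet ground": {"rain", "umbrella"},
    "umbrella": {"rain", "wet ground"},
    "banana": {"fruit"},
    "fruit": {"banana"},
}
types = {"rain": "event", "wet ground": "state", "banana": "food"}
rules = {("event", "Causes", "state")}  # one toy global rule

clean = ("rain", "Causes", "wet ground")
noisy = ("banana", "Causes", "rain")
print(noise_score(noisy, types, graph, rules)
      > noise_score(clean, types, graph, rules))  # True
```

The plausible triple is licensed by a rule and sits in a densely shared neighborhood, so the spurious one receives the higher noise score.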
The situated common-sense knowledge in FunGramKB
It has been widely demonstrated that expectation-based schemata, along the lines of Lakoff's propositional Idealized Cognitive Models, play a crucial role in text comprehension. Discourse inferences are grounded in the shared generalized knowledge activated from the situational model underlying the text's surface dimension. Taking a cognitively plausible and linguistically aware approach to knowledge representation, FunGramKB stands out as a dynamic repository of lexical, constructional and conceptual knowledge that contributes to simulating human-level reasoning. The objective of this paper is to present a script model as a carrier of the situated common-sense knowledge required to help knowledge engineers construct more "intelligent" natural language processing systems.
Periñán Pascual, JC. (2012). The situated common-sense knowledge in FunGramKB. Review of Cognitive Linguistics, 10(1):184-214. doi:10.1075/rcl.10.1.06per
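The role a script plays in licensing discourse inferences can be sketched with an invented data structure; this is a generic Schank-style toy, not FunGramKB's COREL notation, and the script contents are made up for the example.

```python
# Toy sketch of a script-style schema (invented structure, not FunGramKB's):
# situated common-sense knowledge stored as an ordered sequence of expected
# events, which a comprehension system can use to supply the inferences a
# text leaves implicit.
from dataclasses import dataclass, field

@dataclass
class Script:
    name: str
    roles: list                                 # expected participants
    events: list = field(default_factory=list)  # ordered expected events

restaurant = Script(
    name="eating_at_a_restaurant",
    roles=["customer", "waiter", "food"],
    events=["enter", "order", "eat", "pay", "leave"],
)

def infer_between(script, mentioned_a, mentioned_b):
    """Return the unstated events a reader infers between two mentioned ones."""
    i, j = script.events.index(mentioned_a), script.events.index(mentioned_b)
    return script.events[i + 1 : j]

# "She ordered a steak ... then she left" licenses the inference that she
# ate and paid, even though the text never says so.
print(infer_between(restaurant, "order", "leave"))  # ['eat', 'pay']
```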
BRAINTEASER: Lateral Thinking Puzzles for Large Language Models
The success of language models has inspired the NLP community to attend to
tasks that require implicit and complex reasoning, relying on human-like
commonsense mechanisms. While such vertical thinking tasks have been relatively
popular, lateral thinking puzzles have received little attention. To bridge
this gap, we devise BRAINTEASER: a multiple-choice Question Answering task
designed to test the model's ability to exhibit lateral thinking and defy
default commonsense associations. We design a three-step procedure for creating
the first lateral thinking benchmark, consisting of data collection, distractor
generation, and generation of adversarial examples, leading to 1,100 puzzles
with high-quality annotations. To assess the consistency of lateral reasoning
by models, we enrich BRAINTEASER based on a semantic and contextual
reconstruction of its questions. Our experiments with state-of-the-art
instruction-based and commonsense language models reveal a significant gap
between human and model performance, which widens further when consistency
across adversarial formats is considered. We make all of our code and data
available to stimulate work on developing and evaluating lateral thinking
models.
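The group-level consistency idea described above can be sketched as a metric; this is an assumed formulation for illustration, not necessarily the benchmark's exact scoring code.

```python
# Sketch of a consistency-aware accuracy (assumed formulation): a puzzle
# counts as solved only if the model answers the original question AND all
# of its semantic/contextual reconstructions correctly, which penalizes
# surface-level pattern matching.

def group_accuracy(groups, predict):
    """groups: list of [(question, gold_answer), ...] lists, each holding
    an original puzzle plus its reconstructions."""
    solved = sum(
        all(predict(q) == gold for q, gold in group)
        for group in groups
    )
    return solved / len(groups)

# Toy predictor: right on the first puzzle and its variant, but inconsistent
# on the contextual reconstruction of the second puzzle.
preds = {"q1": "a", "q1_sem": "a", "q2": "b", "q2_ctx": "c"}
groups = [[("q1", "a"), ("q1_sem", "a")],
          [("q2", "b"), ("q2_ctx", "b")]]
print(group_accuracy(groups, preds.get))  # 0.5
```

Per-question accuracy on this toy data would be 0.75, so the gap between the two numbers is exactly the inconsistency the benchmark is designed to surface.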
CodeKGC: Code Language Model for Generative Knowledge Graph Construction
Current generative knowledge graph construction approaches usually fail to
capture structural knowledge by simply flattening natural language into
serialized texts or a specification language. However, large generative
language models trained on structured data such as code have demonstrated
impressive capabilities in understanding natural language for structural
prediction and reasoning tasks. Intuitively, we address generative
knowledge graph construction with a code language model: the natural
language input is rendered in code format, and generating the target
triples is cast as a code completion task. Specifically, we develop schema-aware
prompts that effectively utilize the semantic structure within the knowledge
graph. As code inherently possesses structure, such as class and function
definitions, it serves as a useful model for prior semantic structural
knowledge. Furthermore, we employ a rationale-enhanced generation method to
boost the performance. Rationales provide intermediate steps, thereby improving
knowledge extraction abilities. Experimental results indicate that the proposed
approach obtains better performance on benchmark datasets than the
baselines. Code and datasets are available at
https://github.com/zjunlp/DeepKE/tree/main/example/llm
Comment: Work in progress
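The schema-aware, code-format prompting idea can be illustrated with a toy prompt builder. The class names, relation, and prompt layout below are invented for the example and are not taken from the CodeKGC repository.

```python
# Hypothetical illustration of a code-format KG-construction prompt: the
# schema is rendered as Python class definitions, and extraction becomes a
# code completion that instantiates relation objects.

SCHEMA_PROMPT = '''\
class Entity:
    def __init__(self, name: str):
        self.name = name

class WorksFor:  # relation: person -> organization
    def __init__(self, head: Entity, tail: Entity):
        self.head, self.tail = head, tail
'''

def build_prompt(sentence: str) -> str:
    """Wrap the input sentence in the schema so a code LM can complete the
    list with relation instantiations, e.g. WorksFor(Entity(...), Entity(...))."""
    return (SCHEMA_PROMPT
            + f'\n# Extract triples from: "{sentence}"\n'
            + 'triples = [\n')

prompt = build_prompt("Ada works for ACME.")
print(prompt.endswith("triples = [\n"))  # True
```

Because class and constructor definitions already encode argument types and order, the model's completion inherits the structure that a flat serialized-text target would lose.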