Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation
A type description is a succinct noun compound that helps humans and machines
quickly grasp the informative and distinctive aspects of an entity.
Entities in most knowledge graphs (KGs) still lack such descriptions, thus
calling for automatic methods to supplement such information. However, existing
generative methods either overlook the grammatical structure or make factual
mistakes in generated texts. To solve these problems, we propose a
head-modifier template-based method to ensure the readability and data fidelity
of generated type descriptions. We also propose a new dataset and two automatic
metrics for this task. Experiments show that our method improves substantially
compared with baselines and achieves state-of-the-art performance on both
datasets.
Comment: ACL 201
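As a toy illustration of the head-modifier structure of such noun compounds (the example head and modifiers below are invented for illustration, not taken from the paper), a type description can be assembled by attaching pre-modifiers to a head noun:

```python
# Toy sketch: compose a type description from a head noun and its modifiers,
# mirroring the head-modifier structure of noun compounds.

def compose_description(head: str, modifiers: list[str]) -> str:
    """Attach pre-modifiers to the head noun, e.g. 'American rock band'."""
    return " ".join(modifiers + [head])

print(compose_description("band", ["American", "rock"]))  # American rock band
```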
Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction
The vital role of analogical reasoning in human cognition allows us to grasp
novel concepts by linking them with familiar ones through shared relational
structures. Despite the attention previous research has given to word
analogies, this work suggests that Large Language Models (LLMs) often overlook
the structures that underpin these analogies, raising questions about the
efficacy of word analogies as a measure of analogical reasoning skills akin to
human cognition. In response to this, our paper introduces a task of analogical
structure abduction, grounded in cognitive psychology, designed to abduce
structures that form an analogy between two systems. In support of this task,
we establish a benchmark called SCAR, containing 400 scientific analogies from
13 distinct fields, tailored for evaluating analogical reasoning with structure
abduction. The empirical evidence underlines the continued challenges faced by
LLMs, including ChatGPT and GPT-4, in mastering this task, signifying the need
for future exploration to enhance their abilities.
Comment: Accepted to EMNLP 2023 (Findings)
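One simple way to picture evaluating structure abduction is to score a predicted concept mapping between two systems against a gold mapping; the atom-to-solar-system analogy below is a classic example chosen for illustration, not an item from SCAR:

```python
# Illustrative sketch: score a predicted source-to-target concept mapping
# against a gold mapping, as one might in analogical structure abduction.

def mapping_accuracy(pred: dict, gold: dict) -> float:
    """Fraction of gold source concepts mapped to the correct target."""
    correct = sum(1 for src, tgt in gold.items() if pred.get(src) == tgt)
    return correct / len(gold)

gold = {"nucleus": "sun", "electron": "planet", "electric force": "gravity"}
pred = {"nucleus": "sun", "electron": "planet", "electric force": "planet"}
print(mapping_accuracy(pred, gold))  # 2 of 3 correct
```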
Unsupervised Explanation Generation via Correct Instantiations
While large pre-trained language models (PLMs) have shown great skill at
solving discriminative tasks, a significant gap remains compared with
humans on explanation-related tasks. Among these, explaining why a
statement is wrong (e.g., against commonsense) is incredibly challenging. The
major difficulty is finding the conflict point, where the statement contradicts
the real world. This paper proposes Neon, a two-phase, unsupervised
explanation generation framework. Neon first generates corrected instantiations
of the statement (phase I), then uses them to prompt large PLMs to find the
conflict point and complete the explanation (phase II). We conduct extensive
experiments on two standard explanation benchmarks, i.e., ComVE and e-SNLI.
According to both automatic and human evaluations, Neon outperforms baselines,
even those with human-annotated instantiations. In addition to explaining a
negative prediction, we further demonstrate that Neon remains effective when
generalizing to different scenarios.
Comment: Accepted to AAAI-2
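The two-phase flow can be sketched schematically; the function arguments below are invented stand-ins for the PLM calls, not the authors' API:

```python
# Schematic sketch of a two-phase explain-by-instantiation pipeline.
# `generate_instantiations` and `explain` are hypothetical stand-ins
# for the large-PLM prompting steps.

def neon_explain(statement, generate_instantiations, explain):
    # Phase I: produce corrected instantiations of the false statement.
    instantiations = generate_instantiations(statement)
    # Phase II: prompt the PLM with statement + instantiations to locate
    # the conflict point and complete the explanation.
    return explain(statement, instantiations)

# Toy stand-ins for demonstration only.
expl = neon_explain(
    "He put an elephant in the fridge",
    lambda s: ["He put a turkey in the fridge"],
    lambda s, ins: f"Unlike in '{ins[0]}', an elephant cannot fit in a fridge.",
)
print(expl)
```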
LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification
Given a natural language statement, how can we verify its veracity against a
large-scale textual knowledge source such as Wikipedia? Most existing neural
models make predictions without giving clues about which part of a false claim
goes wrong. In this paper, we propose LOREN, an approach for interpretable fact
verification. We decompose the verification of the whole claim to the phrase level,
where the veracity of the phrases serves as explanations and can be aggregated
into the final verdict according to logical rules. The key insight of LOREN is
to represent claim phrase veracity as three-valued latent variables, which are
regularized by aggregation logical rules. The final claim verification is based
on all latent variables. Thus, LOREN enjoys the additional benefit of
interpretability -- it is easy to explain how it reaches certain results with
claim phrase veracity. Experiments on a public fact verification benchmark show
that LOREN is competitive against previous approaches while enjoying the merit
of faithful and accurate interpretability. The resources of LOREN are available
at https://github.com/jiangjiechen/LOREN.
Comment: Accepted to AAAI 202
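One plausible reading of such logical aggregation rules (the exact rules are defined in the paper and repository) is that a single refuted phrase falsifies the claim, while the claim holds only if every phrase holds:

```python
# Minimal sketch of aggregating three-valued phrase veracities into a
# claim-level verdict. This is an illustrative reading of logic-based
# aggregation, not necessarily LOREN's exact rule set.

SUP, REF, NEI = "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"

def aggregate(phrase_labels: list[str]) -> str:
    if any(label == REF for label in phrase_labels):
        return REF  # one refuted phrase falsifies the whole claim
    if all(label == SUP for label in phrase_labels):
        return SUP  # every phrase must hold for the claim to hold
    return NEI      # otherwise the evidence is inconclusive

print(aggregate([SUP, SUP]))       # SUPPORTED
print(aggregate([SUP, REF, NEI]))  # REFUTED
```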
Self-supervised arbitrary scale super-resolution framework for anisotropic MRI
In this paper, we propose an efficient self-supervised arbitrary-scale
super-resolution (SR) framework to reconstruct isotropic magnetic resonance
(MR) images from anisotropic MRI inputs without involving external training
data. The proposed framework builds a training dataset using in-the-wild
anisotropic MR volumes with arbitrary image resolution. We then formulate the
3D volume SR task as an SR problem for 2D image slices. The anisotropic volume's
high-resolution (HR) plane is used to build the HR-LR image pairs for model
training. We further adapt the implicit neural representation (INR) network to
implement the 2D arbitrary-scale image SR model. Finally, we leverage the
well-trained proposed model to up-sample the 2D LR plane extracted from the
anisotropic MR volumes to their HR views. The isotropic MR volumes can thus be
reconstructed by stacking and averaging the generated HR slices. Our proposed
framework has two major advantages: (1) It only involves the
arbitrary-resolution anisotropic MR volumes, which greatly improves the model
practicality in real MR imaging scenarios (e.g., clinical brain image
acquisition); (2) The INR-based SR model enables arbitrary-scale image SR from
the arbitrary-resolution input image, which significantly improves model
training efficiency. We perform experiments on a simulated public adult brain
dataset and a real collected 7T brain dataset. The results indicate that our
current framework greatly outperforms two well-known self-supervised models for
anisotropic MR image SR tasks.
Comment: 10 pages, 5 figures
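The stack-and-average reconstruction step can be sketched as below; `upsample` is a nearest-neighbour stand-in for the INR-based SR model, and the axis conventions (HR in-plane axes x, y; coarse through-plane axis z) are assumptions for illustration:

```python
import numpy as np

def upsample(img, factor, axis):
    """Stand-in for the INR-based SR model: nearest-neighbour repetition."""
    return np.repeat(img, factor, axis=axis)

def reconstruct_isotropic(vol, factor):
    """Super-resolve slices from the two LR planes along z, then average."""
    # XZ slices: iterate over y, upsample z (axis 1 of each 2D slice).
    vol_xz = np.stack([upsample(vol[:, j, :], factor, axis=1)
                       for j in range(vol.shape[1])], axis=1)
    # YZ slices: iterate over x, upsample z (axis 1 of each 2D slice).
    vol_yz = np.stack([upsample(vol[i, :, :], factor, axis=1)
                       for i in range(vol.shape[0])], axis=0)
    # Stack-and-average the two super-resolved volumes.
    return (vol_xz + vol_yz) / 2.0

vol = np.arange(2 * 3 * 4, dtype=float).reshape(2, 3, 4)  # (x, y, z), coarse z
iso = reconstruct_isotropic(vol, factor=2)
print(iso.shape)  # (2, 3, 8)
```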
Research on the Mechanism of the Role of Big Data Analytic Capabilities on the Growth Performance of Start-Up Enterprises: The Mediating Role of Entrepreneurial Opportunity Recognition and Exploitation
With the advent of the era of big data, the application of big data analytics
in entrepreneurial activities has become increasingly prevalent. However,
research on the relationship between big data analytic capabilities and
entrepreneurial activities is still in its infancy, and the mechanism by which
the two interact remains unclear. Drawing on resource-based theory and
entrepreneurial process theory, this research examines the impact mechanism of
big data analytic capabilities on the growth performance of start-up
enterprises and explores the mediating roles of entrepreneurial opportunity
recognition and entrepreneurial opportunity exploitation. Empirical analysis
reveals that big data analytic capabilities have a significant positive impact
on the growth performance of start-up enterprises; entrepreneurial opportunity
exploitation mediates the relationship between big data analytic capabilities
and growth performance, whereas entrepreneurial opportunity recognition shows
no significant mediating effect; and opportunity recognition and opportunity
exploitation together play a chain-mediated role in this relationship. These
findings enrich the study of digital entrepreneurship and provide valuable
references for the entrepreneurial practice of start-up enterprises.
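The chain-mediation structure examined here (capabilities X → opportunity recognition M1 → opportunity exploitation M2 → growth Y) can be sketched on synthetic data; the data, effect sizes, and plain OLS estimation below are illustrative only, not the paper's analysis:

```python
import numpy as np

# Synthetic sketch of a chain mediation X -> M1 -> M2 -> Y: the chained
# indirect effect is estimated as the product of the three path coefficients.

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                                   # capabilities (X)
m1 = 0.5 * x + rng.normal(scale=0.5, size=n)             # recognition (M1)
m2 = 0.6 * m1 + rng.normal(scale=0.5, size=n)            # exploitation (M2)
y = 0.4 * m2 + 0.3 * x + rng.normal(scale=0.5, size=n)   # growth (Y)

def ols(cols, resp):
    """OLS coefficients of resp on the given predictor columns + intercept."""
    A = np.column_stack(cols + [np.ones(len(resp))])
    coef, *_ = np.linalg.lstsq(A, resp, rcond=None)
    return coef

a = ols([x], m1)[0]         # X  -> M1 path
d = ols([m1, x], m2)[0]     # M1 -> M2 path (controlling for X)
b = ols([m2, m1, x], y)[0]  # M2 -> Y  path (controlling for M1, X)
indirect = a * d * b        # chain-mediated (indirect) effect estimate
print(round(indirect, 3))
```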
Compatibility as a Prerequisite: Research on the Factors Influencing the Continuous Use Intention of In-vehicle Games Based on Diffusion of Innovations Theory
As an emerging game category, in-vehicle games have great development
potential, but the factors influencing users' acceptance and continued use of
in-vehicle games have not yet been determined. This study took the three
perceived attributes of Diffusion of Innovations Theory (compatibility,
complexity, and relative advantage) as its basis and introduced perceived
habits, fit, interaction quality, experience, play value, and continuous use
intention to build a model of users' continuance intention for in-vehicle
games. The results from 305 valid questionnaires indicate that compatibility
and play value have a significant positive influence on continuance intention,
of which fit shows the stronger effect; perceived habits have a significant
influence on fit and interaction quality; both fit and interaction quality
have a significant influence on experience; and experience has a significant
influence on play value. These results can provide a reference for the
promotional design of in-vehicle games and important guidance for the future
development of the in-vehicle game industry.