Crystal structure of 3-(2-dimethylaminoethyl)-2,3-dihydro-2-thioxoquinazolin-4(1H)-one, C12H15N3OS
Abstract
C12H15N3OS, monoclinic, P21/c (no. 14), a = 7.9840(18) Å, b = 11.331(3) Å, c = 14.428(3) Å, β = 105.702(4)°, V = 1256.5(5) Å³, Z = 4, R_gt(F) = 0.0639, wR_ref(F²) = 0.1293, T = 296 K
Mix-Initiative Response Generation with Dynamic Prefix Tuning
Mixed initiative is one of the key factors controlling the direction of a
conversation: for a speaker, responding passively or leading proactively
results in rather different responses. However, most dialogue systems train a
holistic response generation model without distinguishing among initiatives,
which leads to a cross-contamination problem where the model confuses
different initiatives and generates inappropriate responses. Moreover,
obtaining ample human annotations for initiative labels can be expensive. To
address these issues, we propose a general Mix-Initiative Dynamic Prefix
Tuning (IDPT) framework that decouples different initiatives from the
generation model and learns initiative-aware prefixes in both supervised and
unsupervised settings. Specifically, IDPT decouples initiative factors into
separate prefix parameters and uses an attention mechanism to dynamically
adjust the selection of initiatives that guide generation. The prefix
parameters can be tuned towards accurate initiative prediction as well as
mix-initiative response generation. Extensive experiments on two public
dialogue datasets show that the proposed IDPT outperforms previous baselines
on both automatic metrics and human evaluations, and that it can generate
appropriate responses with manipulated initiatives.
Comment: Accepted to the main conference of NAACL 202
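A minimal PyTorch sketch of the idea the abstract describes, not the paper's official IDPT implementation: all module names and dimensions here are assumptions. Each initiative owns a learnable prefix, and attention between the dialogue-context encoding and the prefixes yields soft weights, so the effective prefix is a dynamic mixture of initiatives.

```python
import torch
import torch.nn as nn

class InitiativeAwarePrefix(nn.Module):
    """Hypothetical sketch of initiative-aware prefix selection.

    Each initiative (e.g., proactive vs. passive) owns a learnable prefix.
    Attention between the dialogue context and the prefixes produces soft
    weights, so the prefix fed to the generator is a dynamic mixture.
    """

    def __init__(self, num_initiatives=2, prefix_len=10, hidden=768):
        super().__init__()
        # One learnable prefix per initiative: (K, prefix_len, hidden)
        self.prefixes = nn.Parameter(
            torch.randn(num_initiatives, prefix_len, hidden) * 0.02)
        self.query = nn.Linear(hidden, hidden)

    def forward(self, context_repr):
        # context_repr: (B, H) pooled encoding of the dialogue context
        q = self.query(context_repr)                    # (B, H)
        keys = self.prefixes.mean(dim=1)                # (K, H), one key per initiative
        scores = q @ keys.T / keys.size(-1) ** 0.5      # (B, K) scaled dot-product
        weights = scores.softmax(dim=-1)                # soft initiative selection
        # Mix the prefixes: (B, K) x (K, L, H) -> (B, L, H)
        mixed_prefix = torch.einsum('bk,klh->blh', weights, self.prefixes)
        return mixed_prefix, weights  # weights double as the initiative prediction
```

In this sketch the mixed prefix would be prepended to the states of a frozen generation model, and the attention weights could be supervised with initiative labels when available, mirroring the dual objective of initiative prediction and mix-initiative response generation stated in the abstract.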
An Empirical Study on the Language Modal in Visual Question Answering
Generalization beyond in-domain experience to out-of-distribution data is of
paramount importance in AI. Recently, state-of-the-art Visual Question
Answering (VQA) models have shown impressive performance on in-domain data,
partly due to language prior bias, which, however, hinders generalization in
practice. This paper provides new insights into the influence of the language
modality on VQA performance from an empirical perspective. To this end, we
conducted a series of experiments on six models. The results revealed that
1) beyond the prior bias caused by question types, postfix-related bias also
notably induces biased predictions, and 2) training VQA models on
word-sequence-related variant questions improved performance on the
out-of-distribution benchmark, with LXMERT even achieving a 10-point gain
without adopting any debiasing method. We delved into the underlying reasons
behind these results and put forward simple proposals to reduce the models'
dependency on language priors. The experimental results demonstrated the
effectiveness of our proposed method in improving performance on the
out-of-distribution benchmark, VQA-CPv2. We hope this study inspires novel
insights for future research on designing bias-reduction approaches.
Comment: Accepted by IJCAI202
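A minimal sketch of the kind of word-sequence variation the abstract describes, with a hypothetical make_variants helper (the paper's exact augmentation procedure is not specified here): the question's words are permuted while the answer stays fixed, discouraging the model from latching onto surface word order.

```python
import random

def make_variants(question: str, num_variants: int = 2, seed: int = 0):
    """Generate word-sequence variants of a VQA question (illustrative only).

    Shuffles the non-leading words so the question-type word (e.g., 'what',
    'how') stays in place, keeping the question-type prior intact while
    perturbing the rest of the word sequence.
    """
    rng = random.Random(seed)
    words = question.rstrip('?').split()
    variants = []
    for _ in range(num_variants):
        head, tail = words[:1], words[1:]
        rng.shuffle(tail)
        variants.append(' '.join(head + tail) + '?')
    return variants

# Example: augment a training sample with word-order variants of its question
print(make_variants("what color is the small umbrella?"))
```

Pairing each variant with the original answer during training is one plausible way to realize the augmentation the study reports; the head word is pinned here only so the question-type signal is preserved.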