521 research outputs found
IRGAN: A Minimax Game for Unifying Generative and Discriminative Information Retrieval Models
This paper provides a unified account of two schools of thinking in
information retrieval modelling: the generative retrieval focusing on
predicting relevant documents given a query, and the discriminative retrieval
focusing on predicting relevancy given a query-document pair. We propose a
game-theoretic minimax framework to iteratively optimise both models. On one hand, the
discriminative model, aiming to mine signals from labelled and unlabelled data,
provides guidance to train the generative model towards fitting the underlying
relevance distribution over documents given the query. On the other hand, the
generative model, acting as an attacker to the current discriminative model,
generates difficult examples for the discriminative model in an adversarial way
by minimising its discrimination objective. With the competition between these
two models, we show that the unified framework takes advantage of both schools
of thinking: (i) the generative model learns to fit the relevance distribution
over documents via the signals from the discriminative model, and (ii) the
discriminative model is able to exploit the unlabelled data selected by the
generative model to achieve a better estimation for document ranking. Our
experimental results demonstrate significant performance gains of as much as
23.96% on Precision@5 and 15.50% on MAP over strong baselines in a variety of
applications including web search, item recommendation, and question answering.
Comment: 12 pages; appendix added
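The adversarial interplay described above can be sketched in a toy form. The following is our own illustrative sketch, not the authors' implementation: the names `irgan_round`, `d_scores`, and `g_logits` are invented, the discriminator is reduced to one score per document, and the generator to a softmax over documents for a single fixed query.

```python
import numpy as np

rng = np.random.default_rng(0)

def irgan_round(d_scores, g_logits, relevant, lr=0.5):
    """One toy round of an IRGAN-style minimax game for a fixed query.

    d_scores : discriminator score per candidate document
    g_logits : generator logits defining p(d | q) over the same documents
    relevant : indices of labelled relevant documents
    """
    # Generator samples documents from its current distribution.
    p_g = np.exp(g_logits - g_logits.max())
    p_g /= p_g.sum()
    sampled = rng.choice(len(g_logits), size=2, p=p_g)

    # Discriminator update: push labelled relevant documents up and
    # generator-sampled (adversarial) documents down.
    new_d = d_scores.copy()
    new_d[relevant] += lr
    new_d[sampled] -= lr

    # Generator update (policy-gradient style): the reward for a sampled
    # document is the sigmoid of the discriminator's score on it.
    reward = 1.0 / (1.0 + np.exp(-new_d[sampled]))
    new_g = g_logits.copy()
    for i, r in zip(sampled, reward):
        new_g[i] += lr * r
    return new_d, new_g
```

Because the reward is always positive, the generator's logits only move toward documents the discriminator currently scores highly, which is the signal-passing loop the abstract describes.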
PEPT: Expert Finding Meets Personalized Pre-training
Finding appropriate experts is essential in Community Question Answering
(CQA) platforms as it enables the effective routing of questions to potential
users who can provide relevant answers. The key is to learn personalized
expert representations based on their previously answered questions and to
match them accurately with target questions. There have been some
preliminary works exploring the usability of pre-trained language models (PLMs) in expert finding, such as
pre-training expert or question representations. However, these models usually
learn pure text representations of experts from histories, disregarding
personalized and fine-grained expert modeling. To alleviate this, we present
a personalized pre-training and fine-tuning paradigm, which could effectively
learn expert interest and expertise simultaneously. Specifically, in our
pre-training framework, we integrate the previously answered questions of one
expert with one target question and regard them as a candidate-aware
expert-level input unit. Then, we fuse expert IDs into the pre-training to
guide the model toward learning personalized expert representations, which can help
capture the unique characteristics and expertise of each individual expert.
Additionally, in our pre-training task, we design: 1) a question-level masked
language model task to learn the relatedness between histories, enabling the
modeling of question-level expert interest; 2) a vote-oriented task to capture
question-level expert expertise by predicting the vote score the expert would
receive. Through our pre-training framework and tasks, our approach could
holistically learn expert representations including interests and expertise.
Our method has been extensively evaluated on six real-world CQA datasets, and
the experimental results consistently demonstrate the superiority of our
approach over competitive baseline methods.
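The candidate-aware input unit and the question-level masking can be sketched as follows. This is a hypothetical reading of the abstract, not the paper's exact format: the token layout and the names `build_pept_input`, `[EXPERT_*]`, and `[Q_MASK]` are our own inventions for illustration.

```python
import random

def build_pept_input(expert_id, history_questions, target_question,
                     mask_token="[Q_MASK]", sep_token="[SEP]", seed=0):
    """Sketch: fuse an expert-ID token with the target question and the
    expert's answering history, masking one whole history question so a
    question-level masked language model task can predict it."""
    random.seed(seed)
    masked_idx = random.randrange(len(history_questions))
    masked_history = list(history_questions)
    label = masked_history[masked_idx]          # MLM target
    masked_history[masked_idx] = mask_token

    # The expert ID enters as a dedicated token, so the encoder can learn
    # a personalized representation for this particular expert.
    tokens = [f"[EXPERT_{expert_id}]", target_question, sep_token]
    for q in masked_history:
        tokens += [q, sep_token]
    return tokens, label, masked_idx
```

A vote-oriented head would then regress the vote score from the same encoded sequence, giving the two pre-training tasks a shared, candidate-aware input.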
Exploiting Diverse Characteristics and Adversarial Ambivalence for Domain Adaptive Segmentation
Adapting semantic segmentation models to new domains is an important but
challenging problem. Recently, enlightening progress has been made, but the
performance of existing methods is unsatisfactory on real datasets where the
new target domain comprises heterogeneous sub-domains (e.g., diverse weather
characteristics). We point out that carefully reasoning about the multiple
modalities in the target domain can improve the robustness of adaptation
models. To this end, we propose a condition-guided adaptation framework that is
empowered by a special attentive progressive adversarial training (APAT)
mechanism and a novel self-training policy. The APAT strategy progressively
performs condition-specific alignment and attentive global feature matching.
The new self-training scheme exploits the adversarial ambivalences of easy and
hard adaptation regions and the correlations among target sub-domains
effectively. We evaluate our method (DCAA) on various adaptation scenarios
where the target images vary in weather conditions. The comparisons against
baselines and the state-of-the-art approaches demonstrate the superiority of
DCAA over the competitors.
Comment: Accepted to AAAI 202
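One way to read the "adversarial ambivalence of easy and hard adaptation regions" is through the domain discriminator's confidence. The sketch below is our own interpretation, not the paper's formulation, and `split_easy_hard` is a hypothetical name: regions where the discriminator is undecided (probability near 0.5) are treated as already well aligned, hence easy for self-training.

```python
import numpy as np

def split_easy_hard(domain_probs, thresh=0.4):
    """Partition target regions by adversarial ambivalence (toy sketch).

    domain_probs : per-region probability of being source-like, as
                   produced by a domain discriminator
    Ambivalence is 1 when the discriminator is undecided (p = 0.5) and
    0 when it is certain (p = 0 or 1); ambivalent regions count as easy.
    """
    ambivalence = 1.0 - 2.0 * np.abs(domain_probs - 0.5)
    easy = ambivalence >= thresh
    return easy, ~easy
```

A self-training policy could then keep pseudo-labels only on the easy mask and apply stronger adversarial alignment on the hard one.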
Association of the low-density lipoprotein cholesterol/high-density lipoprotein cholesterol ratio and concentrations of plasma lipids with high-density lipoprotein subclass distribution in the Chinese population
Background: To evaluate the relationship between the low-density lipoprotein cholesterol (LDL-C)/high-density lipoprotein cholesterol (HDL-C) ratio and HDL subclass distribution, and to further examine and discuss the potential impact of LDL-C and HDL-C, together with TG, on HDL subclass metabolism.
Results: Small-sized preβ1-HDL, HDL3b and HDL3a increased significantly while large-sized HDL2a and HDL2b decreased significantly as the LDL-C/HDL-C ratio increased. Subjects with a low HDL-C level (< 1.03 mmol/L) showed an elevated LDL-C/HDL-C ratio and a reduced HDL2b/preβ1-HDL ratio regardless of whether the LDL-C level was undesirable or high. At desirable LDL-C levels (< 3.34 mmol/L), the HDL2b/preβ1-HDL ratio was 5.4 for subjects with a high HDL-C concentration (≥ 1.55 mmol/L); however, at high LDL-C levels (≥ 3.36 mmol/L), the LDL-C/HDL-C ratio of these subjects was 2.8 and the HDL2b/preβ1-HDL value was extremely low despite the high HDL-C concentration.
Conclusion: With an increase in the LDL-C/HDL-C ratio, there was a general shift toward smaller-sized HDL particles, which implies that the maturation process of HDL was blocked. High HDL-C concentrations can regulate the HDL subclass distribution at desirable and borderline LDL-C levels but cannot counteract the influence of high LDL-C levels on HDL subclass distribution.
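The cut-offs quoted in the abstract can be applied mechanically. The small sketch below is our own illustration (the function name `lipid_profile` and the "borderline"/"intermediate" labels for the in-between ranges are assumptions, not the study's terminology): it computes the LDL-C/HDL-C ratio as a plain quotient and buckets each concentration against the stated thresholds in mmol/L.

```python
def lipid_profile(ldl_c, hdl_c):
    """Compute the LDL-C/HDL-C ratio and bucket each concentration
    (mmol/L) using the cut-offs quoted in the abstract."""
    ratio = ldl_c / hdl_c
    if ldl_c < 3.34:
        ldl_level = "desirable"
    elif ldl_c >= 3.36:
        ldl_level = "high"
    else:
        ldl_level = "borderline"
    if hdl_c < 1.03:
        hdl_level = "low"
    elif hdl_c >= 1.55:
        hdl_level = "high"
    else:
        hdl_level = "intermediate"
    return ratio, ldl_level, hdl_level
```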