Plasma Clusterin and the CLU Gene rs11136000 Variant Are Associated with Mild Cognitive Impairment in Type 2 Diabetic Patients
Objective: Type 2 diabetes mellitus (T2DM) is related to an elevated risk of mild cognitive impairment (MCI). Plasma clusterin has been reported to be associated with the early pathology of Alzheimer's disease (AD) and with longitudinal brain atrophy in subjects with MCI. The rs11136000 single nucleotide polymorphism within the clusterin (CLU) gene is also associated with the risk of AD. We aimed to investigate the associations among plasma clusterin, rs11136000 genotype, and T2DM-associated MCI. Methods: A total of 231 T2DM patients, including 126 with MCI and 105 cognitively healthy controls, were enrolled in this study. Demographic parameters were collected and neuropsychological tests were conducted. Plasma clusterin and CLU rs11136000 genotype were examined. Results: Plasma clusterin was significantly higher in MCI patients than in the control group (p=0.007). In subjects with MCI, plasma clusterin level was negatively correlated with Montreal Cognitive Assessment and Auditory Verbal Learning Test delayed-recall scores (p=0.027 and p=0.020, respectively). After adjustment for age, educational attainment, and gender, carriers of the rs11136000 TT genotype showed a reduced risk of MCI compared with CC genotype carriers (OR=0.158, χ2=4.113, p=0.043). A multivariable regression model showed that educational attainment, duration of diabetes, HDL-c, and plasma clusterin levels were associated with MCI in T2DM patients. Conclusions: Plasma clusterin was associated with MCI and may reflect a protective response in T2DM patients. The TT genotype conferred a reduced risk of MCI compared with the CC genotype. Further investigations should be conducted to determine the role of clusterin in cognitive decline.
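An adjusted odds ratio like the one reported above typically comes from a multivariable logistic regression. Below is a minimal, illustrative sketch of that computation, not the authors' analysis code; the cohort file and the column names (genotype, MCI, age, education_years, gender) are assumptions.

```python
# Illustrative only: adjusted odds ratio for TT vs. CC genotype via logistic
# regression, controlling for age, education, and gender (hypothetical data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("t2dm_cohort.csv")                 # hypothetical cohort file
df = df[df["genotype"].isin(["TT", "CC"])]          # compare TT against CC carriers
df["is_TT"] = (df["genotype"] == "TT").astype(int)

model = smf.logit("MCI ~ is_TT + age + education_years + C(gender)", data=df).fit()
odds_ratios = np.exp(model.params)                  # exponentiated coefficients = adjusted ORs
print(odds_ratios["is_TT"], model.pvalues["is_TT"])
```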
Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models
With ever-increasing parameters and computation, vision-language pre-trained (VLP) models exhibit prohibitive expenditure in downstream task adaptation.
Recent endeavors mainly focus on parameter efficient transfer learning (PETL)
for VLP models by only updating a small number of parameters. However,
excessive computational overhead still plagues the application of VLPs. In this
paper, we aim at parameter and computation efficient transfer learning (PCETL)
for VLP models. In particular, PCETL not only needs to limit the number of
trainable parameters in VLP models, but also to reduce the computational
redundancy during inference, thus enabling more efficient transfer. To this end, we propose a novel dynamic architecture skipping (DAS)
approach towards effective PCETL. Instead of directly optimizing the intrinsic architectures of VLP models, DAS first observes the significance of their modules to downstream tasks via a reinforcement learning (RL) based process, and then skips the redundant ones with lightweight networks, i.e., adapters, according to the obtained rewards. In this way, the VLP model keeps the scale of trainable parameters small while speeding up its inference on
downstream tasks. To validate DAS, we apply it to two representative VLP
models, namely ViLT and METER, and conduct extensive experiments on a range of VL tasks. The experimental results not only show the clear advantages of DAS in reducing computational complexity, e.g., -11.97% FLOPs for METER on VQA2.0, but also confirm its competitiveness against existing PETL methods in terms of parameter scale and performance. Our source code is given in the appendix.
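As a rough illustration of the idea described in this abstract (score the backbone modules, then swap the redundant ones for adapters), here is a minimal PyTorch sketch; the names BlockAdapter and skip_redundant_blocks, the bottleneck size, and the use of precomputed per-module rewards in place of the RL-based search are all assumptions, not the authors' released code.

```python
import torch.nn as nn

class BlockAdapter(nn.Module):
    """Lightweight bottleneck adapter standing in for a skipped VLP block."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual bottleneck: far cheaper than a full Transformer block.
        return x + self.up(self.act(self.down(x)))

def skip_redundant_blocks(blocks: nn.ModuleList, rewards, k: int, dim: int):
    """Replace the k lowest-reward blocks with adapters; `rewards` stands in
    for the per-module scores an RL-based search would produce."""
    order = sorted(range(len(blocks)), key=lambda i: rewards[i])
    for i in order[:k]:
        blocks[i] = BlockAdapter(dim)
    return blocks
```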
Approximated Prompt Tuning for Vision-Language Pre-trained Models
Prompt tuning is a parameter-efficient way to deploy large-scale pre-trained
models to downstream tasks by adding task-specific tokens. In terms of
vision-language pre-trained (VLP) models, prompt tuning often requires a large
number of learnable tokens to bridge the gap between the pre-training and
downstream tasks, which greatly exacerbates the already high computational
overhead. In this paper, we revisit the principle of prompt tuning for
Transformer-based VLP models and reveal that the impact of soft prompt tokens can actually be approximated via independent information diffusion steps, thereby avoiding expensive global attention modeling and reducing the
computational complexity to a large extent. Based on this finding, we propose a
novel Approximated Prompt Tuning (APT) approach towards efficient VL transfer
learning. To validate APT, we apply it to two representative VLP models, namely
ViLT and METER, and conduct extensive experiments on a range of downstream tasks. The generalization of APT is also validated on CLIP for image classification. The experimental results not only show the superior performance gains and computation efficiency of APT over conventional prompt tuning methods, e.g., +6.6% accuracy and -64.62% additional computation overhead on METER, but also confirm its merits over other parameter-efficient transfer learning approaches.
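To make the approximation concrete, here is a minimal, hypothetical PyTorch sketch of a per-token prompt-diffusion layer in the spirit of the abstract above; the class name, the aggregation rule, and the prompt count are assumptions rather than the APT reference implementation.

```python
import torch
import torch.nn as nn

class ApproximatedPromptLayer(nn.Module):
    """Each token aggregates information from a small set of learned prompt
    vectors independently, so the prompts never enter the backbone's
    quadratic self-attention."""
    def __init__(self, dim: int, num_prompts: int = 16):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        weights = torch.softmax(x @ self.prompts.t() * self.scale, dim=-1)
        return x + weights @ self.prompts
```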
Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Previous studies have verified that the functionality of black-box models can be stolen when full probability outputs are available. However, under the more practical hard-label setting, we observe that existing methods suffer from catastrophic performance degradation. We argue this is due to the loss of the rich information carried by probability predictions and to the overfitting caused by hard labels. To
this end, we propose a novel hard-label model stealing method termed black-box dissector, which consists of two erasing-based modules. One is a CAM-driven erasing strategy designed to mine the information hidden in the victim model's hard labels. The other is a
random-erasing-based self-knowledge distillation module that utilizes soft
labels from the substitute model to mitigate overfitting. Extensive experiments
on four widely-used datasets consistently demonstrate that our method
outperforms state-of-the-art methods. We also validate the effectiveness and practical potential of our method on real-world APIs and against defense methods. Furthermore, our method also benefits other downstream tasks, i.e., transfer adversarial attacks.
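The two erasing-based modules can be pictured with a short, illustrative PyTorch sketch; the masking threshold, the loss combination, and the function names below are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def cam_erase(images: torch.Tensor, cams: torch.Tensor, thresh: float = 0.6) -> torch.Tensor:
    """Erase the regions a class activation map (CAM) rates most salient.
    Re-querying the victim on such erased images is meant to squeeze extra
    information out of its hard labels."""
    keep = (cams < thresh).float().unsqueeze(1)   # (B, 1, H, W): keep low-attention area
    return images * keep

def self_distill_loss(clean_logits: torch.Tensor,
                      erased_logits: torch.Tensor,
                      victim_hard_labels: torch.Tensor) -> torch.Tensor:
    """Hard-label cross-entropy plus a soft self-distillation term: the
    substitute's own predictions on clean inputs supervise randomly erased views."""
    ce = F.cross_entropy(clean_logits, victim_hard_labels)
    kd = F.kl_div(F.log_softmax(erased_logits, dim=1),
                  F.softmax(clean_logits.detach(), dim=1),
                  reduction="batchmean")
    return ce + kd
```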
- …