The relationship between health belief and sleep quality of Chinese college students: The mediating role of physical activity and moderating effect of mobile phone addiction
Background: Poor sleep quality has become a common health problem among college students. Methods: The Health Belief Scale (HBS), Physical Activity Rating Scale (PARS-3), Mobile Phone Addiction Tendency Scale (MPATS), and Pittsburgh Sleep Quality Index (PSQI) were administered to 1,019 college students (429 males, 590 females) from five comprehensive colleges and universities between March and April 2022. The questionnaire data were analyzed with SPSS and its macro program PROCESS. Results: (1) Health belief, physical activity, mobile phone addiction, and sleep quality are significantly associated with one another (p < 0.01); (2) physical activity mediates the relationship between health belief and sleep quality, with the mediating effect accounting for 14.77% of the total effect; (3) mobile phone addiction significantly moderates the effects of health belief (β = 0.062, p < 0.05) and physical activity (β = 0.073, p < 0.05) on sleep quality, and also moderates the effect of health belief on physical activity (β = −0.112, p < 0.001). Conclusion: College students' health belief improves their sleep quality not only directly but also indirectly through physical activity; mobile phone addiction significantly moderates the effects of health belief on sleep quality, of health belief on physical activity, and of physical activity on sleep quality.
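The mediation analysis reported above (an indirect effect carried through physical activity, expressed as a share of the total effect) can be sketched with ordinary least squares. The data below are simulated standardized scores for illustration only; the variable names and path coefficients are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical standardized scores: X = health belief,
# M = physical activity (mediator), Y = sleep quality
X = rng.normal(size=n)
M = 0.4 * X + rng.normal(size=n)            # path a: belief -> activity
Y = 0.3 * X + 0.2 * M + rng.normal(size=n)  # paths c' (direct) and b

def ols(y, *cols):
    """Least-squares regression with an intercept; returns the slopes."""
    A = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta[1:]

a = ols(M, X)[0]            # effect of X on the mediator
c_prime, b = ols(Y, X, M)   # direct effect and mediator's effect on Y
c = ols(Y, X)[0]            # total effect of X on Y
indirect = a * b
print(f"proportion mediated: {indirect / c:.2%}")
```

The PROCESS macro additionally bootstraps a confidence interval for the indirect effect; this sketch only shows the point estimate of the mediated proportion.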
Beyond Sole Strength: Customized Ensembles for Generalized Vision-Language Models
Fine-tuning pre-trained vision-language models (VLMs), e.g., CLIP, for the
open-world generalization has gained increasing popularity due to its practical
value. However, performance advancements are limited when relying solely on
intricate algorithmic designs for a single model, even one exhibiting strong
performance, e.g., CLIP-ViT-B/16. This paper, for the first time, explores the
collaborative potential of leveraging much weaker VLMs to enhance the
generalization of a robust single model. The affirmative findings motivate us
to address the generalization problem from a novel perspective, i.e., ensemble
of pre-trained VLMs. We introduce three customized ensemble strategies, each
tailored to one specific scenario. Firstly, we introduce the zero-shot
ensemble, automatically adjusting the logits of different models based on their
confidence when only pre-trained VLMs are available. Furthermore, for scenarios
with extra few-shot samples, we propose the training-free and tuning ensemble,
offering flexibility based on the availability of computing resources. The
proposed ensemble strategies are evaluated on zero-shot, base-to-new, and
cross-dataset generalization, achieving new state-of-the-art performance.
Notably, this work represents an initial stride toward enhancing the
generalization performance of VLMs via ensemble. The code is available at
https://github.com/zhiheLu/Ensemble_VLM.git.
Comment: Technical report
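The zero-shot ensemble described above fuses the logits of several pre-trained VLMs, weighting each model by its own prediction confidence. The sketch below is one plausible reading of that idea, using the per-sample maximum softmax probability as the confidence weight; the paper's actual weighting scheme may differ.

```python
import numpy as np

def confidence_weighted_ensemble(logits_list):
    """Fuse per-model logits, weighting each model per sample by its
    maximum softmax probability (a simple confidence proxy).
    Illustrative sketch only, not the paper's exact formulation."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    probs = [softmax(l) for l in logits_list]
    # One confidence weight per sample per model
    weights = [p.max(axis=-1, keepdims=True) for p in probs]
    total = sum(weights)
    return sum(w * l for w, l in zip(weights, logits_list)) / total

# Two hypothetical VLMs scoring 3 samples over 4 classes:
# a confident "strong" model and an uncertain "weak" one
strong = np.array([[4.0, 1.0, 0.5, 0.2]] * 3)
weak = np.array([[1.2, 1.0, 0.9, 0.8]] * 3)
fused = confidence_weighted_ensemble([strong, weak])
```

Because the weights are computed per sample, a weak model can still dominate on inputs where it happens to be more confident, which is what lets weaker VLMs contribute to a strong one.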
GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph
Adapter-style efficient transfer learning (ETL) has shown excellent
performance in the tuning of vision-language models (VLMs) under the low-data
regime, where only a few additional parameters are introduced to excavate the
task-specific knowledge based on the general and powerful representation of
VLMs. However, most adapter-style works face two limitations: (i) modeling
task-specific knowledge with a single modality only; and (ii) overlooking the
exploitation of the inter-class relationships in downstream tasks, thereby
leading to sub-optimal solutions. To mitigate that, we propose an effective
adapter-style tuning strategy, dubbed GraphAdapter, which performs the textual
adapter by explicitly modeling the dual-modality structure knowledge (i.e., the
correlation of different semantics/classes in textual and visual modalities)
with a dual knowledge graph. In particular, the dual knowledge graph is
established with two sub-graphs, i.e., a textual knowledge sub-graph, and a
visual knowledge sub-graph, where the nodes and edges represent the
semantics/classes and their correlations in two modalities, respectively. This
enables the textual feature of each prompt to leverage the task-specific
structure knowledge from both textual and visual modalities, yielding a more
effective classifier for downstream tasks. Extensive experimental results on 11
benchmark datasets reveal that our GraphAdapter significantly outperforms
previous adapter-based methods. The code will be released at
https://github.com/lixinustc/GraphAdapter.
Comment: Accepted by NeurIPS 2023. The manuscript will be further revised based on the reviews.
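The dual knowledge graph above treats classes as nodes and their correlations as edges, letting each textual feature borrow structure from both modalities. A minimal sketch of that propagation idea, assuming cosine-similarity edges with softmax normalization and a simple blend of the two sub-graphs; GraphAdapter's actual layer is learned and differs in detail.

```python
import numpy as np

def knowledge_graph_refine(class_feats, alpha=0.5):
    """One propagation step over a similarity graph whose nodes are
    class features and whose edge weights are softmax-normalized
    cosine similarities. Illustrative sketch only."""
    f = class_feats / np.linalg.norm(class_feats, axis=1, keepdims=True)
    sim = f @ f.T                          # pairwise cosine similarity
    adj = np.exp(sim)
    adj /= adj.sum(axis=1, keepdims=True)  # row-stochastic edge weights
    # Blend each node with its graph neighborhood
    return (1 - alpha) * class_feats + alpha * adj @ class_feats

rng = np.random.default_rng(0)
text_nodes = rng.normal(size=(11, 16))    # textual sub-graph: 11 classes
image_nodes = rng.normal(size=(11, 16))   # visual sub-graph: 11 classes
refined = 0.5 * knowledge_graph_refine(text_nodes) \
        + 0.5 * knowledge_graph_refine(image_nodes)
```

The blend of the two refined sub-graphs stands in for the paper's learned fusion; the point is that each class feature is smoothed toward its correlated classes in both modalities.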