Benchmarking Inverse Optimization Algorithms for Molecular Materials Discovery
Machine learning-based molecular materials discovery has attracted enormous
attention recently due to its flexibility in dealing with black box models.
Yet, metaheuristic algorithms are not as widely applied to materials discovery
applications. We comprehensively compare 11 different optimization algorithms
for molecular materials design with targeted properties. These algorithms
include Bayesian Optimization (BO) and multiple metaheuristic algorithms. We
performed 5000 material evaluations, repeated 5 times with different
randomized initializations, to optimize the defined target properties. By
maximizing the bulk
modulus and minimizing the Fermi energy through perturbing parameterized
molecular representations, we estimated the unique counts of molecular
materials, the mean density scan of the objective space, the mean objectives,
and the frequency distributions over the materials' representations and
objectives. GA,
GWO, and BWO exhibit higher variances for materials count, density scan, and
mean objectives, whereas BO and Runge-Kutta optimization (RUN) display
generally lower variances. These results reveal that nature-inspired
algorithms carry more uncertainty in the defined molecular design tasks, which
corresponds to their dependence on multiple hyperparameters. RUN exhibits
higher mean objectives, whereas BO displays lower mean objectives compared
with the other
benchmarked methods. Combined with the materials count and density scan, we
propose that BO strives to approximate a more accurate surrogate of the design
space by sampling more molecular materials, and hence has lower mean
objectives, whereas RUN repeatedly samples the targeted molecules with higher
objective values. Our work sheds light on automated digital molecular
materials design and is expected to elicit future studies on materials
optimization, such as composite and alloy design based on specific desired
properties.
Comment: 15 pages, 5 figures, for the main manuscript
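To make the benchmarking protocol concrete, here is a minimal Python sketch of
one metaheuristic baseline under the stated budget (5000 evaluations, 5
randomized restarts), tracking the unique-candidate count and mean objective.
The objective function, representation dimensionality, and GA settings are
illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch of the benchmarking protocol: a toy genetic algorithm
# optimizing a stand-in objective over a parameterized "molecular
# representation" (a real vector here), repeated over several seeds while
# tracking unique-candidate counts and the mean objective.
import numpy as np

def toy_objective(x):
    # Stand-in for a property model (e.g., a predicted bulk modulus to maximize).
    return -np.sum((x - 0.5) ** 2)

def run_ga(seed, n_evals=5000, pop_size=50, dim=8, mut_sigma=0.1):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, dim))
    fitness = np.array([toy_objective(x) for x in pop])
    seen = {tuple(np.round(x, 3)) for x in pop}   # unique-candidate tracking
    evals = pop_size
    while evals < n_evals:
        # Tournament selection, then Gaussian mutation of the winner.
        i, j = rng.integers(pop_size, size=2)
        parent = pop[i] if fitness[i] > fitness[j] else pop[j]
        child = np.clip(parent + rng.normal(0, mut_sigma, dim), 0, 1)
        f = toy_objective(child)
        evals += 1
        seen.add(tuple(np.round(child, 3)))
        worst = np.argmin(fitness)
        if f > fitness[worst]:            # replace the worst individual
            pop[worst], fitness[worst] = child, f
    return len(seen), fitness.mean()

for seed in range(5):   # 5 randomized restarts, as in the study's protocol
    unique, mean_obj = run_ga(seed)
    print(f"seed {seed}: unique={unique}, mean objective={mean_obj:.4f}")
```

Comparing the unique-candidate count against the mean objective across restarts
mirrors the exploration-versus-exploitation contrast drawn between BO and RUN
above.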
Attitude, Knowledge, and Practice on Evidence-Based Nursing among Registered Nurses in Traditional Chinese Medicine Hospitals: A Multiple Center Cross-Sectional Survey in China
Objective. This study aimed to describe RNs' attitude, knowledge, and practice regarding evidence-based practice (EBP) in the traditional Chinese nursing field and to estimate the related sociodemographic and professional factors. Methods. A multiple-institution cross-sectional survey design with the self-reported Evidence-Based Practice Questionnaire (EBPQ) and self-designed questionnaires was used. Results. The total EBPQ score averaged 4.24 (SD = 0.79). The attitude score was the highest, followed by the knowledge score, with practice the lowest. RNs with longer experience reported stronger EBP knowledge (H = 6.64, P < 0.05). RNs under higher working pressure reported less positive attitudes (ρ = 0.17, P < 0.001), and RNs holding negative professional attitudes reported lower scores (Spearman's ρ: 0.12 to 0.15, P < 0.001). Significant differences were found between RNs with and without research experience in attitude (t = -2.40, P < 0.05) and knowledge (t = -2.43, P < 0.05). Conclusions. Respondents generally viewed EBP positively, and their attitudes towards EBP tended to be more positive than their knowledge and practice of EBP. The data also showed that longer working experience, holding an administrative position, research experience, a lighter workload, and a better professional attitude might facilitate EBP.
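For readers unfamiliar with the reported statistics, the following sketch shows
how Kruskal-Wallis H, Spearman's ρ, and independent-samples t statistics of the
kind quoted above can be computed with scipy.stats; the synthetic variables and
group labels are hypothetical and do not reproduce the survey data.

```python
# Illustrative computation of the three test statistics reported above
# (Kruskal-Wallis H, Spearman's rho, independent t-test) on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 300
knowledge = rng.normal(4.0, 0.8, n)        # EBPQ knowledge subscale (assumed)
pressure = rng.integers(1, 6, n)           # self-rated working pressure, 1-5
attitude = rng.normal(4.5, 0.7, n)         # EBPQ attitude subscale (assumed)
experience_grp = rng.integers(0, 3, n)     # tenure bands, e.g. <5, 5-15, >15 yrs
has_research = rng.integers(0, 2, n).astype(bool)

# Kruskal-Wallis across experience bands (cf. "H = 6.64")
H, p_h = stats.kruskal(*(knowledge[experience_grp == g] for g in range(3)))

# Spearman correlation between working pressure and attitude (cf. "rho = 0.17")
rho, p_rho = stats.spearmanr(pressure, attitude)

# Independent t-test: no research experience vs research experience (cf. "t = -2.40")
t, p_t = stats.ttest_ind(attitude[~has_research], attitude[has_research])

print(f"H={H:.2f} (p={p_h:.3f}), rho={rho:.2f} (p={p_rho:.3f}), "
      f"t={t:.2f} (p={p_t:.3f})")
```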
Prompt Tuning for Generative Multimodal Pretrained Models
Prompt tuning has become a new paradigm for model tuning and it has
demonstrated success in natural language pretraining and even vision
pretraining. In this work, we explore the transfer of prompt tuning to
multimodal pretraining, with a focus on generative multimodal pretrained
models, instead of contrastive ones. Specifically, we implement prompt tuning
on the unified sequence-to-sequence pretrained model adaptive to both
understanding and generation tasks. Experimental results demonstrate that the
lightweight prompt tuning can achieve performance comparable to finetuning
and surpass other lightweight tuning methods. Besides, in comparison with
finetuned models, the prompt-tuned models demonstrate improved robustness
against adversarial attacks. We further find that experimental factors,
including the prompt length, prompt depth, and reparameterization, have great
impacts on model performance, and thus we empirically provide a
recommendation for the setups of prompt tuning. Despite the observed
advantages, we still find some limitations in prompt tuning, and we
correspondingly point out the directions for future studies. Codes are
available at \url{https://github.com/OFA-Sys/OFA}
Comment: Work in progress
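As a rough illustration of the tuning scheme described above, the sketch below
prepends a small set of learnable prompt embeddings to a frozen
sequence-to-sequence backbone so that only the prompt parameters are trained.
It is a generic PyTorch illustration under assumed module and parameter names,
not the OFA-Sys implementation.

```python
# Minimal sketch of prompt tuning for a sequence-to-sequence model: learnable
# prompt embeddings are prepended to the encoder input while the pretrained
# backbone stays frozen.
import torch
import torch.nn as nn

class PromptTunedSeq2Seq(nn.Module):
    def __init__(self, backbone, d_model, prompt_len=16):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():   # freeze pretrained weights
            p.requires_grad = False
        # Only these prompt vectors are trained (cf. "prompt length" above).
        self.prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)

    def forward(self, src_embeds, tgt_embeds):
        # Prepend the shared prompt to every encoder input in the batch.
        batch = src_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        enc_in = torch.cat([prompts, src_embeds], dim=1)
        return self.backbone(enc_in, tgt_embeds)

# Toy usage with a vanilla nn.Transformer standing in for the backbone.
d_model = 64
backbone = nn.Transformer(d_model=d_model, nhead=4,
                          num_encoder_layers=2, num_decoder_layers=2,
                          batch_first=True)
model = PromptTunedSeq2Seq(backbone, d_model)
src = torch.randn(2, 10, d_model)   # (batch, src_len, d_model)
tgt = torch.randn(2, 7, d_model)    # (batch, tgt_len, d_model)
out = model(src, tgt)               # -> (2, 7, 64)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(out.shape, f"trainable params: {trainable}")
```

Deeper variants insert prompts at every encoder layer ("prompt depth") or
generate them through a small MLP ("reparameterization"), the two other factors
the abstract identifies as influential.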
- ā¦