The practical effect of the hospital-college cooperation model in training junior specialized nurses
Objective: To evaluate the practical effect of the hospital-college cooperation model in training junior specialized nurses. Methods: A total of 113 nurses from 7 hospitals registered in 2011 were trained by the conventional method, and 119 nurses registered in 2012 were trained through the hospital-college cooperation model. The practical effect of the cooperation model was analyzed through examination results, teachers' assessments, and a satisfaction survey. Results: The average examination scores and teacher evaluations of the 2012 cohort were significantly higher than those of the 2011 cohort, and the students' satisfaction with the cooperative training model was also greatly improved. Conclusion: The hospital-college cooperation model helps junior specialized nurses absorb theoretical knowledge, improves their clinical operative skills, and is well recognized by the nurses.
AcTune: Uncertainty-aware Active Self-Training for Semi-Supervised Active Learning with Pretrained Language Models
While pre-trained language model (PLM) fine-tuning has achieved strong
performance in many NLP tasks, the fine-tuning stage can still demand
substantial labeled data. Recent works have resorted to active fine-tuning to
improve the label efficiency of PLM fine-tuning, but none of them investigates
the potential of unlabeled data. We propose AcTune, a new framework that
leverages unlabeled
data to improve the label efficiency of active PLM fine-tuning. AcTune switches
between data annotation and model self-training based on uncertainty: it
selects high-uncertainty unlabeled samples for active annotation and
low-uncertainty ones for model self-training. Under this framework, we design
(1) a region-aware sampling strategy that reduces redundancy when actively
querying for annotations and (2) a momentum-based memory bank that dynamically
aggregates the model's pseudo labels to suppress label noise in self-training.
Experiments on 6 text classification datasets show that AcTune outperforms the
strongest active learning and self-training baselines and improves the label
efficiency of PLM fine-tuning by 56.2% on average. Our implementation will be
available at https://github.com/yueyu1030/actune.
Comment: NAACL 2022 Main Conference (Code: https://github.com/yueyu1030/actune)
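The momentum-based memory bank can be viewed as an exponential moving average over each unlabeled sample's predicted class distribution across self-training rounds. Below is a minimal PyTorch sketch of that idea; the class name, soft-label layout, and momentum value are illustrative assumptions, not the released AcTune code.

```python
import torch

class MomentumPseudoLabelBank:
    """Exponential moving average of per-sample pseudo-label distributions,
    which smooths out prediction noise across self-training rounds."""

    def __init__(self, n_samples: int, n_classes: int, momentum: float = 0.9):
        self.momentum = momentum
        self.bank = torch.zeros(n_samples, n_classes)

    def update(self, indices: torch.Tensor, probs: torch.Tensor) -> None:
        # probs: current-round softmax outputs for the given sample indices
        self.bank[indices] = (self.momentum * self.bank[indices]
                              + (1.0 - self.momentum) * probs)

    def pseudo_labels(self, indices: torch.Tensor) -> torch.Tensor:
        # Hard pseudo-labels read off the aggregated distributions
        return self.bank[indices].argmax(dim=-1)
```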
Can job turnover improve technical efficiency? A study of state-owned enterprises in Shanghai
This paper studies the relationship between job turnover and the technical efficiency of state-owned enterprises (SOEs) in Shanghai's manufacturing sector during the period 1989-1992. Data Envelopment Analysis (DEA) is used to compute a measure of technical efficiency for each enterprise. Our findings indicate that, for non-expanding SOEs, the relationship between job turnover (i.e., downsizing) and technical efficiency is U-shaped: efficiency declines at low levels of turnover, but after a certain level it starts to increase. In addition, we show that small non-expanding SOEs (i.e., those with fewer than 100 employees) start to increase their efficiency at a lower level of turnover than medium and large SOEs. We also find that, for medium and large expanding SOEs, the turnover-efficiency relationship is positive and linear.
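For readers unfamiliar with DEA, each enterprise's technical efficiency can be scored by a small linear program. A minimal Python sketch of the input-oriented CCR model follows; the constant-returns-to-scale, input-oriented specification is an assumption made for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X: np.ndarray, Y: np.ndarray, j0: int) -> float:
    """Input-oriented CCR efficiency of unit j0.
    X: inputs, shape (m, n); Y: outputs, shape (s, n); n = number of units."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_i,j0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Output constraints: sum_j lambda_j * y_rj >= y_r,j0 (flipped to <= form)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return float(res.x[0])  # theta = 1 means j0 lies on the efficient frontier
```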
Event-Independent Network for Polyphonic Sound Event Localization and Detection
Polyphonic sound event localization and detection involves not only detecting
what sound events are happening but also localizing the corresponding sound
sources. This series of tasks was first introduced in DCASE 2019 Task 3. In
2020, the sound event localization and detection task introduced additional
challenges involving moving sound sources and overlapping-event cases, which
include two events of the same type with two different direction-of-arrival
(DoA) angles. In this
paper, a novel event-independent network for polyphonic sound event
localization and detection is proposed. Unlike the two-stage method we proposed
in DCASE 2019 Task 3, this new network is fully end-to-end. Inputs to the
network are first-order Ambisonics (FOA) time-domain signals, which are then
fed into a 1-D convolutional layer to extract acoustic features. The network is
then split into two parallel branches. The first branch is for sound event
detection (SED), and the second branch is for DoA estimation. The network
produces three types of predictions: SED predictions, DoA predictions, and
event activity detection (EAD) predictions, which are used to combine the SED
and DoA features for onset and offset estimation. All of these predictions have
the format of two tracks indicating that there are at most two overlapping
events. Within each track, there could be at most one event happening. This
architecture introduces a problem of track permutation. To address this
problem, a frame-level permutation invariant training method is used.
Experimental results show that the proposed method can detect polyphonic sound
events and their corresponding DoAs. Its performance on the Task 3 dataset is
greatly improved compared with that of the baseline method.
Comment: conference
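To make the track-permutation problem concrete, here is a minimal PyTorch sketch of a frame-level permutation invariant loss for the two-track SED output; the tensor layout and binary cross-entropy choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def frame_level_pit_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """pred, target: (batch, frames, tracks=2, classes) SED logits and labels.
    At every frame, keep whichever track assignment gives the lower loss."""
    bce = lambda p, t: F.binary_cross_entropy_with_logits(p, t, reduction="none")
    # Identity permutation: track 0 -> track 0, track 1 -> track 1
    loss_id = (bce(pred[:, :, 0], target[:, :, 0])
               + bce(pred[:, :, 1], target[:, :, 1])).mean(-1)
    # Swapped permutation: track 0 -> track 1, track 1 -> track 0
    loss_sw = (bce(pred[:, :, 0], target[:, :, 1])
               + bce(pred[:, :, 1], target[:, :, 0])).mean(-1)
    # Frame-level PIT: pick the best permutation independently per frame
    return torch.minimum(loss_id, loss_sw).mean()
```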
MUBen: Benchmarking the Uncertainty of Pre-Trained Models for Molecular Property Prediction
Large Transformer models pre-trained on massive unlabeled molecular data have
shown great success in predicting molecular properties. However, these models
can be prone to overfitting during fine-tuning, resulting in over-confident
predictions on test data that fall outside of the training distribution. To
address this issue, uncertainty quantification (UQ) methods can be used to
improve the calibration of the models' predictions. Although many UQ approaches
exist, not all of them lead to improved performance. While some studies have
used UQ to improve molecular pre-trained models, the process of selecting
suitable backbone and UQ methods for reliable molecular uncertainty estimation
remains underexplored. To address this gap, we present MUBen, which evaluates
different combinations of backbone and UQ models to quantify their performance
for both property prediction and uncertainty estimation. By fine-tuning various
backbone molecular representation models using different molecular descriptors
as inputs with UQ methods from different categories, we critically assess the
influence of architectural decisions and training strategies. Our study offers
insights for selecting UQ and backbone models, which can facilitate research on
uncertainty-critical applications in fields such as materials science and drug
discovery.
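As one concrete instance of a UQ method from the categories such a benchmark covers, here is a generic Monte Carlo Dropout sketch in PyTorch; the function and its interface are illustrative assumptions, not part of MUBen's API.

```python
import torch

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Run several stochastic forward passes with dropout kept active;
    the spread across passes serves as a simple uncertainty estimate."""
    model.train()  # keeps dropout on; assumes no batch-norm statistics update here
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # predictive mean and uncertainty
```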