13 research outputs found

    Can leadership quality buffer the association between emotionally demanding work and risk of long-term sickness absence?

    We examined whether the association between emotionally demanding work and risk of register-based long-term sickness absence (LTSA, ≥6 weeks) was buffered by high leadership quality among 25 416 Danish employees during a 52-week follow-up. Emotional demands were measured at the job-group level, whereas leadership quality was measured by workers rating their closest manager. Emotionally demanding work was associated with a higher risk of LTSA regardless of whether leadership quality was high or low, with neither multiplicative nor additive interaction. We conclude that we found no evidence that high leadership quality buffers the effect of emotionally demanding work on the risk of LTSA.
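    As a rough illustration of the interaction checks the abstract refers to, the sketch below computes the ratio of hazard ratios (multiplicative scale) and the relative excess risk due to interaction, RERI (additive scale), from joint-exposure hazard ratios. The hazard ratio values and function names are hypothetical placeholders, not results or code from the study.

```python
# Minimal sketch, not the study's analysis code: checking multiplicative and
# additive interaction from joint-exposure hazard ratios.
# All hazard ratio values below are hypothetical placeholders.

def multiplicative_interaction(hr_both, hr_exposure_a, hr_exposure_b):
    """Ratio of hazard ratios; 1.0 indicates no multiplicative interaction."""
    return hr_both / (hr_exposure_a * hr_exposure_b)

def reri(hr_both, hr_exposure_a, hr_exposure_b):
    """Relative excess risk due to interaction; 0.0 indicates no additive interaction."""
    return hr_both - hr_exposure_a - hr_exposure_b + 1.0

# Hypothetical hazard ratios relative to the doubly unexposed reference group
hr_demands_only = 1.4      # high emotional demands, high leadership quality
hr_low_leader_only = 1.1   # low emotional demands, low leadership quality
hr_both = 1.5              # high emotional demands, low leadership quality

print(multiplicative_interaction(hr_both, hr_demands_only, hr_low_leader_only))  # ~0.97
print(reri(hr_both, hr_demands_only, hr_low_leader_only))                        # 0.0
```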

    Strengthening research integrity: which topic areas should organisations focus on?

    The widespread problems with scientific fraud, questionable research practices, and the reliability of scientific results have led to an increased focus on research integrity (RI). International organisations and networks have been established, declarations have been issued, and codes of conduct have been formulated. The abstract principles of these documents are now also being translated into concrete topic areas that Research Performing Organisations (RPOs) and Research Funding Organisations (RFOs) should focus on. However, so far we know very little about disciplinary differences in the need for RI support from RPOs and RFOs. This paper attempts to fill that knowledge gap. It reports on a comprehensive focus group study comprising 30 focus group interviews carried out in eight countries across Europe, addressing the following research question: “Which RI topics would researchers and stakeholders from the four main areas of research (humanities, social science, natural science incl. technical science, and medical science incl. biomedicine) prioritise for RPOs and RFOs?” The paper presents the results of these interviews, gives an overview of the priorities of the four main areas of research, and ends with six policy recommendations and a reflection on how the results of the study can be used in RPOs and RFOs.

    Feasibility, quality and validity of narrative multisource feedback in postgraduate training: a mixed-method study

    OBJECTIVES: To examine a narrative multisource feedback (MSF) instrument concerning feasibility, quality of narrative comments, perceptions of users (face validity), consequential validity, discriminating capacity and number of assessors needed. DESIGN: Qualitative text analysis supplemented by quantitative descriptive analysis. SETTING: Internal Medicine Departments in Zealand, Denmark. PARTICIPANTS: 48 postgraduate trainees in internal medicine specialties, 1 clinical supervisor for each trainee and 376 feedback givers (respondents). INTERVENTION: This study examines the use of an electronic, purely narrative MSF instrument. After the MSF process, the trainee and the supervisor answered a postquestionnaire concerning their perception of the process. The authors coded the comments in the MSF reports for valence (positive or negative), specificity, relation to behaviour and whether the comment suggested a strategy for improvement. Four of the authors independently classified the MSF reports as either ‘no reasons for concern’ or ‘possibly some concern’, thereby examining discriminating capacity. Through iterative readings, the authors furthermore tried to identify how many respondents were needed to obtain a reliable impression of a trainee. RESULTS: Out of all comments coded for valence (n=1935), 89% were positive and 11% negative. Out of all coded comments (n=4684), 3.8% suggested a way to improve. 92% of trainees and supervisors preferred a narrative MSF to a numerical MSF, and 82% of the trainees identified aspects of their performance in need of development, but only 53% had made a specific plan for development. Kappa coefficients for inter-rater agreement between the four authors were 0.7–1.0. There was a significant association (p<0.001) between the number of negative comments and the qualitative judgement by the four authors. It was not possible to define a specific number of respondents needed. CONCLUSIONS: A purely narrative MSF adds educational value, and experienced supervisors can discriminate between trainees’ performances based on the MSF reports.
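    As a side note on the kappa statistics reported above, the sketch below shows a minimal computation of Cohen's kappa for two raters classifying MSF reports as 'no reasons for concern' versus 'possibly some concern'. The ratings and helper function are hypothetical illustrations, not data or code from the study.

```python
# Minimal sketch, not the study's code: Cohen's kappa for two raters
# classifying MSF reports into two categories.
# The ratings below are hypothetical placeholders.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

rater_1 = ["no concern", "no concern", "possible concern", "no concern"]
rater_2 = ["no concern", "possible concern", "possible concern", "no concern"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.5 with these placeholder ratings
```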

    Assessor burden, inter-rater agreement and user experience of the RoB-SPEO tool for assessing risk of bias in studies estimating prevalence of exposure to occupational risk factors: An analysis from the WHO/ILO Joint Estimates of the Work-related Burden of Disease and Injury

    Background: As part of the development of the World Health Organization (WHO)/International Labour Organization (ILO) Joint Estimates of the Work-related Burden of Disease and Injury, WHO and ILO carried out several systematic reviews to determine the prevalence of exposure to selected occupational risk factors. Risk of bias assessment for individual studies is a critical step of a systematic review. No tool existed for assessing the risk of bias in prevalence studies of exposure to occupational risk factors, so WHO and ILO developed and pilot tested the RoB-SPEO tool for this purpose. Here, we investigate the assessor burden, inter-rater agreement, and user experience of this new instrument, based on the abovementioned WHO/ILO systematic reviews. Methods: Twenty-seven individual experts applied RoB-SPEO to assess risk of bias. Four systematic reviews provided a total of 283 individual assessments, carried out for 137 studies. For each study, two or more assessors independently assessed risk of bias across the eight RoB-SPEO domains, selecting one of RoB-SPEO's six ratings (i.e., "low", "probably low", "probably high", "high", "unclear" or "cannot be determined"). Assessors were asked to report the time taken to complete each assessment (an indicator of assessor burden) and to describe their user experience. To gauge assessor burden, we calculated the median and interquartile range of the time taken per individual risk of bias assessment. To assess inter-rater reliability, we calculated a raw measure of inter-rater agreement (Pi) for each RoB-SPEO domain, ranging from Pi = 0.00 (no agreement) to Pi = 1.00 (perfect agreement). As subgroup analyses, Pi was also disaggregated by systematic review, assessor experience with RoB-SPEO (≤10 assessments versus >10 assessments), and assessment time (tertiles: ≤25 min versus 26-66 min versus ≥67 min). To describe user experience, we synthesised the assessors' comments and recommendations. Results: Assessors reported a median of 40 min to complete one assessment (interquartile range 21-120 min). Across the eight domains, raw inter-rater agreement ranged from 0.54 to 0.82. Agreement between domains varied by systematic review and by assessor experience with RoB-SPEO, and increased with increasing assessment time. A small number of users recommended further development of the instructions for selected RoB-SPEO domains, especially bias in selection of participants into the study (domain 1) and bias due to differences in numerator and denominator (domain 7). Discussion: Overall, our results indicated good agreement across the eight domains of the RoB-SPEO tool. The median assessment time was comparable to that of other risk of bias tools, indicating comparable assessor burden. However, there was considerable variation in the time taken to complete assessments, and additional time spent on assessments may improve inter-rater agreement. Further development of the RoB-SPEO tool could focus on refining the instructions for selected domains and on additional testing to assess agreement for different topic areas and with a wider range of assessors from different research backgrounds.
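    As a rough sketch of the raw agreement measure described in the Methods, the code below computes a per-domain proportion of agreeing assessor pairs. It assumes Pi is the fraction of assessor pairs giving the same rating across a domain's studies; the exact definition used in the WHO/ILO analysis may differ, and all ratings shown are hypothetical.

```python
# Minimal sketch, assuming Pi is the proportion of assessor pairs giving the
# same rating for a study within one RoB-SPEO domain; the definition used in
# the WHO/ILO analysis may differ. Ratings below are hypothetical placeholders.
from itertools import combinations

def raw_agreement(ratings_per_study):
    """ratings_per_study: one list of ratings per study (two or more assessors).
    Returns the fraction of agreeing assessor pairs, between 0.0 and 1.0."""
    agreeing = total = 0
    for ratings in ratings_per_study:
        for a, b in combinations(ratings, 2):
            total += 1
            agreeing += (a == b)
    return agreeing / total

domain_1 = [
    ["low", "low"],                      # two assessors, agreement
    ["probably low", "probably high"],   # two assessors, disagreement
    ["high", "high", "probably high"],   # three assessors, partial agreement
]
print(round(raw_agreement(domain_1), 2))  # 0.4 with these placeholder ratings
```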