
    Inter-rater reliability of post-arrest cerebral performance category (CPC) scores.

    PURPOSE: Cerebral Performance Category (CPC) scores are often used as an outcome measure for post-arrest neurologic function, collected worldwide to compare performance, evaluate therapies, and formulate recommendations. At most institutions, no formal training is offered in their determination, potentially leading to misclassification. MATERIALS AND METHODS: We identified 171 patients at 2 hospitals between 5/10/2005 and 8/31/2012 with two CPC scores at hospital discharge recorded independently: one in an in-house quality improvement database and one as part of a national registry. Scores were abstracted retrospectively from the same electronic medical record by two separate non-clinical researchers. These scores were compared to assess inter-rater reliability and stratified by whether the score was concordant or discordant between reviewers to determine factors related to discordance. RESULTS: Thirty-nine CPC scores (22.8%) were discordant (kappa: 0.66), indicating substantial agreement. When dichotomized into favorable (CPC 1-2) versus unfavorable (CPC 3-5) neurologic outcome, 20 (11.7%) scores were discordant (kappa: 0.70), also indicating substantial agreement. Patients discharged home (as opposed to a nursing or other care facility) and patients with a suspected cardiac etiology of arrest were statistically more likely to have concordant scores. In the quality improvement database, patients with discordant scores had a statistically higher median CPC score than those with concordant scores. The registry had a statistically lower median CPC score (CPC 1) than the quality improvement database (CPC 2); p < 0.01. CONCLUSIONS: CPC scores have substantial inter-rater reliability, which is reduced in patients who have worse outcomes, have a non-cardiac etiology of arrest, and are discharged to a location other than home.
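As a hypothetical illustration of the statistics reported above, the following sketch computes Cohen's kappa for two raters' CPC scores, both on the full 1-5 scale and after dichotomizing into favorable (CPC 1-2) versus unfavorable (CPC 3-5) outcomes. The scores are invented for illustration and are not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability that both raters independently
    # assign the same category, from the marginal frequencies.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented CPC scores (1-5) from two independent reviewers (NOT study data).
db_scores  = [1, 2, 1, 3, 5, 2, 1, 4, 5, 1]   # quality improvement database
reg_scores = [1, 2, 3, 3, 5, 1, 1, 4, 5, 2]   # national registry

kappa_full = cohens_kappa(db_scores, reg_scores)

# Dichotomize: CPC 1-2 = favorable, CPC 3-5 = unfavorable outcome.
db_fav  = [s <= 2 for s in db_scores]
reg_fav = [s <= 2 for s in reg_scores]
kappa_dich = cohens_kappa(db_fav, reg_fav)
```

With these invented scores the dichotomized kappa comes out higher than the five-category kappa, mirroring the study's pattern (0.70 vs 0.66): collapsing near-miss disagreements (e.g. CPC 1 vs CPC 2) into the same category removes them.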

    A review of mentorship measurement tools

    © 2016 Elsevier Ltd. Objectives: To review mentorship measurement tools in various fields to inform nursing educators on the selection, application, and development of mentoring instruments. Design: A literature review informed by the PRISMA 2009 guidelines. Data Sources: Six databases: CINAHL, Medline, PsycINFO, Academic Search Premier, ERIC, and Business Source Premier. Review Methods: Search terms and strategies used: mentor* N3 (behav* or skill? or role? or activit? or function* or relation*) and (scale or tool or instrument or questionnaire or inventory). The time limiter was set from January 1985 to June 2015. Extracted data were the content of instruments, samples, psychometrics, theoretical framework, and utility. An integrative review method was used. Results: Twenty-eight papers linked to 22 scales were located: seven from business and industry, 11 from education, three from health science, and one focused on research mentoring. Mentorship measurement was pioneered by business with a universally accepted theoretical framework (career function and psychosocial function), and the focus of scale development has shifted from the positive side of mentorship to negative mentoring experiences and challenges. Nursing educators have mainly used instruments from business to assess mentorship among nursing teachers. In education and nursing, measurement has taken on a more specialised focus: researchers in different contexts have developed scales to measure specific aspects of mentorship. Most tools show psychometric evidence of content homogeneity and construct validity but lack more comprehensive and advanced tests. Conclusion: Mentorship is widely used and conceptualised differently in different fields, and its measurement is less mature in nursing than in business. Measurement of mentorship is moving toward a more specialised and comprehensive process. Business and education have provided measurement tools that nursing educators use to assess mentorship among staff, but a robust instrument to measure nursing students' mentorship is needed.

    DART: A Data Analytics Readiness Assessment Tool For Use In Occupational Safety

    The safety industry lags in Big Data utilization due to various obstacles, which may include a lack of analytics readiness (e.g. disparate databases, missing data, low validity) or of competencies (e.g. personnel capable of cleaning data and running analyses). A safety-analytics maturity assessment can help organizations understand their current capabilities. Organizations can then develop more advanced analytics capabilities to ultimately predict safety incidents and identify preventative measures directed at specific risk variables. This study outlines the creation and use of an industry-specific readiness assessment tool. The proposed safety-analytics assessment evaluates (a) the quality of the data currently available; (b) organizational norms around data collection, scaling, and nomenclature; (c) the foundational infrastructure for technological capabilities and expertise in the collection, storage, and analysis of safety and health metrics; and (d) the measurement culture around employee willingness to participate in reporting, audits, inspections, and observations, and how managers use data to improve workplace safety. The Data Analytics Readiness Tool (DART) was piloted at two manufacturing firms to explore the tool's reliability and validity. While there were reliability concerns for inter-rater agreement across readiness factors for individual variables, DART users agreed on and accurately assessed organizational capabilities at each level of analytics.
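The abstract does not publish DART's rubric, so the following is only a hypothetical sketch of how the four readiness dimensions (a)-(d) might be rated and rolled up: two assessors rate each dimension on an assumed 1-5 maturity scale, overall readiness is gated by the weakest dimension, and a simple percent-agreement check compares the assessors. All names, levels, and rules here are assumptions, not DART's actual design.

```python
# Hypothetical readiness dimensions mirroring (a)-(d) in the abstract.
DIMENSIONS = ("data_quality", "organizational_norms",
              "infrastructure", "measurement_culture")

def overall_readiness(ratings):
    """Conservative roll-up: an organization is only as analytics-ready
    as its least mature dimension (assumed 1-5 maturity scale)."""
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"unrated dimensions: {missing}")
    return min(ratings[d] for d in DIMENSIONS)

def percent_agreement(rater_a, rater_b):
    """Simple inter-rater agreement across the four dimensions."""
    matches = sum(rater_a[d] == rater_b[d] for d in DIMENSIONS)
    return matches / len(DIMENSIONS)

# Invented ratings from two assessors at one firm.
a = {"data_quality": 2, "organizational_norms": 3,
     "infrastructure": 3, "measurement_culture": 4}
b = {"data_quality": 2, "organizational_norms": 3,
     "infrastructure": 2, "measurement_culture": 4}
```

The weakest-link aggregation reflects the abstract's framing that disparate databases or missing data can block analytics maturity regardless of strengths elsewhere; the raters here disagree on one of four dimensions yet reach the same overall level, echoing the pilot's finding of item-level disagreement alongside agreement on capability levels.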

    Measures of neck muscle strength and their measurement properties in adults with chronic neck pain: a systematic review.

    Measurement of neck muscle strength is common during the assessment of people with chronic neck pain (CNP). This systematic review evaluates the measurement properties (reliability, validity, and responsiveness) of neck muscle strength measures in people with CNP. It followed a PROSPERO-registered protocol (CRD42021233290). The electronic databases MEDLINE (OVID interface), CINAHL, SPORTDiscus (EBSCO interface), EMBASE (OVID interface), and Web of Science were searched from inception to 21 June 2021. Screening, data extraction, and quality assessment (COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist) were conducted independently by two reviewers. The overall strength of evidence was evaluated using the modified Grading of Recommendations Assessment, Development and Evaluation approach. From 794 records, nine articles were included in this review, covering six different neck strength outcome measures. All studies evaluated reliability, and one evaluated construct validity. The reliability of the neck strength measures ranged from good to excellent. However, the risk of bias was rated as doubtful/inadequate for all except one study, and the overall certainty of evidence was rated low/very low for all measures except for the measurement error of a handheld dynamometer. A multitude of measures are used to evaluate neck muscle strength in people with CNP, but their measurement properties have not been fully established. Further methodologically rigorous research is required to increase the overall quality of evidence. [Abstract copyright: © 2023 The Author(s).]
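Reliability of repeated strength measurements such as handheld dynamometry is typically summarized with an intraclass correlation coefficient. As a hedged illustration, here is a pure-Python ICC(2,1) (two-way random effects, absolute agreement, single measure), the standard formula for test-retest designs, applied to invented dynamometer readings; the data are hypothetical, not taken from any of the reviewed studies.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    data: list of subjects, each a list of k repeated measurements
    (one per rater or session).
    """
    n, k = len(data), len(data[0])
    grand = sum(x for row in data for x in row) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between sessions
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_err = ss_total - ss_rows - ss_cols                    # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical peak neck-flexor force (N) for 5 subjects, 2 sessions each.
readings = [[50, 52], [60, 59], [45, 46], [70, 68], [55, 57]]
icc = icc_2_1(readings)   # sessions closely agree, so the ICC is high
```

Common rules of thumb label ICC values above 0.75 as good and above 0.90 as excellent, which is the kind of "good to excellent" range the review reports.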

    Inter-rater agreement and reliability of the COSMIN (COnsensus-based Standards for the selection of health status Measurement Instruments) Checklist

    Background: The COSMIN checklist is a tool for evaluating the methodological quality of studies on measurement properties of health-related patient-reported outcomes. The aim of this study was to determine the inter-rater agreement and reliability of each item score of the COSMIN checklist (n = 114). Methods: 75 articles evaluating measurement properties were randomly selected from the bibliographic database compiled by the Patient-Reported Outcome Measurement Group, Oxford, UK. Raters were asked to assess the methodological quality of three articles using the COSMIN checklist. In a one-way design, percentage agreement and intraclass kappa coefficients or quadratic-weighted kappa coefficients were calculated for each item. Results: 88 raters participated. Of the 75 selected articles, 26 were rated by four to six participants, and 49 by two or three participants. Overall, percentage agreement was appropriate (68% of items were above 80% agreement), while the kappa coefficients for the COSMIN items were low (61% were below 0.40; 6% were above 0.75). Reasons for low inter-rater agreement were the need for subjective judgement and raters being accustomed to different standards, terminology, and definitions. Conclusions: The results indicate that raters often choose the same response option but that, at the item level, it is difficult to distinguish between articles. When using the COSMIN checklist in a systematic review, we recommend obtaining some training and experience, having the checklist completed by two independent raters, and reaching consensus on one final rating. Instructions for using the checklist have been improved.
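The quadratic-weighted kappa mentioned in the Methods penalizes disagreements by the squared distance between ordinal categories, so a one-step disagreement counts far less than an extreme one. A minimal sketch with invented ratings (not the study's data):

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, categories):
    """Cohen's kappa with quadratic disagreement weights for ordinal ratings.

    categories: the ordered list of possible ratings.
    """
    n, k = len(rater_a), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    obs = Counter(zip(rater_a, rater_b))          # observed joint counts
    ca, cb = Counter(rater_a), Counter(rater_b)   # marginal counts
    num = den = 0.0
    for a in categories:
        for b in categories:
            # Disagreement weight: 0 on the diagonal, 1 at the extremes.
            w = (idx[a] - idx[b]) ** 2 / (k - 1) ** 2
            num += w * obs[(a, b)] / n                 # observed disagreement
            den += w * (ca[a] / n) * (cb[b] / n)       # chance disagreement
    return 1 - num / den

# Invented ordinal item scores from two raters on a 3-point scale.
kappa_w = quadratic_weighted_kappa([1, 1, 2, 2, 3, 3],
                                   [1, 2, 2, 3, 3, 3],
                                   categories=[1, 2, 3])
```

Because all four disagreements in this toy example are only one category apart, the weighted kappa stays fairly high even though a third of the item scores differ, which is one reason weighted and unweighted kappas for the same checklist items can diverge.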