
    A systematic review of the psychometric properties of self-report research utilization measures used in healthcare

    Abstract

    Background: In healthcare, a gap exists between what is known from research and what is practiced. Understanding this gap depends upon our ability to robustly measure research utilization.

    Objectives: The objectives of this systematic review were to identify self-report measures of research utilization used in healthcare and to assess the psychometric properties (acceptability, reliability, and validity) of these measures.

    Methods: We conducted a systematic review of the literature reporting the use or development of self-report research utilization measures. Our search included multiple databases, ancestry searches, and a hand search. Acceptability was assessed by examining time to complete the measure and missing data rates. Our approach to reliability and validity assessment followed that outlined in the Standards for Educational and Psychological Testing.

    Results: Of 42,770 titles screened, 97 original studies (108 articles) were included in this review. The 97 studies reported on the use or development of 60 unique self-report research utilization measures. Seven of the measures were assessed in more than one study. Study samples consisted of healthcare providers (92 studies) and healthcare decision makers (5 studies). No studies reported data on acceptability of the measures. Reliability was reported in 32 (33%) of the studies, representing 13 of the 60 measures. Internal consistency (Cronbach's alpha) reliability was reported in 31 studies; values exceeded 0.70 in 29 studies. Test-retest reliability was reported in 3 studies, with Pearson's r coefficients > 0.80. No validity information was reported for 12 of the 60 measures. The remaining 48 measures were classified into a three-level validity hierarchy according to the number of validity sources reported in 50% or more of the studies using the measure. Level one measures (n = 6) reported evidence from any three of the four possible Standards validity sources (which, in the case of single-item measures, was all applicable validity sources). Level two measures (n = 16) had evidence from any two validity sources, and level three measures (n = 26) from only one validity source.

    Conclusions: This review reveals significant underdevelopment in the measurement of research utilization. Substantial methodological advances are required with respect to construct clarity, use of research utilization and related theory, use of measurement theory, and psychometric assessment. Also needed are improved reporting practices and the adoption of a more contemporary view of validity (i.e., the Standards) in future research utilization measurement studies.

    A systematic review of implementation frameworks of innovations in healthcare and resulting generic implementation framework

    © 2015 Moullin et al.

    Background: Implementation science and knowledge translation have developed across multiple disciplines with the common aim of bringing innovations to practice. Numerous implementation frameworks, models, and theories have been developed to target a diverse array of innovations. As such, it is plausible that not all frameworks include the full range of concepts now thought to be involved in implementation. Users face the decision of selecting a single implementation framework or combining multiple frameworks. To aid this decision, the aim of this review was to assess the comprehensiveness of existing frameworks.

    Methods: A systematic search was undertaken in PubMed to identify implementation frameworks of innovations in healthcare published from 2004 to May 2013. Additionally, titles and abstracts from the journal Implementation Science and references from identified papers were reviewed. For each included framework, we analysed the orientation, type, and presence of stages and domains, along with the degree of inclusion and depth of analysis of implementation factors, strategies, and evaluations.

    Results: Frameworks were assessed individually and grouped according to their targeted innovation. Frameworks for particular innovations had similar settings, end-users, and 'type' (descriptive, prescriptive, explanatory, or predictive). On the whole, frameworks were descriptive and explanatory more often than prescriptive and predictive. A small number of the reviewed frameworks covered one or more implementation concepts in detail; however, overall, the degree and depth of analysis of implementation concepts was limited. The core implementation concepts across the frameworks were collated to form a Generic Implementation Framework, which includes the process of implementation (often portrayed as a series of stages and/or steps), the innovation to be implemented, the context in which the implementation is to occur (divided into a range of domains), and influencing factors, strategies, and evaluations.

    Conclusions: The selection of implementation framework(s) should be based not solely on the healthcare innovation to be implemented but should also consider other aspects of the framework's orientation, e.g., the setting and end-user, as well as the degree of inclusion and depth of analysis of the implementation concepts. The resulting generic structure provides researchers, policy-makers, health administrators, and practitioners a base that can be used as guidance for their implementation efforts.