    Psychological disorder diagnosis is no cure for trait inferences bias

    According to the Diagnostic and Statistical Manual of Mental Disorders, 5th edition, maladaptive behavior stemming from a psychological disorder should not be attributed to personality. Attributing behavioral symptoms to personality may undermine treatment-seeking and therapy outcomes and increase the stigmatization of the mentally ill. Although people adjust dispositional inferences when given contextual alternative causes, we propose that beliefs in the stability and controllability of mental illness could lead to confounded representations of personality and psychological disorders. In six studies, we tested whether people adjust dispositional inferences given a psychological disorder as they do given a physical impairment. Participants made trait ratings from short behavioral descriptions and corresponding contextual accounts. When the putative cause of the behavior was a psychological disorder, people did not reduce the trait inference to the extent they did when the cause was a physical impairment, except when the psychological disorder was presented as controllable/unstable. This suggests a conflation of psychological disorders with personality.
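    A minimal sketch of the kind of condition comparison these studies describe: trait ratings when the behavior is attributed to a psychological disorder versus a physical impairment. The rating scale, condition means, and sample sizes below are invented for illustration and are not the studies' data.

```python
# Hypothetical discounting analysis: do trait ratings drop more when the
# contextual cause is a physical impairment than when it is a psychological
# disorder? All numbers below are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-7 trait ratings for the same behavior under two contextual causes.
ratings_psych = rng.normal(loc=4.8, scale=1.0, size=60)     # psychological disorder
ratings_physical = rng.normal(loc=3.2, scale=1.0, size=60)  # physical impairment

t, p = stats.ttest_ind(ratings_psych, ratings_physical)
print(f"mean (psych) = {ratings_psych.mean():.2f}, "
      f"mean (physical) = {ratings_physical.mean():.2f}, t = {t:.2f}, p = {p:.3g}")
# Persistently high ratings in the psychological-disorder condition would
# indicate weaker discounting: the conflation pattern the abstract reports.
```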

    Instrumentation issues in implementation science

    Background: Like many new fields, implementation science has become vulnerable to instrumentation issues that potentially threaten the strength of the developing knowledge base. For instance, many implementation studies report findings based on instruments that do not have established psychometric properties. This article aims to review six pressing instrumentation issues, discuss their impact on the field, and provide practical recommendations.
    Discussion: This debate centers on the impact of the following instrumentation issues: use of frameworks, theories, and models; the role of psychometric properties; use of 'home-grown' and adapted instruments; choosing the most appropriate evaluation method and approach; practicality; and the need for decision-making tools. Practical recommendations include: use of consensus definitions for key implementation constructs; reporting standards (e.g., regarding psychometrics and instrument adaptation); guidance on when to use multiple forms of observation and mixed methods; and accessing instrument repositories and decision aid tools.
    Summary: This debate provides an overview of six key instrumentation issues and offers several courses of action to limit their impact on the field. With careful attention to these issues, implementation science can move forward at the rapid pace that community stakeholders rightly demand.
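    One of the psychometric properties at issue here can be made concrete with a short sketch: internal consistency (Cronbach's alpha) for a multi-item instrument. The item data are simulated placeholders, not drawn from any instrument discussed above.

```python
# Cronbach's alpha: a basic internal-consistency check for a k-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))                     # shared construct
items = latent + rng.normal(scale=0.8, size=(100, 5))  # five noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")          # typically ~0.7-0.9 here
```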

    Implementing measurement-based care (iMBC) for depression in community mental health: a dynamic cluster randomized trial study protocol

    BACKGROUND: Measurement-based care (MBC) is an evidence-based practice for depression that efficiently identifies treatment non-responders and those who might otherwise deteriorate [1]. However, MBC is underutilized in community mental health, with data suggesting that fewer than 20% of behavioral health providers use this practice to inform treatment. It remains unclear whether standardized or tailored approaches to implementation are needed to optimize MBC fidelity and penetration. Moreover, there is some suggestion that prospectively tailored interventions designed to fit a dynamic context may optimize public health impact, though no randomized trials have yet tested this notion [2]. This study will address three aims: (1) compare the effect of standardized versus tailored MBC implementation on clinician-level and client-level outcomes; (2) identify contextual mediators of MBC fidelity; and (3) explore the impact of MBC fidelity on client outcomes.
    METHODS/DESIGN: This study is a dynamic cluster randomized trial of standardized versus tailored MBC implementation in Centerstone, the largest provider of community-based mental health services in the USA. This prospective, mixed-methods implementation-effectiveness hybrid design allows for evaluation of the two conditions on both clinician-level (e.g., MBC fidelity) and client-level (depression symptom change) outcomes. Central to this investigation is the focus on identifying contextual factors (e.g., attitudes, resources, process) that mediate MBC fidelity and optimize client outcomes.
    DISCUSSION: This study will contribute generalizable and practical strategies for implementing systematic symptom monitoring to inform and enhance behavioral healthcare.
    TRIAL REGISTRATION: ClinicalTrials.gov NCT02266134.
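    The mechanics of MBC can be illustrated with a small sketch that flags probable non-response from repeated symptom scores. The PHQ-9 scale and a 50% score-reduction convention are common in depression care, but the specific thresholds below are assumptions for illustration, not the trial's protocol.

```python
# Flag probable non-responders from repeated PHQ-9 totals (0-27).
def flag_nonresponse(scores: list[int], min_sessions: int = 4) -> bool:
    if len(scores) < min_sessions:
        return False  # too early to judge
    if scores[0] == 0:
        return False  # no baseline symptoms to improve on
    improvement = (scores[0] - scores[-1]) / scores[0]
    return improvement < 0.5  # under a 50% reduction -> review the treatment plan

print(flag_nonresponse([18, 17, 16, 16]))  # True: minimal change, flag for review
print(flag_nonresponse([18, 12, 9, 7]))    # False: responding as hoped
```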

    Advancing implementation science through measure development and evaluation: a study protocol

    Background: Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) the unknown psychometric and pragmatic strength of existing measures. Aim 1: establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: develop reliable, valid, and pragmatic measures of three critical implementation outcomes: acceptability, appropriateness, and feasibility. Aim 3: identify Consolidated Framework for Implementation Research- and Implementation Outcome Framework-linked measures that demonstrate both psychometric and pragmatic strength.
    Methods/Design: For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and a literature review to populate pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data.
    Discussion: The study outputs of each aim are expected to have a positive impact: they will establish and guide a comprehensive measurement-focused research agenda for implementation science and provide empirically supported measures, tools, and methods for accomplishing this work.
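    The inter-rater reliability step under Aim 1 can be sketched as Cohen's kappa for two raters applying a categorical rating criterion to the same set of measures. The ratings below are invented placeholders.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from collections import Counter

def cohens_kappa(r1: list[str], r2: list[str]) -> float:
    n = len(r1)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    p_chance = sum(c1[k] * c2[k] for k in c1) / n**2  # agreement expected by chance
    return (p_obs - p_chance) / (1 - p_chance)

rater1 = ["high", "high", "low", "med", "low", "high", "med", "low"]
rater2 = ["high", "med", "low", "med", "low", "high", "med", "med"]
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # 0.64: substantial agreement
```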

    Measurement resources for dissemination and implementation research in health

    BACKGROUND: A 2-day consensus working meeting, hosted by the United States National Institutes of Health and the Veterans Administration, focused on issues related to measurement and reporting in dissemination and implementation (D&I) research. Meeting participants included 23 researchers, practitioners, and decision makers from the USA and Canada, who concluded that the field would greatly benefit from measurement resources to enhance the ease, harmonization, and rigor of D&I evaluation efforts. This paper describes the findings from an environmental scan and literature review of resources for D&I measures.
    FINDINGS: We identified a total of 17 resources, including four web-based repositories and 12 static reviews or tools that attempted to synthesize and evaluate existing measures for D&I research. Thirteen resources came from the health discipline, and 11 were populated from database reviews. Ten focused on quantitative measures, and all were generated as a resource for researchers. Fourteen were organized according to an established D&I theory or framework, with the number of constructs and measures ranging from 1 to more than 450. Measure metadata were quite variable, with only six resources providing information on the psychometric properties of measures.
    CONCLUSIONS: Additional guidance on the development and use of measures is needed. A number of approaches, resources, and critical areas for future work are discussed. Researchers and stakeholders are encouraged to take advantage of the several funding mechanisms supporting this type of work.
    ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s13012-016-0401-y) contains supplementary material, which is available to authorized users.

    The Society for Implementation Research Collaboration Instrument Review Project: A methodology to promote rigorous evaluation

    Background: Identification of psychometrically strong instruments for the field of implementation science is a high priority, underscored at a recent National Institutes of Health working meeting (October 2013). Existing instrument reviews are limited in scope, methods, and findings. The Society for Implementation Research Collaboration Instrument Review Project addresses these limitations by applying a unique methodology to (a) conduct a systematic and comprehensive review of quantitative instruments assessing constructs delineated in two of the field's most widely used frameworks, (b) adopt a systematic search process (using standard search strings), and (c) engage an international team of experts to assess the full range of psychometric criteria (reliability, construct validity, and criterion validity). Although this work focuses on the implementation of psychosocial interventions in mental health and health-care settings, the methodology and results will likely be useful across a broad spectrum of settings. The effort has culminated in a centralized, online, open-access repository of instruments with graphical head-to-head comparisons of their psychometric properties. This article describes the methodology and preliminary outcomes.
    Methods: The seven stages of the review, synthesis, and evaluation methodology are (1) setting the scope of the review; (2) identifying frameworks to organize and complete the review; (3) generating a search protocol for the literature review of constructs; (4) a literature review of specific instruments; (5) development of evidence-based assessment rating criteria; (6) data extraction and rating of instrument quality by a task force of implementation experts to inform knowledge synthesis; and (7) creation of a website repository.
    Results: To date, this multi-faceted, collaborative search and synthesis methodology has identified over 420 instruments related to 34 constructs (48 in total, including subconstructs) relevant to implementation science. Although numerous constructs have more than 20 available instruments, which implies saturation, preliminary results suggest that few instruments stem from gold-standard development procedures. We anticipate identifying few high-quality, psychometrically sound instruments once our evidence-based assessment rating criteria have been applied.
    Conclusions: The results of this methodology may enhance the rigor of implementation science evaluations by systematically facilitating access to psychometrically validated instruments and by identifying where further instrument development is needed.
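    The head-to-head comparisons in the repository amount to per-criterion quality profiles that can be compared criterion by criterion. A hypothetical sketch of that data shape follows; the criterion names are plausible psychometric properties drawn from these abstracts, and the 0-4 ratings are invented.

```python
# Hypothetical quality profiles: one rating per psychometric criterion.
criteria = ["internal_consistency", "convergent_validity", "known_groups_validity",
            "predictive_validity", "responsiveness", "usability"]

profiles = {
    "Instrument A": [3, 2, 1, 0, 1, 2],
    "Instrument B": [4, 3, 3, 1, 2, 2],
}

for name, scores in profiles.items():
    detail = ", ".join(f"{c}={s}" for c, s in zip(criteria, scores))
    print(f"{name}: total={sum(scores):>2} ({detail})")
# A head-to-head view compares the two lists criterion by criterion rather
# than relying on the total alone.
```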

    Intentional research design in implementation science: implications for the use of nomothetic and idiographic assessment

    The advancement of implementation science depends on identifying assessment strategies that can address implementation and clinical outcome variables in ways that are valid, relevant to stakeholders, and scalable. This paper presents a measurement agenda for implementation science that integrates the previously disparate assessment traditions of idiographic and nomothetic approaches. Although both approaches are used in implementation science, a review of the literature on this topic suggests that their selection can be indiscriminate, driven by convenience, and not explicitly tied to research study design. As a result, they are not typically combined deliberately or effectively. Thoughtful integration may simultaneously enhance both the rigor and the relevance of assessments across multiple levels within health service systems. Background on nomothetic and idiographic assessment is provided, along with their potential to support research in implementation science. Drawing from an existing framework, seven structures (with various sequencing and weighting options) and five functions (Convergence, Complementarity, Expansion, Development, Sampling) for integrating conceptually distinct research methods are articulated as they apply to the deliberate, design-driven integration of nomothetic and idiographic assessment approaches. Specific examples and practical guidance are provided to inform research consistent with this framework. The selection and integration of idiographic and nomothetic assessments for implementation science research designs can be improved, and this paper argues for the deliberate application of a clear framework to improve the rigor and relevance of contemporary assessment strategies.
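    The nomothetic/idiographic distinction can be made concrete in scoring terms: the same symptom score standardized against population norms (nomothetic) versus against the client's own baseline (idiographic). The norm values and score history below are illustrative assumptions.

```python
import statistics

POP_MEAN, POP_SD = 10.0, 5.0  # hypothetical population norms for the measure

def nomothetic_z(score: float) -> float:
    """Standing relative to the population."""
    return (score - POP_MEAN) / POP_SD

def idiographic_z(score: float, own_history: list[float]) -> float:
    """Standing relative to the client's own baseline."""
    return (score - statistics.mean(own_history)) / statistics.stdev(own_history)

history = [22.0, 20.0, 21.0, 19.0]  # one client's earlier scores
print(f"nomothetic z = {nomothetic_z(14.0):+.2f}")             # +0.80: still above the norm
print(f"idiographic z = {idiographic_z(14.0, history):+.2f}")  # -5.03: large personal improvement
```

    The same observation can thus carry opposite signals depending on the assessment tradition, which is the integration problem the paper addresses.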

    Put the Vanc Down, Flip It and Reverse It: Comparison of Vancomycin and Daptomycin Health Care Utilization and Cost in Outpatient Parenteral Antimicrobial Therapy

    Vancomycin and daptomycin are frequently used in outpatient parenteral antimicrobial therapy (OPAT). We analyze health care utilization and cost to the health care system for vancomycin vs daptomycin in the outpatient setting and find that vancomycin results in significantly higher health care utilization and similar cost per course compared with daptomycin in OPAT.

    Outcomes for implementation science: an enhanced systematic review of instruments using evidence-based rating criteria

    Background: High-quality measurement is critical to advancing knowledge in any field. New fields, such as implementation science, are often beset with measurement gaps and poor-quality instruments, a weakness that can be more easily addressed in light of systematic review findings. Although several reviews of quantitative instruments used in implementation science have been published, no studies have focused on instruments that measure implementation outcomes. Proctor and colleagues established a core set of implementation outcomes: acceptability, adoption, appropriateness, cost, feasibility, fidelity, penetration, and sustainability (Adm Policy Ment Health Ment Health Serv Res 36:24–34, 2009). The Society for Implementation Research Collaboration (SIRC) Instrument Review Project employed an enhanced systematic review methodology (Implement Sci 2: 2015) to identify quantitative instruments of implementation outcomes relevant to mental or behavioral health settings.
    Methods: Full details of the enhanced systematic review methodology are available (Implement Sci 2: 2015). To increase the feasibility of the review, and consistent with the scope of SIRC, only instruments applicable to mental or behavioral health were included. The review, synthesis, and evaluation included (1) a search protocol for the literature review of constructs; (2) a literature review of instruments using Web of Science and PsycINFO; and (3) data extraction and instrument quality ratings to inform knowledge synthesis. Our evidence-based assessment rating criteria quantified fundamental psychometric properties as well as a crude measure of usability. Two independent raters applied the criteria to each instrument to generate a quality profile.
    Results: We identified 104 instruments across eight constructs: nearly half (n = 50) assessed acceptability, 19 assessed adoption, and every other implementation outcome had fewer than 10 instruments. Only one instrument demonstrated at least minimal evidence of psychometric strength on all six evidence-based assessment criteria. The majority of instruments had no information regarding responsiveness or predictive validity.
    Conclusions: Implementation outcomes instrumentation is underdeveloped with respect to both the sheer number of available instruments and the psychometric quality of existing instruments. Until psychometric strength is established, the field will struggle to identify which implementation strategies work best, for which organizations, and under what conditions.
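    The headline result (only one instrument showed at least minimal evidence on all six criteria) implies a simple filter over the quality profiles. A sketch under assumptions: the criterion names and the rating anchors (0 = none, 1 = minimal evidence) are placeholders, not the review's actual rubric.

```python
# Does an instrument show at least minimal evidence on every criterion?
MIN_EVIDENCE = 1  # assumed anchor: 0 = none, 1 = minimal, higher = stronger

def meets_all_criteria(ratings: dict[str, int]) -> bool:
    return all(score >= MIN_EVIDENCE for score in ratings.values())

instrument = {
    "internal_consistency": 3, "convergent_validity": 2, "known_groups": 1,
    "structural_validity": 2, "predictive_validity": 0, "responsiveness": 1,
}
print(meets_all_criteria(instrument))  # False: no predictive-validity evidence
```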