45 research outputs found

    What Works in Implementing Patient Decision Aids in Routine Clinical Settings? A Rapid Realist Review and Update from the International Patient Decision Aid Standards Collaboration

    Background: Decades of effectiveness research have established the benefits of using patient decision aids (PtDAs), yet broad clinical implementation has not yet occurred. Evidence to date is mainly derived from highly controlled settings; if clinicians and health care organizations are expected to embed PtDAs as a means of supporting person-centered care, we need to better understand what this might look like outside of a research setting.
    Aim: This review was conducted in response to the IPDAS Collaboration's evidence update process, which informs its published standards for PtDA quality and effectiveness. The aim was to develop context-specific program theories that explain why and how PtDAs are successfully implemented in routine healthcare settings.
    Methods: Rapid realist review methodology was used to identify articles that could contribute to theory development. We engaged key experts and stakeholders to identify key sources; this was supplemented by electronic database searches (Medline and CINAHL), gray literature, and forward/backward search strategies. Initial theories were refined into realist context-mechanism-outcome configurations, and these were mapped to the Consolidated Framework for Implementation Research.
    Results: We developed 8 refined theories, using data from 23 implementation studies (29 articles), to describe the mechanisms by which PtDAs become successfully implemented in routine clinical settings. Recommended implementation strategies derived from the program theory include 1) co-production of PtDA content and processes (or local adaptation), 2) training the entire team, 3) preparing and prompting patients to engage, 4) senior-level buy-in, and 5) measuring to improve.
    Conclusions: We recommend key strategies that organizations and individuals intending to embed PtDAs in routine care can use as a practical guide. Further work is needed to understand the importance of context in the success of different implementation studies.
    Additional co-authors: Karina Dahl Steffensen, Christine Stirling, Trudy van der Weijden, and the International Patient Decision Aids (IPDAS) Collaboration.

    Engaging communication experts in a Delphi process to identify patient behaviors that could enhance communication in medical encounters

    Background: The communication literature currently focuses primarily on improving physicians' verbal and non-verbal behaviors during the medical interview. The Four Habits Model is a teaching and research framework for physician communication that is based on evidence linking specific communication behaviors with processes and outcomes of care. The Model conceptualizes basic communication tasks as "Habits" and describes the sequence of physician communication behaviors during the clinical encounter associated with improved outcomes. Using the Four Habits Model as a starting point, we asked communication experts to identify the verbal communication behaviors of patients that are important in outpatient encounters.
    Methods: We conducted a 4-round Delphi process with 17 international experts in communication research, medical education, and health care delivery. All rounds were conducted via the internet. In round 1, experts reviewed a list of proposed patient verbal communication behaviors within the Four Habits Model framework; these behaviors were identified based on a review of the communication literature. The experts could approve the proposed list, add new behaviors, or modify behaviors. In rounds 2, 3, and 4, they rated each behavior for its fit (agree or disagree) with a particular habit. After each round, we calculated the percent agreement for each behavior and provided these data in the next round. Behaviors receiving more than 70% of experts' votes (either agree or disagree) were considered to have reached consensus.
    Results: Of the 14 originally proposed patient verbal communication behaviors, the experts modified all but 2, and they added 20 behaviors to the Model in round 1. In round 2, they were presented with 59 behaviors and 14 options to remove specific behaviors for rating. After 3 rounds of rating, the experts retained 22 behaviors. This set included behaviors such as asking questions, expressing preferences, and summarizing information.
    Conclusion: The process identified communication tasks and verbal communication behaviors for patients similar to those outlined for physicians in the Four Habits Model. This represents an important step in building a single model that can be applied to teaching patients and physicians the communication skills associated with improved satisfaction and positive outcomes of care.
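
    The consensus rule described above (a behavior is retained or removed once more than 70% of experts vote the same way) can be illustrated with a minimal sketch. The behavior names, vote counts, and function names below are hypothetical, not taken from the study; only the >70% threshold and the agree/disagree voting come from the abstract.

        # Illustrative sketch, not the study's analysis code: classify Delphi-round
        # consensus for each behavior from expert agree/disagree votes, using the
        # >70% threshold described above. Behaviors and votes are hypothetical.

        from collections import Counter

        CONSENSUS_THRESHOLD = 0.70  # more than 70% of experts voting the same way

        def consensus_status(votes):
            """Return ('agree' | 'disagree' | 'no consensus', share of top vote)."""
            top_vote, top_count = Counter(votes).most_common(1)[0]
            share = top_count / len(votes)
            return (top_vote if share > CONSENSUS_THRESHOLD else "no consensus", share)

        # Hypothetical votes from 17 experts for two behaviors
        round_votes = {
            "asks questions": ["agree"] * 15 + ["disagree"] * 2,
            "summarizes information": ["agree"] * 10 + ["disagree"] * 7,
        }

        for behavior, votes in round_votes.items():
            status, share = consensus_status(votes)
            print(f"{behavior}: {status} ({share:.0%} agreement)")

    Run over successive rounds, a rule like this yields the kind of retained-behavior set the abstract reports (22 behaviors after 3 rounds of rating).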

    Patients' and Observers' Perceptions of Involvement Differ. Validation Study on Inter-Relating Measures for Shared Decision Making

    Objective: Patient involvement in medical decisions, as conceived in shared decision making (SDM), is essential to evidence-based medicine. However, it is not conclusively evident how best to define, realize, and evaluate involvement so that patients can make informed choices. We aimed to investigate the ability of four measures to indicate patient involvement. While the use and reporting of these instruments might imply wide overlap in the constructs they address, this assumption seems questionable given the diversity of the perspectives from which the assessments are administered.
    Methods: The study investigated a nested cohort (N = 79) of a randomized trial evaluating a patient decision aid on immunotherapy for multiple sclerosis. Convergent validities were calculated between observer ratings of videotaped physician-patient consultations (OPTION) and patients' perceptions of the communication (Shared Decision Making Questionnaire, Control Preference Scale, and Decisional Conflict Scale).
    Results: OPTION reliability was high to excellent. Communication performance was low according to OPTION and high according to the three patient-administered measures. No correlations were found between observer and patient judgements, either for means or for single items. Patient-report measures showed some moderate correlations.
    Conclusion: Existing SDM measures do not refer to a single construct. A gold standard is missing to decide whether any of these measures has the potential to indicate patient involvement.
    Practice implications: The pronounced heterogeneity of the underpinning constructs makes the existing evidence on the efficacy of SDM difficult to interpret. Consideration of communication theory and basic definitions of SDM would recommend an inter-subjective focus of measurement.
    Trial registration: Controlled-Trials.com ISRCTN25267500
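
    Convergent validity here amounts to correlating observer-rated involvement with patient-reported involvement across the same consultations. The sketch below illustrates that calculation only; the scores are randomly generated, the choice of Pearson correlation is an assumption for illustration (the study may have used a different statistic), and none of the variable names come from the paper.

        # Illustrative sketch, not the study's analysis code: convergent validity as
        # the correlation between observer-rated involvement (OPTION) and a
        # patient-reported measure across the same consultations. All scores below
        # are made up; the abstract itself reports no observer-patient correlation.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 79  # nested cohort size reported above

        option_scores = rng.uniform(10, 40, n)   # hypothetical observer ratings (low performance)
        patient_scores = rng.uniform(70, 100, n) # hypothetical patient ratings (high performance)

        # Pearson correlation between the two perspectives (assumed statistic)
        r = np.corrcoef(option_scores, patient_scores)[0, 1]
        print(f"Convergent validity (Pearson r): {r:.2f}")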

    Assessing the Quality of Decision Support Technologies Using the International Patient Decision Aid Standards instrument (IPDASi)

    Objectives: To describe the development, validation, and inter-rater reliability of an instrument to measure the quality of patient decision support technologies (decision aids).
    Design: Scale development study, involving construct, item, and scale development, validation, and reliability testing.
    Setting: There has been increasing use of decision support technologies (adjuncts to the discussions clinicians have with patients about difficult decisions). A global interest in developing these interventions exists among both for-profit and not-for-profit organisations. It is therefore essential to have internationally accepted standards to assess the quality of their development, process, content, potential bias, and method of field testing and evaluation.
    Participants: Twenty-five researcher-members of the International Patient Decision Aid Standards Collaboration worked together to develop the instrument (IPDASi). In the fourth stage (reliability study), eight raters assessed thirty randomly selected decision support technologies.
    Results: IPDASi measures quality in 10 dimensions, using 47 items, and provides an overall quality score (scaled from 0 to 100) for each intervention. Overall IPDASi scores ranged from 33 to 82 across the decision support technologies sampled (n = 30), enabling discrimination. The inter-rater intraclass correlation for the overall quality score was 0.80. Correlations of dimension scores with the overall score were all positive (0.31 to 0.68). Cronbach's alpha values for the 8 raters ranged from 0.72 to 0.93. Cronbach's alphas based on the dimension means ranged from 0.50 to 0.81, indicating that the dimensions, although well correlated, measure different aspects of decision support technology quality. A short version (19 items) was also developed; it had mean scores very similar to the full IPDASi, and the correlation between the short and overall scores was high (0.87; CI 0.79 to 0.92).
    Conclusions: This work demonstrates that IPDASi can assess the quality of decision support technologies. The existing IPDASi provides an assessment of the quality of a DST's components and will be used as a tool to provide formative advice to DST developers and summative assessments for those who want to compare their tools against an existing benchmark.
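
    To make the reported statistics concrete, the sketch below rescales a set of item ratings to a 0-100 overall score and computes Cronbach's alpha over a raters-by-dimensions matrix. It is not the IPDASi scoring algorithm: the 1-4 rating scale, the rescaling formula, and all data are assumptions for illustration; only the 0-100 overall score, the 8 raters, and the 10 dimensions are taken from the abstract.

        # Illustrative sketch, not the IPDASi algorithm: rescale mean item ratings
        # onto a 0-100 quality score and compute Cronbach's alpha across dimension
        # scores. Rating scale (assumed 1-4) and all data are hypothetical.

        import numpy as np

        def rescale_0_100(scores, lo, hi):
            """Map the mean rating from the [lo, hi] scale onto 0-100."""
            return 100 * (np.mean(scores) - lo) / (hi - lo)

        def cronbach_alpha(data):
            """data: cases (here raters) x items (here dimensions) matrix."""
            data = np.asarray(data, dtype=float)
            k = data.shape[1]
            item_vars = data.var(axis=0, ddof=1)
            total_var = data.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars.sum() / total_var)

        rng = np.random.default_rng(1)
        # 8 raters x 10 dimension ratings on a 1-4 scale (random, so alpha will be
        # near zero here; real, correlated ratings would score higher)
        ratings = rng.integers(1, 5, size=(8, 10))

        print(f"Overall quality score: {rescale_0_100(ratings, 1, 4):.1f} / 100")
        print(f"Cronbach's alpha across dimensions: {cronbach_alpha(ratings):.2f}")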

    Informed consent and risk communication in France
