
    What adherence measures should be used in trials of home-based rehabilitation interventions? A systematic review of the validity, reliability and acceptability of measures

    Objective: To systematically review methods for measuring adherence used in home-based rehabilitation trials and to evaluate their validity, reliability, and acceptability. Data Sources: In phase 1 we searched the CENTRAL database, NHS Economic Evaluation Database, and Health Technology Assessment Database (January 2000 to April 2013) to identify adherence measures used in randomized controlled trials of allied health professional home-based rehabilitation interventions. In phase 2 we searched MEDLINE, Embase, CINAHL, the Allied and Complementary Medicine Database, PsycINFO, CENTRAL, ProQuest Nursing and Allied Health, and Web of Science (inception to April 2015) for measurement property assessments for each measure. Study Selection: Studies assessing the validity, reliability, or acceptability of adherence measures. Data Extraction: Two reviewers independently extracted data on participant and measure characteristics, measurement properties evaluated, evaluation methods, and outcome statistics, and assessed study quality using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Data Synthesis: In phase 1 we included 8 adherence measures (56 trials). In phase 2, from the 222 measurement property assessments identified in 109 studies, 22 high-quality measurement property assessments were narratively synthesized; low-quality studies were used as supporting data. The StepWatch Activity Monitor validly and acceptably measured short-term step count adherence. The Problematic Experiences of Therapy Scale validly and reliably assessed adherence to vestibular rehabilitation exercises. Adherence diaries had moderately high validity and acceptability across limited populations. The Borg 6 to 20 scale, the Bassett and Prapavessis scale, and the Yamax CW series had insufficient validity. Low-quality evidence supported use of the Joint Protection Behaviour Assessment. Polar A1 series heart rate monitors were considered acceptable by 1 study. Conclusions: Current rehabilitation adherence measures are limited. Some possess promising validity and acceptability for certain parameters of adherence, situations, and populations and should be used in these situations. Rigorous evaluation of adherence measures in a broader range of populations is needed.

    Communicating simply, but not too simply: Reporting of participants and speech and language interventions for aphasia after stroke

    Speech and language pathology (SLP) for aphasia is a complex intervention delivered to a heterogeneous population within diverse settings. Simplistic descriptions of participants and interventions in research hinder replication, interpretation of results, and guideline and research developments through secondary data analyses. This study aimed to describe the availability of participant and intervention descriptors in existing aphasia research datasets. We systematically identified aphasia research datasets containing 10 participants with information on time since stroke and language ability. We extracted participant and SLP intervention descriptions and considered the availability of data compared with historical and current reporting standards. We developed an extension to the Template for Intervention Description and Replication (TIDieR) checklist to support meaningful classification and synthesis of the SLP interventions and to support secondary data analysis. Of 11,314 identified records, we screened 1131 full texts and received 75 dataset contributions. We extracted data from 99 additional public domain datasets. Participant age (97.1%) and sex (90.8%) were commonly available. Prior stroke (25.8%), living context (12.1%), and socio-economic status (2.3%) were rarely available. Therapy impairment target, frequency, and duration were most commonly available but predominantly described at group level. Home practice (46.3%) and tailoring (functional relevance, 46.3%) were inconsistently available. Gaps in the availability of participant and intervention details were significant, hampering clinical implementation of evidence into practice and the development of our field of research. Improvements in the quality and consistency of participant and intervention data reported in aphasia research are required to maximise clinical implementation, replication in research, and the generation of insights from secondary data analysis. Systematic review registration: PROSPERO CRD42018110947.