
    Outcome measures in post-stroke arm rehabilitation trials: do existing measures capture outcomes that are important to stroke survivors, carers, and clinicians?

    Objective: We sought to (1) identify the outcome measures currently used across stroke arm rehabilitation randomized trials, (2) identify and compare outcomes important to stroke survivors, carers and clinicians and (3) describe where existing research outcome measures capture outcomes that matter most to stroke survivors, carers and clinicians, and where there may be discrepancies. Methods: First, we systematically identified and extracted data on outcome measures used in trials within a Cochrane overview of arm rehabilitation interventions. Second, we conducted 16 focus groups with stroke survivors, carers and clinicians using nominal group technique, supplemented with eight semi-structured interviews, to identify these stakeholders’ most important outcomes following post-stroke arm impairment. Finally, we described the constructs of each outcome measure and indicated where stakeholders’ important outcomes were captured by each measure. Results: We extracted 144 outcome measures from 243 post-stroke arm rehabilitation trials. The Fugl-Meyer Assessment Upper Extremity section (used in 79/243 trials; 33%), Action Research Arm Test (56/243; 23%), and modified Ashworth Scale (53/243; 22%) were most frequently used. Stroke survivors (n = 43), carers (n = 10) and clinicians (n = 58) identified 66 unique, important outcomes related to arm impairment following stroke. Between one and three outcomes considered important by the stakeholders were captured by the three most commonly used assessments in research. Conclusion: Post-stroke arm rehabilitation research would benefit from a reduction in the number of outcome measures currently used, and better alignment between what is measured and what is important to stroke survivors, carers and clinicians.

    Accuracy of the short-form Montreal Cognitive Assessment: systematic review and validation

    Introduction: Short‐form versions of the Montreal Cognitive Assessment (SF‐MoCA) are increasingly used to screen for dementia in research and practice. We sought to collate evidence on the accuracy of SF‐MoCAs and to externally validate these assessment tools. Methods: We performed systematic literature searching across multidisciplinary electronic literature databases, collating information on the content and accuracy of all published SF‐MoCAs. We then validated all the SF‐MoCAs against clinical diagnosis using independent stroke (n = 787) and memory clinic (n = 410) data sets. Results: We identified 13 different SF‐MoCAs (21 studies, n = 6477 participants) with differing test content and properties. There was a pattern of high sensitivity across the range of SF‐MoCA tests. In the published literature, for detection of post-stroke cognitive impairment, median sensitivity across included studies was 0.88 (range: 0.70‐1.00) and specificity 0.70 (0.39‐0.92); in our independent validation using stroke data, median sensitivity was 0.99 (0.80‐1.00) and specificity 0.40 (0.14‐0.87). To detect dementia in older adults, median sensitivity was 0.88 (0.62‐0.98) and median specificity 0.87 (0.07‐0.98) in the literature, versus median sensitivity 0.96 (range: 0.72‐1.00) and median specificity 0.36 (0.14‐0.86) in our validation. Horton's SF‐MoCA (delayed recall, serial subtraction, and orientation) had the most favorable properties in stroke (sensitivity: 0.90, specificity: 0.87, positive predictive value [PPV]: 0.55, and negative predictive value [NPV]: 0.93), whereas Cecato's “MoCA reduced” (clock draw, animal naming, delayed recall, and orientation) performed better in the memory clinic (sensitivity: 0.72, specificity: 0.86, PPV: 0.55, and NPV: 0.93). Conclusions: There are many published SF‐MoCAs. Clinicians and researchers using an SF‐MoCA should be explicit about its content. For all SF‐MoCAs, sensitivity is high and similar to that of the full scale, suggesting potential utility as an initial cognitive screening tool. However, the choice of SF‐MoCA should be informed by the clinical population to be studied.
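The PPV and NPV figures quoted above are not fixed properties of a test: unlike sensitivity and specificity, they shift with the prevalence of impairment in the population screened, which is why the same SF‐MoCA can look different in a stroke cohort versus a memory clinic. A minimal sketch of that relationship via Bayes' rule (the 0.90/0.87 accuracy values echo those reported for Horton's SF‐MoCA, but the 30% prevalence is an assumed illustrative figure, not taken from the study):

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Convert test accuracy plus prevalence into PPV/NPV via Bayes' rule."""
    tp = sensitivity * prevalence            # true positives (per unit population)
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    ppv = tp / (tp + fp)  # P(impaired | test positive)
    npv = tn / (tn + fn)  # P(not impaired | test negative)
    return ppv, npv

# Hypothetical scenario: sensitivity 0.90, specificity 0.87,
# assumed 30% prevalence of cognitive impairment in the screened group.
ppv, npv = predictive_values(0.90, 0.87, 0.30)
```

At lower assumed prevalence the PPV falls while the NPV rises, which is consistent with the pattern of high-sensitivity, modest-PPV results described for the short forms.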

    Big data and data repurposing – using existing data to answer new questions in vascular dementia research

    Introduction: Traditional approaches to clinical research have, as yet, failed to provide effective treatments for vascular dementia (VaD). Novel approaches to collation and synthesis of data may allow for time- and cost-efficient hypothesis generation and testing. These approaches may have particular utility in helping us understand and treat a complex condition such as VaD. Methods: We present an overview of new uses for existing data to progress VaD research. The overview is the result of consultation with various stakeholders, focused literature review and learning from the group’s experience of successful approaches to data repurposing. In particular, we benefitted from the expert discussion and input of delegates at the 9th International Congress on Vascular Dementia (Ljubljana, 16-18th October 2015). Results: We agreed on key areas that could be of relevance to VaD research: systematic review of existing studies; individual patient level analyses of existing trials and cohorts; and linking electronic health record data to other datasets. We illustrated each theme with a case study of an existing project that has utilised this approach. Conclusions: There are many opportunities for the VaD research community to make better use of existing data. The volume of potentially available data is increasing and the opportunities for using these resources to progress the VaD research agenda are exciting. Of course, these approaches come with inherent limitations and biases: bigger datasets are not necessarily better datasets, and maintaining rigour and critical analysis will be key to optimising data use.

    Using the Barthel Index and modified Rankin Scale as outcome measures for stroke rehabilitation trials: a comparison of minimum sample size requirements

    Objectives: Underpowered trials risk inaccurate results. Recruitment to stroke rehabilitation randomised controlled trials (RCTs) is often a challenge. Statistical simulations offer an important opportunity to explore the adequacy of sample sizes in the context of specific outcome measures. We aimed to examine and compare the adequacy of stroke rehabilitation RCT sample sizes using the Barthel Index (BI) or modified Rankin Scale (mRS) as primary outcomes. Methods: We conducted computer simulations using typical experimental event rates (EER) and control event rates (CER) based on individual participant data (IPD) from stroke rehabilitation RCTs. Event rates are the proportion of participants who experienced clinically relevant improvements in the RCT experimental and control groups. We examined minimum sample size requirements and estimated the number of participants required to achieve a number needed to treat within clinically acceptable boundaries for the BI and mRS. Results: We secured 2350 IPD (18 RCTs). For a 90% chance of statistical accuracy on the BI, a rehabilitation RCT would require 273 participants per randomised group. Accurate interpretation of effect sizes would require thousands of participants per group. Simulations for the mRS were not possible, as a clinically relevant improvement was not detected when using this outcome measure. Conclusions: Stroke rehabilitation RCTs with large sample sizes are required for accurate interpretation of effect sizes based on the BI. The mRS lacked sensitivity to detect change and thus may be unsuitable as a primary outcome in stroke rehabilitation trials.
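The event-rate simulation logic described in the Methods can be sketched in a few lines: repeatedly generate two-group trials with a binary "clinically relevant improvement" outcome and count how often a significance test detects the difference. This is an illustrative Monte Carlo power estimate only; the EER/CER values below are assumptions for demonstration (not the IPD-derived rates from the study), and a two-proportion z-test stands in for the authors' exact analysis.

```python
import math
import random

def simulated_power(eer, cer, n_per_group, n_sims=2000, seed=1):
    """Monte Carlo power estimate for a two-arm trial with a binary outcome.

    eer/cer: assumed experimental/control event rates (proportion improving).
    Returns the fraction of simulated trials where a two-sided
    two-proportion z-test (normal approximation) rejects at alpha = 0.05.
    """
    rng = random.Random(seed)
    z_crit = 1.959964  # two-sided 5% critical value of the standard normal
    rejections = 0
    for _ in range(n_sims):
        # Simulate the number of "improvers" in each randomised group.
        x_exp = sum(rng.random() < eer for _ in range(n_per_group))
        x_ctl = sum(rng.random() < cer for _ in range(n_per_group))
        p_exp, p_ctl = x_exp / n_per_group, x_ctl / n_per_group
        pooled = (x_exp + x_ctl) / (2 * n_per_group)
        se = math.sqrt(2 * pooled * (1 - pooled) / n_per_group)
        if se > 0 and abs(p_exp - p_ctl) / se > z_crit:
            rejections += 1
    return rejections / n_sims

# Hypothetical rates (EER = 0.50, CER = 0.35) at the per-group n of 273
# reported for the BI; the resulting power depends entirely on these
# assumed rates, not on the study's data.
power_273 = simulated_power(0.50, 0.35, 273)
```

Running the same function with equal event rates in both arms approximates the type I error rate (about 0.05), which is a quick sanity check on any such simulation.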

    Improving economic evaluations in stroke: A report from the ESO Health Economics Working Group

    Introduction: Approaches to economic evaluations of stroke therapies are varied and inconsistently described. An objective of the European Stroke Organisation (ESO) Health Economics Working Group is to standardise and improve the economic evaluations of interventions for stroke. Methods: The ESO Health Economics Working Group and additional experts were contacted to develop a protocol and a guidance document for data collection for economic evaluations of stroke therapies. A modified Delphi approach, including a survey and consensus processes, was used to agree on content. We also asked the participants about resources that could be shared to improve economic evaluations of interventions for stroke. Results: Of 28 experts invited, 16 (57%) completed the initial survey, with representation from universities, government, and industry. More than half of the survey respondents endorsed 13 specific items to include in a standard resource use questionnaire. Preferred functional/quality of life outcome measures for economic evaluations were the modified Rankin Scale (14 respondents, 88%) and the EQ-5D instrument (11 respondents, 69%). Of the 12 respondents who had access to data used in economic evaluations, 10 (83%) indicated a willingness to share data. A protocol template and a guidance document for data collection were developed and are presented in this article. Conclusion: The protocol template and guidance document for data collection will support a more standardised and transparent approach to economic evaluations of stroke care.

    Communicating simply, but not too simply: Reporting of participants and speech and language interventions for aphasia after stroke

    Purpose: Speech and language pathology (SLP) for aphasia is a complex intervention delivered to a heterogeneous population within diverse settings. Simplistic descriptions of participants and interventions in research hinder replication, interpretation of results, and guideline and research developments through secondary data analyses. This study aimed to describe the availability of participant and intervention descriptors in existing aphasia research datasets. Method: We systematically identified aphasia research datasets containing ≥10 participants with information on time since stroke and language ability. We extracted participant and SLP intervention descriptions and considered the availability of data compared to historical and current reporting standards. We developed an extension to the Template for Intervention Description and Replication checklist to support meaningful classification and synthesis of the SLP interventions for secondary data analysis. Results: Of 11,314 identified records, we screened 1131 full texts and received 75 dataset contributions. We extracted data from 99 additional public domain datasets. Participant age (97.1%) and sex (90.8%) were commonly available. Prior stroke (25.8%), living context (12.1%) and socio-economic status (2.3%) were rarely available. Therapy impairment target, frequency and duration were most commonly available but predominantly described at group level. Home practice (46.3%) and tailoring (functional relevance 46.3%) were inconsistently available. Conclusion: Gaps in the availability of participant and intervention details were significant, hampering clinical implementation of evidence into practice and development of our field of research. Improvements in the quality and consistency of participant and intervention data reported in aphasia research are required to maximise clinical implementation, replication in research, and the generation of insights from secondary data analysis.
Systematic review registration: PROSPERO CRD4201811094