
    Assessing the outcomes of a clinical trial: Primary outcome measures only tell part of the story

    Identifying outcome measures that are sensitive to change and meaningful to participants is a challenge when designing clinical trials of complex communication interventions. Outcome measures encompassing participants’ perceptions of clinically meaningful change and their experience of the treatment process are frequently neglected. This paper presents an overview of the outcome measures used in a three-arm clinical trial that investigated (i) social skills training for the person with TBI alone (termed the TBI SOLO condition) and (ii) training communication partners to deal with difficult communication behaviors (the JOINT condition), each compared with a delayed waitlist CONTROL condition. The paper asks two research questions: 1. What information did self-reports of perceived communication ability on the La Trobe Communication Questionnaire, together with qualitative measures, provide in addition to blinded ratings on the Adapted Kagan Scales, the primary outcome measure? 2. How did participants perceive the training experience, as measured through post-treatment interviews?

    Communication skills of people with severe traumatic brain injury can be improved by training everyday communication partners: Findings from a single-blind multi-centre clinical trial

    This controlled group comparison study examined the effectiveness of everyday communication partner (ECP) training for people with TBI. Forty-four participants with severe TBI and their ECPs were allocated to (a) a TBI SOLO group, in which only the person with TBI was trained; (b) a JOINT group, in which the communication partner was also trained; or (c) a delayed CONTROL group. Conversations were videotaped pre- and post-training and rated on conversational skills by two blinded assessors. Training ECPs was more efficacious than training the person with TBI alone. Involving communication partners in treatment appears crucial to improving communication interactions for people with severe TBI.

    Atmospheric mercury in the Latrobe Valley, Australia: case study, June 2013

    Gaseous elemental mercury observations were conducted at Churchill, Victoria, Australia, from April to July 2013 using a Tekran 2537 analyzer. A strong diurnal variation was observed, with daytime average values of 1.2–1.3 ng m⁻³ and nighttime average values of 1.6–1.8 ng m⁻³. These values are significantly higher than the Southern Hemisphere average of 0.85–1.05 ng m⁻³. Churchill is in the Latrobe Valley, approximately 150 km east of Melbourne, where approximately 80% of Victoria’s electricity is generated from low-rank brown coal at four major power stations: Loy Yang A, Loy Yang B, Hazelwood, and Yallourn. These aging generators do not have any sulfur, nitrogen oxide, or mercury air pollution controls. Mercury emitted in the Latrobe Valley in the 2015–2016 year is estimated to have had an externalized health cost of AUD $88 million. Air pollution mercury simulations were conducted using the Weather Research and Forecasting model with Chemistry (WRF-Chem) at 3 × 3 km resolution. Electrical power generation emissions were added using mercury emissions derived from the National Energy Market’s 5-minute energy distribution data. The strong diurnal cycle in the observed mercury was well simulated (R² = 0.49, p = 0.00) when soil mercury emissions, arising from several years of wet and dry deposition in a radius around the power generators, were included in the model, as has been observed around aging lignite coal power generators elsewhere. These results indicate that long-term air and soil sampling in power generation regions, even after the closure of coal-fired power stations, will have important implications for understanding airborne mercury emission sources. Copyright: © 2021 The Author(s).
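    The model evaluation quoted above comes down to a goodness-of-fit calculation between observed and simulated diurnal cycles. Below is a minimal sketch of that kind of comparison, assuming Python with NumPy and SciPy; the hourly values are illustrative placeholders, not data from the study.

```python
# Minimal sketch: goodness of fit between an observed and a simulated
# diurnal mercury cycle, of the kind summarized by an R^2 and a p value.
# The hourly values are illustrative placeholders, NOT data from the study.
import numpy as np
from scipy import stats

# Hypothetical hourly-mean gaseous elemental mercury (ng m^-3), hours 0-23
observed = np.array([1.7, 1.8, 1.8, 1.7, 1.6, 1.5, 1.4, 1.3, 1.2, 1.2, 1.2, 1.2,
                     1.2, 1.2, 1.3, 1.3, 1.4, 1.5, 1.6, 1.6, 1.7, 1.7, 1.8, 1.8])
simulated = np.array([1.6, 1.7, 1.7, 1.6, 1.5, 1.5, 1.4, 1.3, 1.3, 1.2, 1.2, 1.3,
                      1.3, 1.3, 1.3, 1.4, 1.4, 1.5, 1.5, 1.6, 1.6, 1.7, 1.7, 1.7])

# Linear regression of simulated on observed gives the squared correlation
# (R^2) and the p value for the slope.
fit = stats.linregress(observed, simulated)
print(f"R^2 = {fit.rvalue ** 2:.2f}, p = {fit.pvalue:.3f}")
```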

    The methodological quality of aphasia research: an investigation using the PsycBITE™ database

    This paper examines the methodological quality of aphasia research using the Psychological Database for Brain Impairment Treatment Efficacy (PsycBITE™; www.psycbite.com). PsycBITE™ includes five research designs: Systematic Reviews (SR), Randomised Controlled Trials (RCT), Non-Randomised Controlled Trials (NRCT), Case Series (CS), and Single-Subject Designs (SSD). Of 310 studies indexed for aphasia: SR = 8 (3%); RCT = 22 (7%); NRCT = 17 (5%); CS = 48 (15%); SSD = 215 (69%). Methodological quality ratings (MQR) using the PEDro scale (scored out of 10) were available for 9 RCTs (mean MQR = 4.6, SD = 1.5), 5 NRCTs (mean MQR = 2.3, SD = 1.1), and 12 CSs (mean MQR = 0.9, SD = 0.7). Methodological quality is discussed, with suggestions for future treatment studies.
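    The design proportions and quality-rating summaries above are simple descriptive statistics. A minimal sketch in Python follows; the design counts come from the abstract, while the individual PEDro scores are hypothetical placeholders chosen only to be consistent with the reported RCT mean and SD.

```python
# Minimal sketch of the descriptive statistics reported above.
# Design counts come from the abstract; the individual PEDro scores are
# hypothetical placeholders consistent with the reported summary values.
import statistics

design_counts = {"SR": 8, "RCT": 22, "NRCT": 17, "CS": 48, "SSD": 215}
total = sum(design_counts.values())  # 310 studies indexed for aphasia
for design, n in design_counts.items():
    print(f"{design}: {n} ({n / total:.0%})")

rct_scores = [2, 3, 4, 4, 5, 5, 5, 6, 7]  # 9 rated RCTs, scores out of 10
print(f"mean MQR = {statistics.mean(rct_scores):.1f}, "
      f"SD = {statistics.stdev(rct_scores):.1f}")  # prints 4.6 and 1.5
```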

    Reprint of “The Single-Case Reporting Guideline In BEhavioural interventions (SCRIBE) 2016: explanation and elaboration”

    There is substantial evidence that research studies reported in the scientific literature do not provide enough information for readers to know exactly what was done and what was found. This problem has been addressed by the development of reporting guidelines, which tell authors what should be reported and how it should be described. Many reporting guidelines are now available for different types of research designs, but there has been no such guideline for one type of research design commonly used in the behavioral sciences: the single-case experimental design (SCED). The present study addressed this gap. This report describes the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016, a set of 26 items that authors need to address when writing about SCED research for publication in a scientific journal. Each item is described, a rationale for its inclusion is provided, and examples of adequate reporting taken from the literature are quoted. It is recommended that the SCRIBE 2016 be used by authors preparing manuscripts describing SCED research for publication, as well as by journal reviewers and editors who are evaluating such manuscripts.

    Inter-rater reliability of the Measure of Support in Conversation and Measure of Participation in Conversation (Kagan et al., 2004) adapted for people with Traumatic Brain Injury (TBI) and their communication partners

    Aim: This study reports the inter-rater reliability of the Adapted Measure of Support in Conversation (MSC) and Measure of Participation in Conversation (MPC) for TBI interactions. Method: The MSC and MPC were adapted to reflect theoretical models of cognitive-communication support for people with TBI. Ten casual and ten purposeful TBI interactions were independently rated. Results: Strong inter-rater agreement was established on the MSC (ICC = 0.85–0.97) and the MPC (ICC = 0.85–0.97). All ratings fell within 0.5 points of each other on a 9-point scale. Conclusion: This is the first scale to measure the communication partner during TBI interactions. It shows promise for evaluating communication partner training programs.
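    For readers who want to reproduce this kind of agreement analysis, a minimal sketch follows, assuming Python with the pandas and pingouin packages; the two raters' scores are hypothetical placeholders on a 9-point scale, not data from the study.

```python
# Minimal sketch of a two-rater intraclass correlation (ICC), the statistic
# used above to report agreement on the Adapted MSC/MPC. The ratings are
# hypothetical placeholders, NOT the study's data.
import pandas as pd
import pingouin as pg  # assumes the third-party pingouin package is installed

scores = pd.DataFrame({
    "interaction": list(range(1, 11)) * 2,   # 10 videotaped interactions
    "rater": ["A"] * 10 + ["B"] * 10,        # two independent raters
    "rating": [6.0, 5.5, 7.0, 4.5, 6.5, 5.0, 7.5, 6.0, 4.0, 5.5,   # rater A
               6.0, 5.0, 7.0, 5.0, 6.5, 5.5, 7.0, 6.0, 4.5, 5.5],  # rater B
})

# Long-format data: one row per (interaction, rater) pair.
icc = pg.intraclass_corr(data=scores, targets="interaction",
                         raters="rater", ratings="rating")
print(icc[["Type", "ICC", "CI95%"]])
```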

    Developing Core Sets for Persons With Traumatic Brain Injury Based on the International Classification of Functioning, Disability, and Health

    The authors outline the process for developing the International Classification of Functioning, Disability, and Health (ICF) Core Sets for traumatic brain injury (TBI). ICF Core Sets are selections of ICF categories that are relevant for patients affected by specific diseases. Comprehensive and brief ICF Core Sets for TBI should become useful for clinical practice and for research. The final definition of the ICF Core Sets for TBI will be determined at an ICF Core Sets Consensus Conference, which will integrate evidence from preliminary studies. The development of ICF Core Sets is an inclusive and open process, and rehabilitation professionals are invited to participate.

    The Single-Case Reporting Guideline In BEhavioural Interventions (SCRIBE) 2016 statement

    We developed a reporting guideline to provide authors with guidance about what should be reported when writing, for publication in a scientific journal, a paper that uses a particular type of research design: the single-case experimental design. This report describes the methods used to develop the Single-Case Reporting guideline In BEhavioural interventions (SCRIBE) 2016. As a result of two online surveys and a two-day meeting of experts, the SCRIBE 2016 checklist was developed: a set of 26 items that authors need to address when writing about single-case research. This article complements the more detailed SCRIBE 2016 Explanation and Elaboration article (Tate et al., 2016), which provides a rationale for each of the items and examples of adequate reporting from the literature. Both these resources will assist authors in preparing reports of single-case research with clarity, completeness, accuracy, and transparency. They will also provide journal reviewers and editors with a practical checklist against which such reports may be critically evaluated. We recommend that the SCRIBE 2016 be used by authors preparing manuscripts describing single-case research for publication, as well as by journal reviewers and editors who are evaluating such manuscripts. Funding for the SCRIBE project was provided by the Lifetime Care and Support Authority of New South Wales, Australia; the funding body was not involved in the conduct, interpretation, or writing of this work. We acknowledge the contribution of the responders to the Delphi surveys, as well as administrative assistance provided by Kali Godbee and Donna Wakim at the SCRIBE consensus meeting. Lyndsey Nickels was funded by an Australian Research Council Future Fellowship (FT120100102) and the Australian Research Council Centre of Excellence in Cognition and Its Disorders (CE110001021). For further discussion on this topic, please visit the Archives of Scientific Psychology online public forum at http://arcblog.apa.org.