
    Does therapy with biofeedback improve swallowing in adults with dysphagia?: a systematic review and meta-analysis

    Objective: To describe and systematically review the current evidence on the effects of swallow therapy augmented by biofeedback in adults with dysphagia (PROSPERO 2016:CRD42016052942). Data sources: Two independent reviewers conducted searches of MEDLINE, EMBASE, trial registries and grey literature up to December 2016. Study selection: Randomised controlled trials (RCTs) and non-RCTs were assessed, including for risk of bias and quality. Data extraction: Data were extracted by one reviewer and verified by another on biofeedback type, measures of swallow function, physiology and clinical outcome, and analysed using Cochrane Review Manager (random-effects models). Results are expressed as weighted mean difference (WMD) and odds ratio (OR). Data synthesis: Of 675 articles, we included 23 studies (n=448 participants). Three main types of biofeedback were used: accelerometry, surface electromyography and tongue manometry. Exercises included saliva swallows, manoeuvres and strength exercises. Dose varied between 6 and 72 sessions of 20-60 minutes. Five controlled studies (stroke n=95; head and neck cancer n=33; mixed aetiology n=10) were included in meta-analyses. Compared to control, biofeedback-augmented dysphagia therapy significantly enhanced hyoid displacement (three studies, WMD=0.22 cm; 95% CI [0.04, 0.40], p=0.02), but there was no significant difference in functional oral intake (WMD=1.10; 95% CI [-1.69, 3.89], p=0.44) or dependency on tube feeding (OR=3.19; 95% CI [0.16, 62.72], p=0.45). Risk of bias was high and there was significant statistical heterogeneity between trials in measures of swallow function and number tube fed (I²=70-94%). Several non-validated outcome measures were used. Subgroup analyses were not possible due to a paucity of studies. Conclusions: Dysphagia therapy augmented by biofeedback using surface electromyography and accelerometry enhances hyoid displacement, but functional improvements in swallowing are not evident. However, data are extremely limited and further, larger, well-designed RCTs are warranted.
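
    The pooled estimates above come from inverse-variance random-effects meta-analysis. As a rough sketch of how such a pooled weighted mean difference is computed under a DerSimonian-Laird model, the Python snippet below uses hypothetical per-study mean differences and standard errors, not the review's actual data.

```python
# Illustrative DerSimonian-Laird random-effects pooling of a weighted mean
# difference (WMD). Per-study estimates are hypothetical, not the review data.
import numpy as np

wmd = np.array([0.15, 0.30, 0.20])   # hypothetical per-study mean differences (cm)
se = np.array([0.10, 0.12, 0.15])    # hypothetical standard errors

w = 1 / se**2                                    # inverse-variance (fixed-effect) weights
wmd_fixed = np.sum(w * wmd) / np.sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
Q = np.sum(w * (wmd - wmd_fixed) ** 2)
df = len(wmd) - 1
C = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / C)

# Random-effects weights add tau^2 to each study's variance
w_re = 1 / (se**2 + tau2)
wmd_re = np.sum(w_re * wmd) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = wmd_re - 1.96 * se_re, wmd_re + 1.96 * se_re
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

print(f"pooled WMD = {wmd_re:.2f} cm, 95% CI [{lo:.2f}, {hi:.2f}], I^2 = {i2:.0f}%")
```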

    Psychometric assessment and validation of the dysphagia severity rating scale in stroke patients

    Post-stroke dysphagia (PSD) is common and associated with poor outcome. The Dysphagia Severity Rating Scale (DSRS), which grades how severe dysphagia is based on fluid and diet modification and supervision requirements for feeding, is used in clinical research but has limited published validation information. Multiple approaches were taken to validate the DSRS, including concurrent and predictive criterion validity, internal consistency, inter- and intra-rater reliability and sensitivity to change. This was done using data from four studies involving pharyngeal electrical stimulation in acute stroke patients with dysphagia, an individual patient data meta-analysis and unpublished studies (NCT03499574, NCT03700853). In addition, consensual and content validity and the Minimal Clinically Important Difference (MCID) were assessed using anonymous surveys sent to UK-based Speech and Language Therapists (SLTs). Consensual validity scores were moderate (62.5–78%) to high or excellent (89–100%) for most scenarios. All but two assessments of content validity were excellent. In concurrent criterion validity assessments, DSRS was most closely associated with measures of radiological aspiration (penetration aspiration scale, Spearman rank rs = 0.49, p < 0.001) and swallowing (functional oral intake scale, FOIS, rs = −0.96, p < 0.001); weaker but statistically significant associations were seen with impairment, disability and dependency. A similar pattern of relationships was seen for predictive criterion validity. Internal consistency (Cronbach’s alpha) was either “good” or “excellent”. Intra- and inter-rater reliability were largely “excellent” (intraclass correlation >0.90). DSRS was sensitive to positive change during recovery (medians 7, 4 and 1 at baseline, 2 weeks and 13 weeks, respectively) and in response to an intervention, pharyngeal electrical stimulation, in a published meta-analysis. The MCID was 1.0, and DSRS and FOIS scores may be estimated from each other. The DSRS appears to be a valid tool for grading the severity of swallowing impairment in patients with post-stroke dysphagia and is appropriate for use in clinical research and clinical service delivery.
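
    The main validity statistics used here (Spearman rank correlation for concurrent criterion validity, Cronbach's alpha for internal consistency) can be illustrated with a short sketch. The DSRS/FOIS scores and the three-item sub-scores below are invented for illustration only and do not come from the study.

```python
# Illustrative validity/reliability statistics on made-up data (not the DSRS study data).
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired severity ratings for 10 patients
dsrs = np.array([12, 9, 7, 6, 5, 4, 3, 2, 1, 0])   # higher = more severe dysphagia
fois = np.array([1, 2, 3, 3, 4, 5, 5, 6, 7, 7])     # higher = better oral intake

rho, p = spearmanr(dsrs, fois)
print(f"Spearman rs = {rho:.2f}, p = {p:.3g}")      # expect a strong negative correlation

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 3-item scale (e.g., fluid, diet and supervision sub-scores)
scores = np.array([[4, 4, 4], [3, 3, 4], [3, 2, 3], [2, 2, 2],
                   [1, 2, 1], [1, 1, 1], [0, 1, 0], [0, 0, 0]])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.2f}")
```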

    Diagnostic accuracy of the Dysphagia Trained Nurse Assessment tool in acute stroke

    Background and purpose: Comprehensive swallow screening assessments to identify dysphagia and make early eating and drinking recommendations can be used by trained nurses. This study aimed to validate the Dysphagia Trained Nurse Assessment (DTNAx) tool in acute stroke patients. Methods: Participants with diagnosed stroke were prospectively and consecutively recruited from an acute stroke unit. Following a baseline DTNAx on admission, participants underwent a speech and language therapist (SLT) bedside assessment of swallowing (SLTAx), videofluoroscopy (VFS) and a further DTNAx by the same or a different nurse. Results: Forty-seven participants were recruited, of whom 22 had dysphagia. Compared to SLTAx in the identification of dysphagia, DTNAx had a sensitivity of 96.9% (95% confidence interval [CI] 83.8–99.9) and a specificity of 89.5% (95% CI 75.2–97.1). Compared to VFS in the identification of aspiration, DTNAx had a sensitivity of 77.8% (95% CI 40.0–97.2) and a specificity of 81.6% (95% CI 65.7–92.3). Over 81% of the diet and fluid recommendations made by the dysphagia-trained nurses were in absolute agreement with SLTAx. Both DTNAx and SLTAx had low diagnostic accuracy compared to the VFS-based definition of dysphagia. Conclusions: Nurses trained in DTNAx showed good diagnostic accuracy in identifying dysphagia compared to SLTAx and in identifying aspiration compared to VFS. They made appropriate diet and fluid recommendations in line with SLTs in the early management of dysphagia.
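
    For readers less familiar with diagnostic-accuracy reporting, the sketch below shows how sensitivity and specificity with approximate 95% confidence intervals are derived from a 2x2 table. The counts and the Wilson interval method are illustrative assumptions, not the DTNAx study data or its exact CI method.

```python
# Illustrative sensitivity/specificity with Wilson 95% CIs from a 2x2 table.
# The counts below are hypothetical, not the DTNAx study data.
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Hypothetical index test vs. reference standard
tp, fn = 21, 1    # reference-positive cases: detected / missed
tn, fp = 22, 3    # reference-negative cases: correctly ruled out / false alarms

sens = tp / (tp + fn)
spec = tn / (tn + fp)
print(f"sensitivity = {sens:.1%}, 95% CI {tuple(round(x, 3) for x in wilson_ci(tp, tp + fn))}")
print(f"specificity = {spec:.1%}, 95% CI {tuple(round(x, 3) for x in wilson_ci(tn, tn + fp))}")
```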

    Severity of cardiovascular disease outcomes among patients with HIV is related to markers of inflammation and coagulation

    Background: In the general population, raised levels of inflammatory markers are stronger predictors of fatal than nonfatal cardiovascular disease (CVD) events. People with HIV have elevated levels of interleukin-6 (IL-6), high-sensitivity C-reactive protein (hsCRP), and D-dimer; HIV-induced activation of inflammatory and coagulation pathways may be responsible for their greater risk of CVD. Whether the enhanced inflammation and coagulation associated with HIV is associated with more fatal CVD events has not been investigated. Methods and Results: Biomarkers were measured at baseline for 9764 patients with HIV and no history of CVD. Of these patients, we focus on the 288 who experienced either a fatal (n=74) or nonfatal (n=214) CVD event over a median of 5 years. Odds ratios (ORs) (fatal versus nonfatal CVD) (95% confidence intervals [CIs]) associated with a doubling of IL-6, D-dimer, and hsCRP, and with a 1-unit increase in an IL-6 and D-dimer score, measured a median of 2.6 years before the event, were 1.39 (1.07 to 1.79), 1.40 (1.10 to 1.78), 1.09 (0.93 to 1.28), and 1.51 (1.15 to 1.97), respectively. Of the 214 patients with nonfatal CVD, 23 died during follow-up. Hazard ratios (95% CI) for all-cause mortality were 1.72 (1.28 to 2.31), 1.73 (1.27 to 2.36), 1.44 (1.15 to 1.80), and 1.88 (1.39 to 2.55), respectively, for IL-6, D-dimer, hsCRP, and the IL-6 and D-dimer score. Conclusions: Higher IL-6 and D-dimer levels reflecting enhanced inflammation and coagulation associated with HIV are associated with a greater risk of fatal CVD and a greater risk of death after a nonfatal CVD event.
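
    The "per doubling" odds ratios reported above are typically obtained by log2-transforming the biomarker before regression, so that a one-unit increase in the covariate corresponds to a doubling of the raw level. The sketch below illustrates this idea on simulated data; it is not the study's actual model or data.

```python
# Illustration of an odds ratio "per doubling" of a biomarker via a log2 transform.
# All data here are simulated, not the cohort's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
il6 = rng.lognormal(mean=0.5, sigma=0.8, size=n)   # hypothetical IL-6 levels (pg/mL)
log2_il6 = np.log2(il6)

# Simulate a binary outcome (e.g., fatal vs nonfatal event) with higher odds at higher IL-6
logit_p = -1.5 + 0.35 * log2_il6
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(log2_il6)
fit = sm.Logit(y, X).fit(disp=False)
or_per_doubling = np.exp(fit.params[1])
ci = np.exp(fit.conf_int()[1])
print(f"OR per doubling of IL-6 = {or_per_doubling:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```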

    Hyperimmune immunoglobulin for hospitalised patients with COVID-19 (ITAC): a double-blind, placebo-controlled, phase 3, randomised trial

    BACKGROUND: Passive immunotherapy using hyperimmune intravenous immunoglobulin (hIVIG) to SARS-CoV-2, derived from recovered donors, is a potential, rapidly available, specific therapy for an outbreak infection such as SARS-CoV-2. Findings from randomised clinical trials of hIVIG for the treatment of COVID-19 are limited. METHODS: In this international randomised, double-blind, placebo-controlled trial, hospitalised patients with COVID-19 who had been symptomatic for up to 12 days and did not have acute end-organ failure were randomly assigned (1:1) to receive either hIVIG or an equivalent volume of saline as placebo, in addition to remdesivir, when not contraindicated, and other standard clinical care. Randomisation was stratified by site pharmacy; schedules were prepared using a mass-weighted urn design. Infusions were prepared and masked by trial pharmacists; all other investigators, research staff, and trial participants were masked to group allocation. Follow-up was for 28 days. The primary outcome was measured at day 7 by a seven-category ordinal endpoint that considered pulmonary status and extrapulmonary complications and ranged from no limiting symptoms to death. Deaths and adverse events, including organ failure and serious infections, were used to define composite safety outcomes at days 7 and 28. Prespecified subgroup analyses were carried out for efficacy and safety outcomes by duration of symptoms, the presence of anti-spike neutralising antibodies, and other baseline factors. Analyses were done on a modified intention-to-treat (mITT) population, which included all randomly assigned participants who met eligibility criteria and received all or part of the assigned study product infusion. This study is registered with ClinicalTrials.gov, NCT04546581. FINDINGS: From Oct 8, 2020, to Feb 10, 2021, 593 participants (n=301 hIVIG, n=292 placebo) were enrolled at 63 sites in 11 countries; 579 patients were included in the mITT analysis. Compared with placebo, the hIVIG group did not have significantly greater odds of a more favourable outcome at day 7; the adjusted OR was 1·06 (95% CI 0·77–1·45; p=0·72). Infusions were well tolerated, although infusion reactions were more common in the hIVIG group (18·6% vs 9·5% for placebo; p=0·002). The percentage with the composite safety outcome at day 7 was similar for the hIVIG (24%) and placebo groups (25%; OR 0·98, 95% CI 0·66–1·46; p=0·91). The ORs for the day 7 ordinal outcome did not vary for subgroups considered, but there was evidence of heterogeneity of the treatment effect for the day 7 composite safety outcome: risk was greater for hIVIG compared with placebo for patients who were antibody positive (OR 2·21, 95% CI 1·14–4·29); for patients who were antibody negative, the OR was 0·51 (0·29–0·90; p for interaction=0·001). INTERPRETATION: When administered with standard of care including remdesivir, SARS-CoV-2 hIVIG did not demonstrate efficacy among patients hospitalised with COVID-19 without end-organ failure. The safety of hIVIG might vary by the presence of endogenous neutralising antibodies at entry. FUNDING: US National Institutes of Health.
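
    The reported heterogeneity of the safety outcome by antibody status corresponds to a treatment-by-subgroup interaction test. The sketch below shows one common way such a test is set up, using a logistic model with an interaction term on simulated data; it is not the ITAC trial's actual model or data.

```python
# Sketch of a treatment-by-subgroup interaction test of the kind behind the
# reported p for interaction (safety outcome by baseline antibody status).
# All data here are simulated, not the ITAC trial data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 600
treat = rng.integers(0, 2, n)      # 1 = hIVIG, 0 = placebo (hypothetical allocation)
ab_pos = rng.integers(0, 2, n)     # 1 = antibody positive at entry (hypothetical)

# Simulate a composite safety outcome whose treatment effect differs by antibody status
logit_p = -1.2 + 0.1 * ab_pos + (-0.6 + 1.4 * ab_pos) * treat
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = np.column_stack([treat, ab_pos, treat * ab_pos])
fit = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print("OR (treatment, antibody-negative):", np.exp(fit.params[1]).round(2))
print("ratio of ORs (interaction term):  ", np.exp(fit.params[3]).round(2))
print("p for interaction:                ", fit.pvalues[3].round(4))
```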

    Development and Validation of a Risk Score for Chronic Kidney Disease in HIV Infection Using Prospective Cohort Data from the D:A:D Study

    Ristola M. is a member of the DAD Study Grp, Royal Free Hosp Clin Cohort, INSIGHT Study Grp, SMART Study Grp and ESPRIT Study Grp. Background: Chronic kidney disease (CKD) is a major health issue for HIV-positive individuals, associated with increased morbidity and mortality. Development and implementation of a risk score model for CKD would allow comparison of the risks and benefits of adding potentially nephrotoxic antiretrovirals to a treatment regimen and would identify those at greatest risk of CKD. The aims of this study were to develop a simple, externally validated, and widely applicable long-term risk score model for CKD in HIV-positive individuals that can guide decision making in clinical practice. Methods and Findings: A total of 17,954 HIV-positive individuals from the Data Collection on Adverse Events of Anti-HIV Drugs (D:A:D) study with >=3 estimated glomerular filtration rate (eGFR) values after 1 January 2004 were included. Baseline was defined as the first eGFR >60 ml/min/1.73 m2 after 1 January 2004; individuals with exposure to tenofovir, atazanavir, atazanavir/ritonavir, lopinavir/ritonavir or other boosted protease inhibitors before baseline were excluded. CKD was defined as confirmed (>3 mo apart) eGFR <=60 ml/min/1.73 m2. In the D:A:D study, 641 individuals developed CKD during 103,185 person-years of follow-up (PYFU; incidence 6.2/1,000 PYFU, 95% CI 5.7-6.7; median follow-up 6.1 y, range 0.3-9.1 y). Older age, intravenous drug use, hepatitis C coinfection, lower baseline eGFR, female gender, lower CD4 count nadir, hypertension, diabetes, and cardiovascular disease (CVD) predicted CKD. The adjusted incidence rate ratios of these nine categorical variables were scaled and summed to create the risk score. The median risk score at baseline was -2 (interquartile range -4 to 2). There was a 1:393 chance of developing CKD in the next 5 y in the low risk group, with substantially higher risk in the medium and high risk (risk score >=5, 505 events) groups. The number needed to harm (NNTH) at 5 y when starting unboosted atazanavir or lopinavir/ritonavir among those with a low risk score was 1,702 (95% CI 1,166-3,367); NNTH was 202 (95% CI 159-278) and 21 (95% CI 19-23), respectively, for those with a medium and high risk score. NNTH was 739 (95% CI 506-1,462), 88 (95% CI 69-121), and 9 (95% CI 8-10) for those with a low, medium, and high risk score, respectively, starting tenofovir, atazanavir/ritonavir, or another boosted protease inhibitor. The Royal Free Hospital Clinic Cohort included 2,548 individuals, of whom 94 developed CKD (3.7%) during 18,376 PYFU (median follow-up 7.4 y, range 0.3-12.7 y). Of 2,013 individuals included from the SMART/ESPRIT control arms, 32 developed CKD (1.6%) during 8,452 PYFU (median follow-up 4.1 y, range 0.6-8.1 y). External validation showed that the risk score predicted well in these cohorts. Limitations of this study included limited data on race and no information on proteinuria. Conclusions: Both traditional and HIV-related risk factors were predictive of CKD. These factors were used to develop an externally validated risk score for CKD in HIV infection that has direct clinical relevance for patients and clinicians to weigh the benefits of certain antiretrovirals against the risk of CKD and to identify those at greatest risk of CKD.
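
    Number needed to harm is the reciprocal of the absolute risk increase. The sketch below shows a generic NNTH calculation combining a baseline 5-year CKD risk with an assumed relative risk for a drug; the relative risk of 1.9 and the medium/high group baseline risks are hypothetical placeholders, not values from the D:A:D analysis.

```python
# Generic illustration of a number-needed-to-harm (NNTH) calculation:
# combine a baseline 5-year CKD risk with an assumed relative risk for a drug
# to get the absolute risk increase. Numbers are hypothetical placeholders.
def nnth(baseline_risk: float, relative_risk: float) -> float:
    """NNTH = 1 / absolute risk increase."""
    risk_on_drug = min(1.0, baseline_risk * relative_risk)
    absolute_increase = risk_on_drug - baseline_risk
    return float("inf") if absolute_increase <= 0 else 1.0 / absolute_increase

# Hypothetical inputs: 5-year baseline risks by risk-score group and a drug RR of 1.9
for group, risk in [("low", 1 / 393), ("medium", 1 / 47), ("high", 1 / 6)]:
    print(f"{group:>6}: baseline risk {risk:.4f} -> NNTH ~ {nnth(risk, 1.9):.0f}")
```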

    Effects of Pharyngeal Electrical Stimulation on Swallow Timings, Clearance and Safety in Post-Stroke Dysphagia: Analysis from the Swallowing Treatment Using Electrical Pharyngeal Stimulation (STEPS) Trial

    Swallowing impairment (dysphagia) post-stroke results in poorer outcomes. Pharyngeal electrical stimulation (PES) is a potential treatment for post-stroke dysphagia. In a post hoc analysis, we investigated PES using videofluoroscopy swallow studies (VFSS) from the STEPS trial, incorporating multiple measures of safety (penetration aspiration scale, PAS), speed and duration (timing), and efficiency (clearance), as opposed to the original trial, which only measured PAS scores. Eighty-one randomised participants (PES versus sham) were analysed at baseline and 2 weeks. Participants swallowed up to 5 ml and 50 ml boluses of thin liquid barium (40%), with images acquired at ≥25 fps. Based on PAS, the 5 ml mode bolus (the most frequently occurring PAS score) and the worst 50 ml bolus were chosen for further analysis. Eight timing measures were performed, including stage transition duration (STD) and pharyngeal transit time (PTT). Clearance measures comprised oral and pharyngeal residue and swallows to clear. Changes in scoring outcomes between PES and sham were compared at 2 weeks. The Wilcoxon signed-rank test was also used to evaluate longitudinal changes in both groups’ combined results at 2 weeks. Between-group analysis showed no statistically significant differences. Issues with suboptimal image quality and frame rate acquisition affected final numbers. At 2 weeks, both groups demonstrated a significant improvement in most safety scores (PAS) and in STD, possibly due to spontaneous recovery or a combination of spontaneous recovery, swallowing treatment and usual care. A nonsignificant trend for improvement was seen in other timing measures, including PTT. This study, which conducted additional kinematic and residue analyses on the STEPS data, did not detect “missed” improvements in swallowing function that the PAS is not designed to measure. However, more studies with greater numbers are required.
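
    The within-group changes described above rest on a paired non-parametric comparison. A minimal sketch of such a Wilcoxon signed-rank test on made-up baseline and 2-week worst-PAS scores (not STEPS data) is shown below.

```python
# Illustration of the within-group comparison described above: a Wilcoxon
# signed-rank test on paired baseline vs 2-week worst PAS scores.
# Scores below are made up, not STEPS trial data (PAS ranges 1-8).
import numpy as np
from scipy.stats import wilcoxon

baseline = np.array([8, 7, 6, 8, 5, 7, 6, 8, 4, 7, 6, 5])
week2    = np.array([6, 5, 6, 7, 3, 5, 6, 6, 4, 5, 4, 5])

stat, p = wilcoxon(baseline, week2)   # paired, non-parametric
print(f"Wilcoxon W = {stat}, p = {p:.3f}")
```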

    Reliability of the Penetration-Aspiration Scale and Temporal and Clearance Measures in Poststroke Dysphagia: Videofluoroscopic Analysis From the Swallowing Treatment using Electrical Pharyngeal Stimulation Trial

    PURPOSE: Information on reliability of outcome measures used to assess the effectiveness of interventions in dysphagia rehabilitation is lacking, particularly when used by different research groups. Here, we report on reliability of the penetration–aspiration scale (PAS) and temporal and clearance measures, determined using videofluoroscopy. METHOD: Secondary analysis used videofluoroscopies from the Swallowing Treatment using Electrical Pharyngeal Stimulation trial in subacute stroke. PAS scores (719 scores from 18 participants) were evaluated and compared to the original PAS scores from the trial. Five conditions were assessed, including reliability for every swallow and overall mean of the worst PAS score. Operational rules for assessing temporal and clearance measures were also developed using the same data, and reliability of these rules was assessed. Reliability of component-level and derivative-level scores was assessed using the intraclass correlation coefficient (ICC) and weighted kappa. RESULTS: Image quality was variable. Interrater reliability for the overall mean of the worst PAS score was excellent (ICC = .914, 95% confidence interval [CI] [.853, .951]) but moderate for every swallow in the bolus (ICC = .743, 95% CI [.708, .775]). Intrarater reliability for PAS was excellent (all conditions). Excellent reliability (both inter- and intrarater > .90) was seen for temporal measures of stage transition duration (ICC = .998, 95% CI [.993, .999] and ICC = .995, 95% CI [.987, .998], respectively) as well as initiation of laryngeal closure and pharyngeal transit time and all individual swallow events. Strong scores were obtained for some clearance measures; others were moderate or weak. CONCLUSIONS: Interrater reliability for PAS is acceptable but depends on how the PAS scores are handled in the analysis. Interrater reliability for most temporal measures was high, although some measures required additional training. No clearance measures had excellent reliability. SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.1909008
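
    The two headline statistics here are the intraclass correlation coefficient and weighted kappa. The sketch below computes a two-way random-effects, single-rater ICC (ICC(2,1)) from its ANOVA decomposition and a quadratically weighted kappa on made-up two-rater scores; the specific ICC model is an assumption for illustration, and the ratings are not trial data.

```python
# Illustration of the reliability statistics used above: ICC(2,1) computed from
# a two-way ANOVA decomposition, plus a quadratically weighted kappa.
# Ratings below are made up, not trial data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# rows = subjects (swallows), columns = raters; hypothetical PAS-like ratings
ratings = np.array([[8, 7], [6, 6], [5, 5], [7, 6], [3, 3],
                    [2, 2], [8, 8], [4, 5], [1, 1], [6, 6]])

def icc_2_1(x: np.ndarray) -> float:
    """Two-way random-effects, absolute-agreement, single-rater ICC (Shrout & Fleiss)."""
    n, k = x.shape
    grand = x.mean()
    ss_total = ((x - grand) ** 2).sum()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ss_total - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")
print("quadratic weighted kappa =",
      round(cohen_kappa_score(ratings[:, 0], ratings[:, 1], weights='quadratic'), 3))
```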