
    Reliability, Validity, and Responsiveness of InFLUenza Patient-Reported Outcome (FLU-PRO©) Scores in Influenza-Positive Patients

    Objectives: To assess the reliability, validity, and responsiveness of InFLUenza Patient-Reported Outcome (FLU-PRO©) scores for quantifying the presence and severity of influenza symptoms. Methods: An observational prospective cohort study of adults (≥18 years) with influenza-like illness in the United States, the United Kingdom, Mexico, and South America was conducted. Participants completed the 37-item draft FLU-PRO daily for up to 14 days. Item-level and factor analyses were used to remove items and determine factor structure. Reliability of the final tool was estimated using Cronbach α and intraclass correlation coefficients (2-day reliability). Convergent and known-groups validity and responsiveness were assessed using global assessments of influenza severity and return to usual health. Results: Of the 536 patients enrolled, 221 influenza-positive subjects comprised the analytical sample. The mean age of the patients was 40.7 years, 60.2% were women, and 59.7% were white. The final 32-item measure has six factors/domains (nose, throat, eyes, chest/respiratory, gastrointestinal, and body/systemic), with a higher order factor representing symptom severity overall (comparative fit index = 0.92; root mean square error of approximation = 0.06). Cronbach α was high (total = 0.92; domain range = 0.71–0.87); test-retest reliability (intraclass correlation coefficient, day 1–day 2) was 0.83 for total scores and 0.57 to 0.79 for domains. Day 1 FLU-PRO domain and total scores were moderately to highly correlated (≥0.30) with Patient Global Rating of Flu Severity (except nose and throat). Consistent with known-groups validity, scores differentiated severity groups on the basis of global rating (total: F = 57.2, P < 0.001; domains: F = 8.9–67.5, P < 0.001). Subjects reporting return to usual health showed significantly greater (P < 0.05) FLU-PRO score improvement by day 7 than did those who did not, suggesting score responsiveness. 
    Conclusions: Results suggest that FLU-PRO scores are reliable, valid, and responsive to change in influenza-positive adults.
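The reliability estimates above rest on Cronbach's α, which compares the sum of the per-item variances with the variance of the total score. A minimal sketch of the computation, using a hypothetical score matrix (not FLU-PRO data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 4 subjects x 3 symptom items, for illustration only
scores = np.array([
    [1.0, 2.0, 1.0],
    [2.0, 3.0, 2.0],
    [3.0, 3.0, 3.0],
    [4.0, 4.0, 4.0],
])
alpha = cronbach_alpha(scores)
```

When the items track the same underlying construct, α approaches 1; on this scale the reported total-score α of 0.92 indicates high internal consistency.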

    Evolution of the use of corticosteroids for the treatment of hospitalised COVID-19 patients in Spain between March and November 2020: SEMI-COVID national registry

    Objectives: Since the results of the RECOVERY trial, WHO recommendations on the use of corticosteroids (CTs) in COVID-19 have changed. The aim of this study was to analyse the evolving use of CTs in Spain during the pandemic and to assess the potential influence of the new recommendations. Material and methods: A retrospective, descriptive, observational study was conducted on adults hospitalised due to COVID-19 in Spain who were included in the SEMI-COVID-19 Registry from March to November 2020. Results: CTs were used in 6053 (36.21%) of the included patients. These patients were older (mean (SD): 69.6 (14.6) vs. 66.0 (16.8) years; p < 0.001) and had a higher prevalence of hypertension (57.0% vs. 47.7%; p < 0.001), obesity (26.4% vs. 19.3%; p < 0.0001), and multimorbidity (20.6% vs. 16.1%; p < 0.001). They also had higher values (mean (95% CI)) of C-reactive protein (CRP) (86 (32.7-160) vs. 49.3 (16-109) mg/dL; p < 0.001), ferritin (791 (393-1534) vs. 470 (236-996) µg/dL; p < 0.001), and D-dimer (750 (430-1400) vs. 617 (345-1180) µg/dL; p < 0.001), and lower SpO2/FiO2 (266 (91.1) vs. 301 (101); p < 0.001). From June 2020 onwards, the use of CTs increased (March vs. September; p < 0.001). Overall, 20% of patients did not receive steroids, and 40% received less than 200 mg accumulated prednisone equivalent dose (APED); more severe patients were treated with higher doses. Conclusions: Patients with greater comorbidity, severity, and inflammatory markers were those treated with CTs. In severe patients, there was a trend towards the use of higher doses. A mortality benefit was observed in patients with oxygen saturation ≤90%.

    TRY plant trait database – enhanced coverage and open access

    Plant traits—the morphological, anatomical, physiological, biochemical and phenological characteristics of plants—determine how plants respond to environmental factors, affect other trophic levels, and influence ecosystem properties and their benefits and detriments to people. Plant trait data thus represent the basis for a vast area of research spanning from evolutionary biology, community and functional ecology, to biodiversity conservation, ecosystem and landscape management, restoration, biogeography and earth system modelling. Since its foundation in 2007, the TRY database of plant traits has grown continuously. It now provides unprecedented data coverage under an open access data policy and is the main plant trait database used by the research community worldwide. Increasingly, the TRY database also supports new frontiers of trait‐based plant research, including the identification of data gaps and the subsequent mobilization or measurement of new data. To support this development, in this article we evaluate the extent of the trait data compiled in TRY and analyse emerging patterns of data coverage and representativeness. Best species coverage is achieved for categorical traits—almost complete coverage for ‘plant growth form’. However, most traits relevant for ecology and vegetation modelling are characterized by continuous intraspecific variation and trait–environmental relationships. These traits have to be measured on individual plants in their respective environment. Despite unprecedented data coverage, we observe a humbling lack of completeness and representativeness of these continuous traits in many aspects. We, therefore, conclude that reducing data gaps and biases in the TRY database remains a key challenge and requires a coordinated approach to data mobilization and trait measurements. This can only be achieved in collaboration with other initiatives.

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres. Methods This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries. Results In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable for more than 90 per cent of patients except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia. Conclusion This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high- and low–middle-income countries.

    AS03-adjuvanted versus non-adjuvanted inactivated trivalent influenza vaccine against seasonal influenza in elderly people: a phase 3 randomised trial

    BACKGROUND: We aimed to compare AS03-adjuvanted inactivated trivalent influenza vaccine (TIV) with non-adjuvanted TIV for seasonal influenza prevention in elderly people. METHODS: We did a randomised trial in 15 countries worldwide during the 2008-09 (year 1) and 2009-10 (year 2) influenza seasons. Eligible participants aged at least 65 years who were not in hospital or bedridden and were without acute illness were randomly assigned (1:1) to receive either AS03-adjuvanted TIV or non-adjuvanted TIV. Randomisation was done in an internet-based system, with a blocking scheme and stratification by age (65-74 years and 75 years or older). Participants were scheduled to receive one vaccine in each year, and remained in the same group in years 1 and 2. Unmasked personnel prepared and gave the vaccines, but participants and individuals assessing any study endpoint were masked. The coprimary objectives were to assess the relative efficacy of the vaccines and lot-to-lot consistency of the AS03-adjuvanted TIV (to be reported elsewhere). For the first objective, the primary endpoint was relative efficacy of the vaccines for prevention of influenza A (excluding A H1N1 pdm09) or B, or both, that was confirmed by PCR analysis in year 1 (the lower limit of the two-sided 95% CI had to be greater than zero to establish superiority). From Nov 15 to April 30 in both years, participants were monitored by telephone or site contact and home visits every week or 2 weeks to identify cases of influenza-like illness. After onset of suspected cases, we obtained nasal and throat swabs to identify influenza RNA with real-time PCR. Efficacy analyses were done per protocol. This trial is registered with ClinicalTrials.gov, number NCT00753272. FINDINGS: We enrolled 43 802 participants, of whom 21 893 were assigned to and received the AS03-adjuvanted TIV and 21 802 the non-adjuvanted TIV in year 1.
    In the year 1 efficacy cohort, fewer participants given AS03-adjuvanted than non-adjuvanted TIV were infected with influenza A or B, or both (274 [1.27%, 95% CI 1.12-1.43] of 21 573 vs 310 [1.44%, 1.29-1.61] of 21 482; relative efficacy 12.11%, 95% CI -3.40 to 25.29; superiority not established). Fewer participants in the year 1 efficacy cohort given AS03-adjuvanted TIV than non-adjuvanted TIV were infected with influenza A (224 [1.04%, 95% CI 0.91-1.18] vs 270 [1.26%, 1.11-1.41]; relative efficacy 17.53%, 95% CI 1.55-30.92) and influenza A H3N2 (170 [0.79%, 0.67-0.92] vs 205 [0.95%, 0.83-1.09]; post-hoc analysis relative efficacy 22.0%, 95% CI 5.68-35.49). INTERPRETATION: AS03-adjuvanted TIV has a higher efficacy for prevention of some subtypes of influenza than does a non-adjuvanted TIV. Future influenza vaccine studies in elderly people should be based on subtype-specific or lineage-specific endpoints. FUNDING: GlaxoSmithKline Biologicals SA.
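For reference, the unadjusted relative efficacy implied by the raw year 1 case counts can be sketched as below; the published 12.11% comes from the trial's statistical model, so this crude figure differs slightly:

```python
def relative_efficacy(cases_a: int, n_a: int, cases_b: int, n_b: int) -> float:
    """Crude relative efficacy: 1 - (attack rate in group A / attack rate in group B)."""
    return 1 - (cases_a / n_a) / (cases_b / n_b)

# Year 1 efficacy-cohort counts from the abstract (AS03-adjuvanted vs non-adjuvanted TIV)
crude_re = relative_efficacy(274, 21573, 310, 21482)
print(f"{crude_re:.2%}")  # close to, though not identical to, the modelled 12.11%
```

Because the 95% CI for this endpoint crosses zero (-3.40 to 25.29), the point estimate alone did not establish superiority.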

    Molecular mechanisms and biological role of Campylobacter jejuni attachment to host cells.

    Adhesion to host cells is an important step in the pathogenesis of Campylobacter jejuni, which is the most prevalent bacterial cause of human gastroenteritis worldwide. In contrast to other bacteria such as E. coli and Salmonella, adherence of C. jejuni is not mediated by fimbriae or pili. A number of C. jejuni adhesion-related factors have been described. However, the results obtained by different researchers in different laboratories are often contradictory and inconclusive, with only some of the factors described being confirmed as true adhesins. In this review, we present the current state of studies on the mechanisms of attachment of C. jejuni to host cells.

    Pseudomonas aeruginosa Bloodstream Infections Presenting with Septic Shock in Neutropenic Cancer Patients: Impact of Empirical Antibiotic Therapy.

    This large, multicenter, retrospective cohort study including onco-hematological neutropenic patients with Pseudomonas aeruginosa bloodstream infection (PABSI) found that among 1213 episodes, 411 (33%) presented with septic shock. The presence of solid tumors (33.3% vs. 20.2%, p < 0.001), a high-risk Multinational Association for Supportive Care in Cancer (MASCC) index score (92.6% vs. 57.4%; p < 0.001), pneumonia (38% vs. 19.2%, p < 0.001), and infection due to multidrug-resistant P. aeruginosa (MDRPA) (33.8% vs. 21.1%, p < 0.001) were statistically significantly higher in patients with septic shock compared to those without. Patients with septic shock were more likely to receive inadequate empirical antibiotic therapy (IEAT) (21.7% vs. 16.2%, p = 0.020) and to present poorer outcomes, including a need for ICU admission (74% vs. 10.5%; p < 0.001), mechanical ventilation (49.1% vs. 5.6%; p < 0.001), and higher 7-day and 30-day case fatality rates (58.2% vs. 12%, p < 0.001, and 74% vs. 23.1%, p < 0.001, respectively). Risk factors for the 30-day case fatality rate in patients with septic shock were orotracheal intubation, IEAT, infection due to MDRPA, and persistent PABSI. Therapy with granulocyte colony-stimulating factor and bloodstream infection from the urinary tract were associated with improved survival. Carbapenems were the most frequent IEAT in patients with septic shock, and the use of empirical combination therapy showed a tendency towards improved survival. Our findings emphasize the need for tailored management strategies in this high-risk population.

    Remdesivir for the treatment of COVID-19 — Final report

    BACKGROUND Although several therapeutic agents have been evaluated for the treatment of coronavirus disease 2019 (Covid-19), no antiviral agents have yet been shown to be efficacious. METHODS We conducted a double-blind, randomized, placebo-controlled trial of intravenous remdesivir in adults who were hospitalized with Covid-19 and had evidence of lower respiratory tract infection. Patients were randomly assigned to receive either remdesivir (200 mg loading dose on day 1, followed by 100 mg daily for up to 9 additional days) or placebo for up to 10 days. The primary outcome was the time to recovery, defined by either discharge from the hospital or hospitalization for infection-control purposes only. RESULTS A total of 1062 patients underwent randomization (with 541 assigned to remdesivir and 521 to placebo). Those who received remdesivir had a median recovery time of 10 days (95% confidence interval [CI], 9 to 11), as compared with 15 days (95% CI, 13 to 18) among those who received placebo (rate ratio for recovery, 1.29; 95% CI, 1.12 to 1.49; P<0.001, by a log-rank test). In an analysis that used a proportional-odds model with an eight-category ordinal scale, the patients who received remdesivir were found to be more likely than those who received placebo to have clinical improvement at day 15 (odds ratio, 1.5; 95% CI, 1.2 to 1.9, after adjustment for actual disease severity). The Kaplan–Meier estimates of mortality were 6.7% with remdesivir and 11.9% with placebo by day 15 and 11.4% with remdesivir and 15.2% with placebo by day 29 (hazard ratio, 0.73; 95% CI, 0.52 to 1.03). Serious adverse events were reported in 131 of the 532 patients who received remdesivir (24.6%) and in 163 of the 516 patients who received placebo (31.6%). CONCLUSIONS Our data show that remdesivir was superior to placebo in shortening the time to recovery in adults who were hospitalized with Covid-19 and had evidence of lower respiratory tract infection.
    Copyright © 2020 Massachusetts Medical Society.