10 research outputs found

    High throughput genomic sequencing of bioaerosols in broiler chicken production facilities

    Chronic inhalation exposure to agricultural dust promotes the development of chronic respiratory diseases among poultry workers. Poultry dust is composed of dander, chicken feed, litter bedding and microbes. However, the microbial composition and abundance have not been fully elucidated. Genomic DNA was extracted from settled dust and personal inhalable dust collected while performing litter sampling or mortality collection tasks. DNA libraries were sequenced using a paired-end sequencing-by-synthesis approach on an Illumina HiSeq 2500. Sequencing data showed that poultry dust is predominantly composed of bacteria (64–67%) with a small quantity of avian, human and feed DNA (< 2% of total reads). Staphylococcus sp. AL1, Salinicoccus carnicancri and Lactobacillus crispatus were the most abundant bacterial species in personal exposure samples of inhalable dust. Settled dust had a moderate relative abundance of these species as well as Staphylococcus lentus and Lactobacillus salivarius. There was a statistically significant difference between the microbial composition of aerosolized and settled dust. Unlike settled dust composition, aerosolized dust composition had little variance between samples. These data provide an extensive analysis of the microbial composition and relative abundance in personal inhalable poultry dust and settled poultry dust.
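
    As a point of reference for how figures such as "predominantly bacteria (64–67%)" are typically derived, the sketch below (Python) converts per-taxon read counts into relative abundances. The counts, the taxa mix, and the "unclassified" bin are invented purely for illustration; the study's actual classifier, read totals, and per-sample values are not given in this record.

        # Sketch: turn per-taxon read counts into relative abundances (percent of
        # total reads). All counts below are invented for illustration only.
        counts = {
            "Staphylococcus sp. AL1": 412_000,
            "Salinicoccus carnicancri": 256_000,
            "Lactobacillus crispatus": 198_000,
            "Gallus gallus (host/dander)": 21_000,
            "unclassified": 113_000,
        }

        total = sum(counts.values())
        relative_abundance = {taxon: 100 * n / total for taxon, n in counts.items()}

        for taxon, pct in sorted(relative_abundance.items(), key=lambda kv: -kv[1]):
            print(f"{taxon:30s} {pct:5.1f}%")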

    Use of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer: a multidisciplinary perspective for daily practice

    Background: Most thyroid cancers of follicular origin have a favorable outcome. Only a small percentage of patients will develop metastatic disease, some of whom will become radioiodine refractory (RAI-R). Important challenges to ensure the best therapeutic outcomes include proper, timely, and appropriate diagnosis; decisions on local, systemic treatments; management of side effects of therapies; and a good relationship between the specialist, patients, and caregivers. Methods: With the aim of providing suggestions that can be useful in everyday practice, a multidisciplinary group of experts organized the following document, based on their shared clinical experience with patients with RAI-R differentiated thyroid cancer (DTC) undergoing treatment with lenvatinib. The main areas covered are patient selection, initiation of therapy, follow-up, and management of adverse events. Conclusions: It is essential to provide guidance for the management of RAI-R DTC patients with systemic therapies, and especially lenvatinib, since compliance and adherence to treatment are fundamental to achieve the best outcomes. While the therapeutic landscape in RAI-R DTC is evolving, with new targeted therapies, immunotherapy, etc., lenvatinib is expected to remain a first-line treatment and mainstay of therapy for several years in the vast majority of patients and settings. The guidance herein covers baseline work-up and initiation of systemic therapy, relevance of symptoms, multidisciplinary assessment, and patient education. Practical information based on expert experience is also given for the starting dose of lenvatinib, follow-up and monitoring, as well as the management of adverse events and discontinuation and reinitiation of therapy. The importance of patient engagement is also stressed.

    Effect of angiotensin-converting enzyme inhibitor and angiotensin receptor blocker initiation on organ support-free days in patients hospitalized with COVID-19

    IMPORTANCE Overactivation of the renin-angiotensin system (RAS) may contribute to poor clinical outcomes in patients with COVID-19. OBJECTIVE To determine whether angiotensin-converting enzyme (ACE) inhibitor or angiotensin receptor blocker (ARB) initiation improves outcomes in patients hospitalized for COVID-19. DESIGN, SETTING, AND PARTICIPANTS In an ongoing, adaptive platform randomized clinical trial, 721 critically ill and 58 non–critically ill hospitalized adults were randomized to receive an RAS inhibitor or control between March 16, 2021, and February 25, 2022, at 69 sites in 7 countries (final follow-up on June 1, 2022). INTERVENTIONS Patients were randomized to receive open-label initiation of an ACE inhibitor (n = 257), ARB (n = 248), ARB in combination with DMX-200 (a chemokine receptor-2 inhibitor; n = 10), or no RAS inhibitor (control; n = 264) for up to 10 days. MAIN OUTCOMES AND MEASURES The primary outcome was organ support–free days, a composite of hospital survival and days alive without cardiovascular or respiratory organ support through 21 days. The primary analysis was a bayesian cumulative logistic model. Odds ratios (ORs) greater than 1 represent improved outcomes. RESULTS On February 25, 2022, enrollment was discontinued due to safety concerns. Among 679 critically ill patients with available primary outcome data, the median age was 56 years and 239 participants (35.2%) were women. Median (IQR) organ support–free days among critically ill patients was 10 (–1 to 16) in the ACE inhibitor group (n = 231), 8 (–1 to 17) in the ARB group (n = 217), and 12 (0 to 17) in the control group (n = 231) (median adjusted odds ratios of 0.77 [95% bayesian credible interval, 0.58-1.06] for improvement for ACE inhibitor and 0.76 [95% credible interval, 0.56-1.05] for ARB compared with control). The posterior probabilities that ACE inhibitors and ARBs worsened organ support–free days compared with control were 94.9% and 95.4%, respectively. Hospital survival occurred in 166 of 231 critically ill participants (71.9%) in the ACE inhibitor group, 152 of 217 (70.0%) in the ARB group, and 182 of 231 (78.8%) in the control group (posterior probabilities that ACE inhibitor and ARB worsened hospital survival compared with control were 95.3% and 98.1%, respectively). CONCLUSIONS AND RELEVANCE In this trial, among critically ill adults with COVID-19, initiation of an ACE inhibitor or ARB did not improve, and likely worsened, clinical outcomes. TRIAL REGISTRATION ClinicalTrials.gov Identifier: NCT0273570
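
    The reported posterior probabilities of harm can be roughly sanity-checked from the published odds ratios and credible intervals. The Python sketch below assumes an approximately normal posterior on the log-OR scale; this is only an approximation of the trial's bayesian cumulative logistic model, so it will not reproduce the published 94.9% and 95.4% exactly.

        # Sketch: approximate P(harm) = P(OR < 1) from a reported median OR and
        # 95% credible interval, assuming a roughly normal posterior on log(OR).
        # ORs > 1 represent improvement in this trial, so OR < 1 means harm.
        from math import log, sqrt, erf

        def p_harm(or_median, ci_low, ci_high):
            mu = log(or_median)
            sigma = (log(ci_high) - log(ci_low)) / (2 * 1.96)   # width of the 95% CrI
            z = (0.0 - mu) / sigma
            return 0.5 * (1 + erf(z / sqrt(2)))                 # P(log OR < 0)

        print(f"ACE inhibitor: {p_harm(0.77, 0.58, 1.06):.1%}")  # trial reports 94.9%
        print(f"ARB:           {p_harm(0.76, 0.56, 1.05):.1%}")  # trial reports 95.4%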

    Effect of lower tidal volume ventilation facilitated by extracorporeal carbon dioxide removal vs standard care ventilation on 90-day mortality in patients with acute hypoxemic respiratory failure

    Importance In patients who require mechanical ventilation for acute hypoxemic respiratory failure, further reduction in tidal volumes, compared with conventional low tidal volume ventilation, may improve outcomes. Objective To determine whether lower tidal volume mechanical ventilation using extracorporeal carbon dioxide removal improves outcomes in patients with acute hypoxemic respiratory failure. Design, Setting, and Participants This multicenter, randomized, allocation-concealed, open-label, pragmatic clinical trial enrolled 412 adult patients receiving mechanical ventilation for acute hypoxemic respiratory failure, of a planned sample size of 1120, between May 2016 and December 2019 from 51 intensive care units in the UK. Follow-up ended on March 11, 2020. Interventions Participants were randomized to receive lower tidal volume ventilation facilitated by extracorporeal carbon dioxide removal for at least 48 hours (n = 202) or standard care with conventional low tidal volume ventilation (n = 210). Main Outcomes and Measures The primary outcome was all-cause mortality 90 days after randomization. Prespecified secondary outcomes included ventilator-free days at day 28 and adverse event rates. Results Among 412 patients who were randomized (mean age, 59 years; 143 [35%] women), 405 (98%) completed the trial. The trial was stopped early because of futility and feasibility following recommendations from the data monitoring and ethics committee. The 90-day mortality rate was 41.5% in the lower tidal volume ventilation with extracorporeal carbon dioxide removal group vs 39.5% in the standard care group (risk ratio, 1.05 [95% CI, 0.83-1.33]; difference, 2.0% [95% CI, −7.6% to 11.5%]; P = .68). There were significantly fewer mean ventilator-free days in the extracorporeal carbon dioxide removal group compared with the standard care group (7.1 [95% CI, 5.9-8.3] vs 9.2 [95% CI, 7.9-10.4] days; mean difference, −2.1 [95% CI, −3.8 to −0.3]; P = .02). Serious adverse events were reported for 62 patients (31%) in the extracorporeal carbon dioxide removal group and 18 (9%) in the standard care group, including intracranial hemorrhage in 9 patients (4.5%) vs 0 (0%) and bleeding at other sites in 6 (3.0%) vs 1 (0.5%) in the extracorporeal carbon dioxide removal group vs the control group. Overall, 21 patients experienced 22 serious adverse events related to the study device. Conclusions and Relevance Among patients with acute hypoxemic respiratory failure, the use of extracorporeal carbon dioxide removal to facilitate lower tidal volume mechanical ventilation, compared with conventional low tidal volume mechanical ventilation, did not significantly reduce 90-day mortality. However, due to early termination, the study may have been underpowered to detect a clinically important difference
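
    To show how the reported risk ratio relates to the underlying event rates, the sketch below computes an unadjusted risk ratio with a Wald 95% CI. The event counts are back-calculated from the reported percentages and are assumptions for illustration, not figures taken from the trial report.

        # Sketch: unadjusted risk ratio and Wald 95% CI for 90-day mortality.
        # Counts are assumed (back-calculated from the reported 41.5% vs 39.5%).
        from math import log, exp, sqrt

        deaths_ecco2r, n_ecco2r = 83, 200      # extracorporeal CO2 removal group
        deaths_control, n_control = 81, 205    # standard care group

        rr = (deaths_ecco2r / n_ecco2r) / (deaths_control / n_control)
        se = sqrt(1/deaths_ecco2r - 1/n_ecco2r + 1/deaths_control - 1/n_control)
        lo, hi = exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se)

        print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # close to the reported 1.05 (0.83-1.33)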

    Postoperative continuous positive airway pressure to prevent pneumonia, re-intubation, and death after major abdominal surgery (PRISM): a multicentre, open-label, randomised, phase 3 trial

    Background: Respiratory complications are an important cause of postoperative morbidity. We aimed to investigate whether continuous positive airway pressure (CPAP) administered immediately after major abdominal surgery could prevent postoperative morbidity. Methods: PRISM was an open-label, randomised, phase 3 trial done at 70 hospitals across six countries. Patients aged 50 years or older who were undergoing elective major open abdominal surgery were randomly assigned (1:1) to receive CPAP within 4 h of the end of surgery or usual postoperative care. Patients were randomly assigned using a computer-generated minimisation algorithm with inbuilt concealment. The primary outcome was a composite of pneumonia, endotracheal re-intubation, or death within 30 days after randomisation, assessed in the intention-to-treat population. Safety was assessed in all patients who received CPAP. The trial is registered with the ISRCTN registry, ISRCTN56012545. Findings: Between Feb 8, 2016, and Nov 11, 2019, 4806 patients were randomly assigned (2405 to the CPAP group and 2401 to the usual care group), of whom 4793 were included in the primary analysis (2396 in the CPAP group and 2397 in the usual care group). 195 (8·1%) of 2396 patients in the CPAP group and 197 (8·2%) of 2397 patients in the usual care group met the composite primary outcome (adjusted odds ratio 1·01 [95% CI 0·81-1·24]; p=0·95). 200 (8·9%) of 2241 patients in the CPAP group had adverse events. The most common adverse events were claustrophobia (78 [3·5%] of 2241 patients), oronasal dryness (43 [1·9%]), excessive air leak (36 [1·6%]), vomiting (26 [1·2%]), and pain (24 [1·1%]). There were two serious adverse events: one patient had significant hearing loss and one patient had obstruction of their venous catheter caused by a CPAP hood, which resulted in transient haemodynamic instability. Interpretation: In this large clinical effectiveness trial, CPAP did not reduce the incidence of pneumonia, endotracheal re-intubation, or death after major abdominal surgery. Although CPAP has an important role in the treatment of respiratory failure after surgery, routine use of prophylactic postoperative CPAP is not recommended.
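
    A similar crude check can be made for the composite primary outcome using the counts in the abstract. Note that PRISM reports an adjusted odds ratio, so the unadjusted value computed in this sketch differs slightly from the published 1·01 (95% CI 0·81-1·24).

        # Sketch: unadjusted odds ratio and Wald 95% CI for the composite outcome,
        # using the counts reported in the abstract (not an adjusted analysis).
        from math import log, exp, sqrt

        events_cpap, n_cpap = 195, 2396
        events_usual, n_usual = 197, 2397

        odds_cpap = events_cpap / (n_cpap - events_cpap)
        odds_usual = events_usual / (n_usual - events_usual)
        or_crude = odds_cpap / odds_usual

        se = sqrt(1/events_cpap + 1/(n_cpap - events_cpap)
                  + 1/events_usual + 1/(n_usual - events_usual))
        lo, hi = exp(log(or_crude) - 1.96 * se), exp(log(or_crude) + 1.96 * se)

        print(f"Unadjusted OR = {or_crude:.2f} (95% CI {lo:.2f}-{hi:.2f})")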

    A Bayesian reanalysis of the Standard versus Accelerated Initiation of Renal-Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial

    Background Timing of initiation of kidney-replacement therapy (KRT) in critically ill patients remains controversial. The Standard versus Accelerated Initiation of Renal-Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial compared two strategies of KRT initiation (accelerated versus standard) in critically ill patients with acute kidney injury and found neutral results for 90-day all-cause mortality. Probabilistic exploration of the trial endpoints may enable greater understanding of the trial findings. We aimed to perform a reanalysis using a Bayesian framework. Methods We performed a secondary analysis of all 2927 patients randomized in the multinational STARRT-AKI trial, conducted at 168 centers in 15 countries. The primary endpoint, 90-day all-cause mortality, was evaluated using hierarchical Bayesian logistic regression. A spectrum of priors was used, including optimistic, neutral, and pessimistic priors, along with priors informed by earlier clinical trials. Secondary endpoints (KRT-free days and hospital-free days) were assessed using zero–one inflated beta regression. Results The posterior probability of benefit comparing an accelerated versus a standard KRT initiation strategy for the primary endpoint suggested no important difference, regardless of the prior used (absolute difference of 0.13% [95% credible interval (CrI) −3.30%; 3.40%], −0.39% [95% CrI −3.46%; 3.00%], and 0.64% [95% CrI −2.53%; 3.88%] for neutral, optimistic, and pessimistic priors, respectively). There was a very low probability that the effect size was equal to or larger than a consensus-defined minimal clinically important difference. Patients allocated to the accelerated strategy had a lower number of KRT-free days (median absolute difference of −3.55 days [95% CrI −6.38; −0.48]); the probability that the accelerated strategy was associated with more KRT-free days was 0.008. Hospital-free days were similar between strategies: the accelerated strategy had a median absolute difference of 0.48 more hospital-free days (95% CrI −1.87; 2.72) compared with the standard strategy, and the probability that the accelerated strategy had more hospital-free days was 0.66. Conclusions In a Bayesian reanalysis of the STARRT-AKI trial, we found a very low probability that an accelerated strategy has clinically important benefits compared with the standard strategy. Patients receiving the accelerated strategy probably have fewer days alive and KRT-free. These findings do not support the adoption of an accelerated strategy of KRT initiation.
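
    The idea of testing a spectrum of priors can be illustrated with a much simpler model than the hierarchical Bayesian logistic regression used in the reanalysis: conjugate normal updating of a treatment effect on the log-odds-ratio scale. All numbers in the sketch below are invented for illustration and are not estimates from STARRT-AKI.

        # Sketch: how optimistic, neutral, and pessimistic priors shift the posterior
        # probability of benefit when combined with a near-neutral data summary.
        # All effect sizes are invented; this is not the reanalysis model.
        from math import sqrt, erf

        def posterior(prior_mean, prior_sd, data_mean, data_se):
            # Precision-weighted combination of a normal prior and a normal likelihood.
            w_prior, w_data = 1 / prior_sd**2, 1 / data_se**2
            mean = (w_prior * prior_mean + w_data * data_mean) / (w_prior + w_data)
            return mean, sqrt(1 / (w_prior + w_data))

        data_mean, data_se = 0.01, 0.08   # hypothetical near-neutral log-OR summary

        priors = {
            "neutral":     (0.00, 0.35),  # centred on no effect, weakly informative
            "optimistic":  (-0.20, 0.20), # centred on benefit of the accelerated strategy
            "pessimistic": (0.20, 0.20),  # centred on harm
        }

        for name, (m, s) in priors.items():
            post_m, post_s = posterior(m, s, data_mean, data_se)
            p_benefit = 0.5 * (1 + erf((0 - post_m) / (post_s * sqrt(2))))  # P(log-OR < 0)
            print(f"{name:11s} prior -> P(benefit) = {p_benefit:.2f}")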

    Regional Practice Variation and Outcomes in the Standard Versus Accelerated Initiation of Renal Replacement Therapy in Acute Kidney Injury (STARRT-AKI) Trial: A Post Hoc Secondary Analysis.

    Objectives: Among patients with severe acute kidney injury (AKI) admitted to the ICU in high-income countries, regional practice variations for fluid balance (FB) management, timing, and choice of renal replacement therapy (RRT) modality may be significant. Design: Secondary post hoc analysis of the STandard vs. Accelerated initiation of Renal Replacement Therapy in Acute Kidney Injury (STARRT-AKI) trial (ClinicalTrials.gov number NCT02568722). Setting: One hundred fifty-three ICUs in 13 countries. Patients: Altogether 2693 critically ill patients with AKI, of whom 994 were North American, 1143 European, and 556 from Australia and New Zealand (ANZ). Interventions: None. Measurements and Main Results: Total mean FB to a maximum of 14 days was +7199 mL in North America, +5641 mL in Europe, and +2211 mL in ANZ (p p p p p p p p = 0.007). Conclusions: Among STARRT-AKI trial centers, significant regional practice variation exists regarding FB, timing of initiation of RRT, and initial use of continuous RRT. After adjustment, such practice variation was associated with shorter ICU and hospital stays and lower 90-day mortality among ANZ patients compared with other regions.

    Effect of Antiplatelet Therapy on Survival and Organ Support–Free Days in Critically Ill Patients With COVID-19


    Whole-genome sequencing reveals host factors underlying critical COVID-19

    Other funding: Department of Health and Social Care (DHSC); Illumina; LifeArc; Medical Research Council (MRC); UKRI; Sepsis Research (the Fiona Elizabeth Agnew Trust); the Intensive Care Society, Wellcome Trust Senior Research Fellowship (223164/Z/21/Z); BBSRC Institute Program Support Grant to the Roslin Institute (BBS/E/D/20002172, BBS/E/D/10002070, BBS/E/D/30002275); UKRI grants (MC_PC_20004, MC_PC_19025, MC_PC_1905, MRNO2995X/1); UK Research and Innovation (MC_PC_20029); the Wellcome PhD training fellowship for clinicians (204979/Z/16/Z); the Edinburgh Clinical Academic Track (ECAT) programme; the National Institute for Health Research; the Wellcome Trust; the MRC; Cancer Research UK; the DHSC; NHS England; the Smilow family; the National Center for Advancing Translational Sciences of the National Institutes of Health (CTSA award number UL1TR001878); the Perelman School of Medicine at the University of Pennsylvania; National Institute on Aging (NIA U01AG009740); the National Institute on Aging (RC2 AG036495, RC4 AG039029); the Common Fund of the Office of the Director of the National Institutes of Health; NCI; NHGRI; NHLBI; NIDA; NIMH; NINDS.
    Critical COVID-19 is caused by immune-mediated inflammatory lung injury. Host genetic variation influences the development of illness requiring critical care or hospitalization after infection with SARS-CoV-2. The GenOMICC (Genetics of Mortality in Critical Care) study enables the comparison of genomes from individuals who are critically ill with those of population controls to find underlying disease mechanisms. Here we use whole-genome sequencing in 7,491 critically ill individuals compared with 48,400 controls to discover and replicate 23 independent variants that significantly predispose to critical COVID-19. We identify 16 new independent associations, including variants within genes that are involved in interferon signalling (IL10RB and PLSCR1), leucocyte differentiation (BCL11A) and blood-type antigen secretor status (FUT2). Using transcriptome-wide association and colocalization to infer the effect of gene expression on disease severity, we find evidence that implicates multiple genes, including reduced expression of a membrane flippase (ATP11A) and increased expression of a mucin (MUC1), in critical disease. Mendelian randomization provides evidence in support of causal roles for myeloid cell adhesion molecules (SELE, ICAM5 and CD209) and the coagulation factor F8, all of which are potentially druggable targets. Our results are broadly consistent with a multi-component model of COVID-19 pathophysiology, in which at least two distinct mechanisms can predispose to life-threatening disease: failure to control viral replication, or an enhanced tendency towards pulmonary inflammation and intravascular coagulation. We show that comparison between cases of critical illness and population controls is highly efficient for the detection of therapeutically relevant mechanisms of disease.
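
    The Mendelian randomization step mentioned above combines per-variant effects on an exposure (for example, expression or circulating level of a candidate gene product such as SELE or F8) with effects on critical COVID-19. The sketch below shows the generic two-sample Wald-ratio and inverse-variance-weighted calculation with invented summary statistics; it does not use the study's actual instruments or estimates.

        # Sketch: two-sample Mendelian randomization via per-SNP Wald ratios and a
        # fixed-effect inverse-variance-weighted (IVW) combination. All numbers are
        # invented placeholders, not the study's data.
        from math import sqrt

        # (beta_exposure, se_exposure, beta_outcome, se_outcome) for each instrument SNP
        instruments = [
            (0.12, 0.01, 0.030, 0.010),
            (0.08, 0.01, 0.018, 0.009),
            (0.15, 0.02, 0.041, 0.012),
        ]

        # Wald ratio per SNP with a first-order (delta-method) standard error.
        ratios = [(bo / be, so / abs(be)) for be, _, bo, so in instruments]

        # Fixed-effect IVW meta-analysis of the per-SNP ratios.
        weights = [1 / se**2 for _, se in ratios]
        ivw_est = sum(w * r for (r, _), w in zip(ratios, weights)) / sum(weights)
        ivw_se = sqrt(1 / sum(weights))

        print(f"IVW causal estimate: {ivw_est:.3f} (SE {ivw_se:.3f})")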