
    The Endogenous Th17 Response in NO<sub>2</sub>-Promoted Allergic Airway Disease Is Dispensable for Airway Hyperresponsiveness and Distinct from Th17 Adoptive Transfer

    Severe, glucocorticoid-resistant asthma comprises 5-7% of patients with asthma. IL-17 is a biomarker of severe asthma, and the adoptive transfer of Th17 cells in mice is sufficient to induce glucocorticoid-resistant allergic airway disease. Nitrogen dioxide (NO2) is an environmental toxin that correlates with asthma severity, exacerbation, and risk of adverse outcomes. Mice that are allergically sensitized to the antigen ovalbumin by exposure to NO2 exhibit a mixed Th2/Th17 adaptive immune response and eosinophil and neutrophil recruitment to the airway following antigen challenge, a phenotype reminiscent of severe clinical asthma. Because IL-1 receptor (IL-1R) signaling is critical in the generation of the Th17 response in vivo, we hypothesized that the IL-1R/Th17 axis contributes to pulmonary inflammation and airway hyperresponsiveness (AHR) in NO2-promoted allergic airway disease and manifests in glucocorticoid-resistant cytokine production. IL-17A neutralization at the time of antigen challenge or genetic deficiency in IL-1R resulted in decreased neutrophil recruitment to the airway following antigen challenge but did not protect against the development of AHR. Instead, IL-1R-/- mice developed exacerbated AHR compared to WT mice. Lung cells from NO2-allergically inflamed mice that were treated in vitro with dexamethasone (Dex) during antigen restimulation exhibited reduced Th17 cytokine production, whereas Th17 cytokine production by lung cells from recipient mice of in vitro Th17-polarized OTII T-cells was resistant to Dex. These results demonstrate that the IL-1R/Th17 axis does not contribute to AHR development in NO2-promoted allergic airway disease, that Th17 adoptive transfer does not necessarily reflect an endogenously generated Th17 response, and that the functions of Th17 responses are contingent on the experimental conditions in which they are generated. © 2013 Martin et al.

    The efficacy of surgical decompression before 24 hours versus 24 to 72 hours in patients with spinal cord injury from T1 to L1 – with specific consideration on ethics: a randomized controlled trial

    Background: There is no clear evidence that early decompression following spinal cord injury (SCI) improves neurologic outcome. Such evidence must come from randomized controlled trials (RCTs), and to date no large-scale RCT has evaluated the timing of surgical decompression in the setting of thoracolumbar spinal cord injury. A concern for many is the ethical dilemma that a delay in surgery may adversely affect neurologic recovery, although this has never been conclusively proven. The purpose of this study is to compare the efficacy of early (before 24 hours) versus late (24–72 hours) surgical decompression, in terms of neurological improvement, in the setting of traumatic thoracolumbar spinal cord injury in a randomized format with independent, trained, and blinded examiners.
    Methods: In this prospective, randomized clinical trial, 328 selected patients with traumatic thoracolumbar spinal cord injury are to be randomly assigned to: 1) early surgery (before 24 hours); or 2) late surgery (24–72 hours). A rapid-response team and setup is prepared to expedite treatment for the early-decompression group. Supportive care, i.e. pressure support and immobilization, will be provided on admission to the late-decompression group. Patients will be followed for at least 12 months post-trauma.
    Discussion: This study will hopefully contribute to answering the question of the efficacy of the timing of surgery in traumatic thoracolumbar SCI.
    Trial registration: ISRCTN61263382

    Degradation of carbon disulphide (CS<sub>2</sub>) in soils and groundwater from a CS<sub>2</sub>-contaminated site

    This study is the first investigation of biodegradation of carbon disulphide (CS2) in soil that provides estimates of degradation rates and identifies intermediate degradation products and carbon isotope signatures of degradation. Microcosm studies were undertaken under anaerobic conditions using soil and groundwater recovered from CS2-contaminated sites. Proposed degradation mechanisms were validated using equilibrium speciation modelling of concentrations and carbon isotope ratios. A first-order degradation rate constant of 1.25 × 10⁻² h⁻¹ was obtained for biological degradation with soil. Carbonyl sulphide (COS) and hydrogen sulphide (H2S) were found to be intermediates of degradation, but did not accumulate in vials. A 13C/12C enrichment factor of -7.5 ± 0.8 ‰ was obtained for degradation within microcosms with both soil and groundwater, whereas a 13C/12C enrichment factor of -23.0 ± 2.1 ‰ was obtained for degradation with site groundwater alone. It can be concluded that biological degradation of both CS2-contaminated soil and groundwater is likely to occur in the field, suggesting that natural attenuation may be an appropriate remedial tool at some sites. The presence of biodegradation by-products including COS and H2S indicates that biodegradation of CS2 is occurring, and stable carbon isotopes are a promising tool to quantify CS2 degradation.
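    The reported rate constant and enrichment factor lend themselves to two standard back-of-the-envelope calculations: the half-life implied by first-order kinetics, and the extent of degradation implied by a Rayleigh fractionation model. The sketch below uses the study's values for k and ε, but the initial and measured δ13C values are hypothetical placeholders chosen only to illustrate the arithmetic, not measurements from the paper.

    ```python
    import math

    # First-order decay: C(t) = C0 * exp(-k * t), so half-life t_half = ln(2) / k.
    k = 1.25e-2                      # rate constant from the soil microcosms, h^-1
    t_half = math.log(2) / k         # ~55.5 h

    # Rayleigh model: delta ≈ delta0 + epsilon * ln(f), where f is the fraction
    # of CS2 remaining. Rearranging gives f = exp((delta - delta0) / epsilon).
    epsilon = -7.5                   # 13C/12C enrichment factor (soil + groundwater), per mil
    delta0 = -40.0                   # hypothetical initial delta13C of CS2, per mil
    delta = -30.0                    # hypothetical measured delta13C, per mil
    f_remaining = math.exp((delta - delta0) / epsilon)
    extent_degraded = 1.0 - f_remaining

    print(f"half-life: {t_half:.1f} h")                 # half-life: 55.5 h
    print(f"fraction degraded: {extent_degraded:.2f}")  # fraction degraded: 0.74
    ```

    The contrast in ε between the two microcosm types matters here: for the same 10 ‰ isotopic shift, the groundwater-only value (-23.0 ‰) would imply far less degradation than the soil-plus-groundwater value, so applying the wrong enrichment factor in the field would substantially misestimate natural attenuation.
    
    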