168 research outputs found

    Contrasting effects of fluoroquinolone antibiotics on the expression of the collagenases, matrix metalloproteinases (MMP)-1 and -13, in human tendon-derived cells

    Fluoroquinolone antibiotics may cause tendon pain and rupture. We reported previously that the fluoroquinolone ciprofloxacin potentiated interleukin (IL)-1β-stimulated expression of matrix metalloproteinases (MMP)-3 and MMP-1 in human tendon-derived cells. We have now tested additional fluoroquinolones and investigated whether they have a similar effect on expression of MMP-13. Tendon cells were incubated for two periods of 48 h with or without fluoroquinolones and IL-1β. Total ribonucleic acid (RNA) was assayed for MMP messenger RNA (mRNA) by relative quantitative reverse transcriptase polymerase chain reaction, with normalization for glyceraldehyde-3-phosphate dehydrogenase mRNA. Samples of supernatant medium were assayed for MMP output by activity assays. MMP-13 was expressed by tendon cells at lower levels than MMP-1, and was typically stimulated 10- to 100-fold by IL-1β. Ciprofloxacin, norfloxacin and ofloxacin each reduced both basal and stimulated expression of MMP-13 mRNA. In contrast, ciprofloxacin and norfloxacin increased basal and IL-1β-stimulated MMP-1 mRNA expression. Both the inhibition of MMP-13 expression and the potentiation of MMP-1 expression by fluoroquinolones were accompanied by corresponding changes in IL-1β-stimulated MMP output. The non-fluorinated quinolone nalidixic acid had lesser or no effects. Fluoroquinolones show contrasting effects on the expression of the two collagenases MMP-1 and MMP-13, indicating specific effects on MMP gene regulation.
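    Relative quantification with normalization to a reference gene, as described above, is typically reported as a fold change; a minimal sketch of a 2^-ΔΔCt-style calculation (the cycle-threshold values below are hypothetical, and the study's exact quantification arithmetic beyond GAPDH normalization is an assumption):

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold change of a target mRNA (e.g. MMP-13) relative to a reference
    gene (e.g. GAPDH), treated vs. control, via the 2^-ddCt method."""
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to reference
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control
    return 2 ** (-ddct)

# Hypothetical cycle thresholds: IL-1beta lowers the target Ct by ~5 cycles
# relative to control, i.e. roughly a 32-fold induction.
fold = ddct_fold_change(24.0, 18.0, 29.0, 18.0)
print(round(fold, 1))  # -> 32.0
```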

    Physics, Topology, Logic and Computation: A Rosetta Stone

    In physics, Feynman diagrams are used to reason about quantum processes. In the 1980s, it became clear that underlying these diagrams is a powerful analogy between quantum physics and topology: namely, a linear operator behaves very much like a "cobordism". Similar diagrams can be used to reason about logic, where they represent proofs, and computation, where they represent programs. With the rise of interest in quantum cryptography and quantum computation, it became clear that there is an extensive network of analogies between physics, topology, logic and computation. In this expository paper, we make some of these analogies precise using the concept of a "closed symmetric monoidal category". We assume no prior knowledge of category theory, proof theory or computer science. (73 pages, 8 encapsulated PostScript figures.)
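    A one-line statement of the unifying structure may help: in a closed symmetric monoidal category, the tensor product and the internal hom are related by a natural isomorphism (a categorical form of currying), which is the common pattern behind composing processes, cobordisms, proofs and programs:

```latex
\mathrm{hom}(X \otimes Y,\; Z) \;\cong\; \mathrm{hom}(X,\; Y \multimap Z)
```

    Here $\otimes$ is the tensor product and $Y \multimap Z$ the internal hom ("processes from $Y$ to $Z$"); the isomorphism is natural in $X$ and $Z$.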

    Exposure to Oil Spill Chemicals and Lung Function in Deepwater Horizon Disaster Response Workers

    Objective: The aim of this study was to assess the relationship between total hydrocarbon (THC) exposures attributed to oil spill clean-up work and lung function 1 to 3 years after the Deepwater Horizon (DWH) disaster. Methods: We used data from the GuLF STUDY, a large cohort of adults who worked on the response to the DWH disaster and others who were safety trained but did not work. We analyzed data from 6288 workers with two acceptable spirometry tests. We estimated THC exposure levels with a job exposure matrix. We evaluated lung function using the forced expiratory volume in 1 second (FEV1; mL), the forced vital capacity (FVC; mL), and the FEV1/FVC ratio (%). Results: Lung function measures did not differ by THC exposure levels among clean-up workers. Conclusion: We did not observe an association between THC exposure and lung function among clean-up workers 1 to 3 years following the DWH disaster.
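    A job exposure matrix of the kind mentioned above is, in essence, a lookup table assigning an exposure estimate to each job or task; a minimal sketch (the job categories and ppm values are invented for illustration, not GuLF STUDY estimates):

```python
# Hypothetical job exposure matrix (JEM): job class -> estimated THC level (ppm).
JEM_THC_PPM = {
    "support": 0.3,
    "decontamination": 1.2,
    "operations": 2.5,
}

def assign_exposure(job):
    """Assign a worker an estimated THC exposure from the JEM by job class."""
    return JEM_THC_PPM[job]

print(assign_exposure("decontamination"))  # -> 1.2
```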

    Lung Function in Oil Spill Response Workers 1-3 Years after the Deepwater Horizon Disaster

    Background: Little is known about the effects of inhalation exposures on lung function among workers involved in the mitigation of oil spills. Our objective was to determine the relationship between oil spill response work and lung function 1-3 years after the Deepwater Horizon (DWH) disaster. Methods: We evaluated spirometry for 7,775 adults living in the Gulf states who either participated in DWH response efforts (workers) or received safety training but were not hired (nonworkers). At an enrollment interview, we collected detailed work histories, including information on potential exposure to dispersants and burning oil/gas. We assessed forced expiratory volume in 1 second (FEV1; mL), forced vital capacity (FVC; mL), and their ratio (FEV1/FVC; %) for differences by broad job classes and exposure to dispersants or burning oil/gas using multivariable linear and modified Poisson regression. Results: We found no differences between workers and nonworkers. Among workers, we observed a small decrement in FEV1 (beta, -71 mL; 95% confidence interval [CI], -127 to -14) in decontamination workers compared with support workers. Workers with high potential exposure to burning oil/gas had reduced lung function compared with unexposed workers: FEV1 (beta, -183 mL; 95% CI, -316 to -49) and FEV1/FVC (beta, -1.93%; 95% CI, -3.50 to -0.36), and an elevated risk of having an FEV1/FVC in the lowest tertile (prevalence ratio, 1.38; 95% CI, 0.99 to 1.92). Conclusions: While no differences in lung function were found between workers and nonworkers, lung function was reduced among decontamination workers and workers with high exposure to burning oil/gas compared with unexposed workers.
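    The FEV1/FVC outcome used in both spirometry studies above is a simple ratio of two measured volumes, expressed as a percentage; a minimal sketch with hypothetical readings:

```python
def fev1_fvc_percent(fev1_ml, fvc_ml):
    """FEV1/FVC ratio: fraction of the forced vital capacity exhaled
    in the first second, expressed as a percentage."""
    return 100.0 * fev1_ml / fvc_ml

# Hypothetical spirometry readings in mL.
print(fev1_fvc_percent(3200, 4000))  # -> 80.0
print(fev1_fvc_percent(2500, 4000))  # -> 62.5
```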

    An updated radiocarbon-based ice margin chronology for the last deglaciation of the North American Ice Sheet Complex

    The North American Ice Sheet Complex (NAISC; consisting of the Laurentide, Cordilleran and Innuitian ice sheets) was the largest ice mass to repeatedly grow and decay in the Northern Hemisphere during the Quaternary. Understanding its pattern of retreat following the Last Glacial Maximum is critical for studying many facets of the Late Quaternary, including ice sheet behaviour, the evolution of Holocene landscapes, sea level, atmospheric circulation, and the peopling of the Americas. Currently, the most up-to-date and authoritative margin chronology for the entire ice sheet complex is featured in two publications (Geological Survey of Canada Open File 1574 [Dyke et al., 2003]; ‘Quaternary Glaciations – Extent and Chronology, Part II’ [Dyke, 2004]). These often-cited datasets track ice margin recession in 36 time slices spanning 18 ka to 1 ka (all ages in uncalibrated radiocarbon years) using a combination of geomorphology, stratigraphy and radiocarbon dating. However, by virtue of being over 15 years old, the ice margin chronology requires updating to reflect new work and important revisions. This paper updates the aforementioned 36 ice margin maps to reflect new data from regional studies. We also update the original radiocarbon dataset from the 2003/2004 papers with 1541 new ages to reflect work up to and including 2018. A major revision is made to the 18 ka ice margin, where Banks and Eglinton islands (once considered to be glacial refugia) are now shown to be fully glaciated. Our updated 18 ka ice sheet increased in areal extent from 17.81 to 18.37 million km², an increase of 3.1% in the spatial coverage of the NAISC at that time. Elsewhere, we also summarize, region by region, significant changes to the deglaciation sequence. This paper integrates new information provided by regional experts and radiocarbon data into the deglaciation sequence while maintaining consistency with the original ice margin positions of Dyke et al. (2003) and Dyke (2004) where new information is lacking; this is a pragmatic solution to satisfy the needs of a Quaternary research community that requires up-to-date knowledge of the pattern of ice margin recession of what was once the world’s largest ice mass. The 36 updated isochrones are available in PDF and shapefile format, together with a spreadsheet of the expanded radiocarbon dataset (n = 5195 ages) and estimates of uncertainty for each interval.

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome after moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks, and were trained using the same predictors. To assess generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies, and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. Like regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
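    The discrimination measure quantified above, the area under the ROC curve, has a direct rank interpretation that is easy to sketch; the patient data below are invented, and this is a generic illustration of the metric, not the IMPACT/CENTER-TBI analysis code:

```python
def auc(y_true, y_score):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a randomly chosen positive case gets a higher
    predicted risk than a randomly chosen negative case (ties count 1/2)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted risks for six patients (1 = unfavorable outcome).
y = [1, 1, 1, 0, 0, 0]
p = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc(y, p))  # 8/9, i.e. about 0.89
```

    An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is what makes the metric comparable across the regression and ML models above.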

    Variation in Structure and Process of Care in Traumatic Brain Injury: Provider Profiles of European Neurotrauma Centers Participating in the CENTER-TBI Study.

    INTRODUCTION: The strength of evidence underpinning care and treatment recommendations in traumatic brain injury (TBI) is low. Comparative effectiveness research (CER) has been proposed as a framework to provide evidence for optimal care for TBI patients. The first step in CER is to map the existing variation. The aim of the current study is to quantify variation in general structural and process characteristics among centers participating in the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. METHODS: We designed a set of 11 provider profiling questionnaires with 321 questions about various aspects of TBI care, chosen based on literature and expert opinion. After pilot testing, the questionnaires were disseminated to 71 centers from 20 countries participating in the CENTER-TBI study. Reliability of the questionnaires was estimated by calculating a concordance rate among 5% duplicate questions. RESULTS: All 71 centers completed the questionnaires. The median concordance rate among duplicate questions was 0.85. The majority of centers were academic hospitals (n = 65, 92%), designated as level I trauma centers (n = 48, 68%), and situated in an urban location (n = 70, 99%). The availability of facilities for neurotrauma care varied across centers: e.g., 40 (57%) had a dedicated neuro-intensive care unit (ICU), 36 (51%) had an in-hospital rehabilitation unit, and the organization of the ICU was closed in 64% (n = 45) of the centers. In addition, we found wide variation in processes of care among centers, such as the ICU admission policy and the intracranial pressure monitoring policy. CONCLUSION: Even among high-volume, specialized neurotrauma centers there is substantial variation in the structures and processes of TBI care. This variation provides an opportunity to study the effectiveness of specific aspects of TBI care and to identify best practices with CER approaches.
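    The reliability check described above, a concordance rate over duplicated questions, reduces to a simple agreement fraction; a minimal sketch (the answer pairs are made up, not CENTER-TBI responses):

```python
def concordance_rate(pairs):
    """Fraction of duplicate-question pairs that received identical answers."""
    agree = sum(a == b for a, b in pairs)
    return agree / len(pairs)

# Hypothetical answers given twice by the same center to duplicated questions.
duplicates = [("yes", "yes"), ("closed", "closed"), ("24/7", "weekdays"),
              ("yes", "yes"), ("no", "yes")]
print(concordance_rate(duplicates))  # -> 0.6
```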

    Measurements of differential production cross sections for a Z boson in association with jets in pp collisions at root s=8 TeV

    Peer reviewed

    Experimental progress in positronium laser physics
