
    Scale‐invariance of albedo‐based wind friction velocity

    Obtaining reliable estimates of aerodynamic roughness is necessary to interpret and accurately predict aeolian sediment transport dynamics. However, inherent uncertainties in field measurements and models of surface aerodynamic properties continue to undermine aeolian research, monitoring, and dust modeling. A new relation between aerodynamic sheltering and land surface shadow has been established at the wind tunnel scale, opening the potential for estimates of wind erosion and dust emission to be obtained across scales from albedo data. Here, we compare estimates of wind friction velocity (u*) derived from traditional methods (wind speed profiles) with those derived from the albedo model at two separate scales using bare soil patch (via net radiometers) and landscape (via MODIS 500 m) datasets. Results show that profile‐derived estimates of u* are highly variable over anisotropic surface roughness due to changes in wind direction and fetch. Wind speed profiles poorly estimate the soil surface (bed) wind friction velocities necessary for aeolian sediment transport research and modeling. Albedo‐based estimates of u* at both scales have small variability because the estimate is integrated over a defined, fixed area and resolves the partition of wind momentum between roughness elements and the soil surface. We demonstrate that the wind tunnel‐based calibration of albedo for predicting wind friction velocities at the soil surface (us*) is applicable across scales. The albedo‐based approach enables consistent and reliable drag partition correction across scales for model and field estimates of us* necessary for wind erosion and dust emission modeling.
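
    As background for the profile method referenced above, the following is a minimal sketch of the conventional log-law estimate of u*: wind speed is regressed against the logarithm of measurement height, and the slope gives u* via the von Kármán constant. The tower heights and wind speeds are illustrative assumptions, not data from the study.

```python
import numpy as np

KAPPA = 0.4  # von Karman constant

def friction_velocity_from_profile(z, u):
    """Estimate u* and z0 by least-squares fit of the log law
    u(z) = (u*/kappa) * ln(z / z0) to measured wind speeds."""
    slope, intercept = np.polyfit(np.log(z), u, 1)
    u_star = KAPPA * slope           # friction velocity, m s^-1
    z0 = np.exp(-intercept / slope)  # aerodynamic roughness length, m
    return u_star, z0

# Illustrative tower data (heights in m, speeds in m s^-1)
z = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
u = np.array([3.1, 3.6, 4.1, 4.6, 5.1])
u_star, z0 = friction_velocity_from_profile(z, u)
print(f"u* = {u_star:.2f} m/s, z0 = {z0:.4f} m")
```

    As the abstract notes, this single-point estimate inherits the variability of wind direction and fetch, whereas the albedo-based estimate is integrated over a fixed area.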

    A note on the use of drag partition in aeolian transport models

    Sediment transport equations used in wind erosion and dust emission models generally incorporate a threshold for particle motion (u*t) with a correction function to account for roughness-induced momentum reduction and aerodynamic sheltering. The prevailing approach is to adjust u*t by the drag partition R, estimated as the ratio of the bare soil threshold (u*ts) to that of the surface in the presence of roughness elements (u*tr). Here, we show that application of R to adjust only the entrainment threshold (u*t = u*ts/R) is physically inconsistent with the effect of roughness on the momentum partition as represented in models and produces overestimates of the sediment flux density (Q). Equations for Q typically include a friction velocity scaling term (u*n). As Q scales with friction velocity at the soil surface (us*), rather than total friction velocity (u*) acting over the roughness layer, u*n must also be adjusted for roughness effects. Modelling aeolian transport as a function of us* represents a different way of thinking about the application of some drag partition schemes but is consistent with our understanding of aeolian transport physics. We further note that the practice of reducing Q by the vegetation cover fraction to account for the physically-protected surface area constitutes double accounting of the surface protection when R is represented through the basal-to-frontal area ratio of roughness elements (σ) and roughness density (λ). If the drag partition is implemented fully, additional adjustment for surface protection is unnecessary to produce more accurate aeolian transport estimates. These findings apply equally to models of the vertical dust flux.
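
    A minimal numerical sketch of the inconsistency described above, using a generic Owen-type flux equation as a stand-in for any particular model's Q formulation and assuming the soil-surface friction velocity can be approximated as us* = R u*; the values of R, u*, the bare-soil threshold, and the constant C are illustrative.

```python
RHO_AIR = 1.23  # air density, kg m^-3
G = 9.81        # gravitational acceleration, m s^-2
C = 1.0         # dimensionless tuning constant (illustrative)

def flux_q(u_star, u_star_t):
    """Generic Owen-type streamwise sediment flux density:
    Q ~ C (rho/g) u*^3 (1 - u*t^2 / u*^2), zero below threshold."""
    if u_star <= u_star_t:
        return 0.0
    return C * (RHO_AIR / G) * u_star**3 * (1.0 - (u_star_t / u_star) ** 2)

u_star = 0.5     # total friction velocity over the roughness layer, m/s
u_star_ts = 0.2  # bare-soil entrainment threshold, m/s
R = 0.7          # drag partition u*ts / u*tr (illustrative)

# (a) Threshold-only correction: u*t = u*ts / R, flux still driven by u*
q_threshold_only = flux_q(u_star, u_star_ts / R)

# (b) Consistent partition: flux driven by the soil-surface friction
# velocity us* = R u*, against the unmodified bare-soil threshold
q_partitioned = flux_q(R * u_star, u_star_ts)

print(f"threshold-only:    {q_threshold_only:.4f} kg m^-1 s^-1")
print(f"fully partitioned: {q_partitioned:.4f} kg m^-1 s^-1")
# With these numbers, (a) is roughly three times (b): adjusting only
# the threshold overestimates Q relative to the full drag partition.
```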

    Bovine Lactoferrin Counteracts Toll-Like Receptor Mediated Activation Signals in Antigen Presenting Cells

    Lactoferrin (LF), a key element of the mammalian immune system, plays pivotal roles in host defence against infection and excessive inflammation. Its protective effects range from direct antimicrobial activities against a large panel of microbes, including bacteria, viruses, fungi and parasites, to anti-inflammatory and anticancer activities. In this study, we show that monocyte-derived dendritic cells (MD-DCs) generated in the presence of bovine LF (bLF) fail to undergo activation by up-modulating CD83, co-stimulatory and major histocompatibility complex molecules, and cytokine/chemokine secretion. Moreover, these cells are weak activators of T cell proliferation and retain antigen uptake activity. Consistent with an impaired maturation, bLF-MD-DC-primed T lymphocytes exhibit a functional unresponsiveness characterized by reduced expression of CD154 and impaired expression of IFN-γ and IL-2. The observed immunosuppressive effects correlate with an increased expression of molecules with negative regulatory functions (i.e. immunoglobulin-like transcript 3 and programmed death ligand 1), indoleamine 2,3-dioxygenase, and suppressor of cytokine signaling-3. Interestingly, bLF-MD-DCs produce IL-6 and exhibit constitutive signal transducer and activator of transcription 3 activation. Conversely, bLF exposure of already differentiated MD-DCs completely fails to induce IL-6 and partially inhibits Toll-like receptor (TLR) agonist-induced activation. Cell-specific differences in bLF internalization likely account for the distinct responses elicited by bLF in monocytes versus immature DCs, providing a mechanistic basis for its multiple effects. These results indicate that bLF exerts a potent anti-inflammatory activity by skewing monocyte differentiation into DCs with an impaired capacity to undergo activation and to promote Th1 responses. Overall, these bLF-mediated effects may represent a strategy to block excessive DC activation upon TLR-induced inflammation, adding further evidence for a critical role of bLF in directing host immune function.

    Association of the OPRM1 Variant rs1799971 (A118G) with Non-Specific Liability to Substance Dependence in a Collaborative de novo Meta-Analysis of European-Ancestry Cohorts

    Peer reviewed

    Machine learning algorithms performed no better than regression models for prognostication in traumatic brain injury

    Objective: We aimed to explore the added value of common machine learning (ML) algorithms for prediction of outcome after moderate and severe traumatic brain injury. Study Design and Setting: We performed logistic regression (LR), lasso regression, and ridge regression with key baseline predictors in the IMPACT-II database (15 studies, n = 11,022). ML algorithms included support vector machines, random forests, gradient boosting machines, and artificial neural networks, and were trained using the same predictors. To assess the generalizability of predictions, we performed internal, internal-external, and external validation on the recent CENTER-TBI study (patients with Glasgow Coma Scale <13, n = 1,554). Both calibration (calibration slope/intercept) and discrimination (area under the curve) were quantified. Results: In the IMPACT-II database, 3,332/11,022 (30%) died and 5,233 (48%) had unfavorable outcome (Glasgow Outcome Scale less than 4). In the CENTER-TBI study, 348/1,554 (29%) died and 651 (54%) had unfavorable outcome. Discrimination and calibration varied widely between the studies and less so between the studied algorithms. The mean area under the curve was 0.82 for mortality and 0.77 for unfavorable outcomes in the CENTER-TBI study. Conclusion: ML algorithms may not outperform traditional regression approaches in a low-dimensional setting for outcome prediction after moderate or severe traumatic brain injury. As with regression-based prediction models, ML algorithms should be rigorously validated to ensure applicability to new populations.
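
    A minimal sketch of the kind of comparison reported here: a logistic regression and a random forest are fit to the same low-dimensional predictors, and both discrimination (AUC) and calibration (recalibration slope, ideally 1.0) are quantified. The data are synthetic; the IMPACT predictors and the study's internal-external validation scheme are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def calibration_slope(y_true, p_pred):
    """Slope of a logistic recalibration of the linear predictor;
    values near 1.0 indicate well-calibrated risk estimates."""
    eps = 1e-8
    p = np.clip(p_pred, eps, 1 - eps)
    lp = np.log(p / (1 - p))  # logit of predicted probabilities
    recal = LogisticRegression().fit(lp.reshape(-1, 1), y_true)
    return recal.coef_[0, 0]

# Synthetic stand-in for a low-dimensional clinical dataset
X, y = make_classification(n_samples=5000, n_features=10,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(random_state=0))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUC = {roc_auc_score(y_te, p):.3f}, "
          f"calibration slope = {calibration_slope(y_te, p):.2f}")
```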

    Variation in Structure and Process of Care in Traumatic Brain Injury: Provider Profiles of European Neurotrauma Centers Participating in the CENTER-TBI Study.

    INTRODUCTION: The strength of evidence underpinning care and treatment recommendations in traumatic brain injury (TBI) is low. Comparative effectiveness research (CER) has been proposed as a framework to provide evidence for optimal care for TBI patients. The first step in CER is to map the existing variation. The aim of the current study is to quantify variation in general structural and process characteristics among centers participating in the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. METHODS: We designed a set of 11 provider profiling questionnaires with 321 questions about various aspects of TBI care, chosen based on literature and expert opinion. After pilot testing, the questionnaires were disseminated to 71 centers from 20 countries participating in the CENTER-TBI study. Reliability of the questionnaires was estimated by calculating a concordance rate among the 5% of questions that were duplicated. RESULTS: All 71 centers completed the questionnaires. The median concordance rate among duplicate questions was 0.85. The majority of centers were academic hospitals (n = 65, 92%), designated as level I trauma centers (n = 48, 68%), and situated in an urban location (n = 70, 99%). The availability of facilities for neurotrauma care varied across centers; e.g. 40 (57%) had a dedicated neuro-intensive care unit (ICU), 36 (51%) had an in-hospital rehabilitation unit, and the organization of the ICU was closed in 64% (n = 45) of the centers. In addition, we found wide variation in processes of care, such as the ICU admission policy and intracranial pressure monitoring policy, among centers. CONCLUSION: Even among high-volume, specialized neurotrauma centers there is substantial variation in the structures and processes of TBI care. This variation provides an opportunity to study the effectiveness of specific aspects of TBI care and to identify best practices with CER approaches.
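
    A small sketch of the reliability check described in the methods: for each center, answers to the duplicated questions are compared, and the concordance rate is the fraction of duplicate pairs answered identically. The centers and answers below are hypothetical.

```python
from statistics import median

def concordance_rate(pairs):
    """Fraction of (original, duplicate) answer pairs that match."""
    return sum(a == b for a, b in pairs) / len(pairs)

# Hypothetical (original answer, duplicate answer) pairs per center
answers = {
    "center_A": [("yes", "yes"), ("no", "no"), ("closed", "closed"), ("yes", "no")],
    "center_B": [("yes", "yes"), ("no", "yes"), ("open", "open"), ("no", "no")],
    "center_C": [("yes", "yes"), ("no", "no"), ("closed", "open"), ("yes", "yes")],
}
rates = [concordance_rate(pairs) for pairs in answers.values()]
print(f"median concordance rate = {median(rates):.2f}")
```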

    The FANCM:p.Arg658* truncating variant is associated with risk of triple-negative breast cancer

    Abstract: Breast cancer is a common disease partially caused by genetic risk factors. Germline pathogenic variants in the DNA repair genes BRCA1, BRCA2, PALB2, ATM, and CHEK2 are associated with breast cancer risk. FANCM, which encodes a DNA translocase, has been proposed as a breast cancer predisposition gene, with greater effects for the ER-negative and triple-negative breast cancer (TNBC) subtypes. We tested the three recurrent protein-truncating variants FANCM:p.Arg658*, p.Gln1701*, and p.Arg1931* for association with breast cancer risk in 67,112 cases, 53,766 controls, and 26,662 carriers of pathogenic variants of BRCA1 or BRCA2. These three variants were also studied functionally by measuring survival and chromosome fragility in FANCM−/− patient-derived immortalized fibroblasts treated with diepoxybutane or olaparib. We observed that FANCM:p.Arg658* was associated with increased risk of ER-negative disease and TNBC (OR = 2.44, P = 0.034 and OR = 3.79, P = 0.009, respectively). In a country-restricted analysis, we confirmed the associations detected for FANCM:p.Arg658* and found that FANCM:p.Arg1931* was also associated with ER-negative breast cancer risk (OR = 1.96; P = 0.006). The functional results indicated that all three variants were deleterious, affecting cell survival and chromosome stability, with FANCM:p.Arg658* causing the most severe phenotypes. In conclusion, we confirmed that the two rare FANCM deleterious variants p.Arg658* and p.Arg1931* are risk factors for the ER-negative and TNBC subtypes. Overall, our data suggest that the effect of truncating variants on breast cancer risk may depend on their position in the gene. Cell sensitivity to olaparib exposure identifies a possible therapeutic option to treat FANCM-associated tumors.
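
    For readers unfamiliar with how such case-control associations are quantified, here is a minimal sketch of an odds ratio with a Wald 95% confidence interval from a 2x2 carrier table. The counts are hypothetical, chosen only to give an OR near the reported effect sizes; they are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI from a 2x2 table:
    a = carrier cases, b = non-carrier cases,
    c = carrier controls, d = non-carrier controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts of truncating-variant carriers in TNBC cases/controls
or_, (lo, hi) = odds_ratio_ci(30, 4000, 10, 5000)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```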

    Subnational mapping of HIV incidence and mortality among individuals aged 15–49 years in sub-Saharan Africa, 2000–18: a modelling study

    Background: High-resolution estimates of HIV burden across space and time provide an important tool for tracking and monitoring the progress of prevention and control efforts and assist with improving the precision and efficiency of targeting efforts. We aimed to assess HIV incidence and HIV mortality for all second-level administrative units across sub-Saharan Africa. Methods: In this modelling study, we developed a framework that used the geographically specific HIV prevalence data collected in seroprevalence surveys and antenatal care clinics to train a model that estimates HIV incidence and mortality among individuals aged 15–49 years. We used a model-based geostatistical framework to estimate HIV prevalence at the second administrative level in 44 countries in sub-Saharan Africa for 2000–18 and sought data on the number of individuals on antiretroviral therapy (ART) by second-level administrative unit. We then modified the Estimation and Projection Package (EPP) to use these HIV prevalence and treatment estimates to estimate HIV incidence and mortality by second-level administrative unit. Findings: The estimates suggest substantial variation in HIV incidence and mortality rates both between and within countries in sub-Saharan Africa, with 15 countries having a ten-times or greater difference in estimated HIV incidence between the second-level administrative units with the lowest and highest estimated incidence levels. Across all 44 countries in 2018, HIV incidence ranged from 2·8 (95% uncertainty interval 2·1–3·8) cases per 100 000 people in Mauritania to 1585·9 (1369·4–1824·8) cases per 100 000 people in Lesotho, and HIV mortality ranged from 0·8 (0·7–0·9) deaths per 100 000 people in Mauritania to 676·5 (513·6–888·0) deaths per 100 000 people in Lesotho. Variation in both incidence and mortality was substantially greater at the subnational level than at the national level, and the highest estimated rates were accordingly higher. Among second-level administrative units, Guijá District, Gaza Province, Mozambique, had the highest estimated HIV incidence (4661·7 [2544·8–8120·3] cases per 100 000 people) in 2018, and Inhassunge District, Zambezia Province, Mozambique, had the highest estimated HIV mortality rate (1163·0 [679·0–1866·8] deaths per 100 000 people). Further, the rate of reduction in HIV incidence and mortality from 2000 to 2018, as well as the ratio of new infections to the number of people living with HIV, was highly variable. Although most second-level administrative units had declines in the number of new cases (3316 [81·1%] of 4087 units) and number of deaths (3325 [81·4%]), nearly all fell well short of the targeted 75% reduction in new cases and deaths between 2010 and 2020. Interpretation: Our estimates suggest that most second-level administrative units in sub-Saharan Africa are falling short of the targeted 75% reduction in new cases and deaths by 2020, which is further compounded by substantial within-country variability. These estimates will help decision makers and programme implementers expand access to ART and better target health resources to higher-burden subnational areas.
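
    A toy sketch of the model-based geostatistical idea at the heart of the methods: point prevalence observations are smoothed over space with a Gaussian process and then aggregated to an administrative unit. The coordinates, prevalence surface, and Matern kernel settings are illustrative assumptions; this is not the study's survey-weighted pipeline or its modified EPP incidence model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Hypothetical survey-cluster data: (x, y) locations and observed prevalence
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(200, 2))
prevalence = 0.05 + 0.1 * np.exp(-((coords[:, 0] - 5) ** 2 +
                                   (coords[:, 1] - 5) ** 2) / 8)
prevalence += rng.normal(0, 0.01, size=200)  # sampling noise

# A Matern covariance is a common choice in model-based geostatistics
gp = GaussianProcessRegressor(
    kernel=Matern(length_scale=2.0, nu=1.5) + WhiteKernel(noise_level=1e-4),
    normalize_y=True)
gp.fit(coords, prevalence)

# Predict on a grid, then aggregate to an administrative unit (a toy box)
grid = np.array([(x, y) for x in np.linspace(0, 10, 25)
                 for y in np.linspace(0, 10, 25)])
pred, sd = gp.predict(grid, return_std=True)
unit = (grid[:, 0] < 5) & (grid[:, 1] < 5)  # toy "district" mask
print(f"district mean prevalence = {pred[unit].mean():.3f}")
```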

    Mapping geographical inequalities in access to drinking water and sanitation facilities in low-income and middle-income countries, 2000-17

    Background: Universal access to safe drinking water and sanitation facilities is an essential human right, recognised in the Sustainable Development Goals as crucial for preventing disease and improving human wellbeing. Comprehensive, high-resolution estimates are important to inform progress towards achieving this goal. We aimed to produce high-resolution geospatial estimates of access to drinking water and sanitation facilities. Methods: We used a Bayesian geostatistical model and data from 600 sources across more than 88 low-income and middle-income countries (LMICs) to estimate access to drinking water and sanitation facilities on continuous continent-wide surfaces from 2000 to 2017, and aggregated results to policy-relevant administrative units. We estimated mutually exclusive and collectively exhaustive subcategories of facilities for drinking water (piped water on or off premises, other improved facilities, unimproved, and surface water) and sanitation facilities (septic or sewer sanitation, other improved, unimproved, and open defecation) with use of ordinal regression. We also estimated the number of diarrhoeal deaths in children younger than 5 years attributed to unsafe facilities and estimated deaths that were averted by increased access to safe facilities in 2017, and analysed geographical inequality in access within LMICs. Findings: Across LMICs, access to both piped water and improved water overall increased between 2000 and 2017, with progress varying spatially. For piped water, the safest water facility type, access increased from 40·0% (95% uncertainty interval [UI] 39·4–40·7) to 50·3% (50·0–50·5), but was lowest in sub-Saharan Africa, where access to piped water was mostly concentrated in urban centres. Access to both sewer or septic sanitation and improved sanitation overall also increased across all LMICs during the study period. For sewer or septic sanitation, access was 46·3% (95% UI 46·1–46·5) in 2017, compared with 28·7% (28·5–29·0) in 2000. Although some units improved access to the safest drinking water or sanitation facilities since 2000, a large absolute number of people still lacked access in several units with high overall access to such facilities (>80%) in 2017. More than 253 000 people did not have access to sewer or septic sanitation facilities in the city of Harare, Zimbabwe, despite 88·6% (95% UI 87·2–89·7) access overall. Many units were able to transition from the least safe facilities in 2000 to safe facilities by 2017; among units in which populations primarily practised open defecation in 2000, 686 (95% UI 664–711) of the 1830 (1797–1863) units transitioned to the use of improved sanitation. Geographical disparities in access to improved water across units decreased in 76·1% (95% UI 71·6–80·7) of countries from 2000 to 2017, and in 53·9% (50·6–59·6) of countries for access to improved sanitation, but remained evident subnationally in most countries in 2017. Interpretation: Our estimates, combined with geospatial trends in diarrhoeal burden, identify where efforts to increase access to safe drinking water and sanitation facilities are most needed. By highlighting areas with successful approaches or in need of targeted interventions, our estimates can enable precision public health to effectively progress towards universal access to safe water and sanitation.
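
    The methods mention ordinal regression over the ordered facility "ladder" (e.g. open defecation < unimproved < other improved < septic or sewer). Below is a minimal sketch of that idea with statsmodels' OrderedModel on synthetic household data; the covariates and coefficients are illustrative assumptions, and the study's Bayesian geostatistical machinery is not reproduced.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 1000

# Hypothetical household covariates driving a latent "access" score
wealth = rng.normal(size=n)
urban = rng.integers(0, 2, size=n)
latent = 0.8 * wealth + 0.6 * urban + rng.logistic(size=n)

# Ordered sanitation ladder:
# 0 = open defecation < 1 = unimproved < 2 = other improved < 3 = septic/sewer
ladder = pd.Categorical(np.digitize(latent, bins=[-0.5, 0.5, 1.5]),
                        categories=[0, 1, 2, 3], ordered=True)

X = pd.DataFrame({"wealth": wealth, "urban": urban})
model = OrderedModel(pd.Series(ladder), X, distr="logit")
res = model.fit(method="bfgs", disp=False)
print(res.summary())  # covariate effects plus the ordered thresholds
```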

    Peri-operative red blood cell transfusion in neonates and infants: NEonate and Children audiT of Anaesthesia pRactice IN Europe: A prospective European multicentre observational study

    BACKGROUND: Little is known about current clinical practice concerning peri-operative red blood cell transfusion in neonates and small infants. Guidelines suggest transfusions based on haemoglobin thresholds ranging from 8.5 to 12 g dl-1, distinguishing between children from birth to day 7 (week 1), from day 8 to day 14 (week 2), and from day 15 onwards (≥week 3). OBJECTIVE: To observe peri-operative red blood cell transfusion practice according to guidelines in relation to patient outcome. DESIGN: A multicentre observational study. SETTING: The NEonate-Children sTudy of Anaesthesia pRactice IN Europe (NECTARINE) trial recruited patients up to 60 weeks' postmenstrual age undergoing anaesthesia for surgical or diagnostic procedures from 165 centres in 31 European countries between March 2016 and January 2017. PATIENTS: The data included 5609 patients undergoing 6542 procedures. The inclusion criterion was a peri-operative red blood cell transfusion. MAIN OUTCOME MEASURES: The primary endpoint was the haemoglobin level triggering a transfusion for neonates in week 1, week 2 and week 3. Secondary endpoints were transfusion volumes, 'delta haemoglobin' (preprocedure minus transfusion-triggering) and 30-day and 90-day morbidity and mortality. RESULTS: Peri-operative red blood cell transfusions were recorded during 447 procedures (6.9%). The median haemoglobin levels triggering a transfusion were 9.6 [IQR 8.7 to 10.9] g dl-1 for neonates in week 1, 9.6 [7.7 to 10.4] g dl-1 in week 2 and 8.0 [7.3 to 9.0] g dl-1 in week 3. The median transfusion volume was 17.1 [11.1 to 26.4] ml kg-1, with a median delta haemoglobin of 1.8 [0.0 to 3.6] g dl-1. Thirty-day morbidity was 47.8%, with an overall mortality of 11.3%. CONCLUSIONS: The results indicate lower transfusion-triggering haemoglobin thresholds in clinical practice than suggested by current guidelines. The high morbidity and mortality of this NECTARINE sub-cohort call for investigative action and evidence-based guidelines addressing peri-operative red blood cell transfusion strategies. TRIAL REGISTRATION: ClinicalTrials.gov, identifier: NCT02350348.
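
    As a small illustration of how the primary and secondary endpoints are summarised, the sketch below computes the median [IQR] of transfusion-triggering haemoglobin per age group and a delta haemoglobin. All values are hypothetical stand-ins, not NECTARINE data.

```python
import numpy as np

def median_iqr(values):
    """Format a median with its interquartile range."""
    q1, q2, q3 = np.percentile(values, [25, 50, 75])
    return f"{q2:.1f} [IQR {q1:.1f} to {q3:.1f}]"

# Hypothetical transfusion-triggering haemoglobin values (g/dl) by age group
triggering_hb = {
    "week 1":  [8.7, 9.6, 10.9, 9.8, 9.2, 10.1],
    "week 2":  [7.7, 9.6, 10.4, 8.9, 9.0],
    "week 3+": [7.3, 8.0, 9.0, 8.2, 7.9],
}
for group, hb in triggering_hb.items():
    print(group, median_iqr(hb), "g/dl")

# Delta haemoglobin = preprocedure level minus transfusion-triggering level
preprocedure, triggering = 11.4, 9.6
print(f"delta haemoglobin = {preprocedure - triggering:.1f} g/dl")
```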