15 research outputs found

    White-Tailed Deer Incidents With U.S. Civil Aircraft

    Aircraft incidents with ungulates cause substantial economic losses and pose risks to human safety. We analyzed 879 white-tailed deer (Odocoileus virginianus) incidents with United States civil aircraft from 1990 to 2009 reported in the Federal Aviation Administration National Wildlife Strike Database. During that time, deer incidents followed a quadratic response curve, peaking in 1994 and declining thereafter. There appeared to be some seasonal patterning in incident frequency, with deer incidents increasing overall from January to November and peaking in October and November (30.7%). Most incidents (64.8%) occurred at night, but incident rates were greatest (P < 0.001) at dusk. Landing-roll represented 60.7% of incidents, and more incidents occurred during landing than take-off (P < 0.001). Almost 70% of deer incidents had an effect on flight. About 6% of pilots attempted to avoid deer, and those who did were less likely to sustain damage. Aircraft were 25 times more likely to be destroyed when multiple deer were struck versus a single individual. Deer incidents represented 0.9% of all wildlife incidents, yet 5.4% of total estimated costs. Reported costs for deer incident damages during this period exceeded US$36 million, with US$75 million in total estimated damages. Deer incidents resulted in 1 of 24 human deaths and 26 of 217 injuries reported for all wildlife incidents with aircraft during the reporting period. Managers should implement exclusion techniques (e.g., fences, cattle guards, or electrified mats) to maximize reductions in deer use of airfields. Where exclusion is not practical, managers should consider lethal control, habitat modifications, increased monitoring and hazing, and improvements to aircraft and runway lighting to reduce incidents at airports.
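
    The quadratic response curve described above can be illustrated with a short fitting sketch; the annual incident counts below are hypothetical placeholders, not the values reported from the FAA National Wildlife Strike Database.

```python
import numpy as np

# Hypothetical annual deer-incident counts for 1990-2009 (placeholders,
# not the counts reported in the study).
years = np.arange(1990, 2010)
counts = np.array([30, 38, 45, 52, 58, 55, 51, 48, 46, 44,
                   43, 41, 40, 38, 37, 36, 35, 34, 33, 32])

# Fit a quadratic response curve (degree-2 polynomial), centring the year
# to keep the design matrix well conditioned.
t = years - years.mean()
b2, b1, b0 = np.polyfit(t, counts, deg=2)

# A downward-opening quadratic peaks where its derivative is zero: t* = -b1 / (2*b2).
peak_year = years.mean() - b1 / (2 * b2)
print(f"fitted peak year: {peak_year:.1f}")
```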

    Global burden of 369 diseases and injuries in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background: In an era of shifting global agendas and expanded emphasis on non-communicable diseases and injuries along with communicable diseases, sound evidence on trends by cause at the national level is essential. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) provides a systematic scientific assessment of published, publicly available, and contributed data on incidence, prevalence, and mortality for a mutually exclusive and collectively exhaustive list of diseases and injuries. Methods: GBD estimates incidence, prevalence, mortality, years of life lost (YLLs), years lived with disability (YLDs), and disability-adjusted life-years (DALYs) due to 369 diseases and injuries, for two sexes, and for 204 countries and territories. Input data were extracted from censuses, household surveys, civil registration and vital statistics, disease registries, health service use, air pollution monitors, satellite imaging, disease notifications, and other sources. Cause-specific death rates and cause fractions were calculated using the Cause of Death Ensemble model and spatiotemporal Gaussian process regression. Cause-specific deaths were adjusted to match the total all-cause deaths calculated as part of the GBD population, fertility, and mortality estimates. Deaths were multiplied by standard life expectancy at each age to calculate YLLs. A Bayesian meta-regression modelling tool, DisMod-MR 2.1, was used to ensure consistency between incidence, prevalence, remission, excess mortality, and cause-specific mortality for most causes. Prevalence estimates were multiplied by disability weights for mutually exclusive sequelae of diseases and injuries to calculate YLDs. We considered results in the context of the Socio-demographic Index (SDI), a composite indicator of income per capita, years of schooling, and fertility rate in females younger than 25 years. Uncertainty intervals (UIs) were generated for every metric using the 25th and 975th values of 1000 ordered draws from the posterior distribution. Findings: Global health has steadily improved over the past 30 years as measured by age-standardised DALY rates. After taking into account population growth and ageing, the absolute number of DALYs has remained stable. Since 2010, the pace of decline in global age-standardised DALY rates has accelerated in age groups younger than 50 years compared with the 1990–2010 time period, with the greatest annualised rate of decline occurring in the 0–9-year age group. Six infectious diseases were among the top ten causes of DALYs in children younger than 10 years in 2019: lower respiratory infections (ranked second), diarrhoeal diseases (third), malaria (fifth), meningitis (sixth), whooping cough (ninth), and sexually transmitted infections (which, in this age group, are fully accounted for by congenital syphilis; ranked tenth). In adolescents aged 10–24 years, three injury causes were among the top causes of DALYs: road injuries (ranked first), self-harm (third), and interpersonal violence (fifth). Five of the causes that were in the top ten for ages 10–24 years were also in the top ten in the 25–49-year age group: road injuries (ranked first), HIV/AIDS (second), low back pain (fourth), headache disorders (fifth), and depressive disorders (sixth). In 2019, ischaemic heart disease and stroke were the top-ranked causes of DALYs in both the 50–74-year and 75-years-and-older age groups.
    Since 1990, there has been a marked shift towards a greater proportion of burden due to YLDs from non-communicable diseases and injuries. In 2019, there were 11 countries where non-communicable disease and injury YLDs constituted more than half of all disease burden. Decreases in age-standardised DALY rates have accelerated over the past decade in countries at the lower end of the SDI range, while improvements have started to stagnate or even reverse in countries with higher SDI. Interpretation: As disability becomes an increasingly large component of disease burden and a larger component of health expenditure, greater research and development investment is needed to identify new, more effective intervention strategies. With a rapidly ageing global population, the demands on health services to deal with disabling outcomes, which increase with age, will require policy makers to anticipate these changes. The mix of universal and more geographically specific influences on health reinforces the need for regular reporting on population health in detail and by underlying cause to help decision makers to identify success stories of disease control to emulate, as well as opportunities to improve. Funding: Bill & Melinda Gates Foundation. © 2020 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 licence.
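
    The DALY arithmetic and the draw-based uncertainty intervals described in the Methods reduce to a few lines; the input numbers and the simulated draws below are illustrative assumptions, not GBD estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs for one cause, age group, sex, location, and year
# (placeholder values, not GBD data).
deaths = 1_200.0               # cause-specific deaths
std_life_expectancy = 35.0     # standard life expectancy at the age of death (years)
prevalence = 50_000.0          # prevalent cases of a sequela
disability_weight = 0.11       # disability weight for that sequela

# YLLs = deaths x standard life expectancy; YLDs = prevalence x disability weight;
# DALYs = YLLs + YLDs.
yll = deaths * std_life_expectancy
yld = prevalence * disability_weight
daly = yll + yld

# Uncertainty intervals come from 1000 posterior draws: the 95% UI is taken from
# the 25th and 975th ordered draw values (indices 24 and 974). Draws are simulated here.
daly_draws = np.sort(daly * rng.normal(1.0, 0.08, size=1000))
ui_lower, ui_upper = daly_draws[24], daly_draws[974]
print(f"DALYs: {daly:,.0f} (95% UI {ui_lower:,.0f}-{ui_upper:,.0f})")
```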

    Measurement of the charge asymmetry in top-quark pair production in the lepton-plus-jets final state in pp collision data at √s = 8 TeV with the ATLAS detector


    ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider


    Predictive value of minimal residual disease in Philadelphia-chromosome-positive acute lymphoblastic leukemia treated with imatinib in the European intergroup study of post-induction treatment of Philadelphia-chromosome-positive acute lymphoblastic leukemia, based on immunoglobulin/T-cell receptor and BCR/ABL1 methodologies

    The prognostic value of minimal residual disease (MRD) in Philadelphia-chromosome-positive (Ph+) childhood acute lymphoblastic leukemia (ALL) treated with tyrosine kinase inhibitors is not fully established. We detected MRD by real-time quantitative polymerase chain reaction (RQ-PCR) of rearranged immunoglobulin/T-cell receptor genes (IG/TR) and/or BCR/ABL1 fusion transcript to investigate its predictive value in patients receiving Berlin-Frankfurt-Münster (BFM) high-risk (HR) therapy and post-induction intermittent imatinib (the European intergroup study of post-induction treatment of Philadelphia-chromosome-positive acute lymphoblastic leukemia (EsPhALL) study). MRD was monitored after induction (time point (TP)1), consolidation Phase IB (TP2), HR Blocks, reinductions, and at the end of therapy. MRD negativity progressively increased over time, both by IG/TR and BCR/ABL1. Of 90 patients with IG/TR MRD at TP1, nine were negative and none relapsed, while 11 with MRD<5×10−4 and 70 with MRD≥5×10−4 had a comparable 5-year cumulative incidence of relapse of 36.4 (15.4) and 35.2 (5.9), respectively. Patients who achieved MRD negativity at TP2 had a low relapse risk (5-year cumulative incidence of relapse (CIR)=14.3[9.8]), whereas those who attained MRD negativity at a later date showed higher CIR, comparable to patients with positive MRD at any level. BCR/ABL1 MRD negative patients at TP1 had a relapse risk similar to those who were IG/TR MRD negative (1/8 relapses). The overall concordance between the two methods was 69%, with significantly higher positivity by BCR/ABL1. In conclusion, MRD monitoring by both methods may be useful not only for measuring response but also for guiding biological studies aimed at investigating causes for discrepancies, although from our data IG/TR MRD monitoring appears to be more reliable. Early MRD negativity is highly predictive of favorable outcome. The earlier MRD negativity is achieved, the better the prognosis.
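
    A minimal sketch of the MRD stratification used in the analysis, grouping hypothetical RQ-PCR results at TP1 into the negative, below 5×10−4, and at or above 5×10−4 categories; the values are invented for illustration, not EsPhALL data.

```python
from collections import Counter

# Hypothetical TP1 IG/TR MRD results (None = MRD negative); invented values.
mrd_tp1 = [None, 2e-4, 7e-4, None, 1.2e-3, 4e-5, 9e-4, None, 3e-4, 6e-3]

def mrd_group(value, threshold=5e-4):
    """Assign the MRD category used in the abstract."""
    if value is None:
        return "negative"
    return "<5e-4" if value < threshold else ">=5e-4"

# Tally patients per MRD category at TP1.
groups = Counter(mrd_group(v) for v in mrd_tp1)
print(groups)
```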

    Global age-sex-specific fertility, mortality, healthy life expectancy (HALE), and population estimates in 204 countries and territories, 1950–2019: a comprehensive demographic analysis for the Global Burden of Disease Study 2019

    Background: Accurate and up-to-date assessment of demographic metrics is crucial for understanding a wide range of social, economic, and public health issues that affect populations worldwide. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 produced updated and comprehensive demographic assessments of the key indicators of fertility, mortality, migration, and population for 204 countries and territories and selected subnational locations from 1950 to 2019. Methods: 8078 country-years of vital registration and sample registration data, 938 surveys, 349 censuses, and 238 other sources were identified and used to estimate age-specific fertility. Spatiotemporal Gaussian process regression (ST-GPR) was used to generate age-specific fertility rates for 5-year age groups between ages 15 and 49 years. With extensions to age groups 10–14 and 50–54 years, the total fertility rate (TFR) was then aggregated using the estimated age-specific fertility between ages 10 and 54 years. 7417 sources were used for under-5 mortality estimation and 7355 for adult mortality. ST-GPR was used to synthesise data sources after correction for known biases. Adult mortality was measured as the probability of death between ages 15 and 60 years based on vital registration, sample registration, and sibling histories, and was also estimated using ST-GPR. HIV-free life tables were then estimated from under-5 and adult mortality rates using a relational model life table system created for GBD, which closely tracks observed age-specific mortality rates from complete vital registration when available. Independent estimates of HIV-specific mortality generated by an epidemiological analysis of HIV prevalence surveys and antenatal clinic serosurveillance and other sources were incorporated into the estimates in countries with large epidemics. Annual and single-year age estimates of net migration and population for each country and territory were generated using a Bayesian hierarchical cohort component model that analysed estimated age-specific fertility and mortality rates along with 1250 censuses and 747 population registry years. We classified location-years into seven categories on the basis of the natural rate of increase in population (calculated by subtracting the crude death rate from the crude birth rate) and the net migration rate. We computed healthy life expectancy (HALE) using years lived with disability (YLDs) per capita, life tables, and standard demographic methods. Uncertainty was propagated throughout the demographic estimation process, including fertility, mortality, and population, with 1000 draw-level estimates produced for each metric. Findings: The global TFR decreased from 2·72 (95% uncertainty interval [UI] 2·66–2·79) in 2000 to 2·31 (2·17–2·46) in 2019. Global annual livebirths increased from 134·5 million (131·5–137·8) in 2000 to a peak of 139·6 million (133·0–146·9) in 2016. Global livebirths then declined to 135·3 million (127·2–144·1) in 2019. Of the 204 countries and territories included in this study, in 2019, 102 had a TFR lower than 2·1, which is considered a good approximation of replacement-level fertility. All countries in sub-Saharan Africa had TFRs above replacement level in 2019 and accounted for 27·1% (95% UI 26·4–27·8) of global livebirths. Global life expectancy at birth increased from 67·2 years (95% UI 66·8–67·6) in 2000 to 73·5 years (72·8–74·3) in 2019.
    The total number of deaths increased from 50·7 million (49·5–51·9) in 2000 to 56·5 million (53·7–59·2) in 2019. Under-5 deaths declined from 9·6 million (9·1–10·3) in 2000 to 5·0 million (4·3–6·0) in 2019. Global population increased by 25·7%, from 6·2 billion (6·0–6·3) in 2000 to 7·7 billion (7·5–8·0) in 2019. In 2019, 34 countries had negative natural rates of increase; in 17 of these, the population declined because immigration was not sufficient to counteract the negative natural rate of increase. Globally, HALE increased from 58·6 years (56·1–60·8) in 2000 to 63·5 years (60·8–66·1) in 2019. HALE increased in 202 of 204 countries and territories between 2000 and 2019. Interpretation: Over the past 20 years, fertility rates have been dropping steadily and life expectancy has been increasing, with few exceptions. Much of this change follows historical patterns linking social and economic determinants, such as those captured by the GBD Socio-demographic Index, with demographic outcomes. More recently, several countries have experienced a combination of low fertility and stagnating improvement in mortality rates, pushing more populations into the late stages of the demographic transition. Tracking demographic change and the emergence of new patterns will be essential for global health monitoring.
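
    Two of the summary quantities named in the Methods, the total fertility rate aggregated from age-specific rates and the natural rate of increase used to classify location-years, reduce to simple arithmetic; the rates below are illustrative placeholders, not GBD estimates.

```python
# Illustrative age-specific fertility rates (births per woman per year) for
# 5-year age groups from 10-14 to 50-54; placeholder values.
asfr = {
    "10-14": 0.001, "15-19": 0.040, "20-24": 0.110, "25-29": 0.120,
    "30-34": 0.090, "35-39": 0.050, "40-44": 0.015, "45-49": 0.003,
    "50-54": 0.0005,
}

# The TFR aggregates age-specific fertility across ages 10-54: each 5-year
# group's rate applies to 5 single years of age.
tfr = 5 * sum(asfr.values())

# Natural rate of increase = crude birth rate - crude death rate (per 1000),
# the quantity used to classify location-years.
crude_birth_rate = 18.0  # per 1000 population (placeholder)
crude_death_rate = 7.5   # per 1000 population (placeholder)
natural_rate_of_increase = crude_birth_rate - crude_death_rate

print(f"TFR = {tfr:.2f}, natural rate of increase = {natural_rate_of_increase:.1f} per 1000")
```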

    Effect of general anaesthesia on functional outcome in patients with anterior circulation ischaemic stroke having endovascular thrombectomy versus standard care: a meta-analysis of individual patient data

    Background: General anaesthesia (GA) during endovascular thrombectomy has been associated with worse patient outcomes in observational studies compared with patients treated without GA. We assessed functional outcome in patients with ischaemic stroke due to large vessel anterior circulation occlusion undergoing endovascular thrombectomy under GA, versus thrombectomy not under GA (with or without sedation), versus standard care (ie, no thrombectomy), with the thrombectomy group stratified by the use of GA. Methods: For this meta-analysis, patient-level data were pooled from all patients included in randomised trials in PubMed published between Jan 1, 2010, and May 31, 2017, that compared endovascular thrombectomy predominantly done with stent retrievers with standard care in patients with anterior circulation ischaemic stroke (HERMES Collaboration). The primary outcome was functional outcome assessed by ordinal analysis of the modified Rankin scale (mRS) at 90 days in the GA and non-GA subgroups of patients treated with endovascular therapy versus those patients treated with standard care, adjusted for baseline prognostic variables. To account for between-trial variance, we used mixed-effects modelling with a random effect for trials incorporated in all models. Bias was assessed using the Cochrane method. The meta-analysis was prospectively designed, but not registered. Findings: Seven trials were identified by our search; of 1764 patients included in these trials, 871 were allocated to endovascular thrombectomy and 893 were assigned standard care. After exclusion of 74 patients (72 did not undergo the procedure and two had missing data on anaesthetic strategy), 236 (30%) of 797 patients who had endovascular procedures were treated under GA. At baseline, patients receiving GA were younger and had a shorter delay between stroke onset and randomisation, but they had similar pre-treatment clinical severity compared with patients who did not have GA. Endovascular thrombectomy improved functional outcome at 3 months both in patients who had GA (adjusted common odds ratio (cOR) 1·52, 95% CI 1·09–2·11, p=0·014) and in those who did not have GA (adjusted cOR 2·33, 95% CI 1·75–3·10, p<0·0001) versus standard care. However, outcomes were significantly better for patients who did not receive GA versus those who received GA (covariate-adjusted cOR 1·53, 95% CI 1·14–2·04, p=0·0044). The risk of bias and variability between studies was assessed to be low. Interpretation: Worse outcomes after endovascular thrombectomy were associated with GA, after adjustment for baseline prognostic variables. These data support avoidance of GA whenever possible. The procedure did, however, remain effective versus standard care in patients treated under GA, indicating that treatment should not be withheld in those who require anaesthesia for medical reasons.
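
    The primary analysis is an ordinal (shift) analysis of the 90-day mRS adjusted for trial. The sketch below fits a simplified proportional-odds model with fixed trial indicators standing in for the study's random trial effect, on invented patient-level data; it illustrates the approach and is not the HERMES analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)

# Invented patient-level data: treatment arm, source trial, and 90-day mRS
# (0-6, lower is better). Not HERMES data.
n = 600
df = pd.DataFrame({
    "thrombectomy": rng.integers(0, 2, n),
    "trial": rng.integers(0, 7, n),
})
latent = -0.5 * df["thrombectomy"] + 0.1 * df["trial"] + rng.logistic(size=n)
df["mrs"] = pd.cut(latent, bins=[-np.inf, -2, -1, 0, 1, 2, 3, np.inf],
                   labels=range(7)).astype(int)

# Simplification: fixed trial indicators replace the random trial effect of the
# mixed-effects model described in the abstract.
exog = pd.get_dummies(df[["thrombectomy", "trial"]], columns=["trial"],
                      drop_first=True).astype(float)
endog = pd.Series(pd.Categorical(df["mrs"], ordered=True))

fit = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)

# Common odds ratio for a favourable (downward) shift across mRS levels.
print("common OR:", np.exp(-fit.params["thrombectomy"]))
```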

    Validation and utility of ARDS subphenotypes identified by machine-learning models using clinical data: an observational, multicohort, retrospective analysis

    Background: Two acute respiratory distress syndrome (ARDS) subphenotypes (hyperinflammatory and hypoinflammatory) with distinct clinical and biological features and differential treatment responses have been identified using latent class analysis (LCA) in seven individual cohorts. To facilitate bedside identification of subphenotypes, clinical classifier models using readily available clinical variables have been described in four randomised controlled trials. We aimed to assess the performance of these models in observational cohorts of ARDS. Methods: In this observational, multicohort, retrospective study, we validated two machine-learning clinical classifier models for assigning ARDS subphenotypes in two observational cohorts of patients with ARDS: Early Assessment of Renal and Lung Injury (EARLI; n=335) and Validating Acute Lung Injury Markers for Diagnosis (VALID; n=452), with LCA-derived subphenotypes as the gold standard. The primary model comprised only vital signs and laboratory variables, and the secondary model comprised all predictors in the primary model, with the addition of ventilatory variables and demographics. Model performance was assessed by calculating the area under the receiver operating characteristic curve (AUC) and calibration plots, and assigning subphenotypes using a probability cutoff value of 0·5 to determine sensitivity, specificity, and accuracy of the assignments. We also assessed the performance of the primary model in EARLI using data automatically extracted from an electronic health record (EHR; EHR-derived EARLI cohort). In the Large Observational Study to Understand the Global Impact of Severe Acute Respiratory Failure (LUNG SAFE; n=2813), a multinational, observational ARDS cohort, we applied a custom classifier model (with fewer variables than the primary model) to determine the prognostic value of the subphenotypes and tested their interaction with the positive end-expiratory pressure (PEEP) strategy, with 90-day mortality as the dependent variable. Findings: The primary clinical classifier model had an AUC of 0·92 (95% CI 0·90–0·95) in EARLI and 0·88 (0·84–0·91) in VALID. Performance of the primary model was similar when using exclusively EHR-derived predictors compared with manually curated predictors (AUC=0·88 [95% CI 0·81–0·94] vs 0·92 [0·88–0·97]). In LUNG SAFE, 90-day mortality was higher in patients assigned the hyperinflammatory subphenotype than in those with the hypoinflammatory phenotype (414 [57%] of 725 vs 694 [33%] of 2088; p<0·0001). There was a significant treatment interaction between PEEP strategy and ARDS subphenotype (p=0·041), with lower 90-day mortality in the high PEEP group of patients with the hyperinflammatory subphenotype (hyperinflammatory subphenotype: 169 [54%] of 313 patients in the high PEEP group vs 127 [62%] of 205 patients in the low PEEP group; hypoinflammatory subphenotype: 231 [34%] of 675 patients in the high PEEP group vs 233 [32%] of 734 patients in the low PEEP group). Interpretation: Classifier models using clinical variables alone can accurately assign ARDS subphenotypes in observational cohorts. Application of these models can provide valuable prognostic information and could inform management strategies for personalised treatment, including application of PEEP, once prospectively validated. Funding: US National Institutes of Health and European Society of Intensive Care Medicine.
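
    The validation metrics described in the Methods (AUC against the LCA-derived gold standard, then sensitivity, specificity, and accuracy at a 0·5 probability cutoff) can be computed with a few lines of scikit-learn; the labels and predicted probabilities below are synthetic stand-ins, not EARLI or VALID data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(42)

# Synthetic gold-standard labels (1 = hyperinflammatory, 0 = hypoinflammatory)
# and classifier-predicted probabilities.
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.7 * y_true + 0.3 * rng.random(500), 0.0, 1.0)

# Discrimination against the LCA-derived subphenotypes.
auc = roc_auc_score(y_true, y_prob)

# Assign subphenotypes at the 0.5 probability cutoff, then derive sensitivity,
# specificity, and accuracy from the confusion matrix.
y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)

print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f} acc={accuracy:.2f}")
```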

    Global burden of 87 risk factors in 204 countries and territories, 1990–2019: a systematic analysis for the Global Burden of Disease Study 2019

    Background: Rigorous analysis of levels and trends in exposure to leading risk factors and quantification of their effect on human health are important to identify where public health is making progress and in which cases current efforts are inadequate. The Global Burden of Diseases, Injuries, and Risk Factors Study (GBD) 2019 provides a standardised and comprehensive assessment of the magnitude of risk factor exposure, relative risk, and attributable burden of disease. Methods: GBD 2019 estimated attributable mortality, years of life lost (YLLs), years of life lived with disability (YLDs), and disability-adjusted life-years (DALYs) for 87 risk factors and combinations of risk factors, at the global level, regionally, and for 204 countries and territories. GBD uses a hierarchical list of risk factors so that specific risk factors (eg, sodium intake), and related aggregates (eg, diet quality), are both evaluated. This method has six analytical steps. (1) We included 560 risk–outcome pairs that met criteria for convincing or probable evidence on the basis of research studies. 12 risk–outcome pairs included in GBD 2017 no longer met inclusion criteria and 47 risk–outcome pairs for risks already included in GBD 2017 were added based on new evidence. (2) Relative risks were estimated as a function of exposure based on published systematic reviews, 81 systematic reviews done for GBD 2019, and meta-regression. (3) Levels of exposure in each age-sex-location-year included in the study were estimated based on all available data sources using spatiotemporal Gaussian process regression, DisMod-MR 2.1, a Bayesian meta-regression method, or alternative methods. (4) We determined, from published trials or cohort studies, the level of exposure associated with minimum risk, called the theoretical minimum risk exposure level. (5) Attributable deaths, YLLs, YLDs, and DALYs were computed by multiplying population attributable fractions (PAFs) by the relevant outcome quantity for each age-sex-location-year. (6) PAFs and attributable burden for combinations of risk factors were estimated taking into account mediation of different risk factors through other risk factors. Across all six analytical steps, 30 652 distinct data sources were used in the analysis. Uncertainty in each step of the analysis was propagated into the final estimates of attributable burden. Exposure levels for dichotomous, polytomous, and continuous risk factors were summarised with use of the summary exposure value to facilitate comparisons over time, across location, and across risks. Because the entire time series from 1990 to 2019 has been re-estimated with use of consistent data and methods, these results supersede previously published GBD estimates of attributable burden. Findings: The largest declines in risk exposure from 2010 to 2019 were among a set of risks that are strongly linked to social and economic development, including household air pollution; unsafe water, sanitation, and handwashing; and child growth failure. Global declines also occurred for tobacco smoking and lead exposure. The largest increases in risk exposure were for ambient particulate matter pollution, drug use, high fasting plasma glucose, and high body-mass index. 
    In 2019, the leading Level 2 risk factor globally for attributable deaths was high systolic blood pressure, which accounted for 10·8 million (95% uncertainty interval [UI] 9·51–12·1) deaths (19·2% [16·9–21·3] of all deaths in 2019), followed by tobacco (smoked, second-hand, and chewing), which accounted for 8·71 million (8·12–9·31) deaths (15·4% [14·6–16·2] of all deaths in 2019). The leading Level 2 risk factor for attributable DALYs globally in 2019 was child and maternal malnutrition, which largely affects health in the youngest age groups and accounted for 295 million (253–350) DALYs (11·6% [10·3–13·1] of all global DALYs that year). The risk factor burden varied considerably in 2019 between age groups and locations. Among children aged 0–9 years, the three leading detailed risk factors for attributable DALYs were all related to malnutrition. Iron deficiency was the leading risk factor for those aged 10–24 years, alcohol use for those aged 25–49 years, and high systolic blood pressure for those aged 50–74 years and 75 years and older. Interpretation: Overall, the record for reducing exposure to harmful risks over the past three decades is poor. Success with reducing smoking and lead exposure through regulatory policy might point the way for a stronger role for public policy on other risks in addition to continued efforts to provide information on risk factor harm to the general public. Funding: Bill & Melinda Gates Foundation.
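
    Analytical step 5 in the Methods (attributable burden as the population attributable fraction multiplied by the relevant outcome quantity) is simple enough to show directly; the PAFs and DALY totals below are invented for illustration, not GBD 2019 estimates.

```python
# Illustrative PAFs and total DALYs for one risk factor in one
# age-sex-location-year (invented numbers).
outcomes = {
    # outcome: (population attributable fraction, total DALYs for the outcome)
    "ischaemic heart disease": (0.25, 4_000_000),
    "stroke": (0.20, 2_500_000),
    "chronic kidney disease": (0.10, 800_000),
}

# Attributable burden = PAF x outcome quantity, summed over outcomes.
attributable_dalys = {name: paf * dalys for name, (paf, dalys) in outcomes.items()}
total_attributable = sum(attributable_dalys.values())

for name, dalys in attributable_dalys.items():
    print(f"{name}: {dalys:,.0f} attributable DALYs")
print(f"total: {total_attributable:,.0f} attributable DALYs")
```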