    Fuel premixing module for gas turbine engine combustor

    A fuel-air premixing module is designed to reduce emissions from a gas turbine engine. In one form, the premixing module includes a central pilot premixer module with a main premixer module positioned therearound. Each portion of the fuel-air premixing module includes an axial inflow swirler with a plurality of fixed swirler vanes. Fuel is injected into the main premixer module between the swirler vanes of the axial inflow swirler and at an acute angle relative to the centerline of the premixing module.

    Distributed learning on 20 000+ lung cancer patients - The Personal Health Train

    Background and purpose: Access to healthcare data is indispensable for scientific progress and innovation. Sharing healthcare data is time-consuming and notoriously difficult due to privacy and regulatory concerns. The Personal Health Train (PHT) provides a privacy-by-design infrastructure connecting FAIR (Findable, Accessible, Interoperable, Reusable) data sources and allows distributed data analysis and machine learning. Patient data never leaves a healthcare institute.

    Materials and methods: Lung cancer patient-specific databases (tumor staging and post-treatment survival information) of oncology departments were translated according to a FAIR data model and stored locally in a graph database. Software was installed locally to enable deployment of distributed machine learning algorithms via a central server. The algorithms (MATLAB; code and documentation publicly available) preserve patient privacy, as only summary statistics and regression coefficients are exchanged with the central server. A logistic regression model to predict post-treatment two-year survival was trained and evaluated by receiver operating characteristic (ROC) curves, root mean square prediction error (RMSE) and calibration plots.

    Results: In 4 months, we connected databases with 23 203 patient cases across 8 healthcare institutes in 5 countries (Amsterdam, Cardiff, Maastricht, Manchester, Nijmegen, Rome, Rotterdam, Shanghai) using the PHT. Summary statistics were computed across databases. A distributed logistic regression model predicting post-treatment two-year survival was trained on 14 810 patients treated between 1978 and 2011 and validated on 8 393 patients treated between 2012 and 2015.

    Conclusion: The PHT infrastructure demonstrably overcomes patient privacy barriers to healthcare data sharing and enables fast data analyses across multiple institutes from different countries with different regulatory regimes. This infrastructure promotes global evidence-based medicine while prioritizing patient privacy.
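
    The exchange pattern described here, in which each institute shares only aggregate statistics and model coefficients while patient rows stay local, can be sketched as a federated logistic regression loop. The sketch below is a minimal illustration in Python, not the PHT's publicly available MATLAB implementation; the function names and the plain gradient-ascent update are assumptions made for the example.

        import numpy as np

        def local_gradient(beta, X, y):
            # Per-site "summary statistic": the gradient of the logistic
            # log-likelihood plus the local sample count. Only these
            # aggregates leave the institute; the rows (X, y) never do.
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            return X.T @ (y - p), len(y)

        def federated_logistic_fit(sites, n_features, lr=1.0, n_rounds=500):
            # Central-server loop: broadcast the current coefficients,
            # sum the sites' gradients, take one gradient-ascent step.
            beta = np.zeros(n_features)
            for _ in range(n_rounds):
                grads, counts = zip(*(local_gradient(beta, X, y) for X, y in sites))
                beta += lr * np.sum(grads, axis=0) / np.sum(counts)
            return beta

        # Toy usage with three simulated "institutes"
        rng = np.random.default_rng(0)
        true_beta = np.array([1.0, -2.0, 0.5])
        sites = []
        for _ in range(3):
            X = rng.normal(size=(500, 3))
            y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
            sites.append((X, y))
        print(federated_logistic_fit(sites, n_features=3))  # approaches true_beta

    A production system would add regularization, convergence checks and secure aggregation; none of that is shown here.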

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay: these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab, a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess.

    Comment: Major update of previous version. This is the reference document for the LBNE science program and current status. 288 pages, 116 figures.

    Empirical Legal Studies Before 1940: A Bibliographic Essay

    The modern empirical legal studies movement has well-known antecedents in the law and society and law and economics traditions of the latter half of the 20th century. Less well known is the body of empirical research on legal phenomena from the period prior to World War II. This paper is an extensive bibliographic essay surveying English-language empirical legal research from approximately 1940 and earlier. The essay is arranged around the themes in the research: criminal justice; civil justice (general studies of civil litigation, auto accident litigation and compensation, divorce, small claims, jurisdiction and procedure, civil juries); debt and bankruptcy; banking; appellate courts; legal needs; the legal profession (including legal education); and judicial staffing and selection. Accompanying the essay is an extensive bibliography of research articles, books, and reports.

    Inclusive fitness theory and eusociality


    Suppressing quantum errors by scaling a surface code logical qubit

    Practical quantum computing will require error rates well below what is achievable with physical qubits. Quantum error correction offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7×10⁻⁶ logical error per round floor set by a single high-energy event (1.6×10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.

    Comment: Main text: 6 pages, 4 figures. v2: Update author list, references, Fig. S12, Table I.
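
    To see how the two reported figures of merit relate, one can assume, purely as a simplification (not necessarily the paper's fitting convention), that logical errors strike independently in each cycle. A short LaTeX sketch under that assumption:

        % Probability of at least one logical error in t cycles,
        % given an independent per-cycle error rate \varepsilon:
        \[ P_{\mathrm{err}}(t) = 1 - (1 - \varepsilon)^{t} \]
        % e.g. the distance-5 value \varepsilon \approx 0.02914 gives
        \[ P_{\mathrm{err}}(25) = 1 - (1 - 0.02914)^{25} \approx 0.52 \]
        % The standard below-threshold scaling ansatz for code distance d is
        \[ \varepsilon_d \propto \Lambda^{-(d+1)/2} \]
        % so distance-5 outperforming distance-3 corresponds to \Lambda > 1.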

    Rare coding variants in PLCG2, ABI3, and TREM2 implicate microglial-mediated innate immunity in Alzheimer's disease

    We identified rare coding variants associated with Alzheimer’s disease (AD) in a 3-stage case-control study of 85,133 subjects. In stage 1, 34,174 samples were genotyped using a whole-exome microarray. In stage 2, we tested associated variants (P < 1×10⁻⁴) in 35,962 independent samples using de novo genotyping and imputed genotypes. In stage 3, an additional 14,997 samples were used to test the most significant stage 2 associations (P < 5×10⁻⁸) using imputed genotypes. We observed 3 novel genome-wide significant (GWS) AD-associated non-synonymous variants: a protective variant in PLCG2 (rs72824905/p.P522R, P = 5.38×10⁻¹⁰, OR = 0.68, MAF(cases) = 0.0059, MAF(controls) = 0.0093), a risk variant in ABI3 (rs616338/p.S209F, P = 4.56×10⁻¹⁰, OR = 1.43, MAF(cases) = 0.011, MAF(controls) = 0.008), and a novel GWS variant in TREM2 (rs143332484/p.R62H, P = 1.55×10⁻¹⁴, OR = 1.67, MAF(cases) = 0.0143, MAF(controls) = 0.0089), a known AD susceptibility gene. These protein-coding changes are in genes highly expressed in microglia and highlight an immune-related protein-protein interaction network enriched for previously identified AD risk genes. These genetic findings provide additional evidence that the microglia-mediated innate immune response contributes directly to AD development.
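
    As a rough plausibility check on the quoted associations, a crude allelic odds ratio can be computed from the case and control minor-allele frequencies alone. The Python below is only a back-of-the-envelope sketch: the study's reported ORs are covariate-adjusted estimates from the staged analysis, so the crude values come out close to, but not equal to, them.

        def allelic_odds_ratio(maf_cases, maf_controls):
            # Crude allelic odds ratio from minor-allele frequencies only;
            # approximates, but does not reproduce, the adjusted estimates.
            odds = lambda f: f / (1.0 - f)
            return odds(maf_cases) / odds(maf_controls)

        # MAFs (cases, controls) as quoted in the abstract
        variants = {
            "PLCG2 p.P522R": (0.0059, 0.0093),  # reported OR = 0.68
            "ABI3 p.S209F":  (0.011,  0.008),   # reported OR = 1.43
            "TREM2 p.R62H":  (0.0143, 0.0089),  # reported OR = 1.67
        }
        for name, (mc, mn) in variants.items():
            print(f"{name}: crude OR = {allelic_odds_ratio(mc, mn):.2f}")
        # Prints roughly 0.63, 1.38 and 1.62 for the three variants.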

    Evenness mediates the global relationship between forest productivity and richness

    1. Biodiversity is an important component of natural ecosystems, with higher species richness often correlating with higher ecosystem productivity. Yet this relationship varies substantially across environments, typically becoming less pronounced at high levels of species richness. Species richness alone, however, cannot reflect all important properties of a community, including community evenness, which may mediate the relationship between biodiversity and productivity. If the evenness of a community correlates negatively with richness across forests globally, then a greater number of species may not always increase the overall diversity and productivity of the system. Theoretical work and local empirical studies have shown that the effect of evenness on ecosystem functioning may be especially strong at high richness levels, yet the consistency of this effect remains untested at a global scale.

    2. Here, we used a dataset of forests from across the globe, which includes composition, biomass accumulation and net primary productivity, to explore whether productivity correlates with community evenness and richness in a way that suggests evenness buffers the effect of richness. Specifically, we evaluated whether low levels of evenness in speciose communities correlate with the attenuation of the richness–productivity relationship.

    3. We found that tree species richness and evenness are negatively correlated across forests globally, with highly speciose forests typically comprising a few dominant and many rare species. Furthermore, we found that the correlation between diversity and productivity changes with evenness: at low richness, uneven communities are more productive, while at high richness, even communities are more productive.

    4. Synthesis. Collectively, these results demonstrate that evenness is an integral component of the relationship between biodiversity and productivity, and that the attenuating effect of richness on forest productivity might be partly explained by low evenness in speciose communities. Productivity generally increases with species richness until reduced evenness limits further increases in community diversity. Our research suggests that evenness is a fundamental component of biodiversity–ecosystem function relationships and is of critical importance for guiding conservation and sustainable ecosystem management decisions.
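
    The abstract does not name the evenness metric used, so, as an assumption for illustration, the Python sketch below uses Pielou's evenness (Shannon entropy normalized by its maximum, ln S), which makes the richness/evenness distinction concrete.

        import numpy as np

        def pielou_evenness(abundances):
            # Pielou's J = H' / ln(S): Shannon entropy of the relative
            # abundances divided by its maximum. J = 1 for a perfectly
            # even community; J falls toward 0 under strong dominance.
            a = np.asarray(abundances, dtype=float)
            a = a[a > 0]
            p = a / a.sum()
            shannon = -np.sum(p * np.log(p))   # H'
            return shannon / np.log(len(a))    # normalize by ln(S)

        # A speciose but uneven plot vs. a species-poor but even plot
        print(pielou_evenness([500, 20, 10, 5, 3, 2, 1, 1]))  # ~0.19
        print(pielou_evenness([50, 45, 40]))                  # ~1.0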

    Multi-messenger observations of a binary neutron star merger

    On 2017 August 17, a binary neutron star coalescence candidate (later designated GW170817) with merger time 12:41:04 UTC was observed through gravitational waves by the Advanced LIGO and Advanced Virgo detectors. The Fermi Gamma-ray Burst Monitor independently detected a gamma-ray burst (GRB 170817A) with a time delay of ~1.7 s with respect to the merger time. From the gravitational-wave signal, the source was initially localized to a sky region of 31 deg² at a luminosity distance of 40 ± 8 Mpc and with component masses consistent with neutron stars. The component masses were later measured to be in the range 0.86 to 2.26 M☉. An extensive observing campaign was launched across the electromagnetic spectrum, leading to the discovery of a bright optical transient (SSS17a, now with the IAU identification of AT 2017gfo) in NGC 4993 (at ~40 Mpc) less than 11 hours after the merger by the One-Meter, Two Hemisphere (1M2H) team using the 1 m Swope Telescope. The optical transient was independently detected by multiple teams within an hour. Subsequent observations targeted the object and its environment. Early ultraviolet observations revealed a blue transient that faded within 48 hours. Optical and infrared observations showed a redward evolution over ~10 days. Following early non-detections, X-ray and radio emission were discovered at the transient’s position ~9 and ~16 days, respectively, after the merger. Both the X-ray and radio emission likely arise from a physical process distinct from the one that generates the UV/optical/near-infrared emission. No ultra-high-energy gamma-rays and no neutrino candidates consistent with the source were found in follow-up searches. These observations support the hypothesis that GW170817 was produced by the merger of two neutron stars in NGC 4993, followed by a short gamma-ray burst (GRB 170817A) and a kilonova/macronova powered by the radioactive decay of r-process nuclei synthesized in the ejecta.