73 research outputs found

    Tau Lepton Identification With Graph Neural Networks at Future Electron–Positron Colliders

    Efficient and accurate reconstruction and identification of tau lepton decays plays a crucial role in the program of measurements and searches under study for future high-energy particle colliders. Leveraging recent advances in machine learning algorithms, which have dramatically improved the state of the art in visual object recognition, we have developed novel tau identification methods that classify tau decays into leptonic and hadronic modes and discriminate them against QCD jets. We present the methodology and the results of applying it to the IDEA dual-readout calorimeter detector concept proposed for the future FCC-ee electron–positron collider.

    MEGAN: Multi-Explanation Graph Attention Network

    Explainable artificial intelligence (XAI) methods are expected to improve trust during human-AI interactions, provide tools for model analysis, and extend human understanding of complex problems. Explanation-supervised training makes it possible to improve explanation quality by training self-explaining XAI models on ground-truth or human-generated explanations. However, existing explanation methods have limited expressiveness and interpretability because they generate only single explanations in the form of node and edge importance. To address this, we propose the novel multi-explanation graph attention network (MEGAN). Our fully differentiable, attention-based model features multiple explanation channels, whose number can be chosen independently of the task specification. We first validate our model on a synthetic graph regression dataset. We show that for the special single-explanation case, our model significantly outperforms existing post-hoc and explanation-supervised baseline methods. Furthermore, we demonstrate significant advantages when using two explanations, both in quantitative explanation measures and in human interpretability. Finally, we demonstrate our model's capabilities on multiple real-world datasets. We find that our model produces sparse, high-fidelity explanations consistent with human intuition about those tasks while matching state-of-the-art graph neural networks in predictive performance, indicating that explanations and accuracy are not necessarily a trade-off.
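    The core idea of per-channel node attention can be sketched in a few lines. This is an illustrative toy in numpy, not the authors' MEGAN implementation: each of K explanation channels has its own attention projection, yielding a separate node-importance distribution per channel, and the prediction head reads out importance-weighted node features channel by channel. All weights and sizes here are made up for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    # Toy graph: 5 nodes with 4-dimensional features (hypothetical values).
    H = rng.normal(size=(5, 4))

    K = 2                        # number of explanation channels, chosen freely
    W = rng.normal(size=(K, 4))  # one attention projection per channel

    # Per-channel node importance: softmax over nodes within each channel,
    # so every channel yields its own explanation of which nodes matter.
    scores = H @ W.T                       # shape (nodes, channels)
    node_importance = softmax(scores, axis=0)

    # Channel-wise readout: importance-weighted sum of node features,
    # concatenated across channels and fed to a linear prediction head.
    readout = np.concatenate([node_importance[:, k] @ H for k in range(K)])
    w_out = rng.normal(size=readout.shape)
    prediction = float(readout @ w_out)

    assert node_importance.shape == (5, K)
    assert np.allclose(node_importance.sum(axis=0), 1.0)
    ```

    Because each channel's importance distribution is produced inside the forward pass, it can be supervised directly against ground-truth explanations, which is the "explanation-supervised training" the abstract refers to.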

    Graph neural networks for materials science and chemistry

    Machine learning plays an increasingly important role in many areas of chemistry and materials science, being used to predict materials properties, accelerate simulations, design new structures, and predict synthesis routes of new materials. Graph neural networks (GNNs) are one of the fastest-growing classes of machine learning models. They are of particular relevance for chemistry and materials science, as they work directly on a graph or structural representation of molecules and materials and therefore have full access to all relevant information required to characterize materials. In this Review, we provide an overview of the basic principles of GNNs, widely used datasets, and state-of-the-art architectures, followed by a discussion of a wide range of recent applications of GNNs in chemistry and materials science, concluding with a roadmap for the further development and application of GNNs.
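    The "basic principles of GNNs" the Review covers reduce to message passing: each node aggregates its neighbours' features and applies a learned update, and a pooling step turns node states into a graph-level descriptor for property prediction. A minimal sketch under assumed toy weights and a made-up 3-atom graph:

    ```python
    import numpy as np

    def gnn_layer(H, A, W_self, W_neigh):
        """One message-passing step: each node sums its neighbours' features
        (via the adjacency matrix), then applies a learned linear update + ReLU."""
        messages = A @ H                    # neighbour-sum aggregation
        return np.maximum(H @ W_self + messages @ W_neigh, 0.0)

    # Toy "molecule": 3 atoms in a chain (bonds 0-1 and 1-2), 2 features per atom.
    A = np.array([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
    H = np.array([[1., 0.],
                  [0., 1.],
                  [1., 1.]])

    rng = np.random.default_rng(1)
    W_self = rng.normal(size=(2, 2))        # hypothetical learned weights
    W_neigh = rng.normal(size=(2, 2))

    H1 = gnn_layer(H, A, W_self, W_neigh)   # stack layers to widen the receptive field
    graph_embedding = H1.mean(axis=0)       # mean pooling -> input to a property head

    assert H1.shape == (3, 2) and graph_embedding.shape == (2,)
    ```

    Stacking such layers lets information propagate over bonds, which is why GNNs can characterize a material directly from its structure.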

    Removal of pharmaceuticals in pre-denitrifying MBBR – Influence of organic substrate availability in single- and three-stage configurations

    Due to the limited efficiency of conventional biological treatment, innovative solutions are being explored to improve the removal of trace organic chemicals in wastewater. Controlling biomass exposure to growth substrate represents an appealing option for process optimization, as substrate availability likely impacts microbial activity and hence trace organic chemical removal. This study investigated the elimination of pharmaceuticals in pre-denitrifying moving bed biofilm reactors (MBBRs), where biofilm exposure to different organic substrate loading and composition was controlled by reactor staging. A three-stage MBBR and a single-stage reference MBBR (with the same operating volume and filling ratio) were operated under continuous-flow conditions (18 months). Two sets of batch experiments (days 100 and 471) were performed to quantify and compare pharmaceutical removal and denitrification kinetics in the different MBBRs. Experimental results revealed the possible influence of retransformation (e.g., from conjugated metabolites) and enantioselectivity on the removal of selected pharmaceuticals. In the second set of experiments, specific trends in denitrification and biotransformation kinetics were observed, with the highest and lowest rates/rate constants in the first (S1) and last (S3) staged sub-reactors, respectively. These observations were confirmed by removal efficiency data obtained during continuous-flow operation, with limited removal (<10%) of recalcitrant pharmaceuticals and highest removal in S1 within the three-stage MBBR. Notably, biotransformation rate constants obtained for non-recalcitrant pharmaceuticals correlated with mean specific denitrification rates, maximum specific growth rates, and observed growth yield values.
Overall, these findings suggest that: (i) the long-term exposure to tiered substrate accessibility in the three-stage configuration shaped the denitrification and biotransformation capacity of biofilms, with significant reduction under substrate limitation; and (ii) biotransformation of pharmaceuticals may have occurred as a result of cometabolism by heterotrophic denitrifying bacteria.

    Measuring total reaction cross-sections at energies near the Coulomb barrier by the active target method

    An experimental technique is described that is able to measure reaction cross-sections at energies around the Coulomb barrier by using low-intensity beams and a Si detector as an active target. Set-up optimization was carefully investigated in terms of collimation, detector efficiency, and pile-up rejection. The method has been tested by measuring the total reaction cross-section σ_R(E) for the ⁷Li + ²⁸Si system in the energy range E_lab = 12-16 MeV. The deduced excitation function σ_R(E) agrees with the data obtained in a previous experiment. The presented technique can also be applied to determine total reaction cross-sections for low-intensity radioactive beams at energies around the Coulomb barrier. Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment.
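    The extraction of σ_R from such a measurement follows the standard thin-target relation σ_R = N_reac / (N_beam · n_t), where n_t is the areal density of target nuclei; in the active-target method the Si detector itself supplies n_t. A back-of-the-envelope sketch with entirely hypothetical counts and detector thickness (not values from the paper):

    ```python
    # Thin-target cross-section estimate: sigma_R = N_reac / (N_beam * n_t).
    AVOGADRO = 6.022e23      # atoms / mol
    RHO_SI = 2.33            # g / cm^3, density of silicon
    M_SI = 28.09             # g / mol, molar mass of silicon

    thickness_cm = 0.1       # assumed 1 mm Si detector acting as the target
    n_t = AVOGADRO * RHO_SI / M_SI * thickness_cm   # target nuclei per cm^2

    n_beam = 1.0e9           # assumed number of incident 7Li ions counted
    n_reac = 25_000          # assumed number of events identified as reactions

    sigma_cm2 = n_reac / (n_beam * n_t)
    sigma_mb = sigma_cm2 / 1e-27                    # 1 mb = 1e-27 cm^2
    print(f"sigma_R = {sigma_mb:.1f} mb")
    ```

    Counting the beam particle by particle is what makes low-intensity (e.g., radioactive) beams tractable: N_beam is measured directly in the same detector rather than inferred from a Faraday cup.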

    Integrated System Built for Small-Molecule Semiconductors via High-Throughput Approaches

    High-throughput synthesis of solution-processable, structurally variable small-molecule semiconductors is both an opportunity and a challenge. A large number of diverse molecules provides a possibility for quick material discovery and machine learning based on experimental data. However, the diversity of molecular structure leads to complexity in molecular properties, such as solubility, polarity, and crystallinity, which poses great challenges to solution processing and purification. Here, we report the first integrated system for the high-throughput synthesis, purification, and characterization of molecules with a large variety. Based on the principle of "like dissolves like," we combine theoretical calculations and a robotic platform to accelerate the purification of those molecules. With this platform, a material library containing 125 molecules and their optical-electronic properties was built within a timeframe of weeks. More importantly, the highly repeatable recrystallization process we designed offers a reliable route toward further scale-up and industrial production.


    What is missing in autonomous discovery: Open challenges for the community

    Full text link
    Self-driving labs (SDLs) leverage combinations of artificial intelligence, automation, and advanced computing to accelerate scientific discovery. The promise of this field has given rise to a rich community of passionate scientists, engineers, and social scientists, as evidenced by the development of the Acceleration Consortium and the recent Accelerate Conference. Despite its strengths, this rapidly developing field presents numerous opportunities for growth, challenges to overcome, and potential risks of which to remain aware. This community perspective builds on a discussion begun during the first Accelerate Conference and looks to the future of self-driving labs with a tempered optimism. Incorporating input from academia, government, and industry, we briefly describe the current status of self-driving labs, then turn our attention to barriers, opportunities, and a vision for what is possible. Our field is delivering solutions in technology and infrastructure, artificial intelligence and knowledge generation, and education and workforce development. In the spirit of community, we intend for this work to foster discussion and drive best practices as our field grows.

    Suboptimal SARS-CoV-2-specific CD8+ T cell response associated with the prominent HLA-A*02:01 phenotype

    An improved understanding of human T cell-mediated immunity in COVID-19 is important for optimizing therapeutic and vaccine strategies. Experience with influenza shows that infection primes CD8+ T cell memory to peptides presented by common HLA types like HLA-A2, which enhances recovery and diminishes clinical severity upon reinfection. Stimulating peripheral blood mononuclear cells from COVID-19 convalescent patients with overlapping peptides from severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) led to the clonal expansion of SARS-CoV-2-specific CD8+ and CD4+ T cells in vitro, with CD4+ T cells being robust. We identified two HLA-A*02:01-restricted SARS-CoV-2-specific CD8+ T cell epitopes, A2/S269–277 and A2/Orf1ab3183–3191. Using peptide-HLA tetramer enrichment, direct ex vivo assessment of A2/S269+CD8+ and A2/Orf1ab3183+CD8+ populations indicated that A2/S269+CD8+ T cells were detected at comparable frequencies (~1.3 × 10⁻⁵) in acute and convalescent HLA-A*02:01+ patients. These frequencies were higher than those found in uninfected HLA-A*02:01+ donors (~2.5 × 10⁻⁶), but low when compared to frequencies for influenza-specific (A2/M158) and Epstein–Barr virus (EBV)-specific (A2/BMLF1280) populations (~1.38 × 10⁻⁴). Phenotyping A2/S269+CD8+ T cells from COVID-19 convalescents ex vivo showed that A2/S269+CD8+ T cells were predominantly negative for the activation markers CD38, HLA-DR, PD-1, and CD71, although the majority of total CD8+ T cells expressed granzymes and/or perforin. Furthermore, the bias toward naïve, stem cell memory and central memory A2/S269+CD8+ T cells rather than effector memory populations suggests that SARS-CoV-2 infection may be compromising CD8+ T cell activation. Priming with appropriate vaccines may thus be beneficial for optimizing CD8+ T cell immunity in COVID-19.

    The LOFT mission concept: a status update

    The Large Observatory For X-ray Timing (LOFT) is a mission concept which was proposed to ESA as a candidate for the M3 and M4 launch opportunities in the framework of the Cosmic Vision 2015-2025 program. Thanks to the unprecedented combination of effective area and spectral resolution of its main instrument and the uniquely large field of view of its wide field monitor, LOFT will be able to study the behaviour of matter in extreme conditions, such as the strong gravitational field in the innermost regions close to black holes and neutron stars and the supra-nuclear densities in the interiors of neutron stars. The science payload is based on a Large Area Detector (LAD; >8 m² effective area, 2-30 keV, 240 eV spectral resolution, 1 degree collimated field of view) and a Wide Field Monitor (WFM; 2-50 keV, 4 steradian field of view, 1 arcmin source location accuracy, 300 eV spectral resolution). The WFM is equipped with an on-board system for the localization of bright events (e.g., GRBs). The trigger time and position of these events are broadcast to the ground within 30 s of discovery. In this paper we present the current technical and programmatic status of the mission.