
    Improving heavy Dirac neutrino prospects at future hadron colliders using machine learning

    In this work, using machine learning methods, we study the sensitivities to the heavy pseudo-Dirac neutrino $N$ in the inverse seesaw at high-energy hadron colliders. The production process for the signal is $pp \to \ell N \to 3\ell + E_T^{\rm miss}$, while the dominant background is $pp \to WZ \to 3\ell + E_T^{\rm miss}$. We use either a Multi-Layer Perceptron or a Boosted Decision Tree with Gradient Boosting to analyse the kinematic observables and optimize the discrimination between background and signal events. It is found that the $Z$-boson mass and heavy-neutrino mass reconstructed from the charged leptons and missing transverse energy play crucial roles in separating the signal from the backgrounds. The prospects for the heavy-light neutrino mixing $|V_{\ell N}|^2$ (with $\ell = e,\,\mu$) are estimated using machine learning at hadron colliders with $\sqrt{s} = 14$ TeV, 27 TeV, and 100 TeV; it is found that the sensitivity to $|V_{\ell N}|^2$ can be improved up to ${\cal O}(10^{-6})$ for a heavy-neutrino mass $m_N = 100$ GeV and ${\cal O}(10^{-4})$ for $m_N = 1$ TeV.
    Comment: 33 pages, 14 figures, 4 tables, more details and more references added, version published in JHEP
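
    The classification step described above is standard gradient-boosted signal/background discrimination. As a minimal sketch (not the authors' code), the snippet below trains scikit-learn's GradientBoostingClassifier on a few kinematic observables; the feature names and input files are hypothetical placeholders chosen to mirror the observables mentioned in the abstract.

```python
# Hedged sketch: gradient-boosted signal/background discrimination on
# kinematic observables. Feature names and CSV files are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# The abstract highlights the reconstructed Z-boson and heavy-neutrino
# masses as the most discriminating observables.
features = ["m_Z_reco", "m_N_reco", "met", "pt_l1", "pt_l2", "pt_l3"]

signal = pd.read_csv("signal_events.csv")          # pp -> lN -> 3l + MET
background = pd.read_csv("background_events.csv")  # pp -> WZ -> 3l + MET
X = pd.concat([signal[features], background[features]])
y = np.concatenate([np.ones(len(signal)), np.zeros(len(background))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier(n_estimators=300, max_depth=3)
clf.fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```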

    Deciphering the functional importance of comammox vs. canonical ammonia oxidisers in nitrification and N2O emissions in acidic agricultural soils

    Acknowledgments: This work was jointly supported by grants from the National Key Research and Development Program of China (2018YFD0800202), the National Key Research and Development Program of China (2017YFD0200707 & 2017YFD0200102), the Fundamental Research Funds for the Central Universities (226-2023-00077) and the Zhejiang University-Julong Ecological Environment R&D Centre (2019-KYY-514106-0006).
    Peer reviewed

    GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy

    Graph neural networks (GNNs) have been demonstrated to be a powerful tool for analysing non-Euclidean graph data. However, the lack of efficient distributed graph learning (GL) systems severely hinders applications of GNNs, especially when graphs are large and GNNs are relatively deep. Herein, we present GraphTheta, a novel distributed and scalable GL system implemented in a vertex-centric graph programming model. GraphTheta is the first GL system built upon distributed graph processing, with neural network operators implemented as user-defined functions. The system supports multiple training strategies and enables efficient, scalable learning on big graphs across distributed (virtual) machines with low memory each. To facilitate graph convolution implementations, GraphTheta puts forward a new GL abstraction named NN-TGAR to bridge the gap between graph processing and graph deep learning. A distributed graph engine is proposed to conduct stochastic gradient descent optimization with hybrid-parallel execution. Moreover, we add support for a new cluster-batched training strategy in addition to global-batch and mini-batch. We evaluate GraphTheta on a number of datasets ranging from small to large scale. Experimental results show that GraphTheta scales well to 1,024 workers when training an in-house GNN on an industry-scale Alipay dataset of 1.4 billion nodes and 4.1 billion attributed edges, using a cluster of CPU virtual machines (Docker containers) with small memory each (5–12 GB). Moreover, GraphTheta obtains comparable or better prediction results than state-of-the-art GNN implementations, demonstrating that it learns GNNs as well as existing frameworks, and can outperform DistDGL by up to $2.02\times$ with better scalability. To the best of our knowledge, this work presents the largest edge-attributed GNN learning task reported in the literature.
    Comment: 18 pages, 14 figures, 5 tables
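
    As a toy, single-machine illustration of the vertex-centric gather/apply pattern that abstractions like NN-TGAR generalize (a sketch of the programming model only, not GraphTheta's actual API), each vertex gathers its neighbours' states, reduces them, and applies a small neural update:

```python
# Hedged sketch of vertex-centric GNN message passing; not GraphTheta's API.
import numpy as np

def gnn_layer(adj, h, W):
    """One mean-aggregation GNN layer over an adjacency list.

    adj: dict vertex -> list of neighbour vertices
    h:   dict vertex -> feature vector (length d)
    W:   (d, 2d) weight matrix applied after aggregation
    """
    h_new = {}
    for v, neighbours in adj.items():
        # Gather: collect neighbour states (zero vector if isolated).
        msgs = [h[u] for u in neighbours] or [np.zeros_like(h[v])]
        # Reduce: mean aggregation.
        agg = np.mean(msgs, axis=0)
        # Apply: linear transform of [self, aggregate] followed by ReLU.
        h_new[v] = np.maximum(0.0, W @ np.concatenate([h[v], agg]))
    return h_new

# Tiny example graph: path 0 - 1 - 2, with 4-dimensional features.
adj = {0: [1], 1: [0, 2], 2: [1]}
h = {v: np.random.randn(4) for v in adj}
W = np.random.randn(4, 8)
print(gnn_layer(adj, h, W)[1])
```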

    Microstructure and mechanical properties of Cu joints soldered with a Sn-based composite solder, reinforced by metal foam

    In this study, Ni foam, Cu-coated Ni foam and Cu-Ni alloy foam were used as strengthening phases for pure Sn solder. Cu-Cu joints were fabricated by soldering with these Sn-based composite solders at 260 °C for different times. The tensile strength of pure Sn solder was improved significantly by the addition of metal foams, and the Cu-Ni alloy/Sn composite solder exhibited the highest tensile strength of 50.32 MPa. The skeleton networks of the foams gradually dissolved into the soldering seam with increasing soldering time, accompanied by the massive formation of the (Cu,Ni)6Sn5 phase in the joint. The dissolution rates into the Sn matrix during soldering increased in the order Ni foam, Cu-coated Ni foam, Cu-Ni alloy foam. An increased dissolution rate of the metal foam led to a higher Ni content in the soldering seam, which was found to be beneficial in refining the (Cu,Ni)6Sn5 phase and in inhibiting the formation of the Cu3Sn IMC layer on the Cu substrate surface. The average shear strength of the Cu joints improved with increasing soldering time, and a shear strength of 61.2 MPa was obtained for Cu joints soldered with the Cu-Ni alloy/Sn composite solder for 60 min.

    Standardized Assessment of Automatic Segmentation of White Matter Hyperintensities and Results of the WMH Segmentation Challenge

    Quantification of cerebral white matter hyperintensities (WMH) of presumed vascular origin is of key importance in many neurological research studies. Currently, measurements are often still obtained from manual segmentations on brain MR images, which is a laborious procedure. Automatic WMH segmentation methods exist, but a standardized comparison of their performance is lacking. We organized a scientific challenge, in which developers could evaluate their methods on a standardized multi-center/-scanner image dataset, giving an objective comparison: the WMH Segmentation Challenge. Sixty T1 + FLAIR images from three MR scanners were released with manual WMH segmentations for training. A test set of 110 images from five MR scanners was used for evaluation. The segmentation methods had to be containerized and submitted to the challenge organizers. Five evaluation metrics were used to rank the methods: 1) Dice similarity coefficient; 2) modified Hausdorff distance (95th percentile); 3) absolute log-transformed volume difference; 4) sensitivity for detecting individual lesions; and 5) F1-score for individual lesions. In addition, the methods were ranked on their inter-scanner robustness. Twenty participants submitted their methods for evaluation. This paper provides a detailed analysis of the results. In brief, there is a cluster of four methods that rank significantly better than the others, with one clear winner. The inter-scanner robustness ranking shows that not all methods generalize to unseen scanners. The challenge remains open for future submissions and provides a public platform for method evaluation.
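
    To make two of the five ranking metrics concrete, the sketch below computes the Dice similarity coefficient and one plausible reading of the absolute log-transformed volume difference on binary masks; this is an illustrative implementation, not the challenge's official evaluation code.

```python
# Hedged sketch of two WMH-challenge-style metrics on boolean numpy masks;
# not the official evaluation code.
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient: 2|A n B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def abs_log_volume_diff(pred, truth):
    """Absolute difference of log-transformed lesion volumes (in voxels)."""
    return abs(np.log(max(pred.sum(), 1)) - np.log(max(truth.sum(), 1)))

pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True
truth = np.zeros((4, 4), dtype=bool); truth[1:4, 1:4] = True
print(dice(pred, truth), abs_log_volume_diff(pred, truth))
```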

    Primal–dual hybrid gradient method for distributionally robust optimization problems

    We focus on the discretization approach to distributionally robust optimization (DRO) problems and propose a numerical scheme originating from the primal–dual hybrid gradient (PDHG) method, which has recently been well studied in convex optimization. Specifically, we consider the cases where the ambiguity set of the discretized DRO model is defined through the moment condition and the Wasserstein metric, respectively. Moreover, we apply the PDHG to a portfolio selection problem modelled by DRO and verify its efficiency.
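
    For reference, the generic PDHG (Chambolle–Pock) iteration for $\min_x f(x) + g(Kx)$ alternates proximal steps on the primal and dual variables. The sketch below applies it to a toy lasso problem; the problem choice, step sizes and parameters are invented for illustration, and this is not the paper's DRO-specific scheme.

```python
# Hedged PDHG sketch on a toy lasso: min_x lam*||x||_1 + 0.5*||Ax - b||^2
#   y_{k+1}     = prox_{sigma g*}(y_k + sigma * K @ x_bar_k)
#   x_{k+1}     = prox_{tau f}(x_k - tau * K.T @ y_{k+1})
#   x_bar_{k+1} = 2 * x_{k+1} - x_k
import numpy as np

def pdhg_lasso(A, b, lam, iters=500):
    m, n = A.shape
    L = np.linalg.norm(A, 2)       # spectral norm of K = A
    tau = sigma = 0.9 / L          # ensures tau * sigma * L**2 < 1
    x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
    for _ in range(iters):
        # Dual step: prox of g*(y) for g(z) = 0.5*||z - b||^2.
        y = (y + sigma * (A @ x_bar) - sigma * b) / (1.0 + sigma)
        # Primal step: soft-thresholding, the prox of lam*||x||_1.
        x_new = x - tau * (A.T @ y)
        x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
        x_bar = 2 * x_new - x      # over-relaxation with theta = 1
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50); x_true[:3] = [2.0, -1.5, 1.0]
print(pdhg_lasso(A, A @ x_true, lam=0.1)[:5])
```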

    On the Disruptive Innovation Strategy of Renewable Energy Technology Diffusion: An Agent-Based Model

    Renewable energy technologies (RETs) are crucial for solving the world’s energy dilemma. However, the diffusion rate of RETs is still unsatisfactory, one critical reason being that conventional energy technologies (CETs) dominate energy markets. Emergent technologies that have inferior initial performance but eventually come to dominate their markets are frequently observed across industries, a pattern explained by the disruptive innovation theory (DIT). DIT suggests that, instead of competing with incumbent technologies in the dimension they dominate, it is wise to redefine the competition on a two-dimensional basis. Aiming to apply DIT to RET diffusion, this research builds an agent-based model (ABM) considering market-entry order, price, changing preferences and the RET improvement rate to simulate the competition dynamics between RETs and CETs. The findings include: the order of entering the market is crucial for a technology’s success; disruptive innovation is an effective approach to coping with the disadvantage of RETs as latecomers; in general, a lower price, higher consistency with consumers’ preferences and a higher improvement rate in the conventional dimension are beneficial to RET diffusion; counter-intuitively, however, increasing the RET improvement rate in the conventional dimension is beneficial to diffusion when the network is sparse but harmful when the network is dense.
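
    As a toy illustration of such a two-dimensional competition ABM (all parameters invented, not the paper’s calibration), the sketch below has agents with heterogeneous preference weights choose between a CET that is strong on the conventional dimension and an RET that is strong on the new dimension but improves on the conventional one over time:

```python
# Hedged ABM sketch of two-dimensional technology competition;
# all parameters are illustrative, not the paper's model.
import random

def simulate(steps=50, n_agents=1000, ret_improvement=0.02, seed=0):
    random.seed(seed)
    cet = {"conv": 1.0, "new": 0.2}   # incumbent: strong on conventional dim
    ret = {"conv": 0.5, "new": 1.0}   # disruptor: strong on new dimension
    share = 0.0
    for _ in range(steps):
        ret["conv"] += ret_improvement      # RET catches up over time
        adopters = 0
        for _ in range(n_agents):
            w = random.random()             # agent's weight on 'conv' dim
            u_cet = w * cet["conv"] + (1 - w) * cet["new"]
            u_ret = w * ret["conv"] + (1 - w) * ret["new"]
            adopters += u_ret > u_cet
        share = adopters / n_agents
    return share

print("final RET adoption share:", simulate())
```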