
    Information and treatment of unknown correlations in the combination of measurements using the BLUE method

    We discuss the effect of large positive correlations in the combinations of several measurements of a single physical quantity using the Best Linear Unbiased Estimate (BLUE) method. We suggest a new approach for comparing the relative weights of the different measurements in their contributions to the combined knowledge about the unknown parameter, using the well-established concept of Fisher information. We argue, in particular, that one contribution to information comes from the collective interplay of the measurements through their correlations and that this contribution cannot be attributed to any of the individual measurements alone. We show that negative coefficients in the BLUE weighted average invariably indicate the presence of a regime of high correlations, where the effect of further increasing some of these correlations is that of reducing the error on the combined estimate. In these regimes, we stress that assuming fully correlated systematic uncertainties is not a truly conservative choice, and that the correlations provided as input to BLUE combinations need to be assessed with extreme care instead. In situations where the precise evaluation of these correlations is impractical, or even impossible, we provide tools to help experimental physicists perform more conservative combinations.
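The high-correlation regime described in this abstract can be illustrated with a small numerical sketch (a toy example, not the authors' tooling): a BLUE combination of two measurements, where one weight turns negative once the correlation exceeds the ratio of the two uncertainties, and where further increasing the correlation shrinks the combined error.

```python
import numpy as np

def blue_combine(y, cov):
    """Standard BLUE combination: the weights minimise the variance of
    w @ y subject to sum(w) == 1 (the unbiasedness constraint)."""
    u = np.ones(len(y))
    cinv = np.linalg.inv(cov)
    norm = u @ cinv @ u
    w = cinv @ u / norm          # BLUE weights
    return w @ y, 1.0 / norm, w  # estimate, variance, weights

# Two measurements of the same quantity with sigma1 = 1, sigma2 = 2
# (illustrative values); scan the correlation coefficient rho.
y = np.array([10.0, 11.0])
for rho in (0.0, 0.6, 0.9, 0.95):
    cov = np.array([[1.0,       rho * 2.0],
                    [rho * 2.0, 4.0      ]])
    est, var, w = blue_combine(y, cov)
    print(f"rho={rho:.2f}  weights={np.round(w, 3)}  sigma={np.sqrt(var):.3f}")
```

For rho above sigma1/sigma2 = 0.5 the second weight is negative, and the combined sigma decreases as rho grows from 0.9 to 0.95, matching the abstract's warning that "fully correlated" is not a conservative assumption in this regime.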

    Integrating LHCb workflows on HPC resources: status and strategies

    High Performance Computing (HPC) supercomputers are expected to play an increasingly important role in HEP computing in the coming years. While HPC resources are not necessarily the optimal fit for HEP workflows, computing time at HPC centers on an opportunistic basis has already been available to the LHC experiments for some time, and it is also possible that part of the pledged computing resources will be offered as CPU time allocations at HPC centers in the future. The integration of the experiment workflows to make the most efficient use of HPC resources is therefore essential. This paper describes the work that has been necessary to integrate LHCb workflows at a specific HPC site, the Marconi-A2 system at CINECA in Italy, where LHCb benefited from a joint PRACE (Partnership for Advanced Computing in Europe) allocation with the other Large Hadron Collider (LHC) experiments. This has required addressing two types of challenges: on the software application workloads, optimising their performance on a many-core hardware architecture that differs significantly from those traditionally used in WLCG (Worldwide LHC Computing Grid), by reducing the memory footprint using a multi-process approach; and in the distributed computing area, submitting these workloads using more than one logical processor per job, which had not previously been done in LHCb. Comment: 9 pages, submitted to CHEP2019 proceedings in EPJ Web of Conferences

    Design and engineering of a simplified workflow execution for the MG5aMC event generator on GPUs and vector CPUs

    Physics event generators are essential components of the data analysis software chain of high energy physics experiments, and important consumers of their CPU resources. Improving the software performance of these packages on modern hardware architectures, such as those deployed at HPC centers, is essential in view of the upcoming HL-LHC physics programme. In this paper, we describe an ongoing activity to reengineer the Madgraph5_aMC@NLO physics event generator, primarily to port it and allow its efficient execution on GPUs, but also to modernize it and optimize its performance on vector CPUs. We describe the motivation, engineering process and software architecture design of our developments, as well as the current challenges and future directions for this project. This paper is based on our submission to vCHEP2021 in March 2021, complemented with a few preliminary results that we presented during the conference. Further details and updated results will be given in later publications. Comment: 17 pages, 6 figures, submitted to vCHEP2021 proceedings in EPJ Web of Conferences; minor changes to address comments from the EPJWOC reviewer

    Acceleration beyond lowest order event generation: An outlook on further parallelism within MadGraph5_aMC@NLO

    An important area of high energy physics studies at the Large Hadron Collider (LHC) currently concerns the need for more extensive and precise comparison data. Important tools in this realm are event reweighting and the evaluation of more precise next-to-leading order (NLO) processes via Monte Carlo event generators, especially in the context of the upcoming High Luminosity LHC. Current event generators need to improve their throughput for these studies. MadGraph5_aMC@NLO (MG5aMC) is an event generator used by the LHC experiments which has been accelerated considerably with a port to GPU and vector CPU architectures, but as of yet only for leading order processes. In this contribution, a prototype for event reweighting using the accelerated MG5aMC software, as well as plans for an NLO implementation, are presented. Comment: 8 pages, 3 figures, proceedings of CHEP 2023, submitted to EPJ Web of Conferences

    Madgraph5_aMC@NLO on GPUs and vector CPUs: Experience with the first alpha release

    Madgraph5_aMC@NLO is one of the most frequently used Monte Carlo event generators at the LHC, and an important consumer of compute resources. The software has been reengineered to maintain the overall look and feel of the user interface while speeding up event generation on CPUs and GPUs. The most computationally intensive part, the calculation of "matrix elements", is offloaded to new implementations optimised for GPUs and for CPU vector instructions, using event-level data parallelism. We present the work to support accelerated leading-order QCD processes, and discuss how this work is going to be released to Madgraph5_aMC@NLO's users. Comment: 8 pages, 3 figures, Proceedings of CHEP 2023, submitted to EPJ Web of Conferences

    Search for CP Violation in the Decay Z -> b (b bar) g

    About three million hadronic decays of the Z collected by ALEPH in the years 1991-1994 are used to search for anomalous CP violation beyond the Standard Model in the decay $Z \to b\bar{b}g$. The study is performed by analyzing angular correlations between the two quarks and the gluon in three-jet events and by measuring the differential two-jet rate. No signal of CP violation is found. For the combinations of anomalous CP violating couplings $\hat{h}_b = \hat{h}_{Ab}g_{Vb}-\hat{h}_{Vb}g_{Ab}$ and $h^{\ast}_b = \sqrt{\hat{h}_{Vb}^{2}+\hat{h}_{Ab}^{2}}$, limits of $\hat{h}_b < 0.59$ and $h^{\ast}_{b} < 3.02$ are given at 95% CL. Comment: 8 pages, 1 postscript figure, uses here.sty, epsfig.sty

    DNA profiling, telomere analysis and antioxidant properties as tools for monitoring ex situ seed longevity

    Background and Aims The germination test currently represents the most widely used method to assess seed viability in germplasm banks, despite the difficulties caused by the occurrence of seed dormancy. Furthermore, seed longevity can vary considerably across species and populations from different environments, and studies of the eco-physiological processes underlying such variation are still limited. The aim of the present work was the identification of reliable molecular markers that might help monitor seed deterioration. Methods Dry seeds were subjected to artificial aging and collected at different time points for molecular/biochemical analyses. DNA damage was measured using the RAPD (Random Amplified Polymorphic DNA) approach, while the seed antioxidant profile was obtained using both the DPPH (1,1-diphenyl-2-picrylhydrazyl) assay and the Folin-Ciocalteu reagent method. Electron Paramagnetic Resonance (EPR) provided profiles of free radicals. Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR) was used to assess the expression profiles of the antioxidant genes MT2 (Type 2 Metallothionein) and SOD (Superoxide Dismutase). A modified qRT-PCR protocol was used to determine telomere length. Key Results The RAPD profiles highlighted different capacities of the two Silene species to overcome DNA damage induced by artificial aging. The antioxidant profiles of dry and rehydrated seeds revealed that the high-altitude taxon Silene acaulis was characterised by a lower antioxidant specific activity. Significant up-regulation of the MT2 and SOD genes was observed only in the rehydrated seeds of the low-altitude species. Rehydration resulted in telomere lengthening in both Silene species. Conclusions Different seed viability markers have been selected for plant species showing inherent variation in seed longevity. RAPD analysis, quantification of the redox activity of non-enzymatic antioxidant compounds and gene expression profiling provide deeper insights into seed viability during storage. Telomere lengthening is a promising tool to discriminate between short- and long-lived species.

    Speeding up Madgraph5_aMC@NLO through CPU vectorization and GPU offloading: towards a first alpha release

    The matrix element (ME) calculation in any Monte Carlo physics event generator is an ideal fit for implementing data parallelism with lockstep processing on GPUs and vector CPUs. For complex physics processes where the ME calculation is the computational bottleneck of event generation workflows, this can lead to large overall speedups by efficiently exploiting these hardware architectures, which are now largely underutilized in HEP. In this paper, we present the status of our work on the reengineering of the Madgraph5_aMC@NLO event generator at the time of the ACAT2022 conference. The progress achieved since our previous publication in the ICHEP2022 proceedings is discussed for our implementations of the ME calculations in vectorized C++, in CUDA and in the SYCL framework, as well as their integration into the existing MadEvent framework. The outlook towards a first alpha release of the software supporting QCD LO processes usable by the LHC experiments is also discussed. Comment: 7 pages, 4 figures, 4 tables; submitted to ACAT 2022 proceedings in IO
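The event-level data parallelism this abstract refers to can be sketched in a toy example (the formula below is a hypothetical stand-in, not the real MG5aMC matrix element): because the same arithmetic is applied to every event, a per-event loop can be replaced by one lockstep computation over a whole event batch, which is exactly the pattern that maps onto SIMD vector units and GPU threads.

```python
import numpy as np

def me_scalar(s, t):
    # Toy per-event "matrix element": one event at a time.
    # Hypothetical 2->2 amplitude-squared shape for illustration only.
    return (t * t + (s - t) ** 2) / (s * s)

def me_batch(s, t):
    # Identical formula, but s and t are arrays of per-event invariants:
    # a single NumPy expression processes the whole batch in lockstep,
    # with no data-dependent branching between events.
    return (t * t + (s - t) ** 2) / (s * s)

rng = np.random.default_rng(0)
s = rng.uniform(1.0, 2.0, 10_000)  # toy per-event invariants
t = rng.uniform(0.1, 0.9, 10_000)

loop = np.array([me_scalar(si, ti) for si, ti in zip(s, t)])
batch = me_batch(s, t)
assert np.allclose(loop, batch)  # same physics result, batched layout
```

The batched form is the one that vector CPUs and GPUs execute efficiently; the reengineering described in the paper applies this idea to the real, far more complex ME code.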

    Search for R-Parity Violating Decays of Supersymmetric Particles in $e^{+}e^{-}$ Collisions at Centre-of-Mass Energies near 183 GeV

    Searches for pair-production of supersymmetric particles under the assumption that R-parity is violated via a single dominant $LL\bar{E}$, $LQ\bar{D}$ or $\bar{U}\bar{D}\bar{D}$ coupling are performed using the data collected by the ALEPH collaboration at centre-of-mass energies of 181-184 GeV. The observed candidate events in the data are in agreement with the Standard Model expectations. Upper limits on the production cross-sections and lower limits on the masses of charginos, sleptons, squarks and sneutrinos are derived.