
    A note on comonotonicity and positivity of the control components of decoupled quadratic FBSDE

    In this short note we are concerned with the solution of forward-backward stochastic differential equations (FBSDE) with drivers that grow quadratically in the control component (quadratic growth FBSDE, or qgFBSDE). The main theorem is a comparison result that allows comparing, componentwise, the signs of the control processes of two different qgFBSDE. As a byproduct, one obtains conditions that establish the positivity of the control process. Comment: accepted for publication

    Long-range angular correlations on the near and away side in p–Pb collisions at


    Event-shape engineering for inclusive spectra and elliptic flow in Pb-Pb collisions at √s_NN = 2.76 TeV

    Peer reviewed

    Production of He-4 and anti-He-4 in Pb-Pb collisions at √s_NN = 2.76 TeV at the LHC

    Results on the production of He-4 and anti-He-4 nuclei in Pb-Pb collisions at √s_NN = 2.76 TeV in the rapidity range |y| < 1, using the ALICE detector, are presented in this paper. The rapidity densities corresponding to 0-10% central events are found to be dN/dy(He-4) = (0.8 ± 0.4 (stat) ± 0.3 (syst)) × 10^-6 and dN/dy(anti-He-4) = (1.1 ± 0.4 (stat) ± 0.2 (syst)) × 10^-6, respectively. This is in agreement with the statistical thermal model expectation assuming the same chemical freeze-out temperature (T_chem = 156 MeV) as for light hadrons. The measured ratio of anti-He-4/He-4 is 1.4 ± 0.8 (stat) ± 0.5 (syst). (C) 2018 Published by Elsevier B.V.
    Peer reviewed
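As a quick sanity check on the quoted ratio, one can propagate the statistical and systematic uncertainties of the two yields, assuming they are uncorrelated. This is a simplifying assumption of this sketch only; the published ratio may treat correlated systematics differently:

```python
import math

# Yields quoted in the abstract, in units of 1e-6 (dN/dy)
he4,      he4_stat,  he4_syst  = 0.8, 0.4, 0.3   # He-4
anti_he4, anti_stat, anti_syst = 1.1, 0.4, 0.2   # anti-He-4

ratio = anti_he4 / he4

# Naive uncorrelated propagation: relative errors add in quadrature
stat = ratio * math.hypot(anti_stat / anti_he4, he4_stat / he4)
syst = ratio * math.hypot(anti_syst / anti_he4, he4_syst / he4)

print(f"anti-He-4 / He-4 = {ratio:.2f} +/- {stat:.2f} (stat) +/- {syst:.2f} (syst)")
```

This reproduces the quoted central value (≈1.4) and uncertainties of roughly 0.85 (stat) and 0.57 (syst), consistent with the abstract's 1.4 ± 0.8 (stat) ± 0.5 (syst).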

    First proton-proton collisions at the LHC as observed with the ALICE detector: measurement of the charged-particle pseudorapidity density at root s=900 GeV

    On 23 November 2009, during the early commissioning of the CERN Large Hadron Collider (LHC), two counter-rotating proton bunches were circulated concurrently in the machine for the first time, at the LHC injection energy of 450 GeV per beam. Although the proton intensity was very low, with only one pilot bunch per beam, and no systematic attempt was made to optimize the collision optics, all LHC experiments reported a number of collision candidates. In the ALICE experiment, the collision region was well centred in both the longitudinal and transverse directions, and 284 events were recorded in coincidence with the two passing proton bunches. The events were immediately reconstructed and analyzed both online and offline. We have used these events to measure the pseudorapidity density of charged primary particles in the central region. In the range |η| [...] collider. They also illustrate the excellent functioning and rapid progress of the LHC accelerator, and of both the hardware and software of the ALICE experiment, in this early start-up phase.

    Underlying Event measurements in pp collisions at √s = 0.9 and 7 TeV with the ALICE experiment at the LHC


    Study of ATLAS TRT performance with GRID and supercomputers.

    After the early success in discovering a new particle consistent with the long-awaited Higgs boson, the Large Hadron Collider experiments are ready for the precision measurements and further discoveries that will be made possible by much higher LHC collision rates from spring 2015. A proper understanding of detector performance under high-occupancy conditions is important for many ongoing physics analyses. The ATLAS Transition Radiation Tracker (TRT) is one of these detectors. The TRT is a large straw-tube tracking system that is the outermost of the three subsystems of the ATLAS Inner Detector (ID). The TRT contributes significantly to the resolution for high-pT tracks in the ID and provides excellent particle identification capabilities and electron-pion separation. The ATLAS experiment uses the Worldwide LHC Computing Grid (WLCG), a global collaboration of computer centers that provides seamless access to computing resources, including data storage capacity, processing power, sensors, visualisation tools and more. WLCG resources are fully utilised, so it is important to integrate opportunistic computing resources, such as supercomputers and commercial and academic clouds, so as not to curtail the range and precision of physics studies. One of the most important tasks suited to a supercomputer is the reconstruction of proton-proton events with a large number of interactions in the Transition Radiation Tracker. These studies were performed for the ATLAS TRT software group, and they make clear that high-performance computing contributions are becoming important and valuable. A very successful example of this approach is the Kurchatov Institute's Data Processing Center, which includes a Tier-1 Grid site and a supercomputer. TRT jobs were submitted through the same PanDA portal, transparently to physicists, and the results were transferred to the ATLAS Grid site.
The presented talk includes TRT performance results obtained using the ATLAS Grid and the "Kurchatov" supercomputer, as well as an analysis of CPU efficiency during these studies.

    High performance computing system in the framework of the Higgs boson studies

    Higgs boson physics is one of the most important and promising fields of study in modern high-energy physics. To perform precision measurements of the Higgs boson properties, fast and efficient instruments for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC Grid are not sufficient. One possibility to address this shortfall of computing resources is the use of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of Higgs boson physics and of Monte Carlo generation and event simulation techniques is presented. Modern high-performance computing systems and tests of their performance are also discussed. These studies were performed on the Worldwide LHC Computing Grid and at the Kurchatov Institute Data Processing Center, including the Tier-1 WLCG site and the OLCF Titan supercomputer. Monte Carlo simulated events produced with the Titan supercomputer were used in the Higgs boson analysis, and the results have been published by the ATLAS collaboration.

    Integration Of PanDA Workload Management System With Supercomputers

    The Large Hadron Collider (LHC), operating at the international CERN laboratory in Geneva, Switzerland, is leading Big Data driven scientific explorations. Experiments at the LHC explore the fundamental nature of matter and the basic forces that shape our universe, and were recently credited for the discovery of a Higgs boson. ATLAS, one of the largest collaborations ever assembled in the sciences, is at the forefront of research at the LHC. To address an unprecedented multi-petabyte data processing challenge, the ATLAS experiment relies on a heterogeneous distributed computational infrastructure. ATLAS uses the PanDA (Production and Data Analysis) workload management system to manage the workflow for all data processing at over 140 data centers. Through PanDA, ATLAS physicists see a single computing facility that enables rapid scientific breakthroughs for the experiment, even though the data centers are physically scattered all over the world. While PanDA currently uses more than 100,000 cores with a peak performance of 0.3 petaFLOPS, the next LHC data-taking runs will require more resources than Grid computing can possibly provide. To alleviate these challenges, LHC experiments are engaged in an ambitious program to expand the current computing model to include additional resources, such as the opportunistic use of supercomputers. We describe a project aimed at integrating the PanDA WMS with supercomputers in the United States, Europe and Russia (in particular the Titan supercomputer at the Oak Ridge Leadership Computing Facility (OLCF), the supercomputer at the National Research Center "Kurchatov Institute", IT4 in Ostrava, and others). The current approach utilizes a modified PanDA pilot framework for job submission to the supercomputers' batch queues and for local data management, with light-weight MPI wrappers to run single-threaded workloads in parallel on Titan's multi-core worker nodes.
This implementation was tested with a variety of Monte Carlo workloads on several supercomputing platforms. We present our current accomplishments in running the PanDA WMS at supercomputers and demonstrate our ability to use PanDA as a portal independent of the computing facility's infrastructure, for High Energy and Nuclear Physics as well as other data-intensive science applications, such as bioinformatics and astroparticle physics.
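The packing pattern described above (many independent single-threaded payloads fanned out across the cores of one worker node) can be sketched in miniature. The snippet below uses Python's multiprocessing pool as a stand-in for the light-weight MPI wrapper; the dummy payload and function names are illustrative, not the actual PanDA pilot code:

```python
from multiprocessing import Pool

def payload(job_id):
    """Stand-in for one single-threaded Monte Carlo job; in the real
    setup each rank would launch an independent simulation payload."""
    return job_id, sum(i * i for i in range(10_000))  # dummy CPU work

def run_node(n_cores=4, n_jobs=8):
    """Pack n_jobs independent serial payloads onto n_cores workers,
    mirroring how the wrapper fills a multi-core worker node."""
    with Pool(processes=n_cores) as pool:
        return dict(pool.map(payload, range(n_jobs)))

if __name__ == "__main__":
    results = run_node()
    print(f"{len(results)} payloads completed")
```

In the actual deployment the fan-out is done with MPI ranks inside a single batch job, which matters on machines like Titan whose schedulers favour few large allocations over many small serial jobs.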