
    Extending the distributed computing infrastructure of the CMS experiment with HPC resources

    Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, among them the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments. Recently, the LHC experiments have been encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware they use, HPC installations can differ widely in their setup. In order to integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. For processing, access to primary data and metadata as well as access to the software is required. At Grid sites, all of this is provided through a set of services offered by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in processing campaigns.
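    As a rough illustration of the kind of capability checks such an integration involves, the sketch below probes whether a worker node offers CVMFS and outbound network access and picks a provisioning strategy accordingly. It is not taken from the paper; the repository path, host name, and fallback choices are assumptions.

        # Minimal sketch (not from the paper): probe what an HPC worker node offers
        # and decide how CMS software and data access could be provisioned there.
        # The repository path, host, and fallback strategy are illustrative assumptions.
        import os
        import socket

        def has_cvmfs(repo: str = "/cvmfs/cms.cern.ch") -> bool:
            """True if the CVMFS software repository is mounted on this node."""
            return os.path.isdir(repo)

        def has_outbound_network(host: str = "cmsweb.cern.ch", port: int = 443,
                                 timeout: float = 3.0) -> bool:
            """True if the node can open a connection to a remote service
            (many HPC centers block outbound traffic from compute nodes)."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def choose_provisioning() -> str:
            """Pick a software/data access strategy based on node capabilities."""
            if has_cvmfs() and has_outbound_network():
                return "grid-like: native CVMFS plus remote data and metadata services"
            if has_cvmfs():
                return "CVMFS available, but input data must be pre-staged locally"
            return "user-space fallback: unpacked software area and pre-staged inputs"

        if __name__ == "__main__":
            print(choose_provisioning())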

    Constraints on the χ_(c1) versus χ_(c2) polarizations in proton-proton collisions at √s = 8 TeV

    The polarizations of promptly produced χ_(c1) and χ_(c2) mesons are studied using data collected by the CMS experiment at the LHC, in proton-proton collisions at √s = 8 TeV. The χ_c states are reconstructed via their radiative decays χ_c → J/ψγ, with the photons being measured through conversions to e⁺e⁻, which allows the two states to be well resolved. The polarizations are measured in the helicity frame, through the analysis of the χ_(c2) to χ_(c1) yield ratio as a function of the polar or azimuthal angle of the positive muon emitted in the J/ψ → μ⁺μ⁻ decay, in three bins of J/ψ transverse momentum. While no differences are seen between the two states in terms of azimuthal decay angle distributions, they are observed to have significantly different polar anisotropies. The measurement favors a scenario where at least one of the two states is strongly polarized along the helicity quantization axis, in agreement with nonrelativistic quantum chromodynamics predictions. This is the first measurement of significantly polarized quarkonia produced at high transverse momentum.
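    Since the abstract describes the measurement in terms of decay angular distributions, the standard angular parametrization used in quarkonium polarization analyses may help the reader; the formula below is the generic helicity-frame form, not one quoted from the paper.

        % Generic dilepton decay angular distribution used in quarkonium
        % polarization analyses (helicity frame); not quoted from the abstract.
        \begin{equation}
          W(\cos\vartheta,\varphi) \propto \frac{1}{3+\lambda_\vartheta}
          \left( 1 + \lambda_\vartheta \cos^2\vartheta
                   + \lambda_\varphi \sin^2\vartheta \cos 2\varphi
                   + \lambda_{\vartheta\varphi} \sin 2\vartheta \cos\varphi \right)
        \end{equation}
        % \vartheta, \varphi: polar and azimuthal angles of the \mu^+ in the
        % J/\psi rest frame; the \chi_{c2}/\chi_{c1} yield ratio as a function
        % of these angles constrains differences between the \lambda parameters
        % of the two states.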

    Advancing throughput of HEP analysis work-flows using caching concepts

    High throughput and short turnaround cycles are core requirements for the efficient processing of data-intensive end-user analyses in High Energy Physics (HEP). Together with the tremendously increasing amount of data to be processed, this leads to enormous challenges for HEP storage systems, networks, and the distribution of data to computing resources for end-user analyses. Bringing data close to the computing resource is a very promising approach to overcoming throughput limitations and improving the overall performance. However, achieving data locality by placing multiple conventional caches inside a distributed computing infrastructure leads to redundant data placement and inefficient usage of the limited cache volume. The solution is a coordinated placement of critical data on computing resources, which enables matching each process of an analysis work-flow to its most suitable worker node in terms of data locality and thus reduces the overall processing time. This coordinated distributed caching concept was realized at KIT by developing the coordination service NaviX, which connects an XRootD cache proxy infrastructure with an HTCondor batch system. We give an overview of the coordinated distributed caching concept and of the experiences collected on a prototype system based on NaviX.
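    The following toy sketch illustrates the cache-aware matching idea described in the abstract: jobs are assigned to the worker node whose cache overlaps most with their input files, so that repeated accesses to the same data land on the same node. It is not NaviX code; node names, data structures, and the scoring rule are illustrative assumptions.

        # Toy sketch of coordinated, cache-aware job-to-node matching.
        # Not NaviX code; names and the scoring rule are illustrative assumptions.
        from typing import Dict, List, Set

        def pick_worker(job_inputs: Set[str],
                        cache_contents: Dict[str, Set[str]]) -> str:
            """Return the worker node whose cache overlaps most with the job's
            input files; ties are broken by node name for determinism."""
            return max(sorted(cache_contents),
                       key=lambda node: len(job_inputs & cache_contents[node]))

        def schedule(jobs: List[Set[str]],
                     cache_contents: Dict[str, Set[str]]) -> List[str]:
            """Assign each job to a node and record its inputs as cached there,
            so later jobs needing the same files are drawn to the same node."""
            assignments = []
            for inputs in jobs:
                node = pick_worker(inputs, cache_contents)
                cache_contents[node] |= inputs  # inputs are now cached on that node
                assignments.append(node)
            return assignments

        if __name__ == "__main__":
            caches = {"wn01": {"fileA", "fileB"}, "wn02": set()}
            jobs = [{"fileA"}, {"fileC", "fileD"}, {"fileA", "fileB"}]
            print(schedule(jobs, caches))  # -> ['wn01', 'wn01', 'wn01']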