
    Extending the distributed computing infrastructure of the CMS experiment with HPC resources

    Particle accelerators are an important tool for studying the fundamental properties of elementary particles. Currently, the highest-energy accelerator is the LHC at CERN in Geneva, Switzerland. Each of its four major detectors, among them the CMS detector, produces dozens of petabytes of data per year, to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 computing centers around the world and is used by a number of particle physics experiments. Recently, the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware they run, HPC installations can differ widely in their setup. To integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. Processing requires access to primary data and metadata, as well as access to the software. At Grid sites, all of this is provided through a set of services operated by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in processing campaigns.
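
    As one concrete illustration of the "enabled in user space" approach mentioned above, the sketch below shows how a job wrapper might mount the CMS software repository on an HPC worker node that has no system-wide CVMFS installation, using the cvmfsexec tool. This is a minimal sketch under stated assumptions, not the experiment's actual production setup; the payload command and its configuration file are placeholders.

    ```python
    # Minimal sketch, assuming the cvmfsexec helper
    # (https://github.com/cvmfs/cvmfsexec) has been unpacked into the job
    # directory: mount the CMS software repository in user space, then run
    # the payload inside the resulting environment.
    import subprocess
    import sys

    REPOS = ["cms.cern.ch"]         # CVMFS repository with the CMS software stack
    PAYLOAD = ["cmsRun", "cfg.py"]  # placeholder payload; cfg.py is hypothetical

    # cvmfsexec mounts the listed repositories in an unprivileged user
    # namespace and executes the command given after "--".
    cmd = ["./cvmfsexec", *REPOS, "--", *PAYLOAD]
    sys.exit(subprocess.run(cmd).returncode)
    ```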

    Integration of the Barcelona Supercomputing Center for CMS computing: Towards large scale production

    The CMS experiment is working to integrate an increasing number of High Performance Computing (HPC) resources into its distributed computing infrastructure. The case of the Barcelona Supercomputing Center (BSC) is particularly challenging as severe network restrictions prevent the use of CMS standard computing solutions. The CIEMAT CMS group has performed significant work in order to overcome these constraints and make BSC resources available to CMS. The developments include adapting the workload management tools, replicating the CMS software repository to BSC storage, providing an alternative access to detector conditions data, and setting up a service to transfer produced output data to a nearby storage facility. In this work, we discuss the current status of this integration activity and present recent developments, such as a front-end service to improve slot usage efficiency and an enhanced transfer service that supports the staging of input data for workflows at BSC. Moreover, significant efforts have been devoted to improving the scalability of the deployed solution, automating its operation, and simplifying the matchmaking of CMS workflows that are suitable for execution at BSC.
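
    The staging of outputs through a nearby storage facility lends itself to a short illustration. The following is a hedged sketch, not the actual CIEMAT service: a minimal transfer agent that watches a staging area on a shared filesystem and copies finished job outputs to a nearby Grid storage element with gfal-copy. All paths and endpoints below are hypothetical.

    ```python
    # Illustrative transfer agent (assumptions: a staging area on a shared
    # filesystem visible to the agent host, the gfal-copy client installed,
    # and jobs dropping a "job.done" marker when their outputs are complete).
    import subprocess
    import time
    from pathlib import Path

    STAGE = Path("/gpfs/cms/stageout")                    # hypothetical staging area
    DEST = "davs://storage.example.es/cms/store/bsc-out"  # hypothetical storage endpoint

    def transfer_finished_outputs():
        for marker in STAGE.glob("*/job.done"):
            for out in marker.parent.glob("*.root"):
                # -p creates missing parent directories at the destination
                subprocess.run(["gfal-copy", "-p", str(out), f"{DEST}/{out.name}"],
                               check=True)
            marker.unlink()  # do not pick up this job directory again

    while True:
        transfer_finished_outputs()
        time.sleep(60)  # poll once per minute
    ```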

    The Spanish CMS Analysis Facility at CIEMAT

    The increasingly large data volumes that the LHC experiments will accumulate in the coming years, especially in the High-Luminosity LHC era, call for a paradigm shift in the way experimental datasets are accessed and analyzed. The current model, based on data reduction on the Grid infrastructure followed by interactive analysis of manageably sized samples on physicists’ individual computers, will be superseded by the adoption of Analysis Facilities. This rapidly evolving concept is converging to include dedicated hardware infrastructures and computing services optimized for the effective analysis of large HEP data samples. This paper describes the implementation of this new analysis facility model at the CIEMAT institute in Spain to support the local CMS experiment community. Our work details the deployment of dedicated, high-performance hardware; the operation of data staging and caching services ensuring prompt and efficient access to CMS physics analysis datasets; and the integration and optimization of a custom analysis framework based on ROOT’s RDataFrame and the CMS NanoAOD format. Finally, performance results obtained by benchmarking the deployed infrastructure and software against a CMS analysis workflow are summarized.
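
    To give a flavor of the analysis model described above, the sketch below shows the kind of RDataFrame-over-NanoAOD workload such a facility targets. It is a generic illustration, not the custom CIEMAT framework; the input file name is a placeholder, while the "Events" tree and the nMuon/Muon_pt branches are standard NanoAOD content.

    ```python
    # Generic NanoAOD analysis sketch with ROOT's RDataFrame (the input file
    # is a placeholder; "Events", nMuon and Muon_pt are standard NanoAOD names).
    import ROOT

    ROOT.EnableImplicitMT()  # use all cores available on the analysis node

    df = ROOT.RDataFrame("Events", "nanoaod_sample.root")
    h = (df.Filter("nMuon >= 2", "at least two muons")
           .Define("leading_mu_pt", "Muon_pt[0]")
           .Histo1D(("mu_pt", "Leading muon;p_{T} [GeV];events", 50, 0.0, 200.0),
                    "leading_mu_pt"))

    # The event loop only runs when a result is first accessed.
    print("selected events in histogram:", h.GetEntries())
    ```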

    FCNC Top Quark Decays in Extra Dimensions

    The flavor-changing neutral top quark decay t → cX, where X is a neutral Standard Model particle, is computed in an extended model with a single extra dimension. The cases of the photon, X = γ, and of a Standard Model Higgs boson, X = H, are analyzed in detail in a non-linear R_ξ gauge. We find that the branching ratios can be enhanced by the dynamics originating in the extra dimension. We obtain Br(t → cγ) ≃ 10⁻¹⁰ for 1/R = 0.5 TeV and, for the decay t → cH, Br(t → cH) ≃ 10⁻¹⁰ for a low Higgs boson mass. The branching ratios go to zero as 1/R → ∞. Comment: accepted for publication in Eur. Phys. J. C; 16 pages, 2 figures.

    A case study of content delivery networks for the CMS experiment

    In 2029 the LHC will start the high-luminosity LHC program, with a boost in the integrated luminosity resulting in an unprecedented amount of experimental and simulated data samples to be transferred, processed, and stored in disk and tape systems across the Worldwide LHC Computing Grid. Content delivery network solutions are being explored with the purpose of improving the performance of compute tasks reading input data via the wide area network, and of providing a mechanism for cost-effective deployment of lightweight storage systems supporting traditional or opportunistic compute resources. In this contribution we study the benefits of applying cache solutions for the CMS experiment, in particular the configuration and deployment of XCache serving data to two Spanish WLCG sites supporting CMS: the Tier-1 site at PIC and the Tier-2 site at CIEMAT. We present the deployment and configuration of the system and the monitoring tools developed for it, as well as data popularity studies guiding the optimization of the cache configuration, the resulting CPU efficiency improvements for analysis tasks, and the cost benefits and regional impact of adopting this solution.
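
    As a hedged illustration of how such a cache is used from the client side (the endpoint and file name below are placeholders, not the actual PIC or CIEMAT deployment), an analysis task simply opens its input through the regional XCache: a cache miss is fetched once over the WAN and kept on the cache's local disk for subsequent tasks at either site.

    ```python
    # Sketch: reading a CMS file through a regional XCache rather than the
    # remote origin. The endpoint and logical file name are hypothetical.
    import ROOT

    XCACHE = "root://xcache.example.es:1094/"               # hypothetical cache endpoint
    LFN = "/store/data/Run2018A/DoubleMuon/NANOAOD/f.root"  # placeholder file name

    # The file path is unchanged; only the redirector in front of it differs,
    # so pointing jobs at the cache is a pure configuration change.
    f = ROOT.TFile.Open(XCACHE + LFN)
    events = f.Get("Events")
    print("entries:", events.GetEntries())
    ```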

    INF2 promotes the formation of detyrosinated microtubules necessary for centrosome reorientation in T cells

    T cell antigen receptor-proximal signaling components, Rho-family GTPases, and the formin proteins DIA1 and FMNL1 have been implicated in centrosome reorientation to the immunological synapse of T lymphocytes. However, the role of these molecules in the reorientation process is not yet defined. Here we find that a subset of microtubules became rapidly stabilized and that their α-tubulin subunits became posttranslationally detyrosinated after engagement of the T cell receptor. Formation of stabilized, detyrosinated microtubules required the formin INF2, which was also found to be essential for centrosome reorientation, but occurred independently of the massive tyrosine phosphorylation induced by the T cell receptor. The FH2 domain, mapped as the INF2 region involved in centrosome repositioning, was able to mediate the formation of stable, detyrosinated microtubules and to restore centrosome translocation in DIA1-, FMNL1-, Rac1-, and Cdc42-deficient cells. Further experiments indicated that microtubule stabilization was required for centrosome polarization. Our work identifies INF2 and stable, detyrosinated microtubules as central players in centrosome reorientation in T cells. This work was supported by grants BFU2009-07886 and CONSOLIDER COAT CSD2009-00016 to M.A. Alonso, and BFU2011-22859 to I. Correas (all of them from the Ministerio de Economía y Competitividad, Spain), and grant S2010/BMD-2305 from the Comunidad de Madrid to I. Correas.

    Constraints on the χ_(c1) versus χ_(c2) polarizations in proton-proton collisions at √s = 8 TeV

    The polarizations of promptly produced χ_(c1) and χ_(c2) mesons are studied using data collected by the CMS experiment at the LHC, in proton-proton collisions at √s = 8 TeV. The χ_c states are reconstructed via their radiative decays χ_c → J/ψγ, with the photons being measured through conversions to e⁺e⁻, which allows the two states to be well resolved. The polarizations are measured in the helicity frame, through the analysis of the χ_(c2) to χ_(c1) yield ratio as a function of the polar or azimuthal angle of the positive muon emitted in the J/ψ → μ⁺μ⁻ decay, in three bins of J/ψ transverse momentum. While no differences are seen between the two states in terms of azimuthal decay angle distributions, they are observed to have significantly different polar anisotropies. The measurement favors a scenario where at least one of the two states is strongly polarized along the helicity quantization axis, in agreement with nonrelativistic quantum chromodynamics predictions. This is the first measurement of significantly polarized quarkonia produced at high transverse momentum.