
    A Mediated Definite Delegation Model allowing for Certified Grid Job Submission

    Grid computing infrastructures need to provide traceability and accounting of their users' activity and protection against misuse and privilege escalation. A central aspect of multi-user Grid job environments is the necessary delegation of privileges in the course of a job submission. With respect to these generic requirements, this document describes an improved handling of multi-user Grid jobs in the ALICE ("A Large Ion Collider Experiment") Grid Services. A security analysis of the ALICE Grid job model is presented with derived security objectives, followed by a discussion of existing approaches of unrestricted delegation based on X.509 proxy certificates and the Grid middleware gLExec. Unrestricted delegation has severe security consequences and limitations, most importantly allowing for identity theft and forgery of delegated assignments. These limitations are discussed and formulated, both in general and with respect to an adoption in line with multi-user Grid jobs. Based on the architecture of the ALICE Grid Services, a new general model of mediated definite delegation is developed and formulated, allowing a broker to assign context-sensitive user privileges to agents. The model provides strong accountability and long-term traceability. A prototype implementation allowing for certified Grid jobs is presented, including a potential interaction with gLExec. The achieved improvements regarding system security, malicious job exploitation, identity protection, and accountability are emphasized, followed by a discussion of non-repudiation in the face of malicious Grid jobs.
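    As a minimal illustration of the mediated definite delegation idea (a sketch only, not the ALICE implementation; Ed25519 stands in for the X.509 machinery and all field names are hypothetical), a broker can sign a job-specific assignment so that the executing agent carries only the privileges of that one job rather than a full proxy of the user identity:

```python
# Sketch of a broker-signed, job-specific delegation (hypothetical fields).
# Requires the third-party "cryptography" package; Ed25519 stands in for X.509 signing.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

broker_key = Ed25519PrivateKey.generate()  # broker's long-term signing key

assignment = {
    "user_dn": "/C=XX/O=Example/CN=Grid User",           # accountable end user
    "job_id": 1234567,                                    # binds the delegation to one job
    "allowed_ops": ["read:/alice/sim/2023/", "write:/alice/output/1234567/"],
    "not_after": "2024-01-01T00:00:00Z",                  # definite, time-limited scope
}
payload = json.dumps(assignment, sort_keys=True).encode()
signature = broker_key.sign(payload)

# Any service can later verify the assignment against the broker's public key,
# giving long-term traceability without the agent ever holding the user's proxy.
broker_key.public_key().verify(signature, payload)        # raises InvalidSignature if tampered
```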

    Spin asymmetry in Muon-Deuteron deep inelastic scattering on a transversely polarized target


    EPN2EOS Data Transfer System

    ALICE is one of the four large experiments at the CERN LHC designed to study the structure and origins of matter in collisions of heavy ions and protons at ultra-relativistic energies. To collect, store, and process the experimental data, ALICE uses hundreds of thousands of CPU cores and more than 400 PB of different types of storage resources. During LHC Run 3, started in 2022, ALICE is running with an upgraded detector and an entirely new data acquisition system (DAQ), capable of collecting 100 times more events than the previous setup. One of the key elements of the new DAQ is the Event Processing Nodes (EPN) farm, which currently comprises 250 servers, each equipped with 8 AMD MI50 GPU accelerators. The role of the EPN cluster is to compress the detector data in real time. During heavy-ion data taking the experiment collects about 900 GB/s from the sensors; this is compressed down to 100 GB/s and then written to a 130 PB persistent disk buffer for further processing. The EPNs handle data streams, called Time Frames, of 10 ms duration from the detector independently of each other and write the output, called Compressed Time Frames (CTF), to their local disks. The CTFs must be transferred to the disk buffer and removed from the EPNs as soon as possible in order to continue collecting data from the experiment. The data transfer functions are performed by the new EPN2EOS system that was introduced in the ALICE experiment in Run 3. EPN2EOS is highly optimized to perform the copy functions in parallel with the EPN data compression algorithms and has extensive monitoring and alerting capabilities to support the ALICE experiment operators. The service has been in production since November 2021. This paper presents the architecture, implementation, and analysis of its first years of utilization.
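    A quick back-of-the-envelope check of the rates quoted above (the per-node and buffer-lifetime figures are derived here, not taken from the paper):

```python
# Derived from the numbers in the abstract: 900 GB/s in, 100 GB/s out, 250 EPNs, 130 PB buffer.
DETECTOR_RATE_GBS = 900          # raw rate from the sensors during heavy-ion running
CTF_RATE_GBS = 100               # rate after real-time compression on the EPNs
N_EPN = 250                      # servers in the EPN farm
BUFFER_PB = 130                  # persistent disk buffer

compression_factor = DETECTOR_RATE_GBS / CTF_RATE_GBS     # ~9x
per_node_output_gbs = CTF_RATE_GBS / N_EPN                 # ~0.4 GB/s of CTFs per EPN
buffer_seconds = BUFFER_PB * 1e6 / CTF_RATE_GBS            # 1 PB = 1e6 GB
buffer_days = buffer_seconds / 86400                       # ~15 days at the full CTF rate

print(f"compression ~{compression_factor:.0f}x, "
      f"~{per_node_output_gbs:.2f} GB/s per EPN, "
      f"buffer lasts ~{buffer_days:.0f} days at full rate")
```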

    The ALICE Grid Workflow for LHC Run 3

    In preparation for LHC Runs 3 and 4, the ALICE Collaboration has moved to a new Grid middleware, JAliEn, and workflow management system. The migration was dictated by the substantially higher requirements on the Grid infrastructure in terms of payload complexity, increased number of jobs, and managed data volume, all of which required a complete rewrite of the middleware using modern software languages and technologies. Through containerisation and self-contained binaries managed by the JAliEn middleware, we provide a uniform execution environment across sites and various architectures, including accelerators. The model and implementation have proven their scalability and can be easily deployed across sites with minimal intervention. This contribution outlines the architecture of the new Grid workflow as deployed in production and the workflow process. Specifically, it is shown how core components are moved and bootstrapped through CVMFS, enabling the middleware to run anywhere, fully independent of the host system. Furthermore, we examine how new middleware releases, containers, and their runtimes are centrally maintained and easily deployed across the Grid, also by means of a common build system.
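    A minimal sketch of the bootstrap idea described above, assuming illustrative (not actual) CVMFS paths: both the container runtime and the middleware come from CVMFS, so the payload can run identically on any worker node regardless of the host OS:

```python
# Illustrative only: the CVMFS paths and entry point below are hypothetical,
# but the pattern is the same - runtime, image, and middleware all come from CVMFS.
import subprocess

CVMFS = "/cvmfs/alice.cern.ch"                        # ALICE CVMFS mount point
runtime = f"{CVMFS}/containers/bin/apptainer"         # hypothetical path to the container runtime
image = f"{CVMFS}/containers/fs/jalien-default/"      # hypothetical unpacked container image
entrypoint = "/opt/jalien/jalien"                     # hypothetical entry point inside the image

# Bind-mount /cvmfs into the container so the bootstrapped binaries stay visible.
subprocess.run(
    [runtime, "exec", "--bind", "/cvmfs", image, entrypoint, "jobagent"],
    check=True,
)
```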

    Job splitting on the ALICE grid, introducing the new job optimizer for the ALICE grid middleware

    This contribution introduces the job optimizer service for the next-generation ALICE Grid middleware, JAliEn (Java Alice Environment). It is a continuous service running on central machines and is essentially responsible for splitting jobs into subjobs, to then be distributed and executed on the ALICE grid. There are several ways of creating subjobs, based on various strategies relevant to the aim of any particular grid job. Therefore, a user has to explicitly declare that a job is to be split and also define the strategy to be used. The new job optimizer service aims to retain the functionality of the old ALICE grid middleware from the user’s point of view while increasing performance and throughput. One aspect of increasing performance is how the job optimizer interacts with the job queue database. A different way of describing subjobs in the database is presented, to minimize resource usage. There is also a focus on limiting communication with the database, as this is already a congested area. Furthermore, a new solution to splitting based on the locality of job input data is presented, aiming to split into subjobs more efficiently and thereby make better use of resources on the grid to further increase throughput. Added options for the user regarding splitting by locality, such as setting a minimum limit on subjob size, are also explored.
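    As a rough illustration of splitting by input-data locality (a sketch under assumed inputs, not the JAliEn job optimizer code), the input files can be grouped by the storage element that holds them, with groups below a minimum subjob size merged together:

```python
# Hypothetical helper: group a job's input files by storage element, merging
# small groups so that no subjob is created for just a handful of files.
from collections import defaultdict

def split_by_locality(inputs, min_files_per_subjob=10):
    """inputs: list of (lfn, storage_element) pairs; returns lists of LFNs, one per subjob."""
    by_se = defaultdict(list)
    for lfn, se in inputs:
        by_se[se].append(lfn)

    subjobs, leftovers = [], []
    for lfns in by_se.values():
        if len(lfns) >= min_files_per_subjob:
            subjobs.append(lfns)          # one subjob per well-populated storage element
        else:
            leftovers.extend(lfns)        # too small on its own: merge with other strays

    if leftovers:
        subjobs.append(leftovers)         # catch-all subjob for scattered replicas
    return subjobs

# Example: two well-populated sites plus one stray replica elsewhere.
subjobs = split_by_locality(
    [(f"/alice/data/f{i:03d}.root", "CERN::EOS") for i in range(30)]
    + [(f"/alice/data/g{i:03d}.root", "FZK::SE") for i in range(20)]
    + [("/alice/data/h001.root", "KISTI::SE")]
)
```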

    Site Sonar - A Flexible and Extensible Infrastructure Monitoring Tool for ALICE Computing Grid

    The ALICE experiment at the CERN Large Hadron Collider relies on a massive, distributed Computing Grid for its data processing. The ALICE Computing Grid is built by combining a large number of individual computing sites distributed globally. These Grid sites are maintained by different institutions across the world and contribute thousands of worker nodes possessing different capabilities and configurations. Developing software for Grid operations that works on all nodes while harnessing the maximum capabilities offered by any given Grid site is challenging without advance knowledge of what capabilities each site offers. Site Sonar is an architecture-independent Grid infrastructure monitoring framework developed by the ALICE Grid team to monitor the infrastructure capabilities and configurations of worker nodes at sites across the ALICE Grid without the need to contact local site administrators. Site Sonar is a highly flexible and extensible framework that offers infrastructure metric collection without local agent installations at Grid sites. This paper introduces the Site Sonar Grid infrastructure monitoring framework and reports significant findings acquired about the ALICE Computing Grid using Site Sonar.
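    A minimal sketch of the agentless approach described above (probe contents and the collection endpoint are hypothetical, not the Site Sonar code): a short-lived probe submitted as an ordinary Grid job gathers worker-node facts and reports them back, so nothing permanent is installed at the site:

```python
# Hypothetical probe: collect a few worker-node facts and report them as JSON.
import json, os, platform, shutil, urllib.request

def collect_node_facts():
    return {
        "hostname": platform.node(),
        "kernel": platform.release(),
        "arch": platform.machine(),
        "cpus": os.cpu_count(),
        "has_apptainer": shutil.which("apptainer") is not None,
        "has_cvmfs": os.path.isdir("/cvmfs"),
    }

report = json.dumps(collect_node_facts()).encode()
request = urllib.request.Request(
    "https://sitesonar.example.cern.ch/report",    # hypothetical collection endpoint
    data=report,
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # reporting left disabled in this sketch
```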

    Evidence for an exotic S=-2, Q=-2 baryon resonance in proton-proton collisions at the CERN SPS

    Results of resonance searches in the Ξ⁻π⁻, Ξ⁻π⁺, Ξ̄⁺π⁻, and Ξ̄⁺π⁺ invariant mass spectra in proton-proton collisions at √s = 17.2 GeV are presented. Evidence is shown for the existence of a narrow Ξ⁻π⁻ baryon resonance with a mass of 1.862 ± 0.002 GeV/c² and a width below the detector resolution of about 0.018 GeV/c². The significance is estimated to be above 4.2σ. This state is a candidate for the hypothetical exotic Ξ⁻⁻3/2 baryon with S = −2, I = 3/2, and a quark content of (dsdsū). At the same mass, a peak is observed in the Ξ⁻π⁺ spectrum which is a candidate for the Ξ⁰3/2 member of this isospin quartet with a quark content of (dsusd̄). The corresponding antibaryon spectra also show enhancements at the same invariant mass.

    System size and centrality dependence of the balance function in A+A collisions at √sNN = 17.2 GeV

    Electric charge correlations were studied for p+p, C+C, Si+Si, and centrality-selected Pb+Pb collisions at √sNN = 17.2 GeV with the NA49 large acceptance detector at the CERN SPS. In particular, long-range pseudorapidity correlations of oppositely charged particles were measured using the balance function method. The width of the balance function decreases with increasing system size and centrality of the reactions. This decrease could be related to an increasing delay of hadronization in central Pb+Pb collisions.
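    For context, the balance function referred to above is commonly defined (a standard form given here for reference, not quoted from this paper) as

```latex
B(\Delta\eta) = \frac{1}{2}\left[
  \frac{\langle N_{+-}(\Delta\eta)\rangle - \langle N_{++}(\Delta\eta)\rangle}{\langle N_{+}\rangle}
  + \frac{\langle N_{-+}(\Delta\eta)\rangle - \langle N_{--}(\Delta\eta)\rangle}{\langle N_{-}\rangle}
\right],
```

    where N_{+-}(Δη), N_{++}(Δη), etc. count charged-particle pairs with the given charge combination and pseudorapidity separation Δη, and N_{±} are the single-charge multiplicities; its width is the quantity whose system-size dependence is reported above.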

    Azimuthal anisotropy of charged jet production in √sNN = 2.76 TeV Pb-Pb collisions

    We present measurements of the azimuthal dependence of charged jet production in central and semi-central √sNN = 2.76 TeV Pb-Pb collisions with respect to the second harmonic event plane, quantified as v2^ch(jet). Jet finding is performed employing the anti-kT algorithm with a resolution parameter R = 0.2 using charged tracks from the ALICE tracking system. The contribution of the azimuthal anisotropy of the underlying event is taken into account event-by-event. The remaining (statistical) region-to-region fluctuations are removed on an ensemble basis by unfolding the jet spectra for different event plane orientations independently. Significant non-zero v2^ch(jet) is observed in semi-central collisions (30-50% centrality) for 20 < pT^ch(jet) < 90 GeV/c. The azimuthal dependence of the charged jet production is similar to the dependence observed for jets comprising both charged and neutral fragments, and compatible with measurements of the v2 of single charged particles at high pT. Good agreement between the data and predictions from JEWEL, an event generator simulating parton shower evolution in the presence of a dense QCD medium, is found in semi-central collisions. (C) 2015 CERN for the benefit of the ALICE Collaboration. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
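    For reference, the azimuthal anisotropy coefficient measured here follows the standard Fourier parametrisation of the jet yield relative to the second harmonic event plane Ψ2 (the general definition, given here for context rather than quoted from this paper):

```latex
\frac{dN_{\mathrm{jet}}}{d(\varphi_{\mathrm{jet}} - \Psi_{2})} \propto
  1 + 2\, v_{2}^{\mathrm{ch\,jet}} \cos\!\left[ 2 (\varphi_{\mathrm{jet}} - \Psi_{2}) \right],
```

    so a positive v2^ch(jet) corresponds to more jets emitted in the event plane than out of it, as observed in the semi-central data.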