
    Integrating multiple scientific computing needs via a Private Cloud infrastructure

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to dynamically and efficiently allocate resources to any application and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and by using mainstream contextualization tools such as CloudInit.
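
    Since the infrastructure exposes an EC2-compatible API and relies on CloudInit for contextualization, a virtual machine could in principle be provisioned with any standard EC2 client. The following is a minimal sketch using boto3 against a hypothetical OpenNebula EC2-compatible endpoint; the endpoint URL, credentials, image ID and user-data contents are illustrative assumptions, not values from the paper.

        import boto3

        # Hypothetical EC2-compatible endpoint exposed by the private cloud;
        # URL, credentials and image ID are placeholders, not from the paper.
        ec2 = boto3.client(
            "ec2",
            endpoint_url="https://cloud.example.org:4567",
            aws_access_key_id="USER",
            aws_secret_access_key="SECRET",
            region_name="default",
        )

        # CloudInit user-data contextualizes the VM at boot (e.g. installs
        # the batch system and joins the farm); contents are illustrative.
        user_data = """#cloud-config
        packages:
          - htcondor
        runcmd:
          - [systemctl, start, condor]
        """

        ec2.run_instances(
            ImageId="ami-00000001",   # placeholder image registered in the cloud
            MinCount=1,
            MaxCount=1,
            InstanceType="m1.large",
            UserData=user_data,
        )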

    Managing a tier-2 computer centre with a private cloud infrastructure

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable for smaller sites as well. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs such as EC2 and OCCI.

    The Need for a Versioned Data Analysis Software Environment

    Scientific results in high-energy physics and in many other fields often rely on complex software stacks. In order to support reproducibility and scrutiny of the results, it is good practice to use open source software and to cite software packages and versions. With the ever-growing complexity of scientific software on the one hand and IT life-cycles of only a few years on the other, however, it turns out that despite source code availability, the setup and validation of a minimal usable analysis environment can easily become prohibitively expensive. We argue that there is a substantial gap between merely having access to versioned source code and the ability to create a data analysis runtime environment. In order to preserve all the different variants of the data analysis runtime environment, we developed a snapshotting file system optimized for software distribution. We report on our experience in preserving the analysis environment for high-energy physics, such as the software landscape used to discover the Higgs boson at the Large Hadron Collider.
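
    As a purely illustrative sketch of the core idea behind such a snapshotting store (content-addressed storage plus per-snapshot catalogs, so that many software versions share identical files), consider the toy Python example below. All names and the on-disk layout are invented for illustration and do not reflect the authors' actual implementation.

        import hashlib
        import json
        import os
        from pathlib import Path

        STORE = Path("store/objects")     # content-addressed object store (toy layout)
        CATALOGS = Path("store/catalogs")

        def snapshot(tree, name):
            # Store every file under its content hash and write a small catalog.
            # Identical files across snapshots are stored only once, so keeping
            # many versions of a software stack costs little extra space.
            STORE.mkdir(parents=True, exist_ok=True)
            CATALOGS.mkdir(parents=True, exist_ok=True)
            catalog = {}
            for root, _dirs, files in os.walk(tree):
                for fname in files:
                    path = Path(root) / fname
                    data = path.read_bytes()
                    digest = hashlib.sha1(data).hexdigest()
                    obj = STORE / digest
                    if not obj.exists():          # deduplication happens here
                        obj.write_bytes(data)
                    catalog[str(path.relative_to(tree))] = digest
            (CATALOGS / (name + ".json")).write_text(json.dumps(catalog, indent=2))

        def restore(name, target):
            # Materialize a snapshot by copying objects back to their paths.
            catalog = json.loads((CATALOGS / (name + ".json")).read_text())
            for rel, digest in catalog.items():
                out = Path(target) / rel
                out.parent.mkdir(parents=True, exist_ok=True)
                out.write_bytes((STORE / digest).read_bytes())

        # Example: snapshot("/opt/sw", "v1"); later, restore("v1", "/tmp/sw-v1")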

    PROOF as a Service on the Cloud: a Virtual Analysis Facility based on the CernVM ecosystem

    PROOF, the Parallel ROOT Facility, is a ROOT-based framework which enables interactive parallelism for event-based tasks on a cluster of computing nodes. Although PROOF can be used simply from within a ROOT session with no additional requirements, deploying and configuring a PROOF cluster used to be far less straightforward. Recently, considerable effort has gone into provisioning generic PROOF analysis facilities with zero configuration, with the added advantage of improving both stability and scalability and making the deployment operations feasible even for the end user. Since a growing amount of large-scale computing resources is nowadays made available by Cloud providers in virtualized form, we have developed the Virtual PROOF-based Analysis Facility: a cluster appliance combining the solid CernVM ecosystem and PoD (PROOF on Demand), ready to be deployed on the Cloud and leveraging Cloud features such as elasticity. We show how this approach is effective both for sysadmins, who have little or no configuration to do to run it on their Clouds, and for end users, who are ultimately in full control of their PROOF cluster and can even restart it themselves in the unfortunate event of a major failure. We also show how elasticity leads to a more optimal and uniform usage of Cloud resources. (Talk from Computing in High Energy and Nuclear Physics 2013 (CHEP2013), Amsterdam (NL), October 2013; 7 pages, 4 figures)
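
    Elastic operation of such a facility boils down to a simple control loop: grow the worker pool while tasks queue up and shrink it while workers sit idle. The sketch below is a minimal, hypothetical version of such a loop; get_queue_status, add_worker and remove_idle_worker stand in for real PoD/cloud API calls and are assumptions, not part of the actual appliance.

        import random
        import time

        MIN_WORKERS = 1
        MAX_WORKERS = 20

        def get_queue_status():
            # Stand-in for querying PoD/the cloud for (pending tasks, idle
            # workers); returns random numbers so the sketch runs self-contained.
            return random.randint(0, 5), random.randint(0, 2)

        def add_worker():
            print("starting one more virtual worker")      # stub for a cloud API call

        def remove_idle_worker():
            print("tearing down one idle virtual worker")  # stub for a cloud API call

        def elastic_loop(poll_seconds=1, iterations=10):
            # Naive policy: grow by one worker while tasks queue up, shrink by
            # one while workers sit idle, staying within the configured bounds.
            workers = MIN_WORKERS
            for _ in range(iterations):
                pending, idle = get_queue_status()
                if pending > 0 and workers < MAX_WORKERS:
                    add_worker()
                    workers += 1
                elif idle > 0 and workers > MIN_WORKERS:
                    remove_idle_worker()
                    workers -= 1
                time.sleep(poll_seconds)

        elastic_loop()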

    Azimuthal anisotropy of charged jet production in $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV Pb-Pb collisions

    We present measurements of the azimuthal dependence of charged jet production in central and semi-central $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV Pb-Pb collisions with respect to the second harmonic event plane, quantified as $v_2^{\mathrm{ch\,jet}}$. Jet finding is performed employing the anti-$k_{\mathrm{T}}$ algorithm with a resolution parameter $R = 0.2$ using charged tracks from the ALICE tracking system. The contribution of the azimuthal anisotropy of the underlying event is taken into account event-by-event. The remaining (statistical) region-to-region fluctuations are removed on an ensemble basis by unfolding the jet spectra for different event plane orientations independently. Significant non-zero $v_2^{\mathrm{ch\,jet}}$ is observed in semi-central collisions (30-50% centrality) for $20 < p_{\mathrm{T}}^{\mathrm{ch\,jet}} < 90$ GeV/$c$. The azimuthal dependence of the charged jet production is similar to the dependence observed for jets comprising both charged and neutral fragments, and compatible with measurements of the $v_2$ of single charged particles at high $p_{\mathrm{T}}$. Good agreement between the data and predictions from JEWEL, an event generator simulating parton shower evolution in the presence of a dense QCD medium, is found in semi-central collisions. (C) 2015 CERN for the benefit of the ALICE Collaboration. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
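
    For reference, the second-harmonic coefficient quantifies the modulation of the jet yield with respect to the event plane angle $\Psi_2$; the standard textbook form of the definition (not quoted from the paper itself) is

        \frac{\mathrm{d}N_{\mathrm{jet}}}{\mathrm{d}\varphi}
            \propto 1 + 2\, v_2^{\mathrm{ch\,jet}} \cos\!\left( 2 (\varphi - \Psi_2) \right),
        \qquad
        v_2^{\mathrm{ch\,jet}} = \left\langle \cos 2 (\varphi - \Psi_2) \right\rangle ,

    where $\varphi$ is the azimuthal angle of the jet.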

    Performance of the ALICE experiment at the CERN LHC

    ALICE is the heavy-ion experiment at the CERN Large Hadron Collider. The experiment continuously took data during the first physics campaign of the machine, from fall 2009 until early 2013, using proton and lead-ion beams. In this paper we describe the running environment and the data handling procedures, and discuss the performance of the ALICE detectors and analysis methods for various physics observables.

    Long-range angular correlations on the near and away side in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV


    Production of ${}^4\mathrm{He}$ and ${}^4\overline{\mathrm{He}}$ in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV at the LHC

    Results on the production of ${}^4\mathrm{He}$ and ${}^4\overline{\mathrm{He}}$ nuclei in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV in the rapidity range $|y| < 1$, using the ALICE detector, are presented in this paper. The rapidity densities corresponding to 0-10% central events are found to be $\mathrm{d}N/\mathrm{d}y({}^4\mathrm{He}) = (0.8 \pm 0.4\,(\mathrm{stat}) \pm 0.3\,(\mathrm{syst})) \times 10^{-6}$ and $\mathrm{d}N/\mathrm{d}y({}^4\overline{\mathrm{He}}) = (1.1 \pm 0.4\,(\mathrm{stat}) \pm 0.2\,(\mathrm{syst})) \times 10^{-6}$, respectively. This is in agreement with the statistical thermal model expectation assuming the same chemical freeze-out temperature ($T_{\mathrm{chem}} = 156$ MeV) as for light hadrons. The measured ratio of ${}^4\overline{\mathrm{He}}/{}^4\mathrm{He}$ is $1.4 \pm 0.8\,(\mathrm{stat}) \pm 0.5\,(\mathrm{syst})$. (C) 2018 Published by Elsevier B.V.
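
    As a quick arithmetic cross-check (not taken from the paper), the ratio of the two quoted rapidity densities reproduces the quoted central value, and propagating the statistical uncertainties as uncorrelated gives a compatible error:

        \frac{\mathrm{d}N/\mathrm{d}y({}^4\overline{\mathrm{He}})}{\mathrm{d}N/\mathrm{d}y({}^4\mathrm{He})}
            = \frac{1.1 \times 10^{-6}}{0.8 \times 10^{-6}} \approx 1.4,
        \qquad
        \sigma_{\mathrm{stat}} \approx 1.4\, \sqrt{ \left( \frac{0.4}{1.1} \right)^{2} + \left( \frac{0.4}{0.8} \right)^{2} } \approx 0.8 .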

    Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 5.02$ TeV

    Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) [...] $> 2$ GeV/$c$. (C) 2015 CERN for the benefit of the ALICE Collaboration. Published by Elsevier B.V.

    Event-shape engineering for inclusive spectra and elliptic flow in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}} = 2.76$ TeV
