
    Phenomenology Tools on Cloud Infrastructures using OpenStack

    We present a new environment for computations in particle physics phenomenology that employs recent developments in cloud computing. In this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of "real" physical hardware, providing a qualitative assessment of the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.
    Comment: 25 pages, 12 figures; information on memory usage included, as well as minor modifications. Version to appear in EPJ
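The virtual-versus-physical comparison described above reduces, in essence, to timing the same workload in both environments and comparing the results. A minimal sketch of such a benchmark harness is shown below; the workload function is a made-up stand-in, not one of the phenomenology codes from the paper:

```python
import time

def benchmark(workload, repeats=5):
    """Run `workload` several times and report the best wall-clock time.

    Taking the minimum over repeats suppresses transient noise
    (scheduler jitter, cache warm-up), which matters when comparing
    a virtual machine against bare metal.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

def sample_workload():
    # Stand-in for a phenomenology code: a tight numerical loop
    # (partial sum of the Basel series, converging to pi^2/6).
    total = 0.0
    for i in range(1, 200_000):
        total += 1.0 / (i * i)
    return total

if __name__ == "__main__":
    t = benchmark(sample_workload)
    print(f"best of 5 runs: {t:.4f} s")
```

Running the same harness inside a VM and on the physical host, then taking the ratio of the two timings, gives the kind of per-application overhead figure the study reports.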

    Distributed Analysis within the LHC Computing Grid

    Distributed data analysis using Grid resources is one of the fundamental applications in high energy physics that had to be addressed and realized before the start of LHC data taking. The demands on resource management are very high: in every experiment, up to a thousand physicists will be submitting analysis jobs to the Grid. Appropriate user interfaces and helper applications have to be made available to ensure that all users can use the Grid without much expertise in Grid technology. These tools enlarge the pool of Grid users from a few production administrators to potentially all participating physicists. The GANGA job management system (http://cern.ch/ganga), developed as a common project between the ATLAS and LHCb experiments, provides and integrates this kind of tool. GANGA offers a simple and consistent way of preparing, organizing and executing analysis tasks within the experiment analysis framework, implemented through a plug-in system. It allows trivial switching between running test jobs on a local batch system and running large-scale analyses on the Grid, hiding Grid technicalities. We report on the plug-ins and our experiences with distributed data analysis using GANGA within the ATLAS experiment and the EGEE/LCG infrastructure. The integration of the ATLAS data management system DQ2/DDM into GANGA is a key functionality. In combination with the job splitting mechanism, large numbers of analysis jobs can be sent to the locations of the data, following the ATLAS computing model. GANGA supports user analysis tasks with reconstructed data as well as small-scale production of Monte Carlo data.
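The job-splitting idea mentioned above, where one logical analysis task is fanned out into many sub-jobs, each bound to the subset of input data held at some site, can be illustrated without the real framework. Everything below, class names included, is a simplified stand-in and not GANGA's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AnalysisJob:
    """Toy stand-in for a Grid analysis job: a script plus input files."""
    executable: str
    inputs: list = field(default_factory=list)

def split_job(job, files_per_subjob):
    """Fan one logical job out into sub-jobs over chunks of its inputs,
    mimicking the splitting that lets a Grid scheduler send each piece
    to the site holding that slice of the data."""
    chunks = [job.inputs[i:i + files_per_subjob]
              for i in range(0, len(job.inputs), files_per_subjob)]
    return [AnalysisJob(job.executable, chunk) for chunk in chunks]

job = AnalysisJob("run_analysis.py",
                  inputs=[f"AOD.{n}.root" for n in range(10)])
subjobs = split_job(job, files_per_subjob=4)
print(len(subjobs))  # 3 sub-jobs: 4 + 4 + 2 files
```

In the real system the splitter is one plug-in among several (applications, backends, splitters), which is what allows the same job description to run on a local batch system or on the Grid.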

    Big Data in Critical Infrastructures Security Monitoring: Challenges and Opportunities

    Critical Infrastructures (CIs), such as smart power grids, transport systems, and financial infrastructures, are increasingly vulnerable to cyber threats due to the adoption of commodity computing facilities. Despite the use of several monitoring tools, recent attacks have proven that current defensive mechanisms for CIs are not effective enough against most advanced threats. In this paper we explore the idea of a framework that leverages multiple data sources to improve the protection capabilities of CIs. Challenges and opportunities are discussed along three main research directions: i) use of distinct and heterogeneous data sources, ii) monitoring with adaptive granularity, and iii) attack modeling and runtime combination of multiple data analysis techniques.
    Comment: EDCC-2014, BIG4CIP-201

    Grid Computing in High Energy Physics Experiments


    Survey and Analysis of Production Distributed Computing Infrastructures

    This report has two objectives. First, we describe a set of the production distributed infrastructures currently available, so that the reader has a basic understanding of them. This includes explaining why each infrastructure was created and made available, and how it has succeeded and failed. The set is not complete, but we believe it is representative. Second, we describe the infrastructures in terms of their use, which is a combination of how they were designed to be used and how users have found ways to use them. Applications are often designed and created with specific infrastructures in mind, with both an appreciation of the existing capabilities provided by those infrastructures and an anticipation of their future capabilities. Here, the infrastructures we discuss were often designed and created with specific applications in mind, or at least specific types of applications. The reader should understand how the interplay between the infrastructure providers and the users leads to such usages, which we call usage modalities. These usage modalities are really abstractions that exist between the infrastructures and the applications; they influence the infrastructures by representing the applications, and they influence the applications by representing the infrastructures.

    From Lagrangians to Events: Computer Tutorial at the MC4BSM-2012 Workshop

    This is a written account of the computer tutorial offered at the Sixth MC4BSM workshop at Cornell University, March 22-24, 2012. The tools covered during the tutorial include: FeynRules, LanHEP, MadGraph, CalcHEP, Pythia 8, Herwig++, and Sherpa. In the tutorial, we specify a simple extension of the Standard Model at the level of a Lagrangian. The software tools are then used to automatically generate a set of Feynman rules, compute the invariant matrix element for a sample process, and generate both parton-level and fully hadronized/showered Monte Carlo event samples. The tutorial is designed to be self-paced, and detailed instructions for all steps are included in this write-up. Installation instructions for each tool on a variety of popular platforms are also provided.
    Comment: 58 pages, 1 figure

    Storage Resource Manager version 2.2: design, implementation, and testing experience

    Storage services are crucial components of the Worldwide LHC Computing Grid infrastructure, spanning more than 200 sites and serving computing and storage resources to the High Energy Physics LHC communities. Up to tens of petabytes of data are collected every year by the four LHC experiments at CERN. To process these large data volumes it is important to establish a protocol and a very efficient interface to the various storage solutions adopted by the WLCG sites. In this work we report on the experience acquired during the definition of the Storage Resource Manager v2.2 protocol. In particular, we focus on the study performed to enhance the interface and make it suitable for use by the WLCG communities. At the moment, five different storage solutions implement the SRM v2.2 interface: BeStMan (LBNL), CASTOR (CERN and RAL), dCache (DESY and FNAL), DPM (CERN), and StoRM (INFN and ICTP). After a detailed review of the protocol, various test suites were written, and the most effective set of tests was identified: the S2 test suite from CERN and the SRM-Tester test suite from LBNL. These test suites have helped to verify the consistency and coherence of the proposed protocol and to validate existing implementations. We conclude by describing the results achieved.

    The iEBE-VISHNU code package for relativistic heavy-ion collisions

    The iEBE-VISHNU code package performs event-by-event simulations for relativistic heavy-ion collisions using a hybrid approach based on (2+1)-dimensional viscous hydrodynamics coupled to a hadronic cascade model. We present the detailed model implementation, accompanied by some numerical code tests for the package. iEBE-VISHNU forms the core of a general theoretical framework for model-data comparisons through large-scale Monte Carlo simulations. A numerical interface between the hydrodynamically evolving medium and thermal photon radiation is also discussed. This interface is designed more generally for calculations of all kinds of rare probes that couple to the temperature and flow velocity evolution of the bulk medium, such as jet energy loss and heavy quark diffusion.
    Comment: 47 pages, 21 figures. Manuscript was accepted by Computer Physics Communications
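At its core, an interface like the one described above samples the tabulated hydrodynamic fields (temperature, flow velocity) along a probe's trajectory through the evolving medium. A minimal sketch of that idea, using bilinear interpolation on a toy (t, x) temperature grid that is invented for the demonstration and has nothing to do with iEBE-VISHNU's actual file formats:

```python
def interp2(grid, t, x, dt, dx):
    """Bilinear interpolation of a field tabulated as grid[it][ix]
    on a uniform (t, x) mesh with spacings dt, dx."""
    it, ix = int(t / dt), int(x / dx)
    ft, fx = t / dt - it, x / dx - ix
    return ((1 - ft) * (1 - fx) * grid[it][ix]
            + (1 - ft) * fx * grid[it][ix + 1]
            + ft * (1 - fx) * grid[it + 1][ix]
            + ft * fx * grid[it + 1][ix + 1])

# Toy "hydro output": temperature falling in time, flat in space.
nt, nx, dt, dx = 5, 5, 0.5, 0.5
T = [[0.4 / (1.0 + it * dt) for ix in range(nx)] for it in range(nt)]

# Sample the medium along a straight probe trajectory x(t) = 0.2 + 0.8 t,
# as one would for a jet or heavy quark traversing the fireball.
path = [(t, 0.2 + 0.8 * t) for t in (0.0, 0.5, 1.0, 1.5)]
temps = [interp2(T, t, x, dt, dx) for t, x in path]
print(temps)
```

The sampled temperatures then feed whatever probe model is coupled to the medium (photon emission rates, energy-loss kernels, diffusion coefficients).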

    A new relativistic hydrodynamics code for high-energy heavy-ion collisions

    We construct a new Godunov-type relativistic hydrodynamics code in Milne coordinates, using a Riemann solver based on the two-shock approximation, which remains stable in the presence of large shock waves. We check the correctness of the numerical algorithm by comparing numerical calculations with analytical solutions in various problems, such as shock tubes, expansion of matter into the vacuum, the Landau-Khalatnikov solution, and propagation of fluctuations around Bjorken flow and Gubser flow. We investigate the energy and momentum conservation properties of our code in a test problem of longitudinal hydrodynamic expansion with an initial condition for high-energy heavy-ion collisions. We also discuss numerical viscosity in the test problems of expansion of matter into the vacuum and conservation properties. Furthermore, we discuss how the numerical stability is affected by the source terms of relativistic numerical hydrodynamics in Milne coordinates.
    Comment: 20 pages, 16 figures
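The shock-tube test mentioned above is the standard first check for any Godunov-type scheme. As a rough illustration of the method's structure, here is a first-order scheme for the classic non-relativistic Sod shock tube using the simple HLL approximate Riemann solver; this is a deliberately simplified analogue, not the paper's relativistic two-shock solver in Milne coordinates:

```python
import math

GAMMA = 1.4  # ideal-gas adiabatic index

def prim(U):
    """Conserved (rho, rho*u, E) -> primitive (rho, u, p)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u * u)
    return rho, u, p

def flux(U):
    rho, u, p = prim(U)
    return [rho * u, rho * u * u + p, u * (U[2] + p)]

def hll_flux(UL, UR):
    """HLL approximate Riemann flux between two cell states."""
    rL, uL, pL = prim(UL)
    rR, uR, pR = prim(UR)
    cL, cR = math.sqrt(GAMMA * pL / rL), math.sqrt(GAMMA * pR / rR)
    sL = min(uL - cL, uR - cR)   # fastest left-going wave estimate
    sR = max(uL + cL, uR + cR)   # fastest right-going wave estimate
    FL, FR = flux(UL), flux(UR)
    if sL >= 0.0:
        return FL
    if sR <= 0.0:
        return FR
    return [(sR * FL[k] - sL * FR[k] + sL * sR * (UR[k] - UL[k])) / (sR - sL)
            for k in range(3)]

def sod_tube(n=100, t_end=0.2, dt=0.002):
    dx = 1.0 / n
    # Sod initial data: (rho, u, p) = (1, 0, 1) left, (0.125, 0, 0.1) right.
    U = []
    for i in range(n):
        rho, p = (1.0, 1.0) if (i + 0.5) * dx < 0.5 else (0.125, 0.1)
        U.append([rho, 0.0, p / (GAMMA - 1.0)])
    for _ in range(round(t_end / dt)):
        F = [hll_flux(U[i], U[i + 1]) for i in range(n - 1)]
        for i in range(1, n - 1):  # boundary cells held fixed; waves
            for k in range(3):     # stay interior for t <= 0.2
                U[i][k] -= dt / dx * (F[i][k] - F[i - 1][k])
    return [prim(u) for u in U]
```

A Godunov-type code replaces `hll_flux` with its solver of choice (here, the paper's two-shock approximation) and checks the resulting profiles against the exact shock-tube solution, which is also where the scheme's numerical viscosity becomes visible.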

    Flow in AA and pA as an interplay of fluid-like and non-fluid-like excitations

    To study the microscopic structure of the quark-gluon plasma, data from hadronic collisions must be confronted with models that go beyond fluid dynamics. Here, we study a simple kinetic theory model that encompasses fluid dynamics but also contains particle-like excitations, in a boost-invariant setting with no symmetries in the transverse plane and with large initial momentum asymmetries. We determine the relative weight of fluid-dynamical and particle-like excitations as a function of system size and energy density by comparing kinetic transport to results from the 0th-, 1st- and 2nd-order gradient expansion of viscous fluid dynamics. We then confront this kinetic theory with data on azimuthal flow coefficients over a wide centrality range in PbPb collisions at the LHC, in AuAu collisions at RHIC, and in pPb collisions at the LHC. Evidence is presented that non-hydrodynamic excitations make the dominant contribution to collective flow signals in pPb collisions at the LHC and contribute significantly to flow in peripheral nucleus-nucleus collisions, while fluid-like excitations dominate collectivity in central nucleus-nucleus collisions at collider energies.
    Comment: 28 pages, 16 figures
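The azimuthal flow coefficients confronted with data in studies like the one above are, at the particle level, averages of cos(n(phi - Psi_n)) over the emitted particles. A self-contained toy illustration, where the sampled distribution and the input v2 value are invented purely for the demonstration:

```python
import math
import random

def vn(phis, n, psi=0.0):
    """Azimuthal flow coefficient v_n = <cos(n * (phi - Psi_n))>,
    averaged over the particles of an event or event sample."""
    return sum(math.cos(n * (phi - psi)) for phi in phis) / len(phis)

# Toy event sample: draw angles from dN/dphi ~ 1 + 2*v2*cos(2*phi)
# by accept-reject, then recover v2 from the particles themselves.
random.seed(1)
v2_in, phis = 0.1, []
while len(phis) < 50_000:
    phi = random.uniform(-math.pi, math.pi)
    envelope = 1.0 + 2.0 * v2_in  # maximum of the target distribution
    if random.uniform(0.0, envelope) <= 1.0 + 2.0 * v2_in * math.cos(2.0 * phi):
        phis.append(phi)

print(f"input v2 = {v2_in}, reconstructed v2 = {vn(phis, 2):.3f}")
```

Real analyses must additionally estimate the event-plane angle Psi_n (or use multi-particle cumulants) and correct for non-flow correlations, which is precisely where distinguishing fluid-like from non-fluid-like contributions becomes subtle.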