    EMI Deployment Planning

    Features of Muon Arrival Time Distributions of High Energy EAS at Large Distances From the Shower Axis

    In view of the current efforts to extend the KASCADE experiment (KASCADE-Grande) to observations of Extensive Air Showers (EAS) of primary energies up to 1 EeV, the features of muon arrival time distributions and their correlations with other observable EAS quantities have been scrutinised on the basis of high-energy EAS simulated with the Monte Carlo code CORSIKA, in general using the QGSJET model as generator. Methodically, the correlations of suitably defined arrival time parameters with other EAS parameters have been investigated by invoking non-parametric methods for the analysis of multivariate distributions, studying the classification and misclassification probabilities of various sets of observables. It turns out that adding the arrival time information and the multiplicity of muons spanning the observed time distributions distinctly improves the mass discrimination. A further outcome of the studies is that, for the considered ranges of primary energies and distances from the shower axis, the discrimination power of global arrival time distributions, which refer to the arrival time of the shower core, is only marginally enhanced compared to local distributions, which refer to the arrival of the locally first muon. Comment: 24 pages, accepted by Journal of Physics G
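    The analysis itself is not reproduced here, but the following sketch illustrates the kind of non-parametric classification study described above, using a k-nearest-neighbour classifier as a stand-in for the multivariate methods and synthetic arrival-time observables in place of the CORSIKA/QGSJET simulations; all numbers and variable choices are illustrative assumptions.

```python
# Minimal sketch (not from the paper): non-parametric classification of
# simulated EAS observables into primary-mass classes, reporting the
# classification/misclassification probabilities. Synthetic data stands in
# for the CORSIKA/QGSJET simulations.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

def fake_showers(mean_t, n):
    """Toy arrival-time observables: [median muon arrival time (ns), log10 muon multiplicity]."""
    t_median = rng.normal(mean_t, 15.0, n)                # hypothetical spread
    log_n_mu = rng.normal(1.5 + 0.002 * mean_t, 0.2, n)   # hypothetical correlation
    return np.column_stack([t_median, log_n_mu])

# Two hypothetical primary-mass groups ("light", e.g. proton, vs. "heavy", e.g. iron)
X = np.vstack([fake_showers(60.0, 2000), fake_showers(80.0, 2000)])
y = np.array([0] * 2000 + [1] * 2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# k-nearest-neighbour classification as a stand-in for the non-parametric
# multivariate analysis; the row-normalised confusion matrix gives the
# classification (diagonal) and misclassification (off-diagonal) probabilities.
clf = KNeighborsClassifier(n_neighbors=25).fit(X_train, y_train)
print(confusion_matrix(y_test, clf.predict(X_test), normalize="true"))
```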

    Autonomic Management of Large Clusters and Their Integration into the Grid

    We present a framework for the co-ordinated, autonomic management of multiple clusters in a compute center and their integration into a Grid environment. Site autonomy and the automation of administrative tasks are prime aspects of this framework. The system behavior is continuously monitored in a steering cycle and appropriate actions are taken to resolve any problems. All presented components have been implemented in the course of the EU project DataGrid: the Lemon monitoring components, the FT fault-tolerance mechanism, the quattor system for software installation and configuration, the RMS job and resource management system, and the Gridification scheme that integrates clusters into the Grid.
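    The DataGrid components themselves are not shown here; the sketch below only illustrates the monitor-decide-act structure of such a steering cycle, with hypothetical metrics, thresholds and corrective actions.

```python
# Minimal sketch (hypothetical, not the DataGrid code): a steering cycle that
# polls monitoring metrics for each cluster node and triggers corrective
# actions when thresholds are exceeded.
import time

# Hypothetical thresholds and corrective actions, keyed by metric name.
RULES = {
    "disk_usage":   (0.90, "clean_scratch_space"),
    "load_average": (8.0,  "drain_batch_queue"),
    "daemon_down":  (0.5,  "restart_daemon"),
}

def collect_metrics(node):
    """Placeholder for a monitoring sensor (Lemon-like); returns metric -> value."""
    return {"disk_usage": 0.95, "load_average": 3.2, "daemon_down": 0.0}

def execute(node, action):
    """Placeholder for the actuator that applies a corrective action."""
    print(f"[{node}] corrective action: {action}")

def steering_cycle(nodes, interval=60):
    """Run the monitor-decide-act loop indefinitely, as a daemon would."""
    while True:
        for node in nodes:
            metrics = collect_metrics(node)
            for name, (threshold, action) in RULES.items():
                if metrics.get(name, 0.0) > threshold:
                    execute(node, action)
        time.sleep(interval)

if __name__ == "__main__":
    steering_cycle(["wn001.example.org", "wn002.example.org"], interval=5)
```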

    ALICE: Physics Performance Report, Volume I

    ALICE is a general-purpose heavy-ion experiment designed to study the physics of strongly interacting matter and the quark-gluon plasma in nucleus-nucleus collisions at the LHC. It currently includes more than 900 physicists and senior engineers, from both nuclear and high-energy physics, from about 80 institutions in 28 countries. The experiment was approved in February 1997. The detailed design of the different detector systems has been laid down in a number of Technical Design Reports issued between mid-1998 and the end of 2001, and construction has started for most detectors. Since the last comprehensive information on detector and physics performance was published in the ALICE Technical Proposal in 1996, the detector as well as the simulation, reconstruction and analysis software have undergone significant development. The Physics Performance Report (PPR) will give an updated and comprehensive summary of the current status and performance of the various ALICE subsystems, including updates to the Technical Design Reports, where appropriate, as well as a description of systems which have not been published in a Technical Design Report. The PPR will be published in two volumes. The current Volume I contains: 1. a short theoretical overview and an extensive reference list concerning the physics topics of interest to ALICE, 2. the relevant experimental conditions at the LHC, 3. a short summary and update of the subsystem designs, and 4. a description of the offline framework and Monte Carlo generators. Volume II, which will be published separately, will contain detailed simulations of combined detector performance, event reconstruction, and analysis of a representative sample of relevant physics observables, from global event characteristics to hard processes.

    An integrated infrastructure in support of software development

    This paper describes the design and the current state of implementation of an infrastructure made available to software developers within the Italian National Institute for Nuclear Physics (INFN) to support and facilitate their daily activity. The infrastructure integrates several tools, each providing a well-identified function: project management, version control system, continuous integration, dynamic provisioning of virtual machines, efficiency improvement, and knowledge base. Where applicable, access to the services is based on the INFN-wide Authentication and Authorization Infrastructure. The system is being installed and progressively made available to INFN users belonging to tens of sites and laboratories, and will represent a solid foundation for the software development efforts of the many experiments and projects in which the Institute is involved. The infrastructure will be especially beneficial for small- and medium-sized collaborations, which often cannot afford the resources, in particular in terms of know-how, needed to set up such services.
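    As a purely illustrative sketch of what such an integration can look like from a user's point of view, the following code polls the health endpoints of the kinds of services mentioned above through a single authenticated session; all hostnames, endpoints and the bearer-token scheme are assumptions, not the actual INFN setup.

```python
# Minimal sketch (hypothetical endpoints): a health check across the kinds of
# services the infrastructure integrates, with access through a single
# authenticated session standing in for the INFN-wide AAI.
import requests

# Hypothetical service catalogue; the real hostnames and tools are not listed here.
SERVICES = {
    "project management":     "https://pm.example.infn.it/api/status",
    "version control":        "https://git.example.infn.it/api/v4/version",
    "continuous integration": "https://ci.example.infn.it/api/json",
    "vm provisioning":        "https://cloud.example.infn.it/health",
    "knowledge base":         "https://kb.example.infn.it/status",
}

def check_services(token):
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"  # AAI-issued token (assumption)
    for name, url in SERVICES.items():
        try:
            status = str(session.get(url, timeout=5).status_code)
        except requests.RequestException:
            status = "unreachable"
        print(f"{name:25s} {status}")

if __name__ == "__main__":
    check_services(token="<AAI token>")
```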

    Elastic extension of a local analysis facility on external clouds for the LHC experiments

    The computing infrastructures serving the LHC experiments have been designed to cope, at most, with the average amount of data recorded. Usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the availability of data for physics analysis. In order to cope with the production peaks, the LHC experiments are exploring the opportunity to access Cloud resources provided by external partners or commercial providers. In this work we present a proof of concept of the elastic extension of a local analysis facility, specifically the Bologna Tier-3 Grid site, onto an external OpenStack infrastructure for the LHC experiments hosted at the site. We focus on the Cloud Bursting of the Grid site using DynFarm, a newly designed tool that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on an OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage.
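    DynFarm itself is not reproduced here; the sketch below only outlines the generic cloud-bursting decision it supports, booting extra OpenStack worker nodes (via the openstacksdk client) when the LSF pending-job backlog grows. Thresholds, naming conventions, and the image/flavor/network identifiers are placeholders.

```python
# Minimal sketch (not DynFarm itself): a cloud-bursting decision that boots
# extra OpenStack worker nodes when the LSF pending-job backlog grows.
import subprocess
import openstack  # openstacksdk

PENDING_THRESHOLD = 50   # hypothetical backlog above which extra nodes are booted
MAX_CLOUD_NODES = 10     # hypothetical cap on dynamically added workers

def lsf_pending_jobs():
    """Count pending jobs from the LSF 'bjobs' command."""
    out = subprocess.run(["bjobs", "-u", "all", "-p"], capture_output=True, text=True)
    return sum(1 for line in out.stdout.splitlines() if " PEND " in line)

def boot_worker(conn, index):
    """Boot one OpenStack worker node; on start-up it is expected to register
    itself with LSF (the step that DynFarm automates)."""
    return conn.compute.create_server(
        name=f"dyn-wn-{index:03d}",
        image_id="<worker-node-image-id>",    # placeholder
        flavor_id="<flavor-id>",              # placeholder
        networks=[{"uuid": "<network-id>"}],  # placeholder
    )

def burst_if_needed():
    conn = openstack.connect(cloud="tier3-cloud")  # entry in clouds.yaml (assumption)
    cloud_nodes = [s for s in conn.compute.servers() if s.name.startswith("dyn-wn-")]
    if lsf_pending_jobs() > PENDING_THRESHOLD and len(cloud_nodes) < MAX_CLOUD_NODES:
        boot_worker(conn, len(cloud_nodes) + 1)

if __name__ == "__main__":
    burst_if_needed()
```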

    Elastic Extension of a CMS Computing Centre Resources on External Clouds

    After the successful LHC data taking in Run-I and in view of the future runs, the LHC experiments are facing new challenges in the design and operation of the computing facilities. The computing infrastructure for Run-II is dimensioned to cope, at most, with the average amount of data recorded. Usage peaks, as already observed in Run-I, may however generate large backlogs, thus delaying the completion of the data reconstruction and ultimately the availability of data for physics analysis. In order to cope with the production peaks, CMS - along the lines followed by other LHC experiments - is exploring the opportunity to access Cloud resources provided by external partners or commercial providers. Specific use cases have already been explored and successfully exploited during Long Shutdown 1 (LS1) and the first part of Run 2. In this work we present the proof of concept of the elastic extension of a CMS site, specifically the Bologna Tier-3, on an external OpenStack infrastructure. We focus on the “Cloud Bursting” of a CMS Grid site using a newly designed LSF configuration that allows the dynamic registration of new worker nodes to LSF. In this approach, the dynamically added worker nodes instantiated on the OpenStack infrastructure are transparently accessed by the LHC Grid tools and at the same time serve as an extension of the farm for local usage. The amount of allocated resources can thus be elastically adjusted to cope with the needs of the CMS experiment and of local users. Moreover, direct access to and integration of OpenStack resources into the CMS workload management system is explored. In this paper we present this approach, report on the performance of the on-demand allocated resources, and discuss the lessons learned and the next steps.
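    To complement the bursting sketch above, the following code outlines the scale-down side of such an elastic model: idle cloud worker nodes are closed in LSF and returned to OpenStack once the backlog is drained. The host-naming convention, the use of badmin hclose, and the cloud name are assumptions, not the configuration described in the paper.

```python
# Minimal sketch (assumptions throughout): release idle cloud worker nodes
# back to OpenStack once they have no running jobs. Assumes cloud nodes follow
# the 'dyn-wn-*' naming convention used in the bursting sketch above.
import subprocess
import openstack  # openstacksdk

def idle_cloud_hosts():
    """Cloud worker nodes with no running jobs, parsed from LSF 'bhosts' output."""
    out = subprocess.run(["bhosts"], capture_output=True, text=True)
    idle = []
    for line in out.stdout.splitlines()[1:]:   # skip the header line
        fields = line.split()
        # bhosts columns: HOST_NAME STATUS JL/U MAX NJOBS ...; NJOBS == 0 means idle.
        if len(fields) >= 5 and fields[0].startswith("dyn-wn-") and fields[4] == "0":
            idle.append(fields[0])
    return idle

def release(hostname):
    # Close the host so LSF stops dispatching to it, then delete the VM.
    subprocess.run(["badmin", "hclose", hostname], check=True)
    conn = openstack.connect(cloud="tier3-cloud")  # same clouds.yaml entry as above
    server = conn.compute.find_server(hostname)
    if server is not None:
        conn.compute.delete_server(server)

if __name__ == "__main__":
    for host in idle_cloud_hosts():
        release(host)
```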