
    Study of the Underlying Event in pp collisions with the ALICE detector at the LHC

    The bulk of particles produced in a high-energy hadronic collision originates from low-momentum-transfer processes, which are not amenable to a perturbative treatment and need to be modelled phenomenologically. In this thesis we present a measurement of the bulk event activity, or Underlying Event (UE), in pp collisions at √s = 0.9 and 7 TeV at the LHC as a function of the hard scale. Different regions are defined with respect to the azimuthal direction of the leading (highest-pT) track: Toward, Transverse and Away. The Toward and Away regions collect the fragmentation products of the hardest interaction. The Transverse region is most sensitive to the UE. The study is performed with charged particles above three different pT thresholds: 0.15, 0.5 and 1.0 GeV/c. We observe that for values of the leading-track pT above 3–4 GeV/c the bulk particle production becomes independent of the hard scale. In the Transverse region the multiplicity increases by a factor 2–3 between the lower and the higher collision energy, depending on the pT threshold considered. Data are compared to PYTHIA 6.4, PYTHIA 8.1 and PHOJET. On average, all models underestimate the UE activity by about 10–30%.
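    As an illustration of the region definition described above, the sketch below classifies a charged track by its azimuthal separation from the leading track. The 60° and 120° boundaries are the conventional choice in Underlying Event analyses and are assumed here, since the abstract does not quote the exact values.

```python
import math

def ue_region(phi_track, phi_leading):
    """Assign a track to the Toward, Transverse or Away region based on
    its azimuthal distance from the leading (highest-pT) track."""
    dphi = abs(phi_track - phi_leading)
    if dphi > math.pi:                      # fold |dphi| into [0, pi]
        dphi = 2.0 * math.pi - dphi
    dphi_deg = math.degrees(dphi)
    if dphi_deg < 60.0:
        return "Toward"       # fragmentation products of the hard scatter
    if dphi_deg < 120.0:
        return "Transverse"   # most sensitive to the Underlying Event
    return "Away"             # recoil of the hard scatter

# Example: a track 100 degrees away from the leading track falls in the
# Transverse region.
print(ue_region(math.radians(100.0), 0.0))  # -> "Transverse"
```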

    Integrating multiple scientific computing needs via a Private Cloud infrastructure

    In a typical scientific computing centre, diverse applications coexist and share a single physical infrastructure. An underlying Private Cloud facility eases the management and maintenance of heterogeneous use cases such as multipurpose or application-specific batch farms, Grid sites catering to different communities, parallel interactive data analysis facilities and others. It makes it possible to allocate resources to any application dynamically and efficiently, and to tailor the virtual machines to the applications' requirements. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques; for example, rolling updates can be performed easily and with minimal downtime. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 site, a dynamically expandable PROOF-based Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The Private Cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem (used in two different configurations for worker- and service-class hypervisors) and the OpenWRT Linux distribution (used for network virtualization). A future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and by using mainstream contextualization tools like CloudInit.
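    As a hedged sketch of how such an EC2-compatible endpoint can be consumed, the example below uses boto3 to start a contextualized virtual machine. The endpoint URL, credentials, image ID and cloud-init user data are illustrative placeholders, not values from the Torino deployment.

```python
import boto3

# Client for an EC2-compatible API such as the one exposed by an
# OpenNebula-based private cloud (hypothetical endpoint and credentials).
ec2 = boto3.client(
    "ec2",
    endpoint_url="https://cloud.example.org:8443/",  # placeholder endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
    region_name="site-local",
)

# cloud-init user data used to contextualize the virtual machine at boot
# (hypothetical worker-node configuration).
user_data = """#cloud-config
packages:
  - htcondor
runcmd:
  - [systemctl, start, condor]
"""

# Launch one worker-class VM from a pre-built virtual image (placeholder ID).
response = ec2.run_instances(
    ImageId="ami-00000001",
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
print(response["Instances"][0]["InstanceId"])
```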

    Managing a tier-2 computer centre with a private cloud infrastructure

    In a typical scientific computing centre, several applications coexist and share a single physical infrastructure. An underlying Private Cloud infrastructure eases the management and maintenance of such heterogeneous applications (multipurpose or application-specific batch farms, Grid sites, interactive data analysis facilities and others), allowing dynamic allocation of resources to any application. Furthermore, the maintenance of large deployments of complex and rapidly evolving middleware and application software is eased by the use of virtual images and contextualization techniques. Such infrastructures are being deployed in some large centres (see e.g. the CERN Agile Infrastructure project), but with several open-source tools reaching maturity this is becoming viable also for smaller sites. In this contribution we describe the Private Cloud infrastructure at the INFN-Torino Computer Centre, which hosts a full-fledged WLCG Tier-2 centre, an Interactive Analysis Facility for the ALICE experiment at the CERN LHC, and several smaller scientific computing applications. The private cloud building blocks include the OpenNebula software stack, the GlusterFS filesystem and the OpenWRT Linux distribution (used for network virtualization); a future integration into a federated higher-level infrastructure is made possible by exposing commonly used APIs like EC2 and OCCI.
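    For completeness, a minimal sketch of driving an OpenNebula front end directly over its XML-RPC interface with Python's standard library is given below. The endpoint, credentials and VM template are illustrative placeholders, not the actual Torino configuration, and the response handling assumes the documented (success, value, ...) reply layout.

```python
import xmlrpc.client

# OpenNebula exposes an XML-RPC endpoint, conventionally on port 2633
# (hypothetical host and credentials below).
one = xmlrpc.client.ServerProxy("http://one-frontend.example.org:2633/RPC2")
session = "oneadmin:password"

# Minimal VM template for a batch worker node (illustrative values only).
template = """
NAME   = "batch-worker"
CPU    = 2
MEMORY = 4096
DISK   = [ IMAGE = "worker-image" ]
NIC    = [ NETWORK = "private" ]
"""

# one.vm.allocate(session, template, hold) replies with a success flag
# followed by the new VM ID or an error message.
reply = one.one.vm.allocate(session, template, False)
ok, result = reply[0], reply[1]
if ok:
    print("Created VM with ID", result)
else:
    print("Allocation failed:", result)
```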

    Gravitational wave alert generation infrastructure on your laptop

    Multi-messenger astrophysics provides valuable insights into the properties of the physical Universe. These insights arise from the complementary information carried by photons, gravitational waves, neutrinos and cosmic rays about individual cosmic sources and source populations. When a gravitational wave (GW) candidate is identified by the LIGO, Virgo and KAGRA (LVK) observatory network, an alert is sent to astronomers in order to search for electromagnetic or neutrino counterparts. The current LVK framework for alert generation consists of the Gravitational-Wave Candidate Event Database (GraceDB), which provides a centralized location for aggregating and retrieving information about candidate GW events, the SCiMMA Hopskotch server (a publish-subscribe messaging system) and GWCelery (a package for annotating and orchestrating alerts). The first two services are deployed in the Cloud (Amazon Web Services), while the latter runs on dedicated physical resources. In this work, we propose a deployment strategy for the alert generation framework as a whole, based on Kubernetes. We present a set of tools (in the form of Helm charts, Python packages and scripts) which conveniently allows running a parallel deployment of the complete infrastructure in a private Cloud for scientific computing (the Cloud at CNAF, the INFN Tier-1 Computing Centre), which is currently used for integration tests. As an outcome of this work, we deliver to the community a specific configuration option for a sandboxed deployment on Minikube, which can be used to test the integration of other components (i.e. the low-latency pipelines for the detection of the GW candidate) with the alert generation infrastructure in an isolated local environment.
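    To make the publish-subscribe piece concrete, the sketch below uses the hop-client Python package (the SCiMMA client for Hopskotch) to listen on a Kafka topic. The broker URL and topic name are placeholders rather than the production LVK endpoints.

```python
from hop import stream

# Placeholder Kafka URL and topic; the production Hopskotch broker and the
# LVK alert topics are not reproduced here.
TOPIC_URL = "kafka://hopskotch.example.org/test.gw-alerts"

# Open a read-only stream and process incoming alert messages as they arrive.
with stream.open(TOPIC_URL, "r") as alerts:
    for message in alerts:
        # Each message is a deserialized alert payload that a downstream
        # service such as GWCelery would annotate further.
        print(message)
```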

    Improved Cloud resource allocation: how INDIGO-Datacloud is overcoming the current limitations in Cloud schedulers

    Work presented at the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP2016), 10–14 October 2016, San Francisco. Performing efficient resource provisioning is a fundamental aspect for any resource provider. Local Resource Management Systems (LRMS) have been used in data centres for decades in order to obtain the best usage of the resources, providing their fair usage and partitioning among the users. In contrast, current cloud schedulers are normally based on the immediate allocation of resources on a first-come, first-served basis, meaning that a request will fail if there are no resources (e.g. OpenStack) or it will be trivially queued, ordered by entry time (e.g. OpenNebula). Moreover, these scheduling strategies are based on a static partitioning of the resources, meaning that existing quotas cannot be exceeded, even if there are idle resources allocated to other projects. This is a consequence of the fact that cloud instances are not associated with a maximum execution time, and it leads to a situation where the resources are under-utilized. These facts have been identified by the INDIGO-DataCloud project as being too simplistic for accommodating scientific workloads in an efficient way, leading to an under-utilization of the resources, an undesirable situation in scientific data centres. In this work, we present the work done in the scheduling area during the first year of the INDIGO project and the foreseen evolutions. The authors want to acknowledge the support of the INDIGO-DataCloud project (grant number 653549), funded by the European Commission's Horizon 2020 Framework Programme. Peer Reviewed.
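    As a toy illustration of the contrast drawn above (and not of the INDIGO-DataCloud implementation itself), the sketch below compares a fail-fast allocator, which rejects a request once the project quota is exhausted, with a queued one that lets a project borrow idle capacity from the shared pool instead of failing.

```python
from collections import deque

class ToyScheduler:
    """Toy model: per-project quotas over a shared pool of identical slots."""

    def __init__(self, quotas):
        self.quotas = dict(quotas)          # project -> quota (in slots)
        self.used = {p: 0 for p in quotas}  # project -> slots in use
        self.queue = deque()                # pending requests (project, slots)

    def request_fail_fast(self, project, slots):
        """Immediate allocation: reject if the project's quota is exhausted,
        even when other projects leave slots idle."""
        if self.used[project] + slots > self.quotas[project]:
            return False
        self.used[project] += slots
        return True

    def request_queued_opportunistic(self, project, slots):
        """Queued allocation: run on any idle slot in the shared pool,
        otherwise wait in a FIFO queue instead of failing."""
        total_used = sum(self.used.values())
        total_quota = sum(self.quotas.values())
        if total_used + slots <= total_quota:
            self.used[project] += slots
            return "running"
        self.queue.append((project, slots))
        return "queued"

# Example: project B has exhausted its quota while project A sits idle.
sched = ToyScheduler({"A": 4, "B": 2})
sched.used["B"] = 2
print(sched.request_fail_fast("B", 1))             # False: quota exceeded
print(sched.request_queued_opportunistic("B", 1))  # "running": uses idle A slots
```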