25 research outputs found

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among the cloud providers to demonstrate the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned involved in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies when executing workflows both in the cloud and on dedicated resources. Comment: 15 pages, 9 figures
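    As a rough illustration of the usage-based provisioning model evaluated here, the sketch below requests Amazon EC2 spot capacity through the boto3 Python SDK. It is not the HEPCloud provisioning code itself; the AMI, instance type, bid price, and security group are placeholder assumptions.

        # Minimal sketch, not HEPCloud's actual provisioning path: ask EC2 for
        # a batch of spot-priced worker VMs. All identifiers are hypothetical.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.request_spot_instances(
            SpotPrice="0.10",            # maximum hourly bid in USD (assumed)
            InstanceCount=10,            # number of worker VMs to request
            LaunchSpecification={
                "ImageId": "ami-0123456789abcdef0",  # placeholder worker image
                "InstanceType": "m4.xlarge",
                "SecurityGroups": ["cms-workers"],   # placeholder group name
            },
        )

        for req in response["SpotInstanceRequests"]:
            print(req["SpotInstanceRequestId"], req["State"])

    Spot pricing is one of the levers such a cost discussion turns on: the capacity is cheaper than on-demand, but it can be reclaimed, so workflows have to tolerate preemption.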

    CMS Connect

    The CMS experiment collects and analyzes large amounts of data coming from high energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled on batch-oriented platforms. The CMS Global Pool of computing resources provides more than 100K dedicated CPU cores, plus another 50K to 100K CPU cores from opportunistic resources, for these kinds of tasks. Although production and event-processing analysis workflows are already managed by existing tools, there is still a lack of support for submitting the final-stage, Condor-like analysis jobs familiar to Tier-3 or local computing facility users into these distributed resources in a way that is integrated with other CMS services and friendly to users. CMS Connect is a set of computing tools and services designed to augment existing services in the CMS physics community, focusing on this kind of Condor analysis job. It is based on the CI-Connect platform developed by the Open Science Grid and uses the CMS GlideInWMS infrastructure to transparently plug CMS global grid resources into a virtual pool accessed via a single submission machine. This paper describes the specific developments and deployment of CMS Connect beyond the CI-Connect platform to integrate the service with CMS-specific needs, including site-specific submission, job accounting, and automated reporting to standard CMS monitoring resources in an effortless way for its users
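    As a loose sketch of the kind of Condor-style analysis job the service targets, the snippet below queues a toy job through the HTCondor Python bindings (newer Schedd.submit-style API). The executable, arguments, and resource requests are made up for the example; in practice CMS Connect users work from a single submission machine and their jobs are routed into the GlideinWMS-fed virtual pool.

        # Illustrative only: a toy final-stage analysis submission via the
        # HTCondor Python bindings. Executable and arguments are hypothetical.
        import htcondor

        job = htcondor.Submit({
            "executable": "run_analysis.sh",       # placeholder analysis wrapper
            "arguments": "--dataset placeholder",  # placeholder arguments
            "output": "job.$(Cluster).$(Process).out",
            "error":  "job.$(Cluster).$(Process).err",
            "log":    "job.$(Cluster).log",
            "request_cpus": "1",
            "request_memory": "2000",              # MB
        })

        schedd = htcondor.Schedd()                 # schedd on the submit host
        result = schedd.submit(job, count=10)      # queue 10 jobs in one cluster
        print("submitted cluster", result.cluster())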

    Early experience on using glideinWMS in the cloud

    Abstract. Cloud computing is steadily gaining traction in both the commercial and research worlds, and there seems to be significant potential for the HEP community as well. However, most of the tools used in the HEP community are tailored to the current computing model, which is based on grid computing. One such tool is glideinWMS, a pilot-based workload management system. In this paper we present both the code changes that were needed to make it work in the cloud world and the architectural problems we encountered and how we solved them. Benchmarks comparing grid, Magellan, and Amazon EC2 resources are also included
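    One architectural difference in the cloud setting is that there is no grid computing element to launch the pilots, so the glidein has to be started by the VM itself, typically via user data passed at boot. The sketch below shows the general idea with boto3; the image ID and bootstrap URL are hypothetical placeholders, not the actual glideinWMS contextualization mechanism.

        # Sketch under assumptions: launch a cloud VM whose user-data script
        # starts a pilot that then joins the HTCondor pool. The image ID and
        # bootstrap URL are placeholders, not glideinWMS's real mechanism.
        import boto3

        user_data = """#!/bin/bash
        # placeholder bootstrap: fetch and start a glidein-like pilot
        curl -s https://example.org/pilot/start_glidein.sh | bash
        """

        ec2 = boto3.client("ec2", region_name="us-east-1")
        ec2.run_instances(
            ImageId="ami-0123456789abcdef0",   # placeholder worker image
            InstanceType="m4.large",
            MinCount=1,
            MaxCount=1,
            UserData=user_data,                # boto3 base64-encodes this field
        )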

    The Formation, Development, and Transformation of Chinese Merchants and Their Trade Networks in Singapore and Malaya before World War II

    This thesis takes British Malaya, one of the larger overseas settlements of Chinese emigrants in the modern era, as its region of study and the trade networks of the local Chinese merchants as its research subject. Drawing on basic theories and methods from history, sociology, and other disciplines, it offers a preliminary examination of how the Chinese merchants' trade networks formed, developed, and changed in pre-World War II British Malaya, with Singapore as the commercial centre. The thesis focuses on the trade activities of the Singapore and Malayan Chinese merchants of the period and on the internal connections within their trade networks, and it uses representative cases from each era to analyse these networks in concrete terms. The thesis consists of seven parts. Part One: Introduction, briefly describing the state of research on this topic, the motivation for choosing it, the main framework and content, and the definitions of several concepts involved. Part Two: Background, first outlining the political and economic situation of pre-war British Malaya, then briefly... Degree: Master of History; department and specialisation: Nanyang Research Institute (南洋研究院), Specialised History; student ID: 19981900

    Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC


    The Diverse use of Clouds by CMS

    The resources CMS is using are increasingly being offered as clouds. In Run 2 of the LHC the majority of CMS CERN resources, both in Meyrin and at the Wigner Computing Centre, will be presented as cloud resources on which CMS will have to build its own infrastructure. This infrastructure will need to run all of the CMS workflows, including Tier 0, production, and user analysis. In addition, the CMS High Level Trigger will provide a compute resource comparable in scale to the total offered by the CMS Tier 1 sites when it is not running as part of the trigger system. During these periods a cloud infrastructure will be overlaid on this resource, making it accessible for general CMS use. Finally, CMS is starting to utilise cloud resources offered by individual institutes and is gaining experience to facilitate the use of opportunistically available cloud resources. We present a snapshot of this infrastructure and its operation at the time of the CHEP2015 conference

    Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    Abstract. Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly "on-demand", as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost effective to purchase dedicated resources for the "baseline" needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during periods when spikes in usage occur
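    To make the baseline-versus-burst argument concrete, here is a back-of-the-envelope comparison with assumed numbers; the per-core costs are placeholders, not figures from the paper. Dedicated hardware has to be sized for the peak and paid for all year, while cloud capacity is billed per core-hour actually used.

        # Back-of-the-envelope sketch with assumed prices, not the paper's data.
        HOURS_PER_YEAR = 24 * 365

        DEDICATED_COST_PER_CORE_YEAR = 150.0   # hardware + power + admin, amortised (assumed)
        CLOUD_COST_PER_CORE_HOUR = 0.05        # on-demand price per core-hour (assumed)

        def dedicated_cost(peak_cores: int) -> float:
            """Dedicated hardware must be bought for the peak, idle or not."""
            return peak_cores * DEDICATED_COST_PER_CORE_YEAR

        def cloud_cost(core_hours: float) -> float:
            """Cloud capacity is billed only for the core-hours actually used."""
            return core_hours * CLOUD_COST_PER_CORE_HOUR

        # Steady base load: 1000 cores busy the whole year -> dedicated wins.
        print(dedicated_cost(1000), cloud_cost(1000 * HOURS_PER_YEAR))   # 150000.0 438000.0

        # Two-week burst of 1000 extra cores -> renting wins by a wide margin.
        print(dedicated_cost(1000), cloud_cost(1000 * 24 * 14))          # 150000.0 16800.0

    Under these assumed prices the crossover sits at roughly one third of full-year utilisation, which is the qualitative point behind buying for the baseline and bursting to the cloud for spikes.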

    Functional SNPs within the Intron 1 of the PROP1 Gene Contribute to Combined Growth Hormone Deficiency (CPHD).

    Context: Mutations within the PROP1 gene represent one of the main causes of familial combined pituitary hormone deficiency (CPHD). However, most of the cases are sporadic with an unknown genetic cause. Objective: The aim of this study was the search for low-penetrance variations within and around a conserved regulatory element in intron 1 of PROP1, contributing to a multifactorial form of the disease in sporadic patients. Methods and Patients: A fragment of 570 bp encompassing the conserved region was sequenced in 107 CPHD patients and 294 controls, and an association study was performed with the four identified variants, namely c.109+435G>A (rs73346254), c.109+463C>T (rs4498267), c.109+768C>G (rs4431364), and c.109+915_917ins/delTAG (rs148607624). The functional role of the associated polymorphisms was evaluated by luciferase reporter gene expression analyses and EMSA. Results: A statistically significant increased frequency was observed in the patients for the rs73346254A (P = 5 × 10⁻⁴) and rs148607624delTAG (P = 0.01) alleles. Among all the possible allele combinations, only the haplotype bearing both risk alleles showed a significantly higher frequency in patients vs. controls (P = 4.7 × 10⁻⁴) and conferred a carrier risk of 4.19 (P = 1.2 × 10⁻⁴). This haplotype determined a significant decrease of the luciferase activity in comparison with a basal promoter and the other allelic combinations in GH4C and MCF7 cells (P = 4.6 × 10⁻⁶ and P = 5.5 × 10⁻⁴, respectively). The EMSA showed a differential affinity for nuclear proteins for the alternative alleles of the two associated variations. Conclusions: Variations with functional significance conferring susceptibility to CPHD have been identified in the PROP1 gene, indicating a multifactorial origin of this disorder in sporadic cases