
    DZero data-intensive computing on the Open Science Grid

    High energy physics experiments periodically reprocess data in order to take advantage of an improved understanding of the detector and of the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset: half a billion events, corresponding to about 100 TB of data organized in 300,000 files. The activity used resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs ran every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system, which managed job access to a complex topology of data queues and scheduled jobs to clusters through a SAM-Grid-to-OSG job-forwarding infrastructure. For the first time in the lifetime of the experiment, a data-intensive production activity was managed on a general-purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted under an opportunistic model, the challenges of operating a data-intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.
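    The scale quoted above implies some useful back-of-envelope numbers. The short Python sketch below derives them directly from the figures in the abstract; only the roughly 90-day duration (February to May) is an assumed value.

        # Back-of-envelope scale of the DZero reprocessing, using the figures
        # quoted in the abstract; the 90-day duration is an assumption.
        events = 500e6            # half a billion events
        data_tb = 100             # about 100 TB of data
        files = 300_000           # organized in 300,000 files
        jobs_per_day = 1_500      # about 1,500 jobs per day
        days = 90                 # assumed: February-May 2007

        avg_file_mb = data_tb * 1e6 / files       # ~333 MB per file
        kb_per_event = data_tb * 1e9 / events     # ~200 KB per event
        total_jobs = jobs_per_day * days          # ~135,000 jobs overall
        print(f"{avg_file_mb:.0f} MB/file, {kb_per_event:.0f} KB/event, "
              f"about {total_jobs:,} jobs")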

    GridCertLib: a Single Sign-on Solution for Grid Web Applications and Portals

    This paper describes the design and implementation of GridCertLib, a Java library that leverages a Shibboleth-based authentication infrastructure and the SLCS online certificate signing service to provide short-lived X.509 certificates and Grid proxies. The main use case envisioned for GridCertLib is to provide seamless and secure access to Grid/X.509 certificates and proxies in web applications and portals: when a user logs in to the portal using Shibboleth authentication, GridCertLib can automatically obtain a Grid/X.509 certificate from the SLCS service and generate a VOMS proxy from it. We give an overview of the architecture of GridCertLib and briefly describe its programming model. Its application to some deployment scenarios is outlined, as well as a report on practical experience integrating GridCertLib into portals for Bioinformatics and Computational Chemistry applications, based on the popular P-GRADE and Django software.
    Comment: 18 pages, 1 figure; final manuscript accepted for publication by the Journal of Grid Computing.
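    The certificate-and-proxy flow described above reduces to three steps: Shibboleth login, SLCS certificate issuance, VOMS proxy generation. The sketch below is a minimal, runnable Python illustration with the SLCS and VOMS calls stubbed out; GridCertLib itself is a Java library, and none of the names here are its actual API.

        # Runnable sketch of the single sign-on flow; slcs_issue and voms_proxy
        # are stubs standing in for real service calls, not GridCertLib methods.
        from dataclasses import dataclass

        @dataclass
        class Credential:
            subject: str
            kind: str            # "slcs-cert" or "voms-proxy"
            lifetime_h: int

        def slcs_issue(shib_session: str) -> Credential:
            # Stub: exchange an authenticated Shibboleth session for a
            # short-lived X.509 certificate signed by the SLCS service.
            return Credential(f"CN={shib_session}", "slcs-cert", 11)

        def voms_proxy(cert: Credential, vo: str) -> Credential:
            # Stub: derive a VOMS proxy carrying VO attributes from the cert.
            return Credential(cert.subject + f"/VO={vo}", "voms-proxy", 12)

        session = "alice@example-university"   # assumed Shibboleth session id
        proxy = voms_proxy(slcs_issue(session), vo="life-science")
        print(proxy)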

    High-resolution analysis of 1 day extreme precipitation in Sicily

    Sicily, a major Mediterranean island, has experienced several exceptional precipitation episodes and floods during the last century, with serious damage to human life and the environment. Long-term, rational planning of urban development is indispensable to protect the population and to avoid huge economic losses in the future. This requires a thorough knowledge of the distributional features of extreme precipitation over the complex territory of Sicily. In this study, we perform a detailed investigation of observed 1-day precipitation extremes and their frequency distribution, based on a dense set of high-quality, homogenized station records for 1921-2005. We estimate very high quantiles (return levels) corresponding to 10-, 50-, and 100-year return periods, as predicted by a generalized extreme value distribution. Return level estimates are produced on a regular high-resolution grid (30 arcsec) using a variant of regional frequency analysis combined with regression techniques. The results clearly reflect the complexity of this region and show that its eastern and northeastern parts are the most vulnerable, being prone to the most intense and potentially damaging events.
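    For readers unfamiliar with return levels: the T-year return level is the quantile of the annual-maximum distribution exceeded with probability 1/T in any given year. The sketch below fits a generalized extreme value (GEV) distribution to synthetic annual 1-day maxima and reads off the 10-, 50-, and 100-year levels; the study itself uses homogenized station records and regional frequency analysis rather than this plain single-station fit.

        # GEV return levels from annual maxima; the data here are synthetic
        # stand-ins (85 values, mirroring the 1921-2005 record length).
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(0)
        annual_max_mm = genextreme.rvs(c=-0.1, loc=60, scale=20,
                                       size=85, random_state=rng)

        c, loc, scale = genextreme.fit(annual_max_mm)   # fit GEV to maxima
        for T in (10, 50, 100):
            # The T-year level is the quantile with exceedance prob. 1/T.
            level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
            print(f"{T:>3}-year return level: {level:.0f} mm/day")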

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources.
    Comment: 15 pages, 9 figures.
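    The economic case for bursting into a cloud rests on paying for peak capacity only while it is needed, rather than owning it year-round. The sketch below makes that comparison concrete; every number in it (burst size, duration, prices) is an assumption chosen for illustration, not a figure from the paper.

        # Illustrative elasticity arithmetic; all inputs are assumed values.
        burst_cores = 58_000                      # hypothetical peak demand
        core_hours_needed = burst_cores * 24 * 14 # hypothetical 2-week burst
        spot_price_per_core_hour = 0.01           # assumed spot price, USD
        owned_cost_per_core_year = 15.0           # assumed amortized cost, USD

        cloud_cost = core_hours_needed * spot_price_per_core_hour
        owned_cost = burst_cores * owned_cost_per_core_year  # own peak all year
        print(f"rent for the burst: ${cloud_cost:,.0f}  "
              f"vs  own for the peak: ${owned_cost:,.0f}")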

    ReSS: A Resource Selection Service for the Open Science Grid

    The Open Science Grid offers access to hundreds of computing and storage resources via standard Grid interfaces. Before the deployment of an automated resource selection system, users had to submit jobs directly to these resources: they would manually select a resource and specify all relevant attributes in the job description prior to submitting the job. The need for human intervention in resource selection and attribute specification keeps automated job management components from accessing OSG resources, and it is inconvenient for users. The Resource Selection Service (ReSS) project addresses these shortcomings. The system integrates Condor technology, for the core matchmaking service, with the gLite CEMon component, for gathering and publishing resource information in the Glue Schema format. These components communicate over secure protocols via web service interfaces. The system is currently used in production on OSG by the DZero experiment, the Engagement Virtual Organization, and the Dark Energy Survey. It is also the resource selection service for the Fermilab Campus Grid, FermiGrid. ReSS is considered a lightweight solution for push-based workload management. This paper describes the architecture, performance, and typical usage of the system.
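    At its core, the matchmaking step compares published resource attributes against a job's requirements and ranks the eligible resources. The toy Python sketch below illustrates that idea; the attribute names and the rank rule are simplified stand-ins, not the actual Glue Schema attributes or Condor ClassAd syntax.

        # Toy push-based matchmaking: resources are published as attribute
        # sets and a job's requirements are evaluated against them.
        resources = [
            {"site": "A", "free_slots": 120, "os": "linux", "mem_mb": 2048},
            {"site": "B", "free_slots": 0,   "os": "linux", "mem_mb": 4096},
            {"site": "C", "free_slots": 35,  "os": "linux", "mem_mb": 1024},
        ]

        def matches(res, min_mem_mb=1500):
            # Requirements: a free slot and enough memory for the job.
            return res["free_slots"] > 0 and res["mem_mb"] >= min_mem_mb

        # Rank eligible resources (here: by free slots) and pick the best.
        eligible = [r for r in resources if matches(r)]
        best = max(eligible, key=lambda r: r["free_slots"])
        print("job forwarded to site", best["site"])   # -> site A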

    Proposed New Antiproton Experiments at Fermilab

    Fermilab operates the world's most intense source of antiprotons. Recently, various experiments have been proposed that could use those antiprotons, either parasitically during Tevatron Collider running or after the Tevatron Collider finishes in about 2010. We discuss the physics goals and prospects of the proposed experiments.
    Comment: 6 pages, 2 figures; to appear in the Proceedings of the IXth International Conference on Low Energy Antiproton Physics (LEAP'08), Vienna, Austria, September 16 to 19, 2008.

    New Experiments with Antiprotons

    Fermilab operates the world's most intense antiproton source. Newly proposed experiments could use those antiprotons, either parasitically during Tevatron Collider running or after the Tevatron Collider finishes in about 2011. For example, the annihilation of 8 GeV antiprotons might make the world's most intense source of tagged D^0 mesons, and thus the best near-term opportunity to study charm mixing and, via CP violation, to search for new physics. Other potential measurements include sensitive studies of hyperons and of the mysterious X, Y, and Z states. Production of antihydrogen in flight can be used for first searches for CPT violation in antihydrogen. With antiproton deceleration to low energy, an experiment using a Penning trap and an atom interferometer could make the world's first measurement of the gravitational force on antimatter.
    Comment: Prepared for the Proceedings of the 4th International Symposium on Symmetries in Subatomic Physics (SSP2009), June 2-5, 2009, Department of Physics, National Taiwan University, Taipei, Taiwan.