    Data handling in KLOE

    The KLOE experiment is going to acquire and manage petabytes of data. An efficient and easy-to-use system is essential to cope with this amount of data. In this paper a general overview of the approach chosen at KLOE is presented.

    Measuring gravitational lensing of the cosmic microwave background using cross correlation with large scale structure

    We cross-correlate the gravitational lensing map extracted from cosmic microwave background measurements by the Wilkinson Microwave Anisotropy Probe (WMAP) with the radio galaxy distribution from the NRAO VLA Sky Survey (NVSS) using a quadratic estimator technique. We use the full covariance matrix to filter the data and calculate the cross-power spectra for the lensing-galaxy correlation. We explore the impact of changing the values of cosmological parameters on the lensing reconstruction, and obtain statistical detection significances of >3σ. The results of all cross correlations pass the curl null test as well as a complementary diagnostic test using the NVSS data in equatorial coordinates. We forecast the potential for Planck and NVSS to constrain the lensing-galaxy cross correlation as well as the galaxy bias. The lensing-galaxy cross-power spectra are found to be Gaussian distributed.
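
    The core of the measurement is a cross-power spectrum between two sky maps. The following is a minimal Python sketch of that step, assuming healpy is available; the file names and the map resolution are hypothetical stand-ins, not taken from the paper, and the paper's full-covariance filtering is only noted in a comment.

        import healpy as hp
        import numpy as np

        nside = 256            # assumed common resolution of both maps
        lmax = 2 * nside

        # Hypothetical inputs: a reconstructed lensing-convergence map and an
        # NVSS galaxy-overdensity map on the same HEALPix grid.
        kappa = hp.read_map("wmap_lensing_kappa.fits")
        gals = hp.read_map("nvss_overdensity.fits")

        # With two maps, anafast returns the cross-power spectrum C_l.
        cl_cross = hp.anafast(kappa, gals, lmax=lmax)

        # Binning into bandpowers and a covariance-weighted significance
        # estimate would follow here; the paper uses the full covariance matrix.
        ells = np.arange(lmax + 1)
        print(cl_cross.shape, ells.shape)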

    Flexible Session Management in a Distributed Environment

    Many secure communication libraries used by distributed systems, such as SSL, TLS, and Kerberos, fail to make a clear distinction between the authentication, session, and communication layers. In this paper we introduce CEDAR, the secure communication library used by the Condor High Throughput Computing software, and present the advantages to a distributed computing system resulting from CEDAR's separation of these layers. Regardless of the authentication method used, CEDAR establishes a secure session key, which has the flexibility to be used for multiple capabilities. We demonstrate how a layered approach to security sessions can avoid the round-trips and latency inherent in network authentication. The creation of a distinct session management layer allows for optimizations to improve scalability by way of delegating sessions to other components in the system. This session delegation creates a chain of trust that reduces the overhead of establishing secure connections and enables centralized enforcement of system-wide security policies. Additionally, secure channels based upon UDP datagrams are often overlooked by existing libraries; we show how CEDAR's structure accommodates this as well. As an example of the utility of this work, we show how the use of delegated security sessions and other techniques inherent in CEDAR's architecture enables US CMS to meet its scalability requirements in deploying Condor over large-scale, wide-area grid systems.
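
    CEDAR itself is a C++ library inside Condor and its API is not reproduced here; the hypothetical Python sketch below only illustrates the layering idea the abstract describes: authenticate once, keep the resulting key in a distinct session layer, and delegate that session so another component can secure traffic (including UDP datagrams) without new authentication round-trips. All names are illustrative.

        import hmac, hashlib, secrets

        class SecuritySession:
            """Session layer: a key established once, reused by many channels."""
            def __init__(self, peer: str, key: bytes):
                self.peer, self.key = peer, key

            def seal(self, payload: bytes) -> bytes:
                # Communication layer: authenticate a datagram with the session
                # key; no per-message authentication round-trip is needed.
                return hmac.new(self.key, payload, hashlib.sha256).digest() + payload

        def authenticate(peer: str) -> SecuritySession:
            # Stand-in for any authentication method (Kerberos, GSI, ...);
            # whichever is used, the outcome is a shared session key.
            return SecuritySession(peer, secrets.token_bytes(32))

        def delegate(session: SecuritySession) -> SecuritySession:
            # Delegation hands the established session to another component,
            # extending the chain of trust without re-authenticating.
            return SecuritySession(session.peer, session.key)

        submit_side = authenticate("worker.example.org")   # one handshake
        helper = delegate(submit_side)                     # zero handshakes
        datagram = helper.seal(b"status update")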

    Use of glide-ins in CMS for production and analysis

    With the evolution of various grid federations, Condor glide-ins represent a key feature in providing a homogeneous pool of resources using late-binding technology. The CMS collaboration uses the glide-in based Workload Management System, glideinWMS, for production (ProdAgent) and distributed analysis (CRAB) of the data. The Condor glide-in daemons are submitted to the worker nodes via Condor-G. Once activated, they preserve the master-worker relationship: the worker first validates the execution environment on the node, then pulls jobs sequentially until its lifetime expires. The combination of late binding and validation significantly reduces the overall failure rate visible to CMS physicists. We discuss the extensive use of glideinWMS since the CCRC-08 computing challenge in preparation for the forthcoming LHC data-taking period. The key features essential to the success of large-scale production and analysis on CMS resources across the major grid federations, including EGEE, OSG and NorduGrid, are outlined. Use of glide-ins via the CRAB server mechanism and ProdAgent, as well as first-hand experience of using the next-generation CREAM computing element within the CMS framework, is discussed.
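
    The late-binding pattern is compact enough to sketch. This hypothetical Python outline (function names are illustrative, not glideinWMS APIs) shows why the pattern hides infrastructure failures from users: a glide-in that fails validation simply never pulls user jobs, and only genuine job failures become visible.

        import time

        def validate_environment() -> bool:
            # Stand-in checks: scratch space, CMS software area, connectivity.
            return True

        def pull_next_job(pool: list):
            # Stand-in for matching against the central Condor pool;
            # returns None when no compatible job is queued.
            return pool.pop(0) if pool else None

        def run_glidein(pool: list, lifetime_s: float) -> None:
            deadline = time.time() + lifetime_s
            if not validate_environment():
                return                      # a bad node never sees user jobs
            while time.time() < deadline:
                job = pull_next_job(pool)
                if job is None:
                    break
                job()                       # only real job failures reach users

        run_glidein([lambda: print("event simulation"),
                     lambda: print("analysis step")], lifetime_s=3600)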

    CDF experience with Monte Carlo production using the LCG grid

    The upgrades of the Tevatron collider and the CDF detector have considerably increased the demand on computing resources, in particular for Monte Carlo production. This has forced the collaboration to move beyond the use of dedicated resources and start exploiting the Grid. The CDF Analysis Farm (CAF) model has been reimplemented as LcgCAF in order to access Grid resources using the LCG/EGEE middleware. Many sites in Italy and across Europe are accessed through this portal by CDF users, mainly to produce Monte Carlo data but also for other analysis jobs. We review here the setup used to submit jobs to Grid sites and retrieve the output, including CDF-specific configuration of some Grid components. We also describe the batch and interactive monitoring tools developed to allow users to verify job status during their lifetime in the Grid environment. Finally, we analyze the efficiency and typical failure modes of the current Grid infrastructure, reporting the performance of the different parts of the system used.
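
    As a toy illustration of the bookkeeping behind such an efficiency report (the states and record fields are hypothetical simplifications, not LcgCAF's actual schema):

        from collections import Counter

        DONE, ABORTED = "Done", "Aborted"   # simplified terminal states

        def summarize(jobs):
            """Tally terminal states to estimate efficiency and failure modes."""
            states = Counter(j["state"] for j in jobs)
            total = sum(states.values())
            return (states[DONE] / total if total else 0.0), states

        jobs = [{"state": DONE}, {"state": DONE}, {"state": ABORTED}]
        efficiency, breakdown = summarize(jobs)   # 0.67 and the per-state tally
        print(f"efficiency={efficiency:.0%}", dict(breakdown))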

    CMS@home: Integrating the Volunteer Cloud and High-Throughput Computing

    Volunteer computing has the potential to provide significant additional computing capacity for the LHC experiments. Initiatives such as the CMS@home project aim to integrate volunteer computing resources into the experiments' computational frameworks to support their scientific workloads. This is especially important, as over the next few years the demands on computing capacity will increase beyond what can be supported by general technology trends. This paper describes how a volunteer computing project that uses virtualization to run high energy physics simulations can integrate those resources into an experiment's computing infrastructure. The concept of the volunteer cloud is introduced, and it is shown how this model can simplify the integration. An architecture for implementing the volunteer cloud model is presented, along with an implementation for the CMS@home project. Finally, the submission of real CMS workloads to this volunteer cloud is compared to identical workloads submitted to the grid.
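
    A toy Python sketch of that kind of comparison follows; the job records and numbers are invented for illustration and are not the paper's results.

        from statistics import mean

        def summarize(jobs):
            ok = [j for j in jobs if j["exit_code"] == 0]
            return len(ok) / len(jobs), mean(j["wall_s"] for j in ok)

        # Invented records: volunteer resources typically trade longer, more
        # variable wall times for extra, essentially free capacity.
        grid      = [{"exit_code": 0, "wall_s": 3600}, {"exit_code": 1, "wall_s": 400}]
        volunteer = [{"exit_code": 0, "wall_s": 5400}, {"exit_code": 0, "wall_s": 6100}]

        for name, jobs in (("grid", grid), ("volunteer", volunteer)):
            rate, wall = summarize(jobs)
            print(f"{name}: success={rate:.0%}, mean wall={wall:.0f}s")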