
    Simulating Distributed Systems

    The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project as a design and optimization tool for large-scale distributed systems is presented. Its goals are to provide a realistic simulation of distributed computing systems, customized for specific physics data processing tasks, and to offer a flexible and dynamic environment for evaluating the performance of a range of possible distributed computing architectures. A detailed simulation of a large system, the CMS High Level Trigger (HLT) production farm, is also presented.
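    The abstract gives no code; purely as an illustration of the kind of discrete-event modeling such a framework performs, the sketch below (Python, with invented job and farm parameters, not MONARC's actual model) simulates jobs competing for CPUs on a processing farm:

    ```python
    import heapq
    import random

    def simulate_farm(n_cpus=10, n_jobs=100, mean_service=5.0,
                      mean_arrival=0.5, seed=1):
        """Toy discrete-event model of a processing farm: jobs arrive,
        wait for a free CPU, run, and release the CPU."""
        rng = random.Random(seed)
        events = []  # min-heap of (time, kind, job_id)
        t = 0.0
        for j in range(n_jobs):
            t += rng.expovariate(1.0 / mean_arrival)
            heapq.heappush(events, (t, "arrive", j))
        free_cpus, queue, turnaround, arrived = n_cpus, [], [], {}
        while events:
            now, kind, j = heapq.heappop(events)
            if kind == "arrive":
                queue.append(j)
                arrived[j] = now
            else:  # "finish"
                free_cpus += 1
                turnaround.append(now - arrived[j])
            # dispatch queued jobs onto any free CPUs
            while free_cpus and queue:
                k = queue.pop(0)
                free_cpus -= 1
                finish = now + rng.expovariate(1.0 / mean_service)
                heapq.heappush(events, (finish, "finish", k))
        return sum(turnaround) / len(turnaround)

    print(f"mean job turnaround: {simulate_farm():.2f} (arbitrary time units)")
    ```

    Varying n_cpus against the arrival rate is the simplest version of the design-space exploration such a simulator supports.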

    Object Database Scalability for Scientific Workloads

    We describe the PetaByte-scale computing challenges posed by the next generation of particle physics experiments, due to start operation in 2005. The computing models adopted by the experiments call for systems capable of handling sustained data acquisition rates of at least 100 MBytes/second into an Object Database, which will have to handle several PetaBytes of accumulated data per year. The systems will be used to schedule CPU-intensive reconstruction and analysis tasks on the highly complex physics Object data, which must then be served to clients located at universities and laboratories worldwide. We report on measurements with a prototype system that makes use of a 256-CPU HP Exemplar X Class machine running the Objectivity/DB database. Our results show excellent scalability for up to 240 simultaneous database clients, and aggregate I/O rates exceeding 150 MBytes/second, indicating the viability of the computing models.
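    The actual tests read physics objects from Objectivity/DB on the Exemplar; as a rough, self-contained sketch of the measurement methodology only (plain file reads standing in for database access, with hypothetical object sizes and read counts), one could measure aggregate throughput across many concurrent clients like this:

    ```python
    import os
    import tempfile
    import time
    from concurrent.futures import ThreadPoolExecutor

    OBJ_SIZE = 1024 * 1024  # 1 MByte per "object", hypothetical

    def make_object():
        """Write one fixed-size object to disk as a stand-in for a DB object."""
        f = tempfile.NamedTemporaryFile(delete=False)
        f.write(os.urandom(OBJ_SIZE))
        f.close()
        return f.name

    def client(path, n_reads=20):
        """Stand-in for one database client: repeatedly read the object."""
        total = 0
        for _ in range(n_reads):
            with open(path, "rb") as fh:
                total += len(fh.read())
        return total

    def aggregate_rate(n_clients=240):
        path = make_object()
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=n_clients) as pool:
            total = sum(pool.map(client, [path] * n_clients))
        elapsed = time.perf_counter() - start
        os.unlink(path)
        return total / elapsed / 1e6  # MBytes/second

    print(f"aggregate read rate with 240 clients: {aggregate_rate():.1f} MBytes/s")
    ```

    Sweeping n_clients and plotting the aggregate rate is the scalability curve the paper reports for the real system.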

    A self-organizing neural network for job scheduling in distributed systems


    Search for Randall-Sundrum excitations of gravitons decaying into two photons for CMS at LHC

    The CMS detector's discovery potential for the resonant production of massive Kaluza-Klein excitations expected in the Randall-Sundrum model is studied. Full simulation and reconstruction are used to study the diphoton decay of Randall-Sundrum gravitons. For an integrated luminosity of 30 fb^-1, the diphoton decay of a Randall-Sundrum graviton can be discovered at the 5 sigma level for masses up to 1.61 TeV in the case of weak coupling between graviton excitations and Standard Model particles (c = 0.01). Heavier resonances can be detected for a larger coupling constant (c = 0.1), with a mass reach of 3.95 TeV.
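    The study itself uses full simulation; as a back-of-envelope illustration of what a "5 sigma" counting significance means, one common estimator is S = N_s / sqrt(N_b). The yields below are invented for the example, not the paper's numbers:

    ```python
    import math

    def significance(n_signal, n_background):
        """Simple counting significance S = N_s / sqrt(N_b); the CMS study
        relies on full simulation, this is only the rough counting version."""
        return n_signal / math.sqrt(n_background)

    # Hypothetical diphoton yields for 30 fb^-1 (illustrative only)
    for n_s, n_b in [(50, 40), (25, 40), (12, 40)]:
        print(f"N_s={n_s:3d}  N_b={n_b:3d}  ->  S = {significance(n_s, n_b):.1f} sigma")
    ```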

    The Clarens web services architecture

    Clarens is a uniquely flexible web services infrastructure providing a unified access protocol to a diverse set of functions useful to the HEP community. It uses the standard HTTP protocol combined with application-layer, certificate-based authentication to provide single sign-on for individuals, organizations and hosts, with fine-grained access control to services, files and virtual organization (VO) management. This contribution describes the server functionality, while client applications are described in a subsequent talk.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 6 pages, LaTeX, 4 figures; PSN MONT00
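    Since the protocol is plain HTTP with X.509 client-certificate authentication, a minimal client-side sketch can use an off-the-shelf HTTP library; the server URL, request body, and file names below are placeholders, not the actual Clarens API:

    ```python
    import requests

    # Placeholder endpoint and credentials; a real Clarens deployment publishes
    # its own service URL and expects the caller's Grid (X.509) certificate.
    SERVER = "https://clarens.example.org:8443/clarens"
    CERT = ("usercert.pem", "userkey.pem")  # client certificate + private key

    # POST a request body over TLS; the client certificate supplies the
    # single sign-on identity that Clarens-style servers authorize against.
    resp = requests.post(SERVER, data=b"<methodCall>...</methodCall>",
                         cert=CERT, timeout=30)
    resp.raise_for_status()
    print(resp.status_code, len(resp.content), "bytes returned")
    ```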

    Distributed Heterogeneous Relational Data Warehouse In A Grid Environment

    This paper examines how a "Distributed Heterogeneous Relational Data Warehouse" can be integrated in a Grid environment that will provide physicists with efficient access to large and small object collections drawn from databases at multiple sites. The paper investigates the requirements of Grid-enabling such a warehouse, and explores how these requirements may be met by extensions to existing Grid middleware. We present initial results obtained with a working prototype warehouse of this kind using both SQLServer and Oracle9i, where a Grid-enabled web-services interface makes it easier for web applications to access the distributed contents of the databases securely. Based on the success of the prototype, we propose a framework for using a heterogeneous relational data warehouse through the web-service interface, creating a single "Virtual Database System" for users. The ability to transparently access data in this way, as demonstrated in the prototype, is likely to be a very powerful facility for HENP and other Grid users wishing to collate and analyze information distributed over the Grid.
    Comment: 4 pages, 6 figures
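    As a language-level illustration of the "Virtual Database System" idea only, the sketch below fans one query out to several stand-in databases and merges the rows; sqlite3 stands in for the prototype's SQLServer and Oracle9i back ends so the example is self-contained, and the schema is invented:

    ```python
    import sqlite3

    def make_site(name, rows):
        """Stand-in for one warehouse site; real sites ran SQLServer/Oracle9i."""
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE events (run INTEGER, dataset TEXT)")
        db.executemany("INSERT INTO events VALUES (?, ?)", rows)
        return name, db

    SITES = [
        make_site("siteA", [(1, "jetmet"), (2, "jetmet")]),
        make_site("siteB", [(7, "jetmet"), (9, "muon")]),
    ]

    def virtual_query(sql, params=()):
        """Run the same SQL at every site and merge the results, so the
        caller sees a single virtual database rather than many sites."""
        out = []
        for name, db in SITES:
            out += [(name, *row) for row in db.execute(sql, params)]
        return out

    for row in virtual_query("SELECT run FROM events WHERE dataset = ?", ("jetmet",)):
        print(row)
    ```

    In the real system the per-site dialect differences and security are handled behind the Grid-enabled web-services interface rather than in the client.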

    Clarens Client and Server Applications

    Several applications have been implemented with access via the Clarens web service infrastructure, including virtual organization management, JetMET physics data analysis using relational databases, and Storage Resource Broker (SRB) access. This functionality is accessible transparently from Python scripts, from the ROOT analysis framework, and from Java applications and browser applets.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, LaTeX, no figures, PSN TUCT00
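    On the Python side, one plausible minimal sketch uses the standard library's XML-RPC client over HTTPS with a client certificate; the URL and file names are placeholders, and system.listMethods is the generic XML-RPC introspection call rather than a guaranteed Clarens method:

    ```python
    import ssl
    import xmlrpc.client

    # Placeholder URL; authentication uses the caller's Grid certificate.
    URL = "https://clarens.example.org:8443/clarens"

    ctx = ssl.create_default_context()
    ctx.load_cert_chain("usercert.pem", "userkey.pem")  # X.509 client identity

    proxy = xmlrpc.client.ServerProxy(URL, context=ctx)
    # Standard XML-RPC introspection: list whatever methods the server exports.
    print(proxy.system.listMethods())
    ```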

    A Quantum Monte Carlo Method at Fixed Energy

    In this paper we explore new ways to study the zero-temperature limit of quantum statistical mechanics using Quantum Monte Carlo simulations. We develop a Quantum Monte Carlo method in which one fixes the ground state energy as a parameter. The Hamiltonians we consider are of the form H = H_0 + λV, with ground state energy E. For fixed H_0 and V, one can view E as a function of λ, whereas we view λ as a function of E. We fix E and define a path-integral Quantum Monte Carlo method in which a path makes no reference to the times (discrete or continuous) at which transitions occur between states. For fixed E we can determine λ(E) and other ground state properties of H.
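    As a concrete (non-Monte-Carlo) illustration of treating λ as a function of E, one can take a small toy Hamiltonian H = H_0 + λV, diagonalize it exactly, and invert the ground state energy by bisection; the 2x2 matrices below are invented purely for the example:

    ```python
    import numpy as np

    # Toy Hamiltonian H = H0 + lam*V (matrices invented for illustration)
    H0 = np.diag([0.0, 2.0])
    V = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    def ground_energy(lam):
        """E0(lambda): lowest eigenvalue of H0 + lam*V."""
        return np.linalg.eigvalsh(H0 + lam * V)[0]

    def lambda_of_E(E, lo=0.0, hi=10.0, tol=1e-10):
        """Invert E0(lambda) = E by bisection; for this V, E0 decreases
        monotonically as lam grows, so the root is bracketed on [lo, hi]."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if ground_energy(mid) > E:
                lo = mid  # energy still too high: need stronger coupling
            else:
                hi = mid
        return 0.5 * (lo + hi)

    E_target = -1.0  # here E0 = 1 - sqrt(1 + lam^2), so lambda(E=-1) = sqrt(3)
    lam = lambda_of_E(E_target)
    print(f"lambda(E={E_target}) = {lam:.6f}, check E0 = {ground_energy(lam):.6f}")
    ```

    The paper's contribution is doing this inversion stochastically, within the path-integral Monte Carlo itself, rather than by exact diagonalization.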

    US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community

    US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis is taking place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers, as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet has the major role. US LHCNet manages and operates the transatlantic network infrastructure, including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10 Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability level in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design, including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, and facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just in time to meet the needs, as demonstrated in past years during the changing LHC start-up plans.
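    As a rough illustration of why path diversity drives the quoted availability target, assume (hypothetically) that each diverse link fails independently with some per-link availability; then the probability that at least one path is up grows quickly with the number of links:

    ```python
    # Hypothetical per-link availability; real submarine links differ and
    # failures are not fully independent, so this is only indicative.
    LINK_AVAILABILITY = 0.995

    def parallel_availability(a, n):
        """Probability that at least one of n independent diverse links is up."""
        return 1.0 - (1.0 - a) ** n

    for n in (1, 2, 4):
        up = parallel_availability(LINK_AVAILABILITY, n)
        print(f"{n} diverse link(s): {up:.6%} available")
    ```

    With two or more facility-diverse paths, even modest per-link availability comfortably exceeds a 99.95% service target under this simple independence assumption.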