Simulating Distributed Systems
The simulation framework developed within the "Models of Networked Analysis at Regional Centers" (MONARC) project as a design and optimization tool for large-scale distributed systems is presented. The goals are to provide a realistic simulation of distributed computing systems, customized for specific physics data processing tasks, and to offer a flexible and dynamic environment to evaluate the performance of a range of possible distributed computing architectures. A detailed simulation of a large system, the CMS High Level Trigger (HLT) production farm, is also presented.
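The process-oriented, discrete-event style of such a simulation can be illustrated with a minimal sketch. This is not MONARC code; the farm model and parameters below are illustrative assumptions:

```python
import heapq

def simulate_farm(num_cpus, job_times):
    """Toy discrete-event simulation: each job runs on the first CPU to
    become free; returns the makespan of the whole batch (illustrative)."""
    # Each heap entry is the time at which one CPU next becomes free.
    cpus = [0.0] * num_cpus
    heapq.heapify(cpus)
    finish_times = []
    for t in job_times:
        free_at = heapq.heappop(cpus)   # earliest-available CPU
        done = free_at + t
        finish_times.append(done)
        heapq.heappush(cpus, done)
    return max(finish_times)

# 4 CPUs, 8 one-hour jobs -> two waves of processing, makespan 2.0
print(simulate_farm(4, [1.0] * 8))
```

A real framework adds network transfers, databases, and job interdependencies on top of the same event-queue core.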
Object Database Scalability for Scientific Workloads
We describe the PetaByte-scale computing challenges posed by the next generation of particle physics experiments, due to start operation in 2005. The computing models adopted by the experiments call for systems capable of handling sustained data acquisition rates of at least 100 MBytes/second into an Object Database, which will have to handle several PetaBytes of accumulated data per year. The systems will be used to schedule CPU-intensive reconstruction and analysis tasks on the highly complex physics Object data, which must then be served to clients located at universities and laboratories worldwide. We report on measurements with a prototype system that makes use of a 256-CPU HP Exemplar X Class machine running the Objectivity/DB database. Our results show excellent scalability for up to 240 simultaneous database clients, and aggregate I/O rates exceeding 150 MBytes/second, indicating the viability of the computing models.
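The scaling behavior being tested can be captured by a simple saturation model (an illustrative sketch, not the paper's analysis): aggregate throughput grows linearly with the number of clients until a shared backend resource saturates.

```python
def aggregate_rate(clients, per_client_mb_s, backend_limit_mb_s):
    """Aggregate I/O rate: linear in the number of clients until the
    shared backend (disks, interconnect) saturates (illustrative model)."""
    return min(clients * per_client_mb_s, backend_limit_mb_s)

# e.g. clients at 1 MB/s each against a hypothetical 150 MB/s backend
for n in (60, 120, 240):
    print(n, aggregate_rate(n, 1.0, 150.0))
```

Observing near-linear scaling up to hundreds of clients, as the measurements report, indicates the backend limit had not yet been reached.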
Next Generation Integrated Environment for Collaborative Work Across Internets
We are now well advanced in the development, prototyping and deployment of a high performance next generation Integrated Environment for Collaborative Work. The system, aimed at using the capability of ESnet and Internet2 for rapid data exchange, is based on the Virtual Room Videoconferencing System (VRVS) developed by Caltech. The VRVS system has been chosen by the Internet2 Digital Video (I2-DV) Initiative as a preferred foundation for the development of advanced video, audio and multimedia collaborative applications by the Internet2 community. Today, the system supports high-end, broadcast-quality interactivity while enabling a wide variety of clients (Mbone, H.323) to participate in the same conference, running different standard protocols in different contexts under different bandwidth constraints. It offers a fully Web-integrated user interface, developer and administrative APIs, a widely scalable video network topology based on both multicast domains and unicast tunnels, and demonstrated multiplatform support. This has led to its rapidly expanding production use for national and international scientific collaborations in more than 60 countries. We are also in the process of creating a 'testbed video network' and developing the necessary middleware to support a set of new and essential requirements for rapid data exchange and a high level of interactivity in large-scale scientific collaborations. These include a set of tunable, scalable differentiated network services adapted to each of the data streams associated with a large number of collaborative sessions; policy-based and network-state-based resource scheduling; authentication; and optional encryption to maintain the confidentiality of inter-personal communications. High-performance testbed video networks will be established in ESnet and Internet2 to test and tune the implementation, using a few target application sets.
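The hybrid topology of multicast domains and unicast tunnels can be sketched as a forwarding rule in a reflector node. This is not VRVS code; the domain table and names are illustrative assumptions:

```python
def forwarding_targets(domains):
    """Toy reflector rule: one multicast send inside each multicast-capable
    domain, one unicast tunnel per non-multicast domain (illustrative)."""
    targets = []
    for name, info in domains.items():
        if info["multicast"]:
            targets.append(("multicast", info["group"]))
        else:
            targets.append(("unicast-tunnel", info["gateway"]))
    return targets

# Hypothetical domain table: two multicast-enabled backbones, one site
# reachable only through a unicast tunnel.
domains = {
    "esnet":     {"multicast": True,  "group": "224.2.0.1"},
    "internet2": {"multicast": True,  "group": "224.2.0.2"},
    "site-x":    {"multicast": False, "gateway": "gw.site-x.example"},
}
print(forwarding_targets(domains))
```

The point of the hybrid design is that each stream is sent once per domain rather than once per participant, which is what makes the topology widely scalable.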
Search for Randall-Sundrum excitations of gravitons decaying into two photons for CMS at LHC
The discovery potential of the CMS detector for resonant production of the massive Kaluza-Klein excitations expected in the Randall-Sundrum model is studied. Full simulation and reconstruction are used to study the diphoton decay of Randall-Sundrum gravitons. For an integrated luminosity of 30 fb^-1, the diphoton decay of a Randall-Sundrum graviton can be discovered at the 5 sigma level for masses up to 1.61 TeV in the case of weak coupling between graviton excitations and Standard Model particles (c=0.01). Heavier resonances can be detected for a larger coupling constant (c=0.1), with a mass reach of 3.95 TeV.
The Clarens web services architecture
Clarens is a uniquely flexible web services infrastructure providing a
unified access protocol to a diverse set of functions useful to the HEP
community. It uses the standard HTTP protocol combined with application-layer,
certificate-based authentication to provide single sign-on to individuals,
organizations and hosts, with fine-grained access control to services, files
and virtual organization (VO) management. This contribution describes the
server functionality, while client applications are described in a subsequent
talk. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003; 6 pages, LaTeX, 4 figures, PSN
MONT00
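The fine-grained access control described above can be sketched as a lookup from an authenticated identity (as extracted from the client certificate) and a requested service to a permission. The table and function names below are illustrative assumptions, not Clarens' actual API:

```python
# Illustrative ACL: maps (VO, service) -> set of allowed operations.
ACL = {
    ("cms", "file"):  {"ls", "read"},
    ("cms", "admin"): set(),           # no admin rights for plain members
}

def is_allowed(vo, service, operation):
    """Return True if a caller in virtual organization `vo` may invoke
    `operation` on `service`; unknown pairs default to deny."""
    return operation in ACL.get((vo, service), set())

print(is_allowed("cms", "file", "read"))   # True
print(is_allowed("cms", "admin", "stop"))  # False
```

Defaulting to deny for unknown (VO, service) pairs is the conservative choice for a multi-organization service.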
Distributed Heterogeneous Relational Data Warehouse In A Grid Environment
This paper examines how a "Distributed Heterogeneous Relational Data
Warehouse" can be integrated in a Grid environment that will provide physicists
with efficient access to large and small object collections drawn from
databases at multiple sites. This paper investigates the requirements of
Grid-enabling such a warehouse, and explores how these requirements may be met
by extensions to existing Grid middleware. We present initial results obtained
with a working prototype warehouse of this kind using both SQLServer and
Oracle9i, where a Grid-enabled web-services interface makes it easier for
web-applications to access the distributed contents of the databases securely.
Based on the success of the prototype, we propose a framework for using a
heterogeneous relational data warehouse through the web-service interface,
creating a single "Virtual Database System" for users. The ability to
transparently access data in this way, as shown in the prototype, is likely to
be a very powerful facility for HENP and other Grid users wishing to collate
and analyze information distributed over the Grid. Comment: 4 pages, 6 figures
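The "Virtual Database System" idea of collating object collections from databases at multiple sites can be sketched with two in-memory databases standing in for the per-site RDBMS backends. The schema and helper names are illustrative assumptions, not the prototype's interface:

```python
import sqlite3

def make_site(rows):
    """Create an in-memory database standing in for one site's RDBMS."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (run INTEGER, size_mb REAL)")
    db.executemany("INSERT INTO events VALUES (?, ?)", rows)
    return db

def virtual_query(sites, sql):
    """'Virtual database' sketch: run the same query at every site and
    collate the partial results into one answer for the user."""
    out = []
    for db in sites:
        out.extend(db.execute(sql).fetchall())
    return sorted(out)

sites = [make_site([(1, 10.0), (2, 20.0)]),
         make_site([(3, 30.0)])]
print(virtual_query(sites, "SELECT run, size_mb FROM events WHERE size_mb > 15"))
```

A production system must additionally translate between SQL dialects (e.g. SQLServer vs. Oracle9i) and enforce per-site access control, which is where the Grid middleware extensions come in.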
Clarens Client and Server Applications
Several applications have been implemented with access via the Clarens web
service infrastructure, including virtual organization management, JetMET
physics data analysis using relational databases, and Storage Resource Broker
(SRB) access. This functionality is accessible transparently from Python
scripts, the Root analysis framework and from Java applications and browser
applets. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics
(CHEP03), La Jolla, CA, USA, March 2003; 4 pages, LaTeX, no figures, PSN
TUCT00
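Access from Python scripts can be illustrated by marshalling a remote call as the request body a web-service client would POST over HTTPS, using the standard library's XML-RPC support. The method name `file.ls` and its argument are hypothetical, not Clarens' documented interface:

```python
import xmlrpc.client

# Marshal a call to a hypothetical remote method `file.ls` as the
# XML-RPC request body a web-service client would POST over HTTPS.
body = xmlrpc.client.dumps(("/store/data",), methodname="file.ls")
print("file.ls" in body)                 # True: method name is embedded

# Round-trip: unmarshal the request back into Python objects.
params, method = xmlrpc.client.loads(body)
print(method, params)
```

The actual transport would add the certificate-based authentication layer on top of this marshalled payload.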
A Quantum Monte Carlo Method at Fixed Energy
In this paper we explore new ways to study the zero temperature limit of
quantum statistical mechanics using Quantum Monte Carlo simulations. We develop
a Quantum Monte Carlo method in which one fixes the ground state energy as a
parameter. The Hamiltonians we consider are of the form H = H_0 + λV, with
ground state energy E. For fixed H_0 and V, one can view E as a function of
λ, whereas we view λ as a function of E. We fix E and define a path integral
Quantum Monte Carlo method in which a path makes no reference to the times
(discrete or continuous) at which transitions occur between states. For fixed
E we can determine λ(E) and other ground state properties of H.
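The inversion of roles between the energy and the coupling can be illustrated on a toy two-level Hamiltonian (a sketch of the idea only, not the paper's path-integral method): for H_0 = diag(0, 1) and V the off-diagonal coupling, the ground state energy E(λ) = (1 − sqrt(1 + 4λ²))/2 is monotone decreasing in λ ≥ 0, so λ(E) can be recovered by bisection.

```python
import math

def ground_energy(lam):
    """Ground state energy of H = [[0, lam], [lam, 1]]."""
    return (1.0 - math.sqrt(1.0 + 4.0 * lam * lam)) / 2.0

def lam_of_E(E, hi=100.0, tol=1e-12):
    """Invert E(lam) by bisection: E is strictly decreasing on [0, hi]."""
    lo, up = 0.0, hi
    while up - lo > tol:
        mid = 0.5 * (lo + up)
        if ground_energy(mid) > E:
            lo = mid    # energy still too high -> need larger coupling
        else:
            up = mid
    return 0.5 * (lo + up)

lam = lam_of_E(-1.0)       # which coupling gives ground energy -1?
print(round(ground_energy(lam), 9))   # -1.0
```

Here E = −1 gives λ = √2 exactly, since 1 − sqrt(1 + 8) = −2. The paper's method determines λ(E) stochastically rather than by diagonalization, but the fixed-E viewpoint is the same.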
US LHCNet: Transatlantic Networking for the LHC and the U.S. HEP Community
US LHCNet provides the transatlantic connectivity between the Tier1 computing facilities at the Fermilab and Brookhaven National Labs and the Tier0 and Tier1 facilities at CERN, as well as Tier1s elsewhere in Europe and Asia. Together with ESnet, Internet2, and other R&E networks participating in the LHCONE initiative, US LHCNet also supports transatlantic connections between the Tier2 centers (where most of the data analysis is taking place) and the Tier1s as needed. Given the key roles of the US and European Tier1 centers, as well as Tier2 centers on both continents, the largest data flows are across the Atlantic, where US LHCNet has the major role. US LHCNet manages and operates the transatlantic network infrastructure, including four Points of Presence (PoPs) and currently six transatlantic OC-192 (10 Gbps) leased links. Operating at the optical layer, the network provides a highly resilient fabric for data movement, with a target service availability level in excess of 99.95%. This level of resilience and seamless operation is achieved through careful design, including path diversity on both submarine and terrestrial segments, use of carrier-grade equipment with built-in high-availability and redundancy features, deployment of robust failover mechanisms based on SONET protection schemes, and the design of facility-diverse paths between the LHC computing sites. The US LHCNet network provides services at Layer 1 (optical), Layer 2 (Ethernet) and Layer 3 (IPv4 and IPv6). The flexible design of the network, including modular equipment, a talented and agile team, and flexible circuit lease management, allows US LHCNet to react quickly to changing requirements from the LHC community. Network capacity is provisioned just-in-time to meet the needs, as demonstrated in past years during the changing LHC start-up plans.
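The role of path diversity in reaching a 99.95% availability target can be seen with a simple independence model (illustrative numbers, not US LHCNet's actual link figures): with n diverse paths, the service is down only when all n fail simultaneously.

```python
def diverse_availability(a, n=2):
    """Availability of n independent, diverse paths, each with availability a:
    the service is down only if all n paths are down at once (toy model)."""
    return 1.0 - (1.0 - a) ** n

# A single hypothetical 99.5%-available link falls short of a 99.95%
# target, but two diverse links comfortably exceed it (under independence).
print(round(diverse_availability(0.995, 1), 6))
print(round(diverse_availability(0.995, 2), 6))
```

In practice failures are not fully independent, which is why the design also insists on facility diversity and separate submarine/terrestrial routes: correlated failure modes are what break the independence assumption.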