91 research outputs found
DZero data-intensive computing on the Open Science Grid
High energy physics experiments periodically reprocess data in order to take advantage of improved understanding of the detector and the data processing code. Between February and May 2007, the DZero experiment reprocessed a substantial fraction of its dataset: half a billion events, corresponding to about 100 TB of data, organized in 300,000 files. The activity utilized resources from sites around the world, including a dozen sites participating in the Open Science Grid consortium (OSG). About 1,500 jobs were run every day across the OSG, consuming and producing hundreds of gigabytes of data. Access to OSG computing and storage resources was coordinated by the SAM-Grid system. This system organized job access to a complex topology of data queues and job scheduling to clusters, using a SAM-Grid to OSG job forwarding infrastructure. For the first time in the lifetime of the experiment, a data-intensive production activity was managed on a general-purpose grid such as OSG. This paper describes the implications of using OSG, where all resources are granted following an opportunistic model, the challenges of operating a data-intensive activity over such a large computing infrastructure, and the lessons learned throughout the project.
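The scale figures quoted in the abstract can be cross-checked with simple arithmetic; the campaign length of roughly 90 days (February to May 2007) is an assumption, and the derived per-file and per-job numbers below are illustrative averages, not values from the paper.

```python
# Back-of-the-envelope check of the DZero reprocessing figures quoted above.
# Inputs are taken from the abstract; the 90-day duration is an assumption.

events = 500e6          # half a billion events
data_tb = 100           # total dataset size, TB
files = 300_000         # number of files
jobs_per_day = 1500     # average jobs per day on OSG
days = 90               # Feb-May 2007, roughly three months (assumption)

avg_file_mb = data_tb * 1e6 / files      # TB -> MB per file
events_per_file = events / files
total_jobs = jobs_per_day * days
files_per_job = files / total_jobs       # average files handled per job

print(f"average file size : {avg_file_mb:.0f} MB")
print(f"events per file   : {events_per_file:.0f}")
print(f"files per job     : {files_per_job:.1f}")
```

The averages (files of a few hundred MB, a couple of files per job) are consistent with the abstract's statement that each day's jobs consumed and produced hundreds of gigabytes.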
Proposed New Antiproton Experiments at Fermilab
Fermilab operates the world's most intense source of antiprotons. Recently
various experiments have been proposed that can use those antiprotons either
parasitically during Tevatron Collider running or after the Tevatron Collider
finishes in about 2010. We discuss the physics goals and prospects of the
proposed experiments. Comment: 6 pages, 2 figures, to appear in Proceedings of the IXth International Conference on Low Energy Antiproton Physics (LEAP'08), Vienna, Austria, September 16 to 19, 2008
GridCertLib: a Single Sign-on Solution for Grid Web Applications and Portals
This paper describes the design and implementation of GridCertLib, a Java
library leveraging a Shibboleth-based authentication infrastructure and the
SLCS online certificate signing service, to provide short-lived X.509
certificates and Grid proxies. The main use case envisioned for GridCertLib is
to provide seamless and secure access to Grid/X.509 certificates and proxies in
web applications and portals: when a user logs in to the portal using
Shibboleth authentication, GridCertLib can automatically obtain a Grid/X.509
certificate from the SLCS service and generate a VOMS proxy from it. We give an
overview of the architecture of GridCertLib and briefly describe its programming model. We outline its application to some deployment scenarios and report on practical experience integrating GridCertLib into portals for Bioinformatics and Computational Chemistry applications, based on the popular P-GRADE and Django frameworks. Comment: 18 pages, 1 figure; final manuscript accepted for publication by the Journal of Grid Computing
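The workflow the abstract describes — portal login via Shibboleth, a short-lived X.509 certificate from the SLCS signing service, then a VOMS proxy derived from it — can be sketched as follows. All class and method names here (`SLCSClient`, `VOMSProxyFactory`, `portal_login`) are hypothetical stand-ins for illustration; the real GridCertLib is a Java library with its own API.

```python
# Illustrative sketch of the login -> certificate -> proxy flow described
# above. All names are hypothetical; this is not the GridCertLib API.
from dataclasses import dataclass

@dataclass
class Certificate:
    subject: str
    lifetime_hours: int   # SLCS certificates are short-lived by design

class SLCSClient:
    """Stand-in for the SLCS online certificate signing service."""
    def issue(self, shibboleth_user: str) -> Certificate:
        # In reality: SAML assertion -> key pair + CSR -> signed X.509 cert.
        return Certificate(subject=f"/DC=org/CN={shibboleth_user}",
                           lifetime_hours=11)

class VOMSProxyFactory:
    """Stand-in for VOMS proxy generation from an X.509 certificate."""
    def make_proxy(self, cert: Certificate, vo: str) -> str:
        # In reality: a proxy credential carrying VO membership attributes.
        return f"proxy[{cert.subject};VO={vo}]"

def portal_login(user: str, vo: str) -> str:
    """What the portal does transparently after Shibboleth authentication."""
    cert = SLCSClient().issue(shibboleth_user=user)
    return VOMSProxyFactory().make_proxy(cert, vo)

proxy = portal_login("alice", vo="compchem")
print(proxy)
```

The point of the design is that the user authenticates once (to Shibboleth); certificate and proxy acquisition happen behind the portal with no grid-credential handling by the user.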
Precision measurements of the total and partial widths of the psi(2S) charmonium meson with a new complementary-scan technique in antiproton-proton annihilations
We present new precision measurements of the psi(2S) total and partial widths
from excitation curves obtained in antiproton-proton annihilations by Fermilab
experiment E835 at the Antiproton Accumulator in the year 2000. A new technique
of complementary scans was developed to study narrow resonances with
stochastically cooled antiproton beams. The technique relies on precise
revolution-frequency and orbit-length measurements, while making the analysis
of the excitation curve almost independent of machine lattice parameters. We
study the psi(2S) meson through the processes pbar p -> e+ e- and pbar p ->
J/psi + X -> e+ e- + X. We measure the width to be Gamma = 290 +- 25(sta) +-
4(sys) keV and the combination of partial widths Gamma_e+e- * Gamma_pbarp /
Gamma = 579 +- 38(sta) +- 36(sys) meV, which represent the most precise
measurements to date. Comment: 17 pages, 3 figures, 3 tables. Final manuscript accepted for publication in Phys. Lett. B. Parts of the text slightly expanded or rearranged; results are unchanged
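To illustrate what the measured combination constrains: taking the quoted Gamma and Gamma_e+e- * Gamma_pbarp / Gamma together with an externally assumed value for Gamma_e+e- (here a rough PDG-like 2.3 keV, an input that is not a result of this paper) yields the pbar-p partial width and branching fraction. The numbers below are an illustration only, not results from the measurement.

```python
# Illustrative unpacking of the measured combination. Gamma_ee is an
# ASSUMED external input (~2.3 keV), not a result of this measurement.
gamma_total_keV = 290.0    # measured total width, keV
combo_meV = 579.0          # measured Gamma_ee * Gamma_pp / Gamma, meV
gamma_ee_keV = 2.3         # assumed external dielectron partial width

# Gamma_ee * Gamma_pp = (combination) * Gamma_total; 1 meV = 1e-6 keV
product_keV2 = (combo_meV * 1e-6) * gamma_total_keV
gamma_pp_keV = product_keV2 / gamma_ee_keV
branching_pp = gamma_pp_keV / gamma_total_keV

print(f"Gamma_pbarp ~ {gamma_pp_keV * 1e3:.0f} eV")
print(f"B(psi(2S) -> pbar p) ~ {branching_pp:.1e}")
```

Under that assumed input, Gamma_pbarp comes out at roughly 73 eV, i.e. a branching fraction of order 2.5e-4, which is the expected scale for psi(2S) -> pbar p.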
Adapting SAM for CDF
The CDF and D0 experiments probe the high-energy frontier and, in doing so, have accumulated hundreds of terabytes of data, on the way to petabytes of data over the next two years. The experiments have made a commitment to use the developing Grid, based on the SAM system, to handle these data. The D0 SAM has been extended for use in CDF as common design patterns emerged to meet the similar requirements of the two experiments. The process by which the merger was achieved is explained, with particular emphasis on lessons learned concerning database design patterns and the realization of the use cases. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003, 4 pages, PDF format, TUAT00
E835 at FNAL: Charmonium Spectroscopy in Antiproton-Proton Annihilations
I present preliminary results on the search for the h_c charmonium state in its eta_c gamma and J/psi pi^0 decay modes. We observe an excess of eta_c gamma events, with a probability P ~ 0.001 of a background fluctuation, at a mass M = 3525.8 +- 0.2 +- 0.2 MeV/c^2, and obtain 10.6 +- 3.7 +- 3.4(br) eV < Gamma_pbarp * B_eta_c-gamma < 12.8 +- 4.8 +- 4.5(br) eV. No significant signal is seen in the J/psi pi^0 mode. Comment: Presented at the 6th International Conference on Hyperons, Charm and Beauty Hadrons (BEACH 2004), Chicago (IL), June 27-July 3, 2004
Interference Study of the chi_c0 (1^3P_0) in the Reaction Proton-Antiproton -> pi^0 pi^0
Fermilab experiment E835 has observed proton-antiproton annihilation
production of the charmonium state chi_c0 and its subsequent decay into pi^0
pi^0. Although the resonant amplitude is an order of magnitude smaller than
that of the non-resonant continuum production of pi^0 pi^0, an enhanced
interference signal is evident. A partial wave expansion is used to extract
physics parameters. The amplitudes J=0 and 2, of comparable strength, dominate
the expansion. Both are accessed by L=1 in the entrance proton-antiproton
channel. The product of the input and output branching fractions is determined
to be B(pbar p -> chi_c0) x B(chi_c0 -> pi^0 pi^0)= (5.09 +- 0.81 +- 0.25) x
10^-7. Comment: 4 pages, 4 figures. Accepted by PRL (July 2003)
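The enhancement mechanism described above — a resonant amplitude an order of magnitude smaller than the continuum, yet clearly visible through interference — follows from the cross term in |A_cont + A_res|^2, which scales with |A_cont| rather than |A_res|. The toy numbers below (amplitude strengths, and a chi_c0-like mass and width) are invented purely for illustration; see the paper for the real amplitudes.

```python
# Toy illustration of resonance/continuum interference: the cross term
# 2*Re(A_cont* x A_res) can dominate |A_res|^2 when |A_cont| >> |A_res|.
# All numbers are invented for illustration.

def breit_wigner(E, M=3415.0, Gamma=10.0, strength=0.1):
    """Toy resonant amplitude with a chi_c0-like mass/width (MeV)."""
    return strength / complex(E - M, Gamma / 2.0)

A_cont = 1.0 + 0j          # continuum amplitude, much larger in magnitude

E = 3410.0                 # half a width below the toy resonance
A_res = breit_wigner(E)
pure_resonance = abs(A_res) ** 2                      # tiny on its own
interference = 2.0 * (A_cont.conjugate() * A_res).real  # the visible term

print(f"|A_res|^2          = {pure_resonance:.1e}")
print(f"interference term  = {interference:.1e}")
```

In this toy setup the interference term is two orders of magnitude larger than the pure resonance term, which is why a partial wave analysis of the interference pattern can extract the resonance parameters.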
FermiGrid - experience and future plans
Fermilab supports a scientific program that includes experiments and scientists located across the globe. In order to better serve this community, Fermilab has placed its production computer resources in a Campus Grid infrastructure called 'FermiGrid'. The FermiGrid infrastructure allows the large experiments at Fermilab to have priority access to their own resources, enables sharing of these resources in an opportunistic fashion, and supports movement of work (jobs, data) between the Campus Grid and National Grids such as the Open Science Grid and the WLCG. FermiGrid resources support multiple Virtual Organizations (VOs), including VOs from the Open Science Grid (OSG), EGEE and the Worldwide LHC Computing Grid Collaboration (WLCG). Fermilab also makes leading contributions to the Open Science Grid in the areas of accounting, batch computing, grid security, job management, resource selection, site infrastructure, storage management, and VO services. Through the FermiGrid interfaces, authenticated and authorized VOs and individuals may access our core grid services, the 10,000+ Fermilab resident CPUs, near-petabyte (including CMS) online disk pools and the multi-petabyte Fermilab Mass Storage System. These core grid services include a site-wide Globus gatekeeper, VO management services for several VOs, Fermilab site authorization services, grid user mapping services, as well as job accounting and monitoring, resource selection and data movement services. Access to these services is via standard and well-supported grid interfaces. We will report on the user experience of using the FermiGrid campus infrastructure interfaced to a national cyberinfrastructure: the successes and the problems.
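The sharing model the abstract describes — owner experiments keep priority on their own resources while idle capacity is backfilled opportunistically — can be sketched as a simple scheduling rule. The logic below is a hypothetical illustration, not FermiGrid's actual batch-system configuration.

```python
# Minimal sketch of priority-with-opportunistic-backfill scheduling,
# the sharing model the abstract describes. Hypothetical logic only;
# not the actual FermiGrid batch configuration.

def schedule(slots, owner_vo, queue):
    """Fill `slots` slots: owner-VO jobs first, then opportunistic ones."""
    owner_jobs = [j for j in queue if j["vo"] == owner_vo]
    other_jobs = [j for j in queue if j["vo"] != owner_vo]
    return (owner_jobs + other_jobs)[:slots]

queue = [
    {"id": 1, "vo": "cdf"},
    {"id": 2, "vo": "dzero"},
    {"id": 3, "vo": "cdf"},
    {"id": 4, "vo": "osg-opportunistic"},
]
# On a CDF-owned cluster with 3 free slots, CDF jobs 1 and 3 run first
# and the remaining slot is backfilled with job 2 from another VO.
running = schedule(slots=3, owner_vo="cdf", queue=queue)
print([j["id"] for j in running])   # -> [1, 3, 2]
```

Real campus grids implement this with batch-system priorities and fair-share accounting rather than an explicit sort, but the effect is the same: owners are never starved by opportunistic users.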