Data production models for the CDF experiment
The data production for the CDF experiment is conducted on a large Linux PC
farm designed to meet the needs of data collection at a maximum rate of 40
MByte/sec. We present two data production models that exploit advances in
computing and communication technology. The first production farm is a
centralized system that has achieved a stable data processing rate of
approximately 2 TByte per day. The recently upgraded farm has been migrated to
the SAM (Sequential Access to data via Metadata) data handling system. The
software and hardware of the CDF production farms have been successful in
providing large computing and data throughput capacity to the experiment.

Comment: 8 pages, 9 figures; presented at HPC Asia 2005, Beijing, China, Nov 30 - Dec 3, 2005
Global-Scale Resource Survey and Performance Monitoring of Public OGC Web Map Services
One of the most widely-implemented service standards provided by the Open
Geospatial Consortium (OGC) to the user community is the Web Map Service (WMS).
WMS is widely employed globally, but there is limited knowledge of the global
distribution, adoption status, or service quality of these online WMS
resources. To fill this void, we investigated global WMS resources and
performed distributed performance monitoring of these services. This paper
explicates a distributed monitoring framework that was used to monitor 46,296
WMSs continuously for over one year and a crawling method to discover these
WMSs. We analyzed server locations, provider types, themes, the spatiotemporal
coverage of map layers and the service versions for 41,703 valid WMSs.
Furthermore, we appraised the stability and performance of the basic operations
(i.e., GetCapabilities and GetMap) for 1,210 selected WMSs. We discuss the major
reasons for request errors and performance issues, as well as the relationship
between service response times and the spatiotemporal distribution of client
monitoring sites. This paper will help service providers, end users and
developers of standards to grasp the status of global WMS resources, as well as
to understand the adoption status of OGC standards. The conclusions drawn in
this paper can benefit geospatial resource discovery, service performance
evaluation and guide service performance improvements.

Comment: 24 pages; 15 figures
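As a rough illustration of the kind of health probe such a monitoring framework issues, here is a minimal Python sketch that times a single GetCapabilities request; the endpoint and function names are our own placeholders, and the paper's actual framework is distributed across many client sites.

```python
import time
import requests

def probe_wms_getcapabilities(base_url, version="1.3.0", timeout=30):
    """Time one WMS GetCapabilities request and report basic health.

    base_url is a placeholder; any public WMS endpoint would do.
    """
    params = {"SERVICE": "WMS", "REQUEST": "GetCapabilities", "VERSION": version}
    start = time.monotonic()
    try:
        resp = requests.get(base_url, params=params, timeout=timeout)
        elapsed = time.monotonic() - start
        # A valid capabilities response is an XML document; anything else
        # (HTML error pages, empty bodies) is counted as a failed probe.
        ok = resp.status_code == 200 and "xml" in resp.headers.get("Content-Type", "")
        return {"ok": ok, "status": resp.status_code, "seconds": elapsed}
    except requests.RequestException as exc:
        return {"ok": False, "error": str(exc), "seconds": time.monotonic() - start}

# Hypothetical usage:
# print(probe_wms_getcapabilities("https://example.org/wms"))
```

Repeating such probes from geographically distributed clients is what lets a study like this relate response times to the spatiotemporal distribution of the monitoring sites.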
AstroGrid-D: Grid Technology for Astronomical Science
We present status and results of AstroGrid-D, a joint effort of
astrophysicists and computer scientists to employ grid technology for
scientific applications. AstroGrid-D provides access to a network of
distributed machines through a set of commands as well as software interfaces.
It allows simple use of compute and storage facilities and the scheduling and
monitoring of compute tasks and data management. It is based on the Globus
Toolkit middleware
(GT4). Chapter 1 describes the context which led to the demand for advanced
software solutions in Astrophysics, and we state the goals of the project. We
then present characteristic astrophysical applications that have been
implemented on AstroGrid-D in chapter 2. We describe simulations of different
complexity, compute-intensive calculations running on multiple sites, and
advanced applications for specific scientific purposes, such as a connection to
robotic telescopes. These examples show how grid execution improves, for
example, the scientific workflow. Chapter 3 explains the software tools and
services that we adapted or newly developed. Section 3.1 is focused on the
administrative aspects of the infrastructure, to manage users and monitor
activity. Section 3.2 characterises the central components of our architecture:
the AstroGrid-D information service to collect and store metadata, a file
management system, the data management system, and a job manager for automatic
submission of compute tasks. We summarise the successfully established
infrastructure in chapter 4, concluding with our future plans to establish
AstroGrid-D as a platform of modern e-Astronomy.

Comment: 14 pages, 12 figures. Subjects: data analysis, image processing,
robotic telescopes, simulations, grid. Accepted for publication in New
Astronomy
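Purely as an illustrative sketch of the division of labour between an information service and a job manager (toy Python with hypothetical names of ours; the real AstroGrid-D components are built on GT4 services):

```python
# Toy model of the architecture described above. None of these names are
# AstroGrid-D APIs; the actual system stores metadata in its information
# service and submits jobs via Globus Toolkit (GT4) middleware.

class InformationService:
    """Collects and stores resource metadata, e.g. the load of each grid site."""

    def __init__(self):
        self.sites = {}

    def publish(self, site, **metadata):
        self.sites[site] = metadata

    def query(self, key):
        return {s: m[key] for s, m in self.sites.items() if key in m}


class JobManager:
    """Selects a site from the information service and submits a task there."""

    def __init__(self, info_service):
        self.info = info_service

    def submit(self, command):
        load = self.info.query("load")
        site = min(load, key=load.get)  # pick the least-loaded site
        print(f"submitting {command!r} to {site}")
        return site


info = InformationService()
info.publish("site-a.example.org", load=0.7, cores=128)
info.publish("site-b.example.org", load=0.2, cores=64)
JobManager(info).submit("/bin/run_simulation")
```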
Data processing model for the CDF experiment
The data processing model for the CDF experiment is described. Data
processing reconstructs events from parallel data streams taken with different
combinations of physics event triggers and further splits the events into
specialized physics datasets. The design of the processing control
system faces strict requirements on bookkeeping records, which trace the status
of data files and event contents during processing and storage. The computing
architecture was updated to meet the mass data flow of the Run II data
collection, recently upgraded to a maximum rate of 40 MByte/sec. The data
processing facility consists of a large cluster of Linux computers with data
movement managed by the CDF data handling system to a multi-petabyte Enstore
tape library. The latest processing cycle has achieved a stable speed of 35
MByte/sec (3 TByte/day). It can be readily scaled by increasing CPU and
data-handling capacity as required.

Comment: 12 pages, 10 figures, submitted to IEEE-TNS
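The quoted figures are easy to sanity-check: at a sustained 35 MByte/sec, a day of processing moves about 3 TByte, as this snippet works out (decimal units assumed):

```python
# Sanity check of the abstract's numbers (decimal units assumed).
rate_mb_per_s = 35                       # stated stable speed, MByte/sec
seconds_per_day = 24 * 60 * 60           # 86,400 s
tbyte_per_day = rate_mb_per_s * seconds_per_day / 1_000_000
print(f"{tbyte_per_day:.2f} TByte/day")  # ~3.02 TByte/day, matching the text
```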
Sparse cross-products of metadata in scientific simulation management
Managing scientific data is by no means a trivial task, even in a single-site environment
with a small number of researchers involved. We discuss some issues concerned with posing
well-specified experiments in terms of parameters or instrument settings and the metadata
framework that arises from doing so. We are particularly interested in parallel computer
simulation experiments, where very large quantities of warehouse-able data are involved. We
consider SQL databases and other framework technologies for manipulating experimental data.
Our framework manages the outputs from parallel runs that arise from large cross-products
of parameter combinations. Considerable useful experiment planning and analysis can be done
with the sparse metadata without fully expanding the parameter cross-products. Extra value
can be obtained from simulation output that can subsequently be data-mined. We have
particular interests in running large scale Monte-Carlo physics model simulations. Finding
ourselves overwhelmed by the problems of managing data and compute resources, we have
built a prototype tool using Java and MySQL that addresses these issues. We use this example
to discuss type-space management and other fundamental ideas for implementing a laboratory
information management system.
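To make the sparse cross-product idea concrete, here is a small Python sketch (hypothetical parameter names, not the paper's Java/MySQL tool) that answers a planning question from the sparse run metadata without materialising the full parameter product:

```python
from itertools import product

# Hypothetical parameter space for a simulation campaign.
parameter_space = {
    "temperature": [0.5, 1.0, 1.5, 2.0],
    "lattice_size": [64, 128, 256],
    "seed": list(range(100)),
}

# Sparse metadata: only the runs that actually completed, keyed by their
# parameter tuple in (temperature, lattice_size, seed) order.
completed = {
    (1.0, 128, 0): {"output": "run_0001.dat"},
    (1.0, 128, 1): {"output": "run_0002.dat"},
    (2.0, 64, 0): {"output": "run_0003.dat"},
}

full_size = 1
for values in parameter_space.values():
    full_size *= len(values)  # 4 * 3 * 100 = 1200 potential runs

# Planning query: which (temperature, lattice_size) cells have no runs yet?
# Answered from the sparse metadata alone, never expanding all 1200 rows.
covered = {(t, size) for (t, size, _seed) in completed}
missing = [cell for cell in product(parameter_space["temperature"],
                                    parameter_space["lattice_size"])
           if cell not in covered]

print(f"cross-product size: {full_size}; runs recorded: {len(completed)}")
print(f"uncovered cells: {missing}")
```

In a real laboratory information management system these tuples would live in SQL tables, as with the paper's MySQL prototype, but the principle is the same: query the realized subset rather than the expanded product.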