
    The case for preserving our knowledge and data in physics experiments

    This proceedings contribution covers the tools and technologies at our disposal for scientific data preservation and shows that preserving data extends the scientific reach of our experiments. It is cost-efficient to warehouse data from completed experiments on the tape archives of our national and international laboratories. These subject-specific data stores also offer the technologies to capture and archive knowledge about experiments in the form of technical notes, electronic logs, websites, etc. Furthermore, it is possible to archive our source code and computing environments. The paper illustrates these challenges with experience from preserving the LEP data for the long term. Comment: 5 pages, 1 figure
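
    As a loose illustration of the last point (archiving source code and computing environments), the following minimal Python sketch records the interpreter, operating system, and installed-package versions as a JSON manifest next to an archived dataset. The manifest layout and file name are assumptions for this sketch, not taken from the paper.

        # Minimal sketch: snapshot the software environment alongside archived data.
        # The manifest layout and file name ("environment.json") are illustrative only.
        import json
        import platform
        import sys
        from importlib import metadata
        from pathlib import Path

        def write_environment_manifest(archive_dir: str) -> Path:
            """Record interpreter, OS, and installed-package versions as JSON."""
            manifest = {
                "python": sys.version,
                "platform": platform.platform(),
                "packages": {d.metadata["Name"]: d.version for d in metadata.distributions()},
            }
            out = Path(archive_dir) / "environment.json"
            out.write_text(json.dumps(manifest, indent=2, sort_keys=True))
            return out

        if __name__ == "__main__":
            print(write_environment_manifest("."))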

    Using the Mass Storage System at ZIB within I3HP

    In the framework of I3HP there are two Transnational Access Activities related to Computational Hadron Physics. One of these activities provides access to the mass storage system at Konrad-Zuse-Zentrum fuer Informationstechnik Berlin (ZIB). European lattice physics collaborations can apply for mass storage capacity in order to store and share their configurations or other data (see http://www.zib.de/i3hp/). In this paper, the formal and technical aspects of usage, as well as conformance to the International Lattice DataGrid (ILDG), are explained. Comment: Talk given at the Workshop on Computational Hadron Physics, Nicosia, Cyprus, 14--17 September 200
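
    As a purely hypothetical sketch of registering a configuration in a shared data grid, the snippet below builds a small XML metadata record with Python's standard library. The element names are placeholders and do not follow the actual ILDG/QCDml schema.

        # Hypothetical sketch: build a small XML metadata record for a gauge
        # configuration before uploading it to shared storage. Element names are
        # placeholders and do not follow the real ILDG/QCDml schema.
        import xml.etree.ElementTree as ET

        def configuration_record(collaboration: str, ensemble: str,
                                 trajectory: int, size_bytes: int) -> bytes:
            root = ET.Element("gaugeConfiguration")
            ET.SubElement(root, "collaboration").text = collaboration
            ET.SubElement(root, "ensemble").text = ensemble
            ET.SubElement(root, "trajectory").text = str(trajectory)
            ET.SubElement(root, "sizeBytes").text = str(size_bytes)
            return ET.tostring(root, encoding="utf-8")

        print(configuration_record("ExampleCollab", "b5.29_L24T48", 1200, 3 * 1024**3).decode())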

    A Monitoring System for the BaBar INFN Computing Cluster

    Monitoring large clusters is a challenging problem. It is necessary to observe a large number of devices with a reasonably short delay between consecutive observations. The set of monitored devices may include PCs, network switches, tape libraries and other equipment. The monitoring activity should not impact the performance of the system. In this paper we present PerfMC, a monitoring system for large clusters. PerfMC is driven by an XML configuration file and uses the Simple Network Management Protocol (SNMP) for data collection. SNMP is a standard protocol implemented by many networked devices, so the tool can be used to monitor a wide range of equipment. System administrators can display information on the status of each device by connecting to a web server embedded in PerfMC. The web server can produce graphs showing the value of different monitored quantities as a function of time; it can also produce arbitrary XML pages by applying XSL Transformations to an internal XML representation of the cluster's status. XSL Transformations may be used to produce HTML pages which can be displayed by ordinary web browsers. PerfMC aims at being relatively easy to configure and operate, and highly efficient. It is currently being used to monitor the Italian reprocessing farm for the BaBar experiment, which consists of about 200 dual-CPU Linux machines. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 10 pages, LaTeX, 4 EPS figures. PSN MOET00
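
    To make the XSLT-based reporting concrete, here is a small, self-contained Python sketch that transforms a made-up cluster-status XML document into an HTML table, mirroring the approach the abstract describes. The XML layout and the stylesheet are invented for this illustration, and the use of lxml is an assumption; nothing here is PerfMC's actual code.

        # Illustrative only: apply an XSL transformation to a made-up cluster-status
        # document to produce an HTML table. The XML layout and stylesheet are
        # invented; this is not PerfMC code.
        from lxml import etree

        STATUS_XML = b"""<cluster>
          <node name="farm001" load="0.42"/>
          <node name="farm002" load="1.37"/>
        </cluster>"""

        STYLESHEET = b"""<xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/cluster">
            <html><body><table border="1">
              <tr><th>Node</th><th>Load</th></tr>
              <xsl:for-each select="node">
                <tr><td><xsl:value-of select="@name"/></td>
                    <td><xsl:value-of select="@load"/></td></tr>
              </xsl:for-each>
            </table></body></html>
          </xsl:template>
        </xsl:stylesheet>"""

        transform = etree.XSLT(etree.fromstring(STYLESHEET))
        print(str(transform(etree.fromstring(STATUS_XML))))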

    Earth science data study

    The research proposed in this contract, concerning investigations of existing and planned Earth Science and Applications Division (ESAD) data management systems and research into utilities for the access and display of scientific data products, was completed. A summary of this work is provided.

    CASTOR status and evolution

    In January 1999, CERN began to develop CASTOR ("CERN Advanced STORage manager"). This hierarchical storage manager, targeted at HEP applications, has been in full production at CERN since May 2001. It now contains more than two petabytes of data in roughly 9 million files. In 2002, 350 terabytes of data were stored for COMPASS at 45 MB/s, and a Data Challenge run for ALICE in preparation for the LHC startup in 2007 sustained a data transfer to tape of 300 MB/s for one week (180 TB). The major functionality improvements were the support for files larger than 2 GB (in collaboration with IN2P3) and the development of Grid interfaces to CASTOR: GridFTP and SRM ("Storage Resource Manager"). An ongoing effort is taking place to copy the existing data from obsolete media such as 9940A to more cost-effective offerings. CASTOR has also been deployed at several HEP sites with little effort. In 2003, we plan to continue working on Grid interfaces and to improve performance not only for Central Data Recording but also for Data Analysis applications where thousands of processes may access the same hot data. This could imply the selection of another filesystem or the use of replication (hardware or software). Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics (CHEP03), La Jolla, CA, USA, March 2003, 2 pages, PDF. PSN TUDT00
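
    As a quick sanity check of the quoted data-challenge figures, the sketch below confirms that a sustained 300 MB/s to tape for one week corresponds to roughly the 180 TB stated above.

        # Arithmetic check: 300 MB/s sustained for one week is about 181 TB,
        # consistent with the ~180 TB quoted for the ALICE Data Challenge.
        rate_mb_per_s = 300
        seconds_per_week = 7 * 24 * 3600            # 604800 s
        total_tb = rate_mb_per_s * seconds_per_week / 1e6
        print(f"{total_tb:.0f} TB written in one week")   # -> 181 TB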

    Robo-line storage: Low latency, high capacity storage systems over geographically distributed networks

    Rapid advances in high performance computing are making possible more complete and accurate computer-based modeling of complex physical phenomena, such as weather front interactions, dynamics of chemical reactions, numerical aerodynamic analysis of airframes, and ocean-land-atmosphere interactions. Many of these 'grand challenge' applications are as demanding of the underlying storage system, in terms of their capacity and bandwidth requirements, as they are of the computational power of the processor. A global view of the Earth's ocean chlorophyll and land vegetation requires over 2 terabytes of raw satellite image data. In this paper, we describe our planned research program in high capacity, high bandwidth storage systems. The project has four overall goals. First, we will examine new methods for high capacity storage systems, made possible by low cost, small form factor magnetic and optical tape systems. Second, access to the storage system will be low latency and high bandwidth. To achieve this, we must interleave data transfer at all levels of the storage system, including devices, controllers, servers, and communications links. Latency will be reduced by extensive caching throughout the storage hierarchy. Third, we will provide effective management of a storage hierarchy, extending the techniques already developed for the Log-Structured File System. Finally, we will construct a prototype high capacity file server, suitable for use on the National Research and Education Network (NREN). Such research must be a cornerstone of any coherent program in high performance computing and communications.
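
    The interleaving idea in the second goal can be sketched in a few lines: stripe a data stream round-robin over several writers (standing in for tape drives or controllers) so that aggregate bandwidth can scale with the number of devices. The function names, stripe size, and in-memory "devices" below are illustrative assumptions, not the project's actual design.

        # Minimal sketch of round-robin striping across several writers. Names,
        # stripe size, and the in-memory "devices" are illustrative only.
        import io
        from typing import BinaryIO, Sequence

        def stripe(source: BinaryIO, devices: Sequence[BinaryIO],
                   stripe_size: int = 1 << 20) -> None:
            """Write fixed-size stripes from `source` to `devices` in round-robin order."""
            index = 0
            while True:
                chunk = source.read(stripe_size)
                if not chunk:
                    break
                devices[index % len(devices)].write(chunk)
                index += 1

        if __name__ == "__main__":
            data = io.BytesIO(b"x" * (5 << 20))            # 5 MiB of dummy data
            drives = [io.BytesIO() for _ in range(4)]      # four simulated devices
            stripe(data, drives)
            print([d.getbuffer().nbytes for d in drives])  # -> [2097152, 1048576, 1048576, 1048576]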