
    Mass Storage Management and the Grid

    The University of Edinburgh has a significant interest in mass storage systems, as it is one of the core groups tasked with the roll-out of storage software for the UK's particle physics grid, GridPP. We present the results of a development project to provide software interfaces between the SDSC Storage Resource Broker, the EU DataGrid and the Storage Resource Manager. This project was undertaken in association with the eDikt group at the National eScience Centre, the Universities of Bristol and Glasgow, Rutherford Appleton Laboratory and the San Diego Supercomputer Center. Comment: 4 pages, 3 figures. Presented at Computing for High Energy and Nuclear Physics 2004 (CHEP '04), Interlaken, Switzerland, September 2004.
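
    The abstract does not reproduce the interface code itself; the sketch below is only an illustration of how a bridging layer over different mass storage back ends could be organised. All names here (StorageBackend, SRBBackend, SRMBackend, archive) are hypothetical and are not taken from the project.

        from abc import ABC, abstractmethod

        class StorageBackend(ABC):
            """Common interface a grid job can use regardless of the underlying store."""
            @abstractmethod
            def put(self, local_path: str, logical_name: str) -> None: ...
            @abstractmethod
            def get(self, logical_name: str, local_path: str) -> None: ...

        class SRBBackend(StorageBackend):
            # Stand-in for calls into the SDSC Storage Resource Broker.
            def put(self, local_path, logical_name):
                print(f"SRB: storing {local_path} as {logical_name}")
            def get(self, logical_name, local_path):
                print(f"SRB: fetching {logical_name} to {local_path}")

        class SRMBackend(StorageBackend):
            # Stand-in for calls to a Storage Resource Manager endpoint.
            def put(self, local_path, logical_name):
                print(f"SRM: storing {local_path} as {logical_name}")
            def get(self, logical_name, local_path):
                print(f"SRM: fetching {logical_name} to {local_path}")

        def archive(backend: StorageBackend, local_path: str, logical_name: str) -> None:
            # Callers see one call signature whichever back end is plugged in.
            backend.put(local_path, logical_name)

        archive(SRBBackend(), "/data/run42.root", "lfn:run42.root")
        archive(SRMBackend(), "/data/run42.root", "lfn:run42.root")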

    Management of Data Access with Quality of Service in PL-Grid Environment

    e-Science applications increasingly require both computational power and storage resources, each currently supported with a certain level of quality. Since, in the grid and cloud environments where e-Science applications are executed, the heterogeneity of storage systems is higher than that of computational resources, optimization of data access is one of today's challenging tasks. In this paper we present our approach to management of data access in the grid environment. The main issue is to organize data in such a way that users' requirements, expressed as QoS/SLA, are met. For this purpose we make use of a storage monitoring system and a mass storage system model, CMSSM. The experiments are performed in the PL-Grid environment.
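
    As a purely illustrative sketch of matching monitored storage metrics against a user's QoS/SLA requirement (the class and function names below are invented for this example and are not part of the paper or of CMSSM):

        from dataclasses import dataclass

        @dataclass
        class StorageSite:
            name: str
            read_mb_s: float      # monitored read throughput
            write_mb_s: float     # monitored write throughput
            free_tb: float        # monitored free capacity

        @dataclass
        class SlaRequirement:
            min_read_mb_s: float
            min_write_mb_s: float
            min_free_tb: float

        def pick_storage(sites, sla):
            """Return a site meeting the SLA, preferring the fastest reads."""
            candidates = [s for s in sites
                          if s.read_mb_s >= sla.min_read_mb_s
                          and s.write_mb_s >= sla.min_write_mb_s
                          and s.free_tb >= sla.min_free_tb]
            return max(candidates, key=lambda s: s.read_mb_s) if candidates else None

        sites = [StorageSite("lustre-01", 900, 600, 40),
                 StorageSite("nfs-02", 120, 80, 5)]
        print(pick_storage(sites, SlaRequirement(500, 300, 10)))  # -> lustre-01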

    Grid Data Management in Action: Experience in Running and Supporting Data Management Services in the EU DataGrid Project

    In the first phase of the EU DataGrid (EDG) project, a Data Management System has been implemented and provided for deployment. The components of the current EDG Testbed are: a prototype of a Replica Manager Service built around the basic services provided by Globus, a centralised Replica Catalogue to store information about physical locations of files, and the Grid Data Mirroring Package (GDMP) that is widely used in various HEP collaborations in Europe and the US for data mirroring. During this year these services have been refined and made more robust so that they are fit to be used in a pre-production environment. Application users have been using this first release of the Data Management Services for more than a year. In the paper we present the components and their interaction, our implementation and experience, as well as the feedback received from our user communities. We have resolved not only issues regarding integration with other EDG service components but also many of the interoperability issues with components of our partner projects in Europe and the U.S. The paper concludes with the basic lessons learned during this operation. These conclusions provide the motivation for the architecture of the next generation of Data Management Services that will be deployed in EDG during 2003. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003, 9 pages, LaTeX, PSN: TUAT007. All figures are in the directory "figures".
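
    The component interactions are detailed in the paper itself; as a rough illustration only, the toy model below captures the basic replica-management bookkeeping (copy a file to another storage element, then register the new location in a central catalogue). The class and method names are invented for this sketch and are not the EDG or GDMP APIs.

        class ReplicaCatalogue:
            """Maps a logical file name (LFN) to the physical locations (PFNs) holding copies."""
            def __init__(self):
                self._replicas = {}   # LFN -> set of PFNs

            def register(self, lfn, pfn):
                self._replicas.setdefault(lfn, set()).add(pfn)

            def lookup(self, lfn):
                return sorted(self._replicas.get(lfn, set()))

        def replicate(catalogue, lfn, source_pfn, dest_se):
            """Copy a file to another storage element and register the new replica."""
            dest_pfn = f"{dest_se}/{lfn}"
            # A real service would perform the transfer here (e.g. via GridFTP);
            # this sketch only records the bookkeeping step.
            catalogue.register(lfn, dest_pfn)
            return dest_pfn

        cat = ReplicaCatalogue()
        cat.register("lfn:run42/events.root", "se01.example.org/run42/events.root")
        replicate(cat, "lfn:run42/events.root",
                  "se01.example.org/run42/events.root", "se02.example.org")
        print(cat.lookup("lfn:run42/events.root"))   # both replicas are now catalogued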

    HEP Applications Evaluation of the EDG Testbed and Middleware

    Workpackage 8 of the European DataGrid project was formed in January 2001 with representatives from the four LHC experiments, and with experiment-independent people from five of the six main EDG partners. In September 2002 WP8 was strengthened by the addition of effort from BaBar and D0. The original mandate of WP8 was, following the definition of short- and long-term requirements, to port experiment software to the EDG middleware and testbed environment. A major additional activity has been testing the basic functionality and performance of this environment. This paper reviews experiences and evaluations in the areas of job submission, data management, mass storage handling, information systems and monitoring. It also comments on the problems of remote debugging, the portability of code, and scaling problems with increasing numbers of jobs, sites and nodes. Reference is made to the pioneering work of ATLAS and CMS in integrating the use of the EDG Testbed into their data challenges. A forward look is made to essential software developments within EDG and to the necessary cooperation between EDG and LCG for the LCG prototype due in mid-2003. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics Conference (CHEP03), La Jolla, CA, USA, March 2003, 7 pages. PSN THCT00

    Parametric and cycle tests of a 40-A-hr bipolar nickel-hydrogen battery

    A series of tests was performed to characterize battery performance relating to certain operating parameters, which included charge current, discharge current, temperature and pressure. The parameters were varied to confirm battery design concepts and to determine optimal operating conditions. Spacecraft power requirements are constantly increasing. Special spacecraft such as the Space Station and platforms will require energy storage systems of 130 and 25 kWh, respectively. The complexity of these high-power systems will demand high reliability and reduced mass and volume. A system that uses batteries for storage will require a cell count in excess of 400 units. These cell units must then be assembled into several batteries with over 100 cells in a series-connected string. In an attempt to simplify the construction of conventional cells and batteries, the NASA Lewis Research Center battery systems group initiated work on a nickel-hydrogen battery in a bipolar configuration in early 1981. Features of the battery with this bipolar construction show promise in improving both volumetric and gravimetric energy densities as well as thermal management. Bipolar construction allows cooling in closer proximity to the cell components, so heat removal can be accomplished at a higher rejection temperature than in conventional cell designs. Also, higher current densities are achievable because of low cell impedance. Lower cell impedance is achieved via current flow perpendicular to the electrode face, thus reducing voltage drops in the electrode grid and electrode terminal tabs.
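
    As a rough consistency check on the figures quoted above (a sketch only; the nominal cell voltage of roughly 1.25 V for a nickel-hydrogen cell is an assumption, not a value stated in the abstract):

        % assuming ~1.25 V nominal per Ni-H2 cell (not stated in the abstract)
        V_{\mathrm{battery}} \approx 100~\text{cells} \times 1.25\,\mathrm{V} = 125\,\mathrm{V}
        E_{\mathrm{battery}} \approx 40\,\mathrm{Ah} \times 125\,\mathrm{V} = 5\,\mathrm{kWh}
        N_{\mathrm{batteries}} \approx 25\,\mathrm{kWh} / 5\,\mathrm{kWh} = 5
        \;\Rightarrow\; N_{\mathrm{cells}} \gtrsim 500

    Under that assumption, a 25 kWh platform would need on the order of five 100-cell batteries, in line with the quoted cell count in excess of 400 units.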

    Grid collector: an event catalog with automated file management

    High Energy Nuclear Physics (HENP) experiments such as STAR at BNL and ATLAS at CERN produce large amounts of data that are stored as files on mass storage systems in computer centers. In these files, the basic unit of data is an event. Analysis is typically performed on a selected set of events. The files containing these events have to be located, copied from mass storage systems to disks before analysis, and removed when no longer needed. These file management tasks are tedious and time consuming. Typically, all events contained in the files are read into memory before a selection is made. Since the time to read the events dominates the overall execution time, reading unwanted events needlessly increases the analysis time. The Grid Collector is a set of software modules that work together to address these two issues. It automates the file management tasks and provides "direct" access to the selected events for analyses. It is currently integrated with the STAR analysis framework. Users can select events based on tags, such as "production date between March 10 and 20, and the number of charged tracks > 100". The Grid Collector locates the files containing relevant events, transfers the files across the Grid if necessary, and delivers the events to the analysis code through the familiar iterators. There have been some research efforts to address the file management issues; the Grid Collector is unique in that it addresses the event access issue together with the file management issues. This makes it more useful to a large variety of users.
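
    The tag-based selection described above can be made concrete with a small sketch; the data layout and function names below (EventTag, select_events) are invented for illustration and are not the Grid Collector's actual interfaces.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class EventTag:
            file_name: str        # file on mass storage that holds the event
            event_id: int
            production_date: date
            n_charged_tracks: int

        def select_events(catalog, predicate):
            """Yield (file, event_id) pairs for events whose tags match the predicate."""
            for tag in catalog:
                if predicate(tag):
                    yield tag.file_name, tag.event_id

        catalog = [
            EventTag("run7.root", 1, date(2004, 3, 12), 142),
            EventTag("run7.root", 2, date(2004, 3, 25), 57),
            EventTag("run8.root", 3, date(2004, 3, 15), 311),
        ]

        # "production date between March 10 and 20, and number of charged tracks > 100"
        wanted = select_events(
            catalog,
            lambda t: date(2004, 3, 10) <= t.production_date <= date(2004, 3, 20)
                      and t.n_charged_tracks > 100)
        for file_name, event_id in wanted:   # only events 1 and 3 match
            print(file_name, event_id)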

    Modeling of GRACE-Derived Groundwater Information in the Colorado River Basin

    Groundwater depletion has been one of the major challenges in recent years. Analysis of groundwater levels can be beneficial for groundwater management. The National Aeronautics and Space Administration's twin-satellite mission, the Gravity Recovery and Climate Experiment (GRACE), monitors terrestrial water storage. Increasing freshwater demand amidst the recent drought (2000–2014) caused a significant groundwater-level decline within the Colorado River Basin (CRB). In the current study, a non-parametric technique was utilized to analyze historical groundwater variability. Additionally, a stochastic Autoregressive Integrated Moving Average (ARIMA) model was developed and tested to forecast the GRACE-derived groundwater anomalies within the CRB. The ARIMA model was trained with GRACE data from January 2003 to December 2013 and validated with GRACE data from January 2014 to December 2016. Groundwater anomalies from January 2017 to December 2019 were forecast with the tested model. Autocorrelation and partial autocorrelation plots were drawn to identify and construct the seasonal ARIMA models. The ARIMA order for each grid cell was selected based on Akaike's and the Bayesian information criteria. The error analysis showed reasonable numerical accuracy of the selected seasonal ARIMA models. The proposed models can be used to forecast groundwater variability for sustainable groundwater planning and management.
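
    A hedged sketch of the train/validate/forecast workflow described above, using the seasonal ARIMA implementation in statsmodels on a synthetic stand-in series for one grid cell; this is not the authors' code, and the (1,1,1)x(1,1,1,12) order is a placeholder, since the paper selects orders per grid cell by AIC/BIC.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Synthetic stand-in for one grid cell's monthly groundwater anomaly (cm of water).
        idx = pd.date_range("2003-01", "2016-12", freq="MS")
        rng = np.random.default_rng(0)
        anomaly = pd.Series(
            np.sin(2 * np.pi * idx.month / 12) - 0.01 * np.arange(len(idx))
            + rng.normal(0.0, 0.2, len(idx)),
            index=idx)

        train = anomaly["2003-01":"2013-12"]      # training period
        valid = anomaly["2014-01":"2016-12"]      # validation period

        # Placeholder order; in practice each grid cell's order is chosen by AIC/BIC.
        res = SARIMAX(train, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        print("AIC:", round(res.aic, 1), "BIC:", round(res.bic, 1))

        # Validate on 2014-2016.
        pred = res.get_forecast(steps=len(valid)).predicted_mean
        rmse = float(np.sqrt(np.mean((pred.values - valid.values) ** 2)))
        print("Validation RMSE:", round(rmse, 3))

        # Re-fit on the full 2003-2016 record with the same order, then forecast 2017-2019.
        full = SARIMAX(anomaly, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
        forecast_2017_2019 = full.get_forecast(steps=36).predicted_mean
        print(forecast_2017_2019.head())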

    Parsimonious Catchment and River Flow Modelling

    It is increasingly the case that models are being developed as "evolving" products rather than one-off application tools, such that auditable modelling versus ad hoc treatment of models becomes a pivotal issue. Auditable modelling is particularly vital to "parsimonious modelling" aimed at meeting specific modelling requirements. This paper outlines various contributory factors and aims to proactively seed a research topic by inextricably linking value/risk management to parsimonious modelling. Value management in modelling may be implemented in terms of incorporating "enough detail" into a model so that the synergy among the constituent units of the model captures that of the real system. It is a problem of diminishing returns, since further reductions in the constituent units will create an unacceptable difference between the model and the real system; conversely, any further detail will add to the cost of modelling without returning any significant benefit. The paper also defines risk management in relation to modelling. It presents a qualitative framework for value/risk management towards parsimonious modelling by the categorisation of "modelling techniques" in terms of "control volume".