
    Adapting a HEP Application for Running on the Grid

    The goal of the EU IST int.eu.grid project is to build middleware facilities that enable the execution of real-time and interactive applications on the Grid. Within this research, support for the HEP application is provided by a Virtual Organization, a monitoring system, and a real-time dispatcher (RTD). These facilities realize the pilot-jobs idea, which allows grid resources to be allocated in advance and events to be analyzed in real time. In this paper we present the HEP Virtual Organization, the details of the monitoring system, and the RTD, and we show how the HEP application is run with these facilities so as to meet its real-time requirements.
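    The pilot-jobs idea can be summarised in a few lines: workers are started on grid resources before any work exists, so that when an event arrives it is handed to an already-running process instead of going through grid job submission. The sketch below uses only Python's standard library and entirely hypothetical names; it is not the int.eu.grid RTD API, only an illustration of the pattern.

```python
# Toy illustration of the pilot-jobs pattern: pilots are allocated in
# advance, then a dispatcher pushes events to them in (near) real time.
# All names are hypothetical; this is not the int.eu.grid RTD API.
import queue
import threading
import time

events = queue.Queue()             # the dispatcher's event queue

def pilot_worker(worker_id):
    """A pilot job: already running on a grid resource, waiting for events."""
    while True:
        event = events.get()
        if event is None:          # sentinel: dispatcher is shutting down
            break
        # Analyse the event immediately: no job-submission latency,
        # which is what makes near-real-time processing possible.
        print(f"pilot {worker_id} analysed event {event}")

# "Allocate resources in advance": start pilots before any events exist.
pilots = [threading.Thread(target=pilot_worker, args=(i,)) for i in range(3)]
for p in pilots:
    p.start()

# The real-time dispatcher's role: push events to idle pilots as they arrive.
for event_id in range(10):
    events.put(event_id)
    time.sleep(0.01)

for _ in pilots:                   # one shutdown sentinel per pilot
    events.put(None)
for p in pilots:
    p.join()
```

    The latency saving comes from moving resource allocation out of the event path: when an event arrives, the dispatcher performs only a queue insertion, never a grid job submission.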

    Large-scale grid-enabled lattice-Boltzmann simulations of complex fluid flow in porous media and under shear

    Well designed lattice-Boltzmann codes exploit the essentially embarrassingly parallel features of the algorithm and so can be run with considerable efficiency on modern supercomputers. Such scalable codes permit us to simulate the behaviour of increasingly large quantities of complex condensed matter systems. In the present paper, we present some preliminary results on the large scale three-dimensional lattice-Boltzmann simulation of binary immiscible fluid flows through a porous medium derived from digitised x-ray microtomographic data of Bentheimer sandstone, and from the study of the same fluids under shear. Simulations on such scales can benefit considerably from the use of computational steering and we describe our implementation of steering within the lattice-Boltzmann code, called LB3D, making use of the RealityGrid steering library. Our large scale simulations benefit from the new concept of capability computing, designed to prioritise the execution of big jobs on major supercomputing resources. The advent of persistent computational grids promises to provide an optimal environment in which to deploy these mesoscale simulation methods, which can exploit the distributed nature of compute, visualisation and storage resources to reach scientific results rapidly; we discuss our work on the grid-enablement of lattice-Boltzmann methods in this context.

    Comment: 17 pages, 6 figures, accepted for publication in Phil. Trans. R. Soc. Lond.
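    To show why the method is "essentially embarrassingly parallel", here is a toy single-phase D2Q9 BGK lattice-Boltzmann step in NumPy. LB3D itself is a large three-dimensional multicomponent code; the grid size, relaxation time, and body force below are illustrative assumptions, not LB3D parameters. Each site's collision is purely local and streaming touches only nearest neighbours, which is what makes domain decomposition across many cores so effective.

```python
# Toy single-phase lattice-Boltzmann (D2Q9, BGK) collide-and-stream loop.
# Parameters are illustrative only; LB3D is a 3D multicomponent code.
import numpy as np

nx, ny, tau = 64, 64, 0.8                             # lattice size, relaxation time
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])    # D2Q9 lattice velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)              # D2Q9 lattice weights

def equilibrium(rho, u):
    """Second-order equilibrium distribution f_i^eq(rho, u)."""
    cu = np.einsum("id,xyd->xyi", c, u)               # c_i . u at each site
    usq = np.einsum("xyd,xyd->xy", u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

rho = np.ones((nx, ny))
u = np.zeros((nx, ny, 2))
f = equilibrium(rho, u)                               # start at rest

for step in range(100):
    rho = f.sum(axis=-1)                              # macroscopic density
    u = np.einsum("xyi,id->xyd", f, c) / rho[..., None]
    u[..., 0] += 1e-4                                 # crude body force (toy flow driver)
    f += (equilibrium(rho, u) - f) / tau              # BGK collision (purely local)
    for i in range(9):                                # streaming along each c_i
        f[..., i] = np.roll(f[..., i], shift=tuple(c[i]), axis=(0, 1))

print("mean x-velocity after 100 steps:", u[..., 0].mean())
```

    Because the collision step needs no neighbour data at all, a parallel implementation only communicates the thin halo of distributions crossing subdomain boundaries during streaming.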

    Sustained Software for Cyberinfrastructure - Analyses of Successful Efforts with a Focus on NSF-Funded Software

    Reliable software that provides needed functionality is essential for a comprehensive, balanced, and flexible distributed cyberinfrastructure (CI) that, in turn, supports science and engineering applications. The purpose of this study was to understand what factors lead to software projects being well sustained over the long run, focusing on software created with funding from the US National Science Foundation (NSF) and/or used by researchers funded by the NSF. We surveyed NSF-funded researchers and performed in-depth studies of software projects that have been sustained over many years. Successful projects generally used open-source software licenses and employed good software engineering and testing practices; however, many projects that have not been well sustained over time also meet these criteria. The features that stood out about successful projects were deeply committed leadership and some sort of user forum or conference held at least annually. In some cases, software project leaders have employed multiple financial strategies over the course of a decades-old software project. Such well-sustained software is used in major distributed CI projects that support thousands of users, and it is critical to the operation of major distributed CI facilities in the US. The findings of our study identify some characteristics of software that is relevant to the NSF-supported research community and that has been sustained over many years.

    Resource provisioning in Science Clouds: Requirements and challenges

    Cloud computing has permeated the information technology industry over the last few years, and it is now emerging in scientific environments. Science user communities demand a broad range of computing power for their high-performance applications, which they have traditionally obtained from local clusters, high-performance computing systems, and computing grids. Different computational models produce different workloads, and the cloud is already considered a promising paradigm for serving them. The scheduling and allocation of resources is a challenging matter in any form of computation, and clouds are no exception. Science applications have unique features that differentiate their workloads, so these requirements have to be taken into consideration when building a Science Cloud. This paper discusses the main scheduling and resource-allocation challenges for any Infrastructure as a Service provider supporting scientific applications.
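    To make the allocation problem concrete, the sketch below implements the simplest possible placement policy, first fit, for hypothetical VM requests and hosts. Real IaaS schedulers weigh many more constraints (data locality, network topology, fairness, preemption), which is precisely where the challenges discussed in the paper arise.

```python
# A deliberately simple sketch of IaaS resource allocation: first-fit
# placement of VM requests onto hosts. The Host/Request types and all
# capacities are hypothetical, not any real cloud scheduler's API.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int
    free_ram_gb: int

@dataclass
class Request:
    vm_id: str
    cpus: int
    ram_gb: int

def first_fit(requests: list[Request], hosts: list[Host]) -> dict[str, str]:
    """Place each VM on the first host with enough spare CPU and RAM."""
    placement: dict[str, str] = {}
    for req in requests:
        for host in hosts:
            if host.free_cpus >= req.cpus and host.free_ram_gb >= req.ram_gb:
                host.free_cpus -= req.cpus
                host.free_ram_gb -= req.ram_gb
                placement[req.vm_id] = host.name
                break
        else:
            placement[req.vm_id] = "unscheduled"   # no capacity left
    return placement

hosts = [Host("node-a", 16, 64), Host("node-b", 8, 32)]
reqs = [Request("hpc-job", 12, 48), Request("batch-1", 8, 16),
        Request("batch-2", 8, 16)]
print(first_fit(reqs, hosts))
# {'hpc-job': 'node-a', 'batch-1': 'node-b', 'batch-2': 'unscheduled'}
```

    Even this toy policy shows the tension a Science Cloud must resolve: a single large HPC-style request can exhaust a host and starve later jobs, so the provider has to decide between packing, fairness, and queueing strategies.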

    10th SC@RUG 2013 proceedings: Student Colloquium 2012-2013
