
    Investigating grid computing technologies for use with commercial simulation packages

    As simulation experimentation in industry becomes more computationally demanding, grid computing can be seen as a promising technology with the potential to bind together the computational resources needed to execute such simulations quickly. To investigate how this might be possible, this paper reviews the grid technologies that can be used together with the commercial-off-the-shelf simulation packages (CSPs) used in industry. The paper identifies two specific forms of grid computing (Public Resource Computing and Enterprise-wide Desktop Grid Computing) and the middleware associated with them (BOINC and Condor) as being suitable for grid-enabling existing CSPs. It further proposes three different CSP-grid integration approaches and identifies one of them as the most appropriate. It is hoped that this research will encourage simulation practitioners to consider grid computing as a technologically viable means of executing CSP-based experiments faster.
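    As a concrete, hypothetical illustration of the Enterprise-wide Desktop Grid route with Condor middleware: a CSP experiment of independent replications can be expressed as an HTCondor submit description, one queued job per replication. The executable name, model file, and replication count in the sketch below are assumptions, not details from the paper.

```python
# Minimal sketch: farm out N independent replications of a commercial
# simulation package (CSP) experiment via HTCondor. The executable,
# model file, and replication count are illustrative placeholders.

N_REPLICATIONS = 25  # hypothetical experiment size

SUBMIT_TEMPLATE = """\
executable              = run_csp_model.bat
arguments               = model.sim $(Process)
transfer_input_files    = model.sim
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
output                  = rep_$(Process).out
error                   = rep_$(Process).err
log                     = experiment.log
queue {n}
"""

def write_submit_file(path: str = "csp_experiment.sub") -> None:
    """Write an HTCondor submit description; each $(Process) slot
    becomes one replication that can run on an idle desktop PC."""
    with open(path, "w") as f:
        f.write(SUBMIT_TEMPLATE.format(n=N_REPLICATIONS))

if __name__ == "__main__":
    write_submit_file()  # then: condor_submit csp_experiment.sub
```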

    Integrating BOINC with Microsoft Excel: A case study

    The convergence of conventional Grid computing with public resource computing (PRC) offers potential benefits in the enterprise setting. For this work we took the popular PRC toolkit BOINC and used it to execute a previously monolithic Microsoft Excel financial model across several commodity computers. Our experience indicates that speedup approaching linear may be realised for certain scenarios, and that this approach offers a viable route to leveraging idle desktop PCs in the enterprise.
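    The work-unit decomposition this describes can be sketched as follows. This is a minimal stand-in that uses a local process pool rather than BOINC, and run_excel_scenario is a hypothetical placeholder for whatever harness (e.g. COM automation) actually drives Excel; none of the names come from the paper.

```python
# Illustrative sketch (not the authors' code): split a monolithic
# Excel financial model into independent work units, one scenario
# batch per worker, then merge the partial results.

from concurrent.futures import ProcessPoolExecutor

def run_excel_scenario(scenario_id: int) -> float:
    # Hypothetical stand-in: in the BOINC setting each work unit
    # would open the workbook, set inputs, recalculate, read outputs.
    return float(scenario_id)  # placeholder result

def split_into_work_units(n_scenarios: int, n_workers: int):
    """Assign scenarios round-robin so each worker gets one batch."""
    batches = [[] for _ in range(n_workers)]
    for s in range(n_scenarios):
        batches[s % n_workers].append(s)
    return batches

def run_batch(batch):
    return [run_excel_scenario(s) for s in batch]

def run_distributed(n_scenarios: int = 1000, n_workers: int = 8):
    batches = split_into_work_units(n_scenarios, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(run_batch, batches)
    # Speedup approaches linear when scenarios are independent.
    return [r for batch in partials for r in batch]

if __name__ == "__main__":
    print(len(run_distributed()), "scenario results merged")
```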

    Distributed computing practice for large-scale science and engineering applications

    It is generally accepted that the ability to develop large-scale distributed applications has lagged seriously behind other developments in cyberinfrastructure. In this paper, we provide insight into how such applications have been developed and an understanding of why developing applications for distributed infrastructure is hard. Our approach is unique in the sense that it is centered around half a dozen existing scientific applications; we posit that these scientific applications are representative of the characteristics, requirements, as well as the challenges of the bulk of current distributed applications on production cyberinfrastructure (such as the US TeraGrid). We provide a novel and comprehensive analysis of such distributed scientific applications. Specifically, we survey existing models and methods for large-scale distributed applications and identify commonalities, recurring structures, patterns and abstractions. We find that there are many ad hoc solutions employed to develop and execute distributed applications, which result in a lack of generality and the inability of distributed applications to be extensible and independent of infrastructure details. In our analysis, we introduce the notion of application vectors: a novel way of understanding the structure of distributed applications. Important contributions of this paper include identifying patterns that are derived from a wide range of real distributed applications, as well as an integrated approach to analyzing applications, programming systems and patterns, resulting in the ability to provide a critical assessment of the current practice of developing, deploying and executing distributed applications. Gaps and omissions in the state of the art are identified, and directions for future research are outlined.
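    One recurring structure that such pattern analyses commonly identify is the master-worker (task-farm) arrangement: a master decomposes the application into independent tasks and farms them out to workers. The sketch below is purely illustrative and not drawn from the paper; the worker kernel and task counts are invented, and a production version would substitute a grid or pilot-job backend for the local process pool.

```python
# Minimal master-worker sketch, assuming independent tasks.

from concurrent.futures import ProcessPoolExecutor, as_completed

def worker(task_id: int):
    """Stand-in for a science kernel; returns (task, result)."""
    return task_id, task_id * task_id

def master(n_tasks: int = 100, n_workers: int = 4) -> dict:
    results = {}
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        futures = [pool.submit(worker, t) for t in range(n_tasks)]
        for fut in as_completed(futures):  # collect in completion order
            task_id, value = fut.result()
            results[task_id] = value
    return results

if __name__ == "__main__":
    print(len(master()), "tasks completed")
```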

    Speeding up systems biology simulations of biochemical pathways using condor

    This is the accepted version of the following article: "Speeding up Systems Biology Simulations of Biochemical Pathways using Condor", Concurrency and Computation: Practice and Experience, Volume 26, Issue 17, pages 2727–2742, 10 December 2014, which has been published in final form at http://onlinelibrary.wiley.com/doi/10.1002/cpe.3161/abstract

    Systems biology is a scientific field that uses computational modelling to study biological and biochemical systems. The simulation and analysis of models of these systems typically explore behaviour over a wide range of parameter values; as such, they are usually characterised by the need for nontrivial amounts of computing power. Grid computing provides access to such computational resources. In previous research, we created the grid-enabled biochemical networks simulation environment to attempt to speed up systems biology simulations over a grid (the UK National Grid Service and ScotGrid). Following on from this work, we have created the simulation modelling of the epidermal growth factor receptor microtubule-associated protein kinase pathway (SIMAP) utility, a standalone simulation tool dedicated to the modelling and analysis of the epidermal growth factor receptor microtubule-associated protein kinase pathway. This builds on experiences from the biochemical networks simulation environment by decoupling the simulation modelling elements from the Grid middleware. This new utility enables us to interface with different grid technologies. This paper therefore describes the new SIMAP utility and an empirical investigation of its performance when deployed over a desktop grid based on the high throughput computing middleware Condor. We present our results based on a case study with a model of the mammalian ErbB signalling pathway, a pathway strongly linked to cancer.
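    To make the shape of such a parameter exploration concrete, here is a hedged sketch of generating work units for a Condor-style sweep: enumerate parameter combinations and emit one job input per combination. The parameter names and ranges are invented for illustration and are not taken from the ErbB model.

```python
# Hedged sketch: one Condor job per parameter combination.

import itertools
import json

PARAM_GRID = {                      # hypothetical kinetic parameters
    "k_phosphorylation": [0.01, 0.1, 1.0],
    "k_dephosphorylation": [0.05, 0.5],
    "egf_concentration": [1, 10, 100],
}

def write_sweep(prefix: str = "job") -> int:
    """Write one JSON input file per parameter combination; Condor's
    $(Process) index selects which file a given job reads."""
    keys = sorted(PARAM_GRID)
    combos = list(itertools.product(*(PARAM_GRID[k] for k in keys)))
    for i, values in enumerate(combos):
        with open(f"{prefix}_{i}.json", "w") as f:
            json.dump(dict(zip(keys, values)), f)
    return len(combos)   # pass this count to 'queue <n>' in the submit file

if __name__ == "__main__":
    print(write_sweep(), "work units generated")
```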

    Rapid Earthquake Characterization Using MEMS Accelerometers and Volunteer Hosts Following the M 7.2 Darfield, New Zealand, Earthquake

    We test the feasibility of rapidly detecting and characterizing earthquakes with the Quake‐Catcher Network (QCN), which connects low‐cost microelectromechanical systems accelerometers to a network of volunteer‐owned, Internet‐connected computers. Following the 3 September 2010 M 7.2 Darfield, New Zealand, earthquake, we installed over 180 QCN sensors in the Christchurch region to record the aftershock sequence. The sensors are monitored continuously by the host computer and send trigger reports to the central server. The central server correlates incoming triggers to detect when an earthquake has occurred. The location and magnitude are then rapidly estimated from a minimal set of received ground‐motion parameters. Full seismic time series are typically not retrieved until tens of minutes or even hours after an event. We benchmark the QCN real‐time detection performance against the GNS Science GeoNet earthquake catalog. Under normal network operations, QCN detects and characterizes earthquakes within 9.1 s of the earthquake rupture and determines the magnitude within 1 magnitude unit of that reported in the GNS catalog for 90% of the detections.
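    The server-side idea (correlate triggers, then locate and estimate magnitude from sparse ground-motion parameters) can be sketched roughly as below. The wave speed, attenuation coefficients, and grid-search scheme are invented placeholders, not QCN's actual algorithms.

```python
# Toy sketch: locate an event from station trigger times by grid
# search, then invert a placeholder attenuation relation for magnitude.

import math

V_P = 6.0  # assumed crustal P-wave speed, km/s (illustrative)

def locate(triggers, grid):
    """triggers: [(x_km, y_km, t_s)]; grid: candidate (x, y) points.
    Pick the epicentre minimising travel-time residual variance:
    if the candidate is right, all implied origin times agree."""
    def misfit(ex, ey):
        origin_times = [t - math.hypot(x - ex, y - ey) / V_P
                        for x, y, t in triggers]
        mean = sum(origin_times) / len(origin_times)
        return sum((t - mean) ** 2 for t in origin_times)
    return min(grid, key=lambda p: misfit(*p))

def magnitude(pga_g, dist_km, a=2.0, b=1.0, c=3.0):
    """Toy attenuation inversion: M = a*log10(PGA) + b*log10(R) + c.
    The coefficients are invented placeholders."""
    return a * math.log10(pga_g) + b * math.log10(dist_km) + c
```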

    Enhancing reliability with Latin Square redundancy on desktop grids

    Computational grids are some of the largest computer systems in existence today. Unfortunately they are also, in many cases, the least reliable. This research examines the use of redundancy with permutation as a method of improving reliability in computational grid applications. Three primary avenues are explored: development of a new redundancy model, the Replication and Permutation Paradigm (RPP), for computational grids; development of grid simulation software for testing RPP against other redundancy methods; and, finally, running a program on a live grid using RPP. An important part of RPP involves distributing data and tasks across the grid in Latin Square fashion. Two theorems, with proofs, regarding Latin Squares are developed. The theorems describe the changing position of symbols between the rows of a standard Latin Square: when a symbol is missing because a column has been removed, they provide a basis for determining the next row and column where the missing symbol can be found. Interesting in their own right, the theorems also have implications for redundancy: they allow one to state the maximum makespan in the face of missing computational hosts when using Latin Square redundancy. The simulator software was developed and used to compare different data and task distribution schemes on a simulated grid. The software clearly showed the advantage of RPP, which resulted in faster completion times in the face of computational host failures. The Latin Square method also fails gracefully: jobs still complete under massive node failure, at the cost of increased makespan. Finally, an Inductive Logic Program (ILP) for pharmacophore search was executed, using a Latin Square redundancy methodology, on a Condor grid in the Dahlem Lab at the University of Louisville Speed School of Engineering. All jobs completed, even in the face of large numbers of randomly generated computational host failures.
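    To illustrate the Latin Square idea concretely (a sketch of the distribution principle, not the dissertation's RPP implementation): in a cyclic Latin square L[i][j] = (i + j) mod n, each symbol appears exactly once per row and per column, so assigning replica i of the task set according to row i places the replicas of any given task on distinct hosts, and removing a column (a failed host) costs each task at most one replica. Host and replica counts below are invented.

```python
# Cyclic Latin Square task distribution sketch.

def cyclic_latin_square(n: int):
    """L[i][j] = (i + j) mod n: each symbol once per row and column."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def assign_replicas(n_hosts: int, n_replicas: int) -> dict:
    """Map each task (symbol) to its (replica, host-column) placements,
    using one square row per replica level."""
    square = cyclic_latin_square(n_hosts)
    plan = {}
    for r in range(n_replicas):          # one row per replica
        for col in range(n_hosts):
            plan.setdefault(square[r][col], []).append((r, col))
    return plan

if __name__ == "__main__":
    # Each task's replicas land on distinct hosts, so losing one
    # host (column) removes at most one replica of any task.
    for task, placements in assign_replicas(5, 3).items():
        print(f"task {task}: hosts {[h for _, h in placements]}")
```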

    OpenIFS@home version 1: a citizen science project for ensemble weather and climate forecasting

    Weather forecasts rely heavily on general circulation models of the atmosphere and other components of the Earth system. National meteorological and hydrological services and intergovernmental organizations, such as the European Centre for Medium-Range Weather Forecasts (ECMWF), provide routine operational forecasts on a range of spatio-temporal scales by running these models at high resolution on state-of-the-art high-performance computing systems. Such operational forecasts are very demanding in terms of computing resources. To facilitate the use of a weather forecast model for research and training purposes outside the operational environment, ECMWF provides a portable version of its numerical weather forecast model, OpenIFS, for use by universities and other research institutes on their own computing systems. In this paper, we describe a new project (OpenIFS@home) that combines OpenIFS with a citizen science approach to involve the general public in helping conduct scientific experiments. Volunteers from across the world can run OpenIFS@home on their computers at home, and the results of these simulations can be combined into large forecast ensembles. The infrastructure of such distributed computing experiments is based on our experience and expertise with the climateprediction.net (https://www.climateprediction.net/, last access: 1 June 2021) and weather@home systems. In order to validate this first use of OpenIFS in a volunteer computing framework, we present results from ensembles of forecast simulations of Tropical Cyclone Karl from September 2016 studied during the NAWDEX field campaign. This cyclone underwent extratropical transition and intensified in mid-latitudes to give rise to an intense jet streak near Scotland and heavy rainfall over Norway. For the validation we use a 2000-member ensemble of OpenIFS run on the OpenIFS@home volunteer framework and a smaller ensemble of the size of operational forecasts using ECMWF's forecast model in 2016 run on the ECMWF supercomputer with the same horizontal resolution as OpenIFS@home. We present ensemble statistics that illustrate the reliability and accuracy of the OpenIFS@home forecasts and discuss the use of large ensembles in the context of forecasting extreme events.
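    As a rough illustration of the kind of ensemble statistics mentioned (not code from the paper), the sketch below computes the ensemble mean and spread and a rank histogram, a standard reliability diagnostic in which a flat histogram indicates a well-calibrated ensemble. The array shapes and synthetic data are assumptions.

```python
# Ensemble diagnostics sketch: mean/spread and rank histogram.

import numpy as np

def mean_and_spread(ens: np.ndarray):
    """ens: (members, points) forecast array."""
    return ens.mean(axis=0), ens.std(axis=0, ddof=1)

def rank_histogram(ens: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Rank of each observation among the member values; counts
    fall into members + 1 bins."""
    ranks = (ens < obs[None, :]).sum(axis=0)   # 0 .. n_members
    return np.bincount(ranks, minlength=ens.shape[0] + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ens = rng.normal(size=(2000, 500))   # e.g. a 2000-member ensemble
    obs = rng.normal(size=500)
    _, spread = mean_and_spread(ens)
    print("mean spread:", spread.mean())
    print("rank histogram (first bins):", rank_histogram(ens, obs)[:10])
```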