Scalable desktop grid system
Desktop grids are easy to install on a large number of personal computers, which is a prerequisite for the spread of grid technology. Current desktop grids connect all PCs in a flat topology, that is, every computer reports to a single central server. SZTAKI Desktop Grid starts from a standalone desktop grid as a building block. It is extended to include clusters that appear as single powerful PCs while still using their local resource management systems. Such building blocks can take over additional tasks from other desktop grids, enabling the set-up of a hierarchy. Desktop grids with different owners can thus share resources, although only within a hierarchical structure. This brings desktop grids closer to other grid technologies, where resource sharing among several users is the most important feature.
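The hierarchy described above can be pictured in miniature: a lower-level grid serves its own work queue first and, when idle, takes over tasks from its parent. A minimal Python sketch of that behaviour (class and method names are illustrative, not the SZTAKI Desktop Grid API):

```python
from collections import deque

class DesktopGrid:
    """Toy desktop-grid node in a hierarchy (illustrative only,
    not the SZTAKI Desktop Grid API)."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # higher-level grid to take work from
        self.queue = deque()      # locally submitted work units

    def submit(self, work_unit):
        self.queue.append(work_unit)

    def fetch_work(self):
        """Serve local work first; when idle, take over a task from the parent."""
        if self.queue:
            return self.queue.popleft()
        if self.parent is not None:
            return self.parent.fetch_work()
        return None

# Two-level hierarchy: a lab-level grid beneath a central one.
central = DesktopGrid("central")
lab = DesktopGrid("lab", parent=central)
central.submit("task-A")
print(lab.fetch_work())   # the idle lab grid takes over the central task
```

The key design point is that work always flows downward on demand: a child never pushes work up, so different owners can share resources while each grid keeps control of its own queue.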
From Quantity to Quality: Massive Molecular Dynamics Simulation of Nanostructures under Plastic Deformation in Desktop and Service Grid Distributed Computing Infrastructure
The distributed computing infrastructure (DCI) on the basis of BOINC and
EDGeS-bridge technologies for high-performance distributed computing is used
for porting the sequential molecular dynamics (MD) application to its parallel
version for DCI with Desktop Grids (DGs) and Service Grids (SGs). The actual
metrics of the working DG-SG DCI were measured: host performances were found to be
normally distributed, while other host characteristics (CPUs, RAM, and HDD per
host) showed signs of log-normal distributions. The practical
feasibility and high efficiency of the MD simulations on the basis of DG-SG DCI
were demonstrated in an experiment with massive MD simulations of a large
ensemble of aluminum nanocrystals. Statistical analysis (Kolmogorov-Smirnov
test, moment analysis, and bootstrapping analysis) of the defect density
distribution over the ensemble of nanocrystals showed that a change of the
plastic deformation mode is accompanied by a qualitative change of the defect
density distribution type over the ensemble. Some
limitations (fluctuating performance, unpredictable availability of resources,
etc.) of the typical DG-SG DCI were outlined, and some advantages (high
efficiency, high speedup, and low cost) were demonstrated. Deploying on DG DCI
allows new scientific results to be obtained from the simulation of numerous
configurations by harnessing sufficient computational power to undertake MD
simulations over a wider range of physical parameters
(configurations) in a much shorter timeframe.

Comment: 13 pages, 11 figures (http://journals.agh.edu.pl/csci/article/view/106
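The distributional check the abstract mentions, testing whether a host characteristic follows a log-normal distribution via the Kolmogorov-Smirnov statistic, can be sketched with the standard library alone. The sample below is synthetic; the parameters and the "RAM per host" framing are illustrative, not the measured DCI values:

```python
import math
import random

def lognorm_cdf(x, mu, sigma):
    """CDF of a log-normal distribution with parameters mu and sigma."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2.0))))

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov statistic: the largest gap between
    the empirical CDF and the hypothesized CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

random.seed(42)
# Synthetic 'RAM per host' sample (GB), drawn log-normally to mimic the
# skewed host characteristics the abstract reports.
ram = [random.lognormvariate(1.0, 0.5) for _ in range(2000)]
d = ks_statistic(ram, lambda x: lognorm_cdf(x, 1.0, 0.5))
print(f"KS statistic: {d:.4f}")   # small values are consistent with log-normality
```

In practice one would compare `d` against the critical value for the sample size (or use a library routine such as SciPy's `kstest`) to accept or reject log-normality.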
Change of Scaling and Appearance of Scale-Free Size Distribution in Aggregation Kinetics by Additive Rules
An idealized general model of aggregate growth is considered on the basis of
simple additive rules that correspond to a one-step aggregation process. Two
idealized cases were analytically investigated and simulated by the Monte Carlo
method in a Desktop Grid distributed computing environment to analyze
"pile-up" and "wall" cluster distributions in different aggregation scenarios.
Several aspects of aggregation kinetics (change of scaling, change of size
distribution type, and appearance of scale-free size distribution) driven by
"zero cluster size" boundary condition were determined by analysis of evolving
cumulative distribution functions. The "pile-up" case with a \textit{minimum}
active surface (singularity) could imitate piling up aggregations of
dislocations, and the case with a \textit{maximum} active surface could imitate
arrangements of dislocations in walls. The change of scaling law (for pile-ups
and walls) and availability of scale-free distributions (for walls) were
analytically shown and confirmed by scaling, fitting, moment, and bootstrapping
analyses of simulated probability density and cumulative distribution
functions. The initial "singular" \textit{symmetric} distribution of pile-ups
evolves according to the "infinite" diffusive scaling law, which is later
replaced by a "semi-infinite" diffusive scaling law with an \textit{asymmetric}
distribution of pile-ups. In contrast, the initial "singular"
\textit{symmetric} distributions of walls first evolve according to the
diffusive scaling law, which is later replaced by a ballistic (linear) scaling
law with \textit{scale-free} exponential distributions without distinctive
peaks. Conclusions were drawn as to possible applications of this approach for
scaling, fitting, moment, and bootstrapping analyses of distributions in
simulated and experimental data.

Comment: 37 pages, 16 figures, 1 table; accepted preprint version after reviewers' comments, Physica A: Statistical Mechanics and its Applications (2014)
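The one-step additive aggregation with a "zero cluster size" boundary can be illustrated by a toy Monte Carlo run: each cluster gains or loses one unit per step, and any cluster reaching size zero is removed. This is a simplified sketch of the setting, not the paper's exact rules or parameters:

```python
import random

def simulate_pileup(n_clusters=5000, steps=400, seed=7):
    """Toy one-step additive aggregation: every surviving cluster gains or
    loses one unit per step with equal probability; a cluster reaching size
    zero is removed (the 'zero cluster size' boundary). Illustrative only,
    not the paper's exact rules."""
    rng = random.Random(seed)
    sizes = [1] * n_clusters          # singular initial distribution at size 1
    for _ in range(steps):
        sizes = [s + rng.choice((-1, 1)) for s in sizes]
        sizes = [s for s in sizes if s > 0]    # absorb at zero
    return sizes

def empirical_cdf(sizes):
    """Evolving cumulative distribution function over the surviving clusters."""
    xs = sorted(sizes)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

sizes = simulate_pileup()
mean = sum(sizes) / len(sizes)
print(f"survivors: {len(sizes)} of 5000, mean size: {mean:.1f}")
```

With these parameters only a small fraction of clusters survives, and the surviving sizes spread roughly as the square root of the number of steps, i.e. diffusively, which is the kind of behaviour the evolving cumulative distribution functions above are used to detect.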
Molecular docking with Raccoon2 on clouds: extending desktop applications with cloud computing
Molecular docking is a computer simulation that predicts the binding affinity between two molecules, a ligand and a receptor. Large-scale docking simulations, using one receptor and many ligands, are known as structure-based virtual screening. Often used in drug discovery, virtual screening can be very computationally demanding. This is why user-friendly, domain-specific web or desktop applications have been created that enable running simulations on powerful computing infrastructures. Cloud computing provides on-demand availability, pay-per-use pricing, and great scalability, which can improve the performance and efficiency of scientific applications. This paper investigates how domain-specific desktop applications can be extended to run scientific simulations on various clouds. A generic approach based on scientific workflows is proposed, and a proof of concept is implemented using the Raccoon2 desktop application for virtual screening, WS-PGRADE workflows, and gUSE services with the CloudBroker platform. The presented analysis illustrates that this approach to extending a domain-specific desktop application can run workflows on different types of clouds, and indeed makes use of the on-demand scalability provided by cloud computing. It also facilitates the execution of virtual screening simulations by life scientists without requiring them to abandon their favourite desktop environment, while providing them with resources without major capital investment.
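What makes virtual screening such a good fit for grids and clouds is its embarrassingly parallel structure: one receptor, many independent per-ligand jobs. A minimal sketch of that fan-out with a placeholder docking function (a real screen would invoke a docking engine here; the scores below are fake and purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def dock(receptor, ligand):
    """Placeholder for a docking run; a real screen would launch a docking
    engine for this receptor/ligand pair. The 'score' is fake, derived from
    the ligand name purely for illustration."""
    return ligand, -float(len(ligand))

def virtual_screen(receptor, ligands, workers=4):
    """Fan the independent per-ligand jobs out in parallel, then rank the
    ligands by score (more negative = stronger predicted binding)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda lig: dock(receptor, lig), ligands))
    return sorted(results, key=lambda r: r[1])

ranked = virtual_screen("receptor.pdbqt", ["lig_a", "lig_bc", "lig_d"])
print(ranked[0][0])   # best-scoring ligand
```

In the cloud setting described above, the thread pool is replaced by workflow nodes submitted to cloud resources, but the fan-out/rank structure is the same.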
SZTAKI desktop grid: building a scalable, secure platform for desktop grid computing
In this paper we present a concept of how separate desktop grids can be used as building blocks for larger-scale grids by organizing them into a hierarchical tree. We describe an enhanced security model which satisfies the requirements of the hierarchical setup and is aimed at real-world deployment.
Supporting environmental modelling with Taverna workflows, web services and desktop grid technology
Ecosystem functioning, climate change, and the multiple interactions among biogeochemical cycles, the climate system, site conditions and land-use options are leading-edge topics in recent environmental modelling. Terrestrial ecosystem models are widely used to support carbon sequestration and ecosystem studies under various ecological circumstances. Our team uses the Biome-BGC model (Numerical Terradynamic Simulation Group, University of Montana) and develops an improved version of it, called Biome-BGC MuSo. Both the original and the improved model estimate the ecosystem-scale storage and fluxes of energy, carbon, nitrogen and water, controlled by various physical and biological processes on a daily time-scale. Web services were also developed and integrated with parallel-processing desktop grid technology. The Taverna workflow management system was used to build and carry out elaborate workflows such as seamless data flow into model simulation, Monte Carlo experiments, model sensitivity analysis, model-data fusion, estimation of ecosystem service indicators, and extensive spatial modelling. Straightforward management of complex data analysis tasks, organized into appropriately documented, shared and reusable scientific workflows, enables researchers to carry out detailed and scientifically challenging ‘in silico’ experiments and applications that could open new directions in ecosystem research and, in a broader sense, supports progress in environmental modelling. The workflow approach built upon these web services allows even the most complicated computations to be initiated without the need for programming skills or a deep understanding of model structure and initialization. The developments enable a wider array of scientists to perform ecosystem-scale simulations, and to perform analyses not previously possible due to high complexity and computational demand.
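The workflow idea, chaining data preparation, model execution, and analysis so that users never program the model directly, can be pictured as plain function composition. Every name and value below is a placeholder, not the Biome-BGC MuSo interface:

```python
def prepare_inputs(site):
    """Placeholder for the data-assembly step feeding a model run."""
    return {"site": site, "n_years": 10}

def run_model(inputs):
    """Placeholder for one simulation; the real workflow calls the
    Biome-BGC MuSo model through a web service on a desktop grid."""
    return {"site": inputs["site"], "carbon_flux": 1.23}

def analyse(results):
    """Placeholder for the post-processing / indicator-estimation step."""
    return {r["site"]: r["carbon_flux"] for r in results}

# A Taverna workflow is a dataflow graph; the same chaining is shown here
# as plain function composition over two placeholder sites.
sites = ["site_1", "site_2"]
summary = analyse(run_model(prepare_inputs(s)) for s in sites)
print(summary)
```

In the real system each box becomes a workflow node, so the per-site model runs can be dispatched to desktop grid workers in parallel while the analysis step collects their outputs.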