Grid-enabled SIMAP utility: Motivation, integration technology and performance results
A biological system comprises large numbers of functionally diverse and frequently multifunctional sets of elements that interact selectively and nonlinearly to produce coherent behaviours. Such a system can be anything from an intracellular biological process (such as a biochemical reaction cycle, gene regulatory network or signal transduction pathway) to a cell, tissue, entire organism, or even an ecological web. Biochemical systems are
responsible for processing environmental signals, inducing the appropriate cellular responses and sequence of
internal events. However, such systems are at best only partially understood. Systems biology is a scientific field concerned with the systematic study of biological and biochemical systems in terms of their complex interactions rather than their individual molecular components. At the core of systems biology is computational
modelling (also called mathematical modelling), which is the process of constructing and simulating an abstract
model of a biological system for subsequent analysis. This methodology can be used to test hypotheses via in silico experiments, providing predictions that can be tested by in vitro and in vivo studies. For example, the ErbB1-4 receptor tyrosine kinases (RTKs) and the signalling pathways they activate govern most core cellular processes, such as cell division, motility and survival (Citri and Yarden, 2006), and are strongly linked to cancer when they malfunction, e.g. due to mutations. An ODE (ordinary differential equation)-based mass-action ErbB model has been constructed and analysed by Chen et al. (2009) in order to depict the role each protein plays and to ascertain how sets of proteins coordinate with each other to perform distinct physiological functions. The
model comprises 499 species (molecules), 201 parameters and 828 reactions. These in silico experiments can often be computationally very expensive, e.g. when multiple biochemical factors are being considered or a variety of complex networks are being simulated simultaneously. Due to the size and complexity of the models
and the requirement to perform comprehensive experiments, it is often necessary to use high-performance computing (HPC) to keep the experimental time within tractable bounds. On this basis, as part of an EC-funded cancer research project, we have developed the SIMAP Utility, which allows the SImulation modeling of the MAP kinase pathway (http://www.simap-project.org). In this paper we present our experiences with Grid-enabling SIMAP using Condor.
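The kind of mass-action ODE model described above can be sketched in a few lines. The following is a deliberately tiny, illustrative example (the Chen et al. (2009) ErbB model has 499 species and 828 reactions, far beyond this toy): a reversible binding reaction R + L &lt;-&gt; RL with hypothetical rate constants, integrated with a simple forward-Euler loop.

```python
# Toy mass-action model (illustrative only; not the actual ErbB model):
# reversible binding R + L <-> RL with hypothetical rates kf and kr.
kf, kr = 0.1, 0.05          # hypothetical rate constants
R, L, RL = 1.0, 1.0, 0.0    # initial concentrations
dt = 0.01                   # forward-Euler step size
for _ in range(int(200.0 / dt)):
    v = kf * R * L - kr * RL              # net mass-action flux
    R, L, RL = R - v * dt, L - v * dt, RL + v * dt
print(round(RL, 3))  # settles near the equilibrium value 0.5
```

In practice such systems are integrated with stiff ODE solvers rather than forward Euler, which is precisely why large models become computationally expensive and benefit from HPC.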
Libra: An Economy driven Job Scheduling System for Clusters
Clusters of computers have emerged as mainstream parallel and distributed
platforms for high-performance, high-throughput and high-availability
computing. To enable effective resource management on clusters, numerous
cluster management systems and schedulers have been designed. However, their
focus has essentially been on maximizing CPU performance rather than on
improving the value of utility delivered to the user and the quality of
service. This paper
presents a new computational economy driven scheduling system called Libra,
which has been designed to support allocation of resources based on the users'
quality of service (QoS) requirements. It is intended to work as an add-on to
the existing queuing and resource management system. The first version has been
implemented as a plugin scheduler to the PBS (Portable Batch System) system.
The scheduler offers market-based economy driven service for managing batch
jobs on clusters by scheduling CPU time according to user utility as determined
by their budget and deadline rather than system performance considerations. The
Libra scheduler ensures that both these constraints are met within an O(n)
run-time. The Libra scheduler has been simulated using the GridSim toolkit to
carry out a detailed performance analysis. Results show that the deadline and
budget based proportional resource allocation strategy improves the utility of
the system and user satisfaction as compared to system-centric scheduling
strategies.
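The proportional, deadline-and-budget-driven allocation described above can be sketched as follows. This is a hedged, illustrative sketch (names and structure are ours, not the actual Libra/PBS plugin code): each job's required CPU share is its remaining work divided by the time to its deadline, and a node admits a new job only while the sum of shares stays within capacity. Summing the shares is O(n) in the number of running jobs, matching the run-time claimed in the abstract.

```python
# Hedged sketch of Libra-style proportional-share admission control.
# All names here are illustrative assumptions, not the real Libra API.

def required_share(work_seconds, deadline_seconds):
    """CPU fraction needed to finish work_seconds of compute by the deadline."""
    return work_seconds / deadline_seconds

def admit(running_jobs, new_job):
    """running_jobs: list of (work, deadline) tuples; new_job: one such tuple.
    Admit only if the node's total required share stays within capacity 1.0."""
    total = sum(required_share(w, d) for w, d in running_jobs)  # O(n) scan
    return total + required_share(*new_job) <= 1.0

running = [(100, 400), (50, 200)]   # shares 0.25 and 0.25
print(admit(running, (100, 250)))   # needs 0.40 -> total 0.90 -> True
print(admit(running, (200, 250)))   # needs 0.80 -> total 1.30 -> False
```

Under this scheme a tighter deadline demands a larger share, so a user effectively trades budget and deadline against resource consumption.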
Leveraging simulation practice in industry through use of desktop grid middleware
This chapter focuses on the collaborative use of computing resources to support decision making in industry. Through the use of middleware for desktop grid computing, the idle CPU cycles available on existing computing resources can be harvested and used to speed up the execution of applications that have "non-trivial" processing requirements. This chapter focuses on the desktop grid middleware BOINC and Condor, and discusses the integration of commercial simulation software with free-to-download grid middleware so as to offer competitive advantage to organizations that opt for this technology. It is expected that the low-intervention integration approach presented in this chapter (meaning no changes to source code are required) will appeal both to simulation practitioners (as simulations can be executed faster, which in turn means that more replications and optimization are possible in the same amount of time) and to management (as it can potentially increase the return on investment on existing resources).
Monte Carlo validation of a mu-SPECT imaging system on the lightweight grid CiGri
To appear in Future Generation Computer Systems. Monte Carlo Simulations (MCS) are nowadays widely used in the field of nuclear medicine for system and algorithm design. They are valuable for accurately reproducing experimental data, but at the expense of long computing times. An efficient solution for shorter elapsed times has recently been proposed: grid computing. The aim of this work is to validate a small-animal gamma camera MCS and to confirm the usefulness of grid computing for such a study. Good matches between measured and simulated data were achieved, and a crunching factor of up to 70 was attained on a lightweight campus grid.
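The "crunching factor" reported above is simply the ratio of single-machine elapsed time to elapsed time on the grid. A minimal sketch, with illustrative figures rather than the paper's actual measurements:

```python
# Crunching factor = sequential elapsed time / grid elapsed time.
# The numbers below are illustrative assumptions, not measured data.
def crunching_factor(sequential_hours, grid_hours):
    return sequential_hours / grid_hours

print(crunching_factor(700.0, 10.0))  # -> 70.0
```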
Economic-based Distributed Resource Management and Scheduling for Grid Computing
Computational Grids, emerging as an infrastructure for next generation
computing, enable the sharing, selection, and aggregation of geographically
distributed resources for solving large-scale problems in science, engineering,
and commerce. The resources in the Grid are heterogeneous and geographically
distributed, with varying availability, a variety of usage and cost policies
for diverse users at different times, and priorities and goals that vary
with time. The management of resources and application scheduling in such a
large and distributed environment is a complex task. This thesis proposes a
distributed computational economy as an effective metaphor for the management
of resources and application scheduling. It proposes an architectural framework
that supports resource trading and quality-of-service-based scheduling. It
enables the regulation of supply and demand for resources, provides an
incentive for resource owners to participate in the Grid, and motivates
users to trade off between the deadline, budget, and the required level of
quality of service. The thesis demonstrates the capability of economic-based
systems for peer-to-peer distributed computing by developing scheduling
strategies and algorithms driven by users' quality-of-service requirements. It
demonstrates their effectiveness through scheduling experiments on the
World-Wide Grid, solving parameter sweep applications.
GridSim: A Toolkit for the Modeling and Simulation of Distributed Resource Management and Scheduling for Grid Computing
Clusters, grids, and peer-to-peer (P2P) networks have emerged as popular
paradigms for next generation parallel and distributed computing. The
management of resources and scheduling of applications in such large-scale
distributed systems is a complex undertaking. In order to prove the
effectiveness of resource brokers and associated scheduling algorithms, their
performance needs to be evaluated under different scenarios, such as varying
numbers of resources and users with different requirements. In a grid
environment, it is difficult, if not impossible, to perform scheduler
performance evaluation in a repeatable and controllable manner, as resources
and users are distributed across multiple organizations with their own
policies. To overcome
this limitation, we have developed a Java-based discrete-event grid simulation
toolkit called GridSim. The toolkit supports modeling and simulation of
heterogeneous grid resources (both time- and space-shared), users and
application models. It provides primitives for the creation of application
tasks, the mapping of tasks to resources, and their management. To demonstrate
the suitability of the GridSim toolkit, we have simulated a Nimrod-G-like grid
resource broker and evaluated the performance of deadline- and
budget-constrained cost- and time-minimization scheduling algorithms.
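A deadline-and-budget-constrained cost-minimization strategy of the kind evaluated above can be sketched greedily: fill the cheapest resources first, skipping any whose per-job runtime would miss the deadline. This is a hedged illustration in the spirit of a Nimrod-G-like broker, not GridSim's actual API; all names and figures are assumptions.

```python
# Hedged sketch of deadline-constrained cost-minimization scheduling.
# resources: list of (price_per_job, seconds_per_job, free_slots) tuples.

def cost_min_schedule(n_jobs, resources, deadline):
    total_cost, assigned = 0.0, 0
    for price, secs, slots in sorted(resources):  # cheapest first
        if secs > deadline:
            continue  # this resource cannot meet the deadline
        take = min(slots, n_jobs - assigned)
        total_cost += take * price
        assigned += take
        if assigned == n_jobs:
            return total_cost
    return None  # infeasible within the deadline

# Three hypothetical resources: cheap/slow, mid, fast/expensive.
resources = [(1.0, 90, 4), (2.0, 50, 4), (5.0, 10, 4)]
print(cost_min_schedule(8, resources, deadline=60))  # -> 28.0
```

In the example, the cheapest resource is skipped because its 90-second runtime misses the 60-second deadline, so the broker spends more to stay within the deadline constraint; a time-minimization variant would instead sort by speed and check the budget.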
Probabilistic grid scheduling based on job statistics and monitoring information
This transfer thesis presents a novel, probabilistic approach to scheduling applications on computational Grids based on their historical behaviour, current state of the Grid and predictions of the future execution times and resource utilisation of such applications. The work lays a foundation for enabling a more intuitive, user-friendly and effective scheduling technique termed deadline scheduling.
Initial work has established the motivation and requirements for a more efficient Grid scheduler, able to adaptively handle the dynamic nature of Grid resources and the submitted workload. Preliminary scheduler research identified the need for detailed monitoring of Grid resources at the process level, and for a tool to simulate the non-deterministic behaviour and statistical properties of Grid applications.
A simulation tool, GridLoader, has been developed to enable modelling of application loads similar to a number of typical Grid applications. GridLoader is able to simulate CPU utilisation, memory allocation and network transfers according to limits set through command line parameters or a configuration file. Its specific strength is in achieving set resource utilisation targets in a probabilistic manner, thus creating a dynamic environment, suitable for testing the scheduler’s adaptability and its prediction algorithm.
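The probabilistic targeting described above can be sketched as a duty-cycle draw per scheduling quantum. This is a hedged illustration of the idea, not the actual GridLoader code: in each quantum the worker "burns" CPU with probability equal to the target utilisation, so the long-run average converges on the target while individual runs remain non-deterministic, which is what makes the environment useful for testing an adaptive scheduler.

```python
import random

# Hedged sketch of probabilistic CPU-utilisation targeting
# (illustrative assumption, not GridLoader's implementation).
def simulate_utilisation(target, quanta, seed=42):
    rng = random.Random(seed)
    # In each quantum, be busy with probability `target`.
    busy = sum(1 for _ in range(quanta) if rng.random() < target)
    return busy / quanta  # achieved utilisation over the run

achieved = simulate_utilisation(0.30, 100_000)
print(round(achieved, 2))  # close to the 0.30 target
```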
To enable highly granular monitoring of Grid applications, a monitoring framework based on the Ganglia Toolkit was developed and tested. The suite is able to collect resource usage information of individual Grid applications, integrate it into standard XML based information flow, provide visualisation through a Web portal, and export data into a format suitable for off-line analysis.
The thesis also presents initial investigation of the utilisation of University College London Central Computing Cluster facility running Sun Grid Engine middleware. Feasibility of basic prediction concepts based on the historical information and process meta-data have been successfully established and possible scheduling improvements using such predictions identified.
The thesis is structured as follows: Section 1 introduces Grid computing and its major concepts; Section 2 presents open research issues and the specific focus of the author's research; Section 3 gives a survey of the related literature, schedulers, monitoring tools and simulation packages; Section 4 presents the platform for the author's work, the Self-Organising Grid Resource management project; Sections 5 and 6 give detailed accounts of the monitoring framework and simulation tool developed; Section 7 presents the initial data analysis, while Section 8.4 concludes the thesis with appendices and references.