11 research outputs found

    A framework for evolving grid computing systems.

    Grid computing was born in the 1990s, when researchers were looking for a way to share expensive computing resources and experimental equipment. It has become increasingly popular because it promotes the sharing of distributed, often heterogeneous resources and enables scientists and engineers to solve large-scale computing problems. In practice, a huge number of grid computing facilities are already distributed around the world, each created to serve a particular group of scientists, such as weather forecasters, or a group of users, such as stock markets. However, the need to extend the functionality of current grid systems motivates the consideration of grid evolution, which allows many disjoint grids to be combined into a single powerful grid operating as one vast computational resource, and allows grid environments to remain flexible, to change and to evolve. The rationale for grid evolution is the current rapid and increasing advance of both software and hardware. Evolution means adding or removing capabilities; this research defines grid evolution as adding new functions and/or equipment and removing unusable resources that degrade the performance of some nodes.
    This thesis presents a new technique for grid evolution that is seamless and operates at run time. Within grid computing, evolution is an integration of software and hardware and can be of two distinct types, internal and external. Internal evolution occurs inside the grid boundary by migrating special resources, such as application software, from node to node within the grid, whereas external evolution occurs between grids. The thesis develops a framework for grid evolution that insulates users from the complexities of grids. At its core are a resource broker and a grid monitor that together handle internal and external evolution, advance reservation, fault tolerance, monitoring of the grid environment, increased resource utilisation and high availability of grid resources. Evolution is triggered when the grid receives a job whose requirements do not exist on the required node. If the grid holds all the requirements, scattered across its nodes, internal evolution migrates the required resources to that node in order to satisfy the job; if the grid does not have these resources, external evolution enables it either to collect them from other grids (permanent evolution) or to send the job to other grids for execution (just-in-time evolution).
    Finally, a simulation tool called EVOSim has been designed, developed and tested. It is written in Oracle 10g and has been used to create four grids, each with a different setup of nodes, application software, data and policies. Experiments were performed by submitting jobs to the grids at run time and then comparing the results and analysing the performance of grids that use the evolution approach against those that do not. The results of these experiments demonstrate that these features significantly improve the performance of grid environments and provide excellent scheduling results, with a decreasing number of rejected jobs.
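    As a rough illustration of the evolution decision described in this abstract, the sketch below checks whether a job's missing requirements can be gathered from inside the grid (internal evolution) or must come from another grid (external evolution). The names Node, Grid and schedule are hypothetical; this is not the thesis's broker, monitor or EVOSim code.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    resources: set = field(default_factory=set)


@dataclass
class Grid:
    name: str
    nodes: list = field(default_factory=list)

    def all_resources(self):
        # Union of everything installed anywhere inside the grid boundary.
        return set().union(*(n.resources for n in self.nodes))


def schedule(job_requirements, target, grid, other_grids):
    """Decide how to satisfy a job whose requirements may be missing on the target node."""
    missing = set(job_requirements) - target.resources
    if not missing:
        return "run locally"                          # nothing to evolve
    if missing <= grid.all_resources():
        # Internal evolution: migrate the missing resources from other
        # nodes inside the same grid to the target node.
        target.resources |= missing
        return "internal evolution"
    for other in other_grids:
        if missing <= other.all_resources():
            # External evolution: pull the resources in permanently, or
            # ship the job out for execution (just-in-time evolution).
            return f"external evolution via {other.name}"
    return "reject job"


g1 = Grid("g1", [Node("n1", {"matlab"}), Node("n2", {"blast"})])
g2 = Grid("g2", [Node("m1", {"gromacs"})])
print(schedule({"blast"}, g1.nodes[0], g1, [g2]))    # -> internal evolution
print(schedule({"gromacs"}, g1.nodes[0], g1, [g2]))  # -> external evolution via g2
```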

    QoS-aware Storage Virtualization: A Framework for Multi-tier Infrastructures in Cloud Storage Systems

    The emergence of the relatively modern phenomenon of cloud computing has manifested a different approach to the availability and storage of software and data on a remote online server ‘in the cloud’, which can be accessed by pre-determined users through the Internet and, in certain scenarios, even allows data to be shared. Data availability, reliability and access performance are three important factors that cloud providers must take into consideration when designing a high-performance storage system for any organization. Because of the high cost of maintaining and managing multiple local storage systems, it is now considered more practical to design a virtualized multi-tier storage infrastructure; yet the required Quality of Service (QoS) must be guaranteed at the application level within the cloud without ongoing human intervention. Such automated management is necessary because the delivered QoS can vary widely both across and within storage tiers, depending on the access profile of the data. This survey paper presents a general framework for the optimal design of a distributed system that attains efficient data availability and reliability. To this end, numerous state-of-the-art technologies and methods are reviewed, especially for multi-tiered distributed cloud systems. Several critical aspects that must be taken into consideration to obtain optimal performance from QoS-aware cloud systems are then discussed, highlighting solutions for handling failure situations and the possible advantages and benefits of QoS. Finally, the paper examines the improvements that have been made to QoS-aware cloud systems such as Q-cloud since 2010, and whether any further efforts have been carried forward to make Q-cloud more adaptable and secure.
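    As a hedged sketch of the tier-placement idea behind QoS-aware multi-tier storage, the snippet below maps data to the cheapest tier that still meets application-level latency and availability targets. The tiers, their figures and the place function are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    latency_ms: float     # typical access latency
    availability: float   # e.g. 0.999 = "three nines"
    cost_per_gb: float


TIERS = [
    Tier("ssd", latency_ms=1.0, availability=0.9999, cost_per_gb=0.25),
    Tier("hdd", latency_ms=8.0, availability=0.999, cost_per_gb=0.05),
    Tier("archive", latency_ms=4000.0, availability=0.99, cost_per_gb=0.01),
]


def place(max_latency_ms, min_availability):
    """Return the cheapest tier that still meets the requested QoS, or None."""
    candidates = [t for t in TIERS
                  if t.latency_ms <= max_latency_ms and t.availability >= min_availability]
    return min(candidates, key=lambda t: t.cost_per_gb, default=None)


print(place(max_latency_ms=10, min_availability=0.999).name)   # -> hdd
print(place(max_latency_ms=2, min_availability=0.9999).name)   # -> ssd
```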

    Co-simulation of multiple vehicle routing problem models

    Complex systems are often designed in a decentralized and open way so that they can operate on heterogeneous entities that communicate with each other. Numerous studies consider the simulation of a complex system's components a proven approach to realistically predicting its behavior or effectively managing its complexity, and simulations of different components can be coupled via co-simulation to reproduce the behavior that emerges from their interaction. Multi-agent simulations, for their part, have been widely used in complex system modeling and simulation. In the approach presented here, each multi-agent simulator solves one objective of the Vehicle Routing Problem (VRP), and these simulators interact within a co-simulation platform called MECSYCO to integrate the various proposed VRP models. This paper presents the VRP simulation results in several respects, where the main goal is to satisfy several client demands. The experiments show the performance of the proposed VRP multi-model and demonstrate its improvement in terms of computational complexity.
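    A minimal sketch of the coupling idea, not of the MECSYCO API: two toy simulators, each in charge of one VRP objective (route distance and vehicle load balance), alternately improve a shared set of routes and exchange the result at every co-simulation step. The instance and the heuristics below are illustrative assumptions only.

```python
import random

random.seed(0)
DEPOT = (0.0, 0.0)
CLIENTS = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(10)]
DEMAND = [random.randint(1, 5) for _ in CLIENTS]


def route_distance(route):
    points = [DEPOT] + [CLIENTS[i] for i in route] + [DEPOT]
    return sum(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
               for a, b in zip(points, points[1:]))


def distance_step(routes):
    """Simulator 1: keep one random intra-route swap if it shortens the route."""
    route = random.choice(routes)
    if len(route) > 1:
        i, j = random.sample(range(len(route)), 2)
        candidate = route[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if route_distance(candidate) < route_distance(route):
            route[:] = candidate
    return routes


def balance_step(routes):
    """Simulator 2: move one client from the most loaded route to the least loaded."""
    loads = [sum(DEMAND[i] for i in r) for r in routes]
    hi, lo = loads.index(max(loads)), loads.index(min(loads))
    if hi != lo and routes[hi]:
        routes[lo].append(routes[hi].pop())
    return routes


# Co-simulation loop: the two models are stepped alternately and exchange the
# shared state (the routes) after every step, as a coupling middleware would.
routes = [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
for _ in range(200):
    routes = balance_step(distance_step(routes))

print("total distance:", round(sum(route_distance(r) for r in routes), 3))
print("vehicle loads:", [sum(DEMAND[i] for i in r) for r in routes])
```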

    Temporal dimension for job submission description language.


    Grid evolution.


    Augmenting High-Performance Mobile Cloud Computations for Big Data in AMBER

    Big data is an inspiring area of research that draws on best practices from industry and academia. Challenging and complex systems are the core requirement for the collation and analysis of big data, and the development of data analysis approaches and algorithms is an essential component of big data analytics. The emergent nature of big data and high-performance computing helps to solve complex and challenging problems. High-Performance Mobile Cloud Computing (HPMCC) technology allows computationally intensive applications to be executed at any location, independently, on laptops using virtual machines, enabling computationally extreme scientific tasks to run on a cloud composed of laptops. Assisted Model Building with Energy Refinement (AMBER) with force-field calculations for molecular dynamics is a computationally hungry task that requires substantial hardware resources. The core objective of this study is to provide researchers with a mobile cloud of laptops capable of doing the heavy processing. An innovative execution of AMBER with the force-field empirical formula using a Message Passing Interface (MPI) infrastructure on HPMCC is proposed: a homogeneous mobile cloud platform comprising laptops and virtual machines as processor nodes, with dynamic parallelism, in which processes are distributed and run across the various computational nodes. This task-based and data-based parallelism is achieved in the proposed solution by using MPI. Trace-based results and graphs demonstrate the significance of the proposed method.
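    A minimal sketch of the scatter/gather pattern this abstract describes, using mpi4py; the partial_energy function is a stand-in for the per-chunk force-field evaluation and does not call AMBER itself. The file name in the run command is hypothetical.

```python
# Run with, for example: mpiexec -n 4 python amber_mpi_sketch.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()


def partial_energy(coords):
    """Stand-in for the per-chunk force-field evaluation AMBER would perform."""
    return sum(x * x for x in coords)


if rank == 0:
    # The root node splits the workload (dummy coordinates here) into one
    # chunk per processor node: the data-based parallelism described above.
    coords = [float(i) for i in range(1000)]
    chunks = [coords[i::size] for i in range(size)]
else:
    chunks = None

local_chunk = comm.scatter(chunks, root=0)      # distribute work to every node
local_result = partial_energy(local_chunk)      # each node computes its share
partials = comm.gather(local_result, root=0)    # collect partial results at the root

if rank == 0:
    print("total:", sum(partials))
```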