375 research outputs found

    LBSim: A simulation system for dynamic load-balancing algorithms for distributed systems.

    In a distributed system consisting of autonomous computational units, the total computational power of all the units needs to be utilized efficiently by applying suitable load-balancing policies. A large number of load-balancing algorithms have been proposed in the literature for this purpose, and simulation has been widely used to study their performance. However, comparing load-balancing algorithms becomes difficult if a different simulator is used for each case. There have been few studies on generalized simulation of load-balancing algorithms in distributed systems: most simulation systems target experiments with particular load-balancing algorithms, whereas this thesis aims to support simulation of a broad range of algorithms. After characterizing distributed systems and extracting the common components of load-balancing algorithms, a simulation system called LBSim has been built. LBSim is a generalized event-driven simulator for studying load-balancing algorithms with coarse-grained applications running on distributed networks of autonomous processing nodes. To verify that the simulation model can represent actual systems reasonably well, we have validated LBSim both qualitatively and quantitatively. As a simulation toolkit, LBSim's programming libraries can be reused to implement load-balancing algorithms for performance measurement and analysis from different perspectives. As a simulation framework, LBSim can be extended with moderate effort, following object-oriented methodology, to meet any new requirements that may arise in the future. Dept. of Computer Science. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2004 .D8. Source: Masters Abstracts International, Volume: 43-05, page: 1747. Adviser: A. K. Aggarwal. Thesis (M.Sc.)--University of Windsor (Canada), 2004
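The event-driven core that a simulator like LBSim generalizes can be illustrated with a minimal sketch. The node count, threshold policy and timings below are illustrative assumptions, not details from the thesis:

```python
import heapq

# Minimal event-driven load-balancing simulation (illustrative assumptions).
NUM_NODES = 4
THRESHOLD = 3        # queue length above which an arrival is offloaded
SERVICE_TIME = 1.0   # fixed service time per coarse-grained job

queues = [0] * NUM_NODES
events = []          # min-heap of (time, seq, kind, node)
seq = 0

def schedule(t, kind, node):
    global seq
    heapq.heappush(events, (t, seq, kind, node))
    seq += 1

# Uneven workload: all jobs initially arrive at node 0.
for i in range(12):
    schedule(i * 0.1, "arrival", 0)

completed = 0
while events:
    t, _, kind, node = heapq.heappop(events)
    if kind == "arrival":
        # Sender-initiated policy: offload to the least-loaded node
        # when the local queue exceeds the threshold.
        if queues[node] > THRESHOLD:
            node = min(range(NUM_NODES), key=lambda n: queues[n])
        queues[node] += 1
        if queues[node] == 1:               # server was idle: start service
            schedule(t + SERVICE_TIME, "departure", node)
    else:                                   # departure
        queues[node] -= 1
        completed += 1
        if queues[node] > 0:                # start the next queued job
            schedule(t + SERVICE_TIME, "departure", node)

print(completed)   # prints 12: every job eventually completes
```

Swapping in a different transfer policy only means changing the few lines inside the arrival branch, which is the kind of reuse a generalized simulator is built for.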

    The Inter-cloud meta-scheduling

    Inter-cloud is a recently emerging approach that expands cloud elasticity. By facilitating an adaptable setting, it aims at realizing scalable resource provisioning that enables a diversity of cloud user requirements to be handled efficiently. This study’s contribution is the inter-cloud performance optimization of job executions using meta-scheduling concepts. This includes the development of the inter-cloud meta-scheduling (ICMS) framework, the ICMS optimal schemes and the SimIC toolkit. The ICMS model is an architectural strategy for managing and scheduling user services in virtualized, dynamically inter-linked clouds. This is achieved through a model that includes a set of algorithms, namely the Service-Request, Service-Distribution, Service-Availability and Service-Allocation algorithms. These, along with resource management optimal schemes, offer the novel functionalities of the ICMS: message exchanging implements the job distribution method, VM deployment offers the VM management features, and the local resource management system details the management of the local cloud schedulers. The resulting system offers great flexibility by facilitating a lightweight resource management methodology while handling the heterogeneity of different clouds through advanced service-level-agreement coordination. Experimental results are promising: the proposed ICMS model improves the performance of service distribution for a variety of criteria such as service execution times, makespan, turnaround times, utilization levels and energy consumption rates for various inter-cloud entities, e.g. users, hosts and VMs. For example, ICMS optimizes the performance of a non-meta-brokering inter-cloud by 3%, while ICMS with full optimal schemes achieves 9% optimization for the same configurations. The whole experimental platform is implemented in the inter-cloud simulation toolkit (SimIC) developed by the author, which is a discrete event simulation framework.

    Decentralized load balancing in heterogeneous computational grids

    With the rapid development of high-speed wide-area networks and powerful yet low-cost computational resources, grid computing has emerged as an attractive computing paradigm. The space limitations of conventional distributed systems can thus be overcome, allowing under-utilised computing resources in every region of the world to be fully exploited for distributed jobs. Workload and resource management are key grid services at the service level of the grid software infrastructure, where load balancing is a common concern for most grid infrastructure developers. Although these are established research areas in parallel and distributed computing, grid computing environments present a number of new challenges, including large-scale computing resources, heterogeneous computing power, the autonomy of the organisations hosting the resources, uneven job-arrival patterns among grid sites, considerable job transfer costs, and considerable communication overhead involved in capturing the load information of sites. This dissertation focuses on designing solutions for load balancing in computational grids that cater for the unique characteristics of grid computing environments. To explore the solution space, we conducted a survey of load-balancing solutions, which enabled discussion and comparison of existing approaches and delimitation and exploration of the relevant portion of the solution space. A system model was developed to study load-balancing problems in computational grid environments. In particular, we developed three decentralised algorithms for job dispatching and load balancing that use only partial information: the desirability-aware load-balancing algorithm (DA), the performance-driven desirability-aware load-balancing algorithm (P-DA), and the performance-driven region-based load-balancing algorithm (P-RB). All three are scalable, dynamic, decentralised and sender-initiated. We conducted extensive simulation studies to analyse the performance of our load-balancing algorithms. Simulation results showed that the algorithms significantly outperform pre-existing decentralised algorithms relevant to this research.
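The general shape of sender-initiated dispatching under partial (cached, possibly stale) load information can be sketched as follows; the desirability score and record layout are assumptions for illustration, not the dissertation's actual definitions:

```python
# Hypothetical desirability score: expected wait grows with queue length
# and shrinks with CPU speed; remote sites additionally pay a transfer cost.
def score(site, transfer_cost):
    expected_wait = (site["queue"] + 1) / site["speed"]
    return -(expected_wait + transfer_cost)

def pick_site(local, neighbours):
    """Choose where to send a job using only cached neighbour summaries.

    Each neighbour is (name, cached_queue_len, cpu_speed, transfer_cost).
    """
    best_name = local["name"]
    best_score = score(local, transfer_cost=0.0)
    for name, qlen, speed, cost in neighbours:
        s = score({"queue": qlen, "speed": speed}, transfer_cost=cost)
        if s > best_score:
            best_name, best_score = name, s
    return best_name

best = pick_site({"name": "A", "queue": 5, "speed": 1.0},
                 [("B", 1, 1.0, 0.5), ("C", 0, 0.5, 0.1)])
print(best)   # prints C: the idle (if slower) site beats the busy ones
```

Because only cached summaries of a site's neighbours are consulted, no global load collection is needed, which is what makes such schemes scalable and fully decentralised.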

    DRAGON: Decentralized fault tolerance in edge federations

    Edge Federation is a new computing paradigm that seamlessly interconnects the resources of multiple edge service providers. A key challenge in such systems is the deployment of latency-critical, AI-based, resource-intensive applications on constrained devices. To address this challenge, we propose a novel memory-efficient deep learning model, namely generative optimization networks (GONs). Unlike GANs, GONs use a single network to both discriminate inputs and generate samples, significantly reducing their memory footprint. Leveraging this low memory footprint, we propose a decentralized fault-tolerance method called DRAGON that runs simulations (via a digital twin model) to quickly predict and optimize the performance of the edge federation. Extensive experiments with real-world edge computing benchmarks on multiple Raspberry Pi-based federated edge configurations show that DRAGON outperforms the baseline methods on fault-detection and Quality of Service (QoS) metrics. Specifically, the proposed method gives higher F1 scores for fault detection than the best deep learning (DL) method, while consuming less memory than the heuristic methods. This allows for improvements in energy consumption, response time and service-level-agreement violations by up to 74, 63 and 82 percent, respectively.
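The single-network idea behind GONs can be sketched in miniature: one scoring function serves as both discriminator and generator, by gradient-ascending its input until the input scores as realistic. The quadratic "network" below is a toy stand-in for a learned neural discriminator, and `target` is a hypothetical ideal pattern, not anything from DRAGON itself:

```python
import numpy as np

rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 0.5])        # hypothetical "realistic" pattern

def score(x):
    # Toy discriminator: highest score when x matches the pattern.
    return -np.sum((x - target) ** 2)

def generate(steps=200, lr=0.1):
    x = rng.normal(size=3)                 # start from random noise
    for _ in range(steps):
        grad = -2.0 * (x - target)         # gradient of score w.r.t. the input
        x = x + lr * grad                  # ascend on the input: no generator net
    return x

sample = generate()
print(np.allclose(sample, target, atol=1e-6))   # prints True
```

The memory saving comes from this reuse: there is no separate generator network to store, only the scoring network and the per-sample optimization state.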

    Queueing networks: solutions and applications

    During the past two decades queueing network models have proven to be a versatile tool for computer system and computer communication system performance evaluation. This chapter provides a survey of the field with a particular emphasis on applications. We start with a brief historical retrospective which also serves to introduce the major issues and application areas. Formal results for product-form queueing networks are reviewed with particular emphasis on the implications for computer systems modeling. Computational algorithms, sensitivity analysis and optimization techniques are among the topics covered. Many of the important applications of queueing networks are not amenable to exact analysis, and an (often confusing) array of approximation methods has been developed over the years. A taxonomy of approximation methods is given and used as the basis for surveying the major approximation methods that have been studied. The application of queueing networks to a number of areas is surveyed, including computer system capacity planning, packet switching networks, parallel processing, database systems and availability modeling.
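One of the classical computational algorithms such surveys cover is exact Mean Value Analysis (MVA) for closed product-form networks of fixed-rate stations; the service demands and population below are made-up example numbers:

```python
def mva(demands, n_customers):
    """Return (throughput, per-station mean queue lengths) at the final population."""
    m = len(demands)
    q = [0.0] * m                          # mean queue lengths at population n-1
    for n in range(1, n_customers + 1):
        # Arrival theorem: an arriving job sees the mean queue of the
        # network with one fewer customer.
        r = [demands[k] * (1.0 + q[k]) for k in range(m)]   # residence times
        x = n / sum(r)                     # system throughput, by Little's law
        q = [x * r[k] for k in range(m)]   # new mean queue lengths
    return x, q

x, q = mva([0.4, 0.2, 0.1], 5)
print(round(x, 3))   # prints 2.441, below the bottleneck bound 1/0.4 = 2.5
```

The recursion runs in O(M·N) time for M stations and N customers, which is why exact MVA is practical for moderate populations while approximations take over for very large ones.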

    Intrusion Detection and Prevention Systems In the Cloud Environment

    Cloud computing provides users with computing resources on demand. Despite the recent boom in adoption of cloud services, security remains an important issue. The aim of this work is to study the structure of cloud systems and propose a new security architecture for protecting the cloud against attacks. This work also investigates auto-scaling and how it affects cloud computing security. Finally, this thesis studies load balancing and scheduling in cloud computing, particularly when some of the workload is faulty or malicious. The first original contribution proposes a hierarchical model for intrusion detection in the cloud environment. Finite state machines (FSMs) of the model were produced, verified and then analyzed using a probabilistic model checker. Results indicate that, given certain conditions, the proposed model will be in a state that utilizes resources efficiently despite the presence of attacks. This part of the work also investigates how the cloud handles failure and its relationship to auto-scaling mechanisms within the cloud. The second original contribution proposes a lightweight robust scheduling algorithm for load balancing in the cloud, where some of the traffic is not reliable. Formal analysis of the algorithm was conducted, and results showed that, given some arrival rates of both genuine and malicious traffic, average queue lengths will stabilize, i.e. they will not grow to infinity. Experimental results studied both queues and latency, and showed that under the same conditions naive algorithms do not stabilize. The algorithm was then extended to decentralized settings where servers maintain separate queues; when a job arrives, a dispatching algorithm decides which server to send it to. Different dispatching algorithms were proposed, and experimental results indicate that the new algorithms perform better than some of the existing algorithms. The results were further extended to heterogeneous settings (servers with different configurations), and simulations monitoring queue sizes confirmed that some algorithms which are stable in the homogeneous setting are not stable in the heterogeneous one. It is hoped that this study will inform cloud service providers about new ways to improve the security of the cloud in the presence of failures and attacks.
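The homogeneous-versus-heterogeneous stability effect can be illustrated with a toy simulation (not the thesis's actual algorithms): a load-oblivious round-robin dispatcher lets a slow server's queue grow without bound even though total capacity exceeds total demand, while a speed-aware dispatcher stays stable:

```python
# Toy dispatching comparison on heterogeneous servers (illustrative only).
SPEEDS = [2.0, 2.0, 0.5]     # jobs per tick each server can complete
ARRIVALS_PER_TICK = 4        # total demand 4 < total capacity 4.5

def simulate(dispatch, ticks=2000):
    work = [0.0] * len(SPEEDS)           # per-server backlog, in jobs
    for t in range(ticks):
        for j in range(ARRIVALS_PER_TICK):
            i = dispatch(work, t * ARRIVALS_PER_TICK + j)
            work[i] += 1.0
        for i, s in enumerate(SPEEDS):   # each server drains up to its speed
            work[i] = max(0.0, work[i] - s)
    return work

def round_robin(work, k):
    return k % len(work)                 # ignores both load and speed

def least_work(work, k):
    # Send the job where the speed-normalised backlog is smallest.
    return min(range(len(work)), key=lambda i: work[i] / SPEEDS[i])

print(max(simulate(round_robin)) > 1000)   # True: slow server's queue diverges
print(max(simulate(least_work)) < 5)       # True: backlog stays bounded
```

Round-robin sends the slow server 4/3 jobs per tick against a drain rate of 0.5, so its backlog grows by roughly 0.83 jobs per tick; normalising by server speed removes exactly the asymmetry that breaks homogeneous-only stability arguments.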

    An Experiment in the complexity of load balancing algorithms

    Not provided
    • 
