
    ANFIS Modeling of Dynamic Load Balancing in LTE

    Modelling of ill-defined or unpredictable systems can be very challenging. Most approaches have relied on conventional mathematical models, which do not adequately capture the multifaceted behaviour of such systems. Load balancing, a self-optimization operation of Self-Organizing Networks (SON), aims at ensuring an equitable distribution of users in the network, which translates into better user satisfaction and more efficient use of network resources. Several methods for load balancing have been proposed. While some of them have a sound theoretical basis, they are not practical. Furthermore, most of the proposed techniques rely on an iterative algorithm, which is not computationally efficient because it does not take the unpredictable fluctuation of network load into consideration. This chapter proposes the use of soft computing, specifically an Adaptive Neuro-Fuzzy Inference System (ANFIS) model, for dynamic QoS-aware load balancing in 3GPP LTE. ANFIS combines the learning capability of neural networks with the knowledge representation of fuzzy logic, yielding a load balancing solution that is cost effective and closer to human intuition. Three key load parameters (the number of satisfied users in the network, the virtual load of the serving eNodeB, and the overall state of the target eNodeB) are used to adjust the hysteresis value for load balancing.
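
    As a rough illustration of the mapping described above, the sketch below uses a zero-order Sugeno-style fuzzy rule base to turn the three load indicators into a hysteresis adjustment; the membership functions, rules, and consequent constants are illustrative assumptions, not the trained ANFIS from the chapter.

```python
# Minimal sketch of a Sugeno-style fuzzy adjustment of the handover
# hysteresis from three load indicators, as described qualitatively in
# the abstract. A real ANFIS would learn the membership functions and
# rule consequents from data; the shapes and constants below are
# illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def hysteresis_adjustment(satisfied_ratio, serving_load, target_load):
    """All inputs normalised to [0, 1]; returns a hysteresis delta in dB."""
    low = lambda x: tri(x, -0.5, 0.0, 0.5)
    med = lambda x: tri(x, 0.0, 0.5, 1.0)
    high = lambda x: tri(x, 0.5, 1.0, 1.5)

    # (rule firing strength, crisp consequent in dB) -- zero-order Sugeno rules
    rules = [
        (min(high(serving_load), low(target_load)), -1.0),   # offload: shrink hysteresis
        (min(med(serving_load), med(target_load)),   0.0),   # balanced: keep as is
        (min(low(serving_load), high(target_load)), +1.0),   # retain users: grow hysteresis
        (min(low(satisfied_ratio), low(target_load)), -0.5), # many unsatisfied users
    ]
    num = sum(w * z for w, z in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(hysteresis_adjustment(0.4, 0.9, 0.2))  # heavily loaded serving cell -> negative delta
```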

    Parallel Global Aircraft Configuration Design Space Exploration

    The preliminary design space exploration for large, interdisciplinary engineering problems is often a difficult and time-consuming task. General techniques are needed that efficiently and methodically search the design space. This work focuses on the use of parallel load balancing techniques integrated with a global optimizer to reduce the computational time of the design space exploration. The method is applied to the multidisciplinary design of a High Speed Civil Transport (HSCT). A modified Lipschitzian optimization algorithm generates large sets of design points that are evaluated concurrently using a variety of load balancing schemes. The load balancing schemes implemented in this study are: static load balancing, dynamic load balancing with a master-slave organization, fully distributed dynamic load balancing, and fully distributed dynamic load balancing via threads. All of the parallel computing schemes have high parallel efficiencies. When the variation in the design evaluation times is small, the computational overhead needed for fully distributed dynamic load balancing is substantial enough that it is more efficient to use a master-slave paradigm. However, when the variation in evaluation times increases, fully distributed load balancing is the most efficient.
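
    As a rough illustration of the master-slave dynamic scheme mentioned above, the sketch below hands design points to a pool of workers on demand; the evaluate() cost model is a hypothetical stand-in for the HSCT analysis code.

```python
# Sketch of master-slave dynamic load balancing: a master hands out design
# points one at a time, so slow evaluations do not stall the other workers.
# The evaluate() function is a placeholder with variable run time, not the
# actual HSCT multidisciplinary analysis.

import multiprocessing as mp
import random
import time

def evaluate(design_point):
    # Placeholder for an expensive, variable-time design evaluation.
    time.sleep(random.uniform(0.01, 0.05))
    return sum(x * x for x in design_point)

if __name__ == "__main__":
    design_points = [[random.random() for _ in range(5)] for _ in range(100)]
    with mp.Pool(processes=4) as pool:
        # imap_unordered returns results as workers finish, which is the
        # essence of dynamic (demand-driven) load balancing.
        results = list(pool.imap_unordered(evaluate, design_points))
    print(len(results), "design points evaluated")
```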

    A Prolific Scheme for Load Balancing Relying on Task Completion Time

    In networks with a large amount of computation, load balancing gains increasing significance. To offer various resources, services and applications, the ultimate aim is to facilitate the sharing of services and resources on the network over the Internet. A key issue to be addressed in such networks is load balancing. Load is the number of tasks 't' performed by a computation system, and it can be categorized as network load and CPU load. For an efficient load balancing strategy, the process of assigning the load between the nodes should enhance resource utilization and minimize computation time. This can be accomplished by a uniform distribution of the load across all the nodes. A load balancing method should guarantee that each node in the network performs an almost equal amount of work relative to its capacity and the availability of its resources. Relying on task subtraction, this work presents a pioneering algorithm termed E-TS (Efficient-Task Subtraction), which selects appropriate nodes for each task. The proposed algorithm improves the utilization of computing resources and preserves neutrality in assigning the load to the nodes in the network.
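
    Since the abstract does not spell out the E-TS task-subtraction rule, the sketch below shows only a generic completion-time-driven assignment in the same spirit: each task goes to the node with the smallest estimated completion time given its current backlog and speed.

```python
# Hedged sketch of completion-time-driven task assignment: each task is
# placed on the node whose estimated completion time (current backlog plus
# the new task) is smallest. This is a generic greedy baseline, not the
# paper's E-TS algorithm, whose details are not given in the abstract.

def assign_tasks(task_sizes, node_speeds):
    """Return a list mapping each task index to a node index."""
    backlog = [0.0] * len(node_speeds)      # time until each node is free
    assignment = []
    for size in task_sizes:
        # Estimated completion time on each node if the task were added now.
        finish = [backlog[n] + size / node_speeds[n] for n in range(len(node_speeds))]
        best = min(range(len(node_speeds)), key=finish.__getitem__)
        backlog[best] = finish[best]
        assignment.append(best)
    return assignment

print(assign_tasks([4, 2, 7, 1, 3], node_speeds=[1.0, 2.0]))
```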

    MINIMIZATION OF LOAD BASED RESOURCES IN CLOUD COMPUTING SYSTEMS

    “Cloud computing” is a term that encompasses virtualization, distributed computing, networking, software and Web services. Our objective is to develop an effective load balancing algorithm using the Divisible Load Scheduling Theorem to maximize or minimize different performance parameters (for example, throughput and latency) for clouds of different sizes. Central to these issues lies the establishment of an efficient load balancing algorithm. The load can be CPU load, memory capacity, delay or network load. Load balancing is the process of distributing the load among the various nodes of a distributed system to improve both resource utilization and job response time, while also avoiding a situation where some of the nodes are heavily loaded while other nodes are idle or doing very little work. Load balancing ensures that every processor in the system, or every node in the network, does approximately the same amount of work at any instant of time.
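
    The core idea behind the Divisible Load Scheduling Theorem invoked above can be illustrated with a minimal sketch: ignoring communication delays (an assumption the full theorem does not make), the load fractions that make all nodes finish simultaneously are proportional to their processing speeds.

```python
# Minimal sketch of the divisible-load idea: a perfectly divisible load is
# split so that all nodes finish at the same time. With communication costs
# ignored (an illustrative assumption), the optimal fraction for a node is
# proportional to its processing speed.

def divisible_load_fractions(speeds):
    total = sum(speeds)
    return [s / total for s in speeds]

speeds = [2.0, 1.0, 1.0]           # relative processing rates of three nodes
fractions = divisible_load_fractions(speeds)
load = 120.0                        # total divisible work
shares = [f * load for f in fractions]
finish_times = [share / s for share, s in zip(shares, speeds)]
print(shares)        # [60.0, 30.0, 30.0]
print(finish_times)  # all equal -> balanced completion
```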

    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire, configure and be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially varying specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and lead to performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges should be addressed by load balancing strategies, a problem which has been proven to be NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to PMs in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is developed, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements. Comment: 22 Pages, 4 Figures, 4 Tables, in press
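
    For readers unfamiliar with the placement problem being surveyed, the sketch below shows a textbook "worst fit" baseline that spreads VMs across the least-loaded physical machines; it is an illustrative baseline, not an algorithm drawn from the surveyed papers.

```python
# Illustrative baseline for the VM-to-PM placement problem: a "worst fit"
# heuristic that places each VM on the physical machine with the most
# remaining capacity, spreading load rather than packing it.

def place_vms(vm_demands, pm_capacities):
    """vm_demands and pm_capacities are in CPU units; returns a vm -> pm mapping."""
    remaining = list(pm_capacities)
    placement = {}
    for vm, demand in enumerate(vm_demands):
        # Candidate PMs that can still host this VM.
        candidates = [p for p, cap in enumerate(remaining) if cap >= demand]
        if not candidates:
            raise RuntimeError(f"no PM can host VM {vm}")
        target = max(candidates, key=remaining.__getitem__)
        remaining[target] -= demand
        placement[vm] = target
    return placement

print(place_vms([4, 2, 6, 3], pm_capacities=[8, 8, 8]))
```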

    Secure and Sustainable Load Balancing of Edge Data Centers in Fog Computing

    Fog computing is a recent research trend that brings cloud computing services to network edges. Edge data centers (EDCs) are deployed to decrease latency and network congestion by processing data streams and user requests in near real time. EDC deployment is distributed in nature, positioned between cloud data centers and data sources. Load balancing is the process of redistributing the work load among EDCs to improve both resource utilization and job response time; it also avoids a situation where some EDCs are heavily loaded while others are idle or doing little data processing. In such scenarios, load balancing between the EDCs plays a vital role in user response and real-time event detection. As the EDCs are deployed in an unattended environment, secure authentication of EDCs is an important issue to address before performing load balancing. This article proposes a novel load balancing technique to authenticate the EDCs and find less loaded EDCs for task allocation. The proposed technique is more efficient than existing approaches in finding less loaded EDCs for task allocation; it not only improves the efficiency of load balancing but also strengthens security by authenticating the destination EDCs.
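
    The two steps described above, authenticate the candidate EDCs and then pick a lightly loaded one, can be sketched as follows; the shared-key HMAC check and the EDC identifiers are illustrative assumptions, since the abstract does not specify the article's actual authentication scheme.

```python
# Hedged sketch of authenticate-then-select for edge data centers (EDCs):
# only EDCs whose authentication tag verifies are considered, and the
# least-loaded of them receives the task. The HMAC check is a stand-in,
# not the article's protocol.

import hmac
import hashlib

SHARED_KEY = b"demo-key"   # assumption: pre-shared key provisioned per EDC

def edc_tag(edc_id: str) -> str:
    return hmac.new(SHARED_KEY, edc_id.encode(), hashlib.sha256).hexdigest()

def authenticate(edc_id: str, tag: str) -> bool:
    return hmac.compare_digest(edc_tag(edc_id), tag)

def select_edc(edcs):
    """edcs: list of (edc_id, tag, current_load); returns the chosen edc_id."""
    trusted = [(eid, load) for eid, tag, load in edcs if authenticate(eid, tag)]
    if not trusted:
        raise RuntimeError("no authenticated EDC available")
    return min(trusted, key=lambda t: t[1])[0]

edcs = [("edc-1", edc_tag("edc-1"), 0.7),
        ("edc-2", edc_tag("edc-2"), 0.3),
        ("edc-3", "forged-tag", 0.1)]        # fails authentication
print(select_edc(edcs))                       # -> edc-2
```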

    Design Principles for Sparse Matrix Multiplication on the GPU

    We implement two novel algorithms for sparse-matrix dense-matrix multiplication (SpMM) on the GPU. Our algorithms expect the sparse input in the popular compressed-sparse-row (CSR) format and thus do not require expensive format conversion. While previous SpMM work concentrates on thread-level parallelism, we additionally focus on latency hiding with instruction-level parallelism and load balancing. We show, both theoretically and experimentally, that the proposed SpMM is a better fit for the GPU than previous approaches. We identify a key memory access pattern that allows efficient access into both input and output matrices and that is crucial to getting excellent performance on SpMM. By combining these two ingredients---(i) merge-based load balancing and (ii) row-major coalesced memory access---we demonstrate a 4.1x peak speedup and a 31.7% geomean speedup over state-of-the-art SpMM implementations on real-world datasets. Comment: 16 pages, 7 figures, International European Conference on Parallel and Distributed Computing (Euro-Par) 201
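
    As context for the CSR layout and the row-imbalance problem that motivates merge-based load balancing, the sketch below is a plain CPU reference for C = A * B with A sparse in CSR and B dense, stored row-major; it is not the paper's GPU kernel.

```python
# Reference (CPU) sketch of SpMM over the CSR layout: C = A * B with A
# sparse in CSR form and B dense, row-major. Work per row is proportional
# to its nonzero count, which is why naive row-per-thread splits become
# imbalanced when row lengths vary -- the motivation for merge-based load
# balancing on the GPU.

import numpy as np

def csr_spmm(indptr, indices, data, B):
    """A is m x k in CSR (indptr, indices, data); B is a dense k x n array."""
    m, n = len(indptr) - 1, B.shape[1]
    C = np.zeros((m, n))
    for row in range(m):
        for idx in range(indptr[row], indptr[row + 1]):
            C[row] += data[idx] * B[indices[idx]]   # row-major access into B
    return C

# 3x3 sparse A: [[1, 0, 2], [0, 0, 0], [0, 3, 0]]
indptr, indices, data = [0, 2, 2, 3], [0, 2, 1], [1.0, 2.0, 3.0]
B = np.arange(9, dtype=float).reshape(3, 3)
print(csr_spmm(indptr, indices, data, B))
```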