
    A Survey on Load Balancing Algorithms for VM Placement in Cloud Computing

    The emergence of cloud computing based on virtualization technologies brings huge opportunities to host virtual resources at low cost without the need to own any infrastructure. Virtualization technologies enable users to acquire and configure resources on demand and to be charged on a pay-per-use basis. However, cloud data centers mostly comprise heterogeneous commodity servers hosting multiple virtual machines (VMs) with potentially diverse specifications and fluctuating resource usage, which may cause imbalanced resource utilization within servers and, in turn, performance degradation and service level agreement (SLA) violations. To achieve efficient scheduling, these challenges must be addressed through load balancing strategies; the underlying placement problem has been proven NP-hard. From multiple perspectives, this work identifies the challenges and analyzes existing algorithms for allocating VMs to physical machines (PMs) in infrastructure clouds, with a particular focus on load balancing. A detailed classification of load balancing algorithms for VM placement in cloud data centers is developed, and the surveyed algorithms are categorized accordingly. The goal of this paper is to provide a comprehensive and comparative understanding of the existing literature and to aid researchers by providing insight into potential future enhancements.
    Comment: 22 pages, 4 figures, 4 tables, in press
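
    To make the placement objective concrete, here is a minimal Python sketch of one generic load-aware heuristic of the kind such surveys classify; the feasibility test, the imbalance score and all field names are our own illustrative assumptions, not an algorithm from the paper.

        def place_vm(vm, pms):
            """vm = (cpu, mem) demand; pms = list of dicts with
            cpu_cap/mem_cap capacities and cpu_used/mem_used loads."""
            best, best_score = None, float("inf")
            for pm in pms:
                cpu_u = (pm["cpu_used"] + vm[0]) / pm["cpu_cap"]
                mem_u = (pm["mem_used"] + vm[1]) / pm["mem_cap"]
                if cpu_u > 1.0 or mem_u > 1.0:
                    continue  # infeasible: would overload this server
                # penalize both heavy load and CPU/memory imbalance
                score = abs(cpu_u - mem_u) + max(cpu_u, mem_u)
                if score < best_score:
                    best, best_score = pm, score
            if best is not None:
                best["cpu_used"] += vm[0]
                best["mem_used"] += vm[1]
            return best

        pms = [{"cpu_cap": 16, "mem_cap": 64, "cpu_used": 8, "mem_used": 16},
               {"cpu_cap": 16, "mem_cap": 64, "cpu_used": 2, "mem_used": 48}]
        print(place_vm((4, 8), pms))  # picks the more balanced server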

    Adaptive Network Coding for Scheduling Real-time Traffic with Hard Deadlines

    We study adaptive network coding (NC) for scheduling real-time traffic over a single-hop wireless network. To meet the hard deadlines of real-time traffic, it is critical to strike a balance between maximizing the throughput and minimizing the risk that the entire block of coded packets may not be decodable by the deadline. Thus motivated, we explore adaptive NC, where the block size is adapted based on the remaining time to the deadline, by casting this sequential block size adaptation problem as a finite-horizon Markov decision process. One interesting finding is that the optimal block size and its corresponding action space monotonically decrease as the deadline approaches, and the optimal block size is bounded by the "greedy" block size. These unique structures make it possible to narrow down the search space of dynamic programming, building on which we develop a monotonicity-based backward induction algorithm (MBIA) that can solve for the optimal block size in polynomial time. Since channel erasure probabilities are time-varying in a mobile network, we further develop a joint real-time scheduling and channel learning scheme with adaptive NC that can adapt to channel dynamics. We also generalize the analysis to multiple flows with hard deadlines and long-term delivery ratio constraints, devise a low-complexity online scheduling algorithm integrated with the MBIA, and then establish its asymptotic throughput optimality. In addition to analysis and simulation results, we perform high-fidelity wireless emulation tests with real radio transmissions to demonstrate the feasibility of the MBIA in finding the optimal block size in real time.
    Comment: 11 pages, 13 figures
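
    The finite-horizon formulation can be illustrated with a deliberately simplified model (our own assumptions, not the paper's MBIA): starting a block costs one overhead slot, one coded packet is then sent per slot and arrives with known probability p, and a size-k block is worth k packets only if all k arrive before the deadline. Plain backward induction over the remaining time t then recovers the block size to start with at each point.

        def optimal_block_sizes(T, p):
            """V[t]: max expected packets delivered with t slots left.
            k_star[t]: best block size to start then (0 if none fits)."""
            V = [0.0] * (T + 1)
            k_star = [0] * (T + 1)
            for t in range(2, T + 1):
                for k in range(1, t):
                    # f[s][r]: expected reward with s slots left while r
                    # packets of the current size-k block are undelivered
                    f = [[0.0] * (k + 1) for _ in range(t)]
                    for s in range(t):
                        f[s][0] = k + V[s]   # block decoded: bank k, go on
                    for s in range(1, t):
                        for r in range(1, k + 1):
                            f[s][r] = p * f[s-1][r-1] + (1 - p) * f[s-1][r]
                    if f[t - 1][k] > V[t]:   # t-1 slots left after overhead
                        V[t], k_star[t] = f[t - 1][k], k
            return V, k_star

        V, k = optimal_block_sizes(T=14, p=0.7)
        print(k[1:])  # larger blocks when many slots remain, smaller near the deadline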

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling wireless networking applications, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so that they can be invoked for hitherto unexplored services and scenarios in future wireless networks.
    Comment: 46 pages, 22 figures
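
    As a concrete taste of the reinforcement learning thread, here is a toy tabular Q-learning sketch for opportunistic channel selection in a cognitive radio; every number (channel count, occupancy probabilities, learning parameters) is invented for illustration and nothing here comes from the article.

        import random

        N_CH, EPS, ALPHA, GAMMA = 4, 0.1, 0.2, 0.9
        BUSY_PROB = [0.8, 0.5, 0.3, 0.6]  # hidden occupancies, unknown to the learner

        Q = [[0.0] * N_CH for _ in range(N_CH)]  # Q[state][action]
        state = 0
        for _ in range(20000):
            if random.random() < EPS:             # epsilon-greedy exploration
                action = random.randrange(N_CH)
            else:
                action = max(range(N_CH), key=lambda a: Q[state][a])
            # reward 1 for a collision-free transmission on the chosen channel
            reward = 0.0 if random.random() < BUSY_PROB[action] else 1.0
            Q[state][action] += ALPHA * (reward + GAMMA * max(Q[action])
                                         - Q[state][action])
            state = action

        # learned policy: the least-busy channel should dominate
        print([max(range(N_CH), key=lambda a: Q[s][a]) for s in range(N_CH)])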

    A Novel Workload Allocation Strategy for Batch Jobs

    The distribution of computational tasks across a diverse set of geographically distributed, heterogeneous resources is a critical issue in the realisation of true computational grids. Conventionally, workload allocation algorithms are divided into static and dynamic approaches. Whilst dynamic approaches frequently outperform static schemes, they usually require the collection and processing of detailed system information at frequent intervals, a task that can be both time-consuming and unreliable in the real world. This paper introduces a novel workload allocation algorithm for optimally distributing the workload produced by the arrival of batches of jobs. Results show that, for the arrival of batches of jobs, this workload allocation algorithm outperforms other commonly used algorithms in the static case. A hybrid scheduling approach (using this workload allocation algorithm), in which the speed of computational resources is inferred from previously completed jobs, is then introduced, and its efficiency is demonstrated on a real-world computational grid. Compared with the same workload allocation algorithm used in the static case, this hybrid approach is shown to be comprehensively superior.
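
    The hybrid idea can be pictured with a small sketch (illustrative assumptions only, not the paper's algorithm): resource speeds are estimated from the completion times of earlier jobs, and a batch is then split statically in proportion to those estimates so that all resources finish at roughly the same time.

        def infer_speeds(history):
            """history: {resource: [(jobs_done, seconds), ...]} -> jobs/sec."""
            return {r: sum(j for j, _ in runs) / sum(s for _, s in runs)
                    for r, runs in history.items()}

        def allocate_batch(n_jobs, speeds):
            """Split n_jobs so every resource finishes at nearly the same time."""
            total = sum(speeds.values())
            alloc = {r: int(n_jobs * v / total) for r, v in speeds.items()}
            # hand rounding leftovers to the fastest resources first
            left = n_jobs - sum(alloc.values())
            for r in sorted(speeds, key=speeds.get, reverse=True)[:left]:
                alloc[r] += 1
            return alloc

        speeds = infer_speeds({"a": [(40, 10.0)], "b": [(10, 10.0)],
                               "c": [(25, 10.0)]})
        print(allocate_batch(100, speeds))  # {'a': 54, 'b': 13, 'c': 33}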

    A simheuristic algorithm for solving an integrated resource allocation and scheduling problem

    Modern companies face challenging configuration issues in their manufacturing chains. One of these challenges concerns the integrated allocation and scheduling of resources such as machines, workers and energy. These integrated optimization problems are difficult to solve in themselves, and they become even more challenging when real-life uncertainty is considered. In this paper, we study an integrated allocation and scheduling optimization problem with stochastic processing times. A simheuristic algorithm is proposed to solve this integrated and stochastic problem effectively. Our approach relies on hybridizing simulation with a metaheuristic to deal with the stochastic version of the allocation-scheduling problem. A series of numerical experiments illustrates the efficiency of our methodology as well as its potential applications in real-life enterprise settings.
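
    The hybridization the authors describe can be sketched generically (a toy example under our own assumptions, not their algorithm): a simple local search proposes job-to-machine assignments, and each candidate is scored by Monte Carlo simulation of its stochastic makespan rather than by a deterministic objective.

        import math
        import random

        MEANS = [4.0, 2.0, 7.0, 3.0, 5.0]  # nominal processing times, 5 jobs
        N_MACHINES, N_SIMS = 2, 200

        def expected_makespan(assign):
            """Monte Carlo estimate: times are lognormal around the nominal value."""
            total = 0.0
            for _ in range(N_SIMS):
                loads = [0.0] * N_MACHINES
                for job, m in enumerate(assign):
                    loads[m] += random.lognormvariate(math.log(MEANS[job]), 0.25)
                total += max(loads)
            return total / N_SIMS

        def simheuristic(iters=500):
            best = [random.randrange(N_MACHINES) for _ in MEANS]
            best_cost = expected_makespan(best)
            for _ in range(iters):
                cand = best[:]
                cand[random.randrange(len(cand))] = random.randrange(N_MACHINES)
                cost = expected_makespan(cand)  # simulation inside the search
                if cost < best_cost:
                    best, best_cost = cand, cost
            return best, best_cost

        print(simheuristic())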

    Control and data channel resource allocation in OFDMA heterogeneous networks

    This paper investigates the downlink resource allocation problem in Orthogonal Frequency Division Multiple Access (OFDMA) Heterogeneous Networks (HetNets) consisting of macro cells and small cells sharing the same frequency band. Dense deployment of small cells overlaid by a macro layer is considered one of the most promising solutions for providing hotspot coverage in future 5G networks. The focus is to devise an optimised policy for small cells' access to the shared spectrum, in terms of their transmissions, that keeps the sum data rate of small-cell-served users high while ensuring a certain level of quality of service (QoS) for the macro cell users in the vicinity of small cells. Both data and control channel constraints are considered, to ensure not only that the macro cell users' data rate demands are met, but also that a certain Bit Error Rate (BER) is guaranteed for the control channel information. Control channel reliability is especially important because the control channel carries the information needed to successfully decode the data channel. The problem is addressed by our proposed linear binary integer programming heuristic algorithm, which maximises the small cells' utility while respecting the constraints imposed by the macro users. To further reduce the computational complexity, we propose a progressive, interference-aware, low-complexity heuristic solution. We also discuss how our proposed algorithms could be implemented in a practical network. The performance of both proposed algorithms is compared with the conventional Reuse-1 scheme under different fading conditions and small cell loads. Results show a negligible drop in small cell performance for our proposed schemes, as a trade-off for satisfying all macro users' data rate demands, whereas the Reuse-1 scheme can lead to up to 40% outage when the control region of the small cells is heavily loaded.
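
    A flavour of the interference-aware idea can be given with a deliberately simplified sketch (our own toy model; the paper's formulation is a binary integer program with data- and control-channel constraints): small cells are activated greedily in order of the rate they add, as long as the macro user's SINR stays above a threshold standing in for its rate and BER requirements.

        MACRO_SIGNAL = 1.0   # received macro power at the macro user (linear)
        NOISE = 0.05
        SINR_MIN = 4.0       # assumed QoS threshold protecting the macro user

        # (small-cell rate if active, interference it causes at the macro user)
        SMALL_CELLS = [(3.0, 0.02), (5.0, 0.15), (2.0, 0.01), (4.0, 0.06)]

        def activate_small_cells(cells):
            active, interference = [], 0.0
            # try the highest-rate small cells first
            for idx, (_rate, interf) in sorted(enumerate(cells),
                                               key=lambda c: -c[1][0]):
                sinr = MACRO_SIGNAL / (NOISE + interference + interf)
                if sinr >= SINR_MIN:   # macro QoS still met: keep it on
                    active.append(idx)
                    interference += interf
            return active

        print(activate_small_cells(SMALL_CELLS))  # e.g. [1, 0, 2]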