
    Revising Max-min for Scheduling in a Cloud Computing Context

    Paper presented at the 2017 IEEE 26th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), Poznan, Poland, 21-23 June 2017. © 2017 IEEE.

    Adoption of Cloud Computing is on the rise [1] and many datacenter operators adhere to strict energy efficiency guidelines [2]. This paper proposes a novel approach to scheduling in a Cloud Computing context. The Maxmin Fast Track (MXFT) algorithm revises the Max-min algorithm to better support smaller tasks with stricter Service Level Agreements (SLAs), which makes it more relevant to Cloud Computing. MXFT is inspired by queuing in supermarkets, where there is a fast lane for customers with a smaller number of items. The algorithm outperforms Max-min in task execution times and outperforms Min-min in overall makespan. A by-product of investigating this algorithm was the development of a simulator called "ScheduleSim" [3], which makes it simpler to prove out a scheduling algorithm before committing to a specific scheduling problem in Cloud Computing, and which might therefore be a useful precursor to experiments using the established simulator CloudSim [4].
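
    The abstract does not spell out MXFT's mechanics, but the supermarket analogy suggests reserving a "fast lane" of machines for small tasks while the remaining machines run plain Max-min. Below is a minimal, hedged sketch of that interpretation; the threshold, the number of fast-track machines, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a Max-min scheduler with a "fast track" lane for small
# tasks, in the spirit of MXFT as described in the abstract. Not the paper's
# algorithm; thresholds and machine split are assumptions for illustration.

def fast_track_maxmin(tasks, n_machines, fast_machines=1, small_threshold=10.0):
    """tasks: list of task lengths. Returns (machine -> assigned tasks, makespan)."""
    fast = list(range(fast_machines))                    # lane reserved for small tasks
    regular = list(range(fast_machines, n_machines))     # machines running Max-min
    ready = {m: 0.0 for m in range(n_machines)}          # time each machine frees up
    schedule = {m: [] for m in range(n_machines)}

    small = sorted(t for t in tasks if t <= small_threshold)
    large = sorted((t for t in tasks if t > small_threshold), reverse=True)

    # Small tasks go to the fast lane: pick the fast machine that frees up first.
    for t in small:
        m = min(fast, key=lambda f: ready[f])
        schedule[m].append(t)
        ready[m] += t

    # Large tasks follow Max-min: largest task first, assigned to the machine
    # that yields the earliest completion time.
    for t in large:
        m = min(regular, key=lambda r: ready[r] + t)
        schedule[m].append(t)
        ready[m] += t

    return schedule, max(ready.values())

print(fast_track_maxmin([1, 2, 2, 15, 30, 8, 40], n_machines=3))
```

    The intended effect is the one the abstract claims: small, SLA-sensitive tasks are not stuck behind large ones (better task execution times than Max-min), while the large tasks still get Max-min's makespan behaviour.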

    Structure-Aware Dynamic Scheduler for Parallel Machine Learning

    Training large machine learning (ML) models with many variables or parameters can take a long time if one employs sequential procedures, even with stochastic updates. A natural solution is to turn to distributed computing on a cluster; however, naive, unstructured parallelization of ML algorithms does not usually lead to a proportional speedup and can even result in divergence, because dependencies between model elements can attenuate the computational gains from parallelization and compromise correctness of inference. Recent efforts to address this issue have benefited from exploiting the static, a priori block structures residing in ML algorithms. In this paper, we take this path further by exploring the dynamic block structures and workloads that arise during ML program execution, which offers new opportunities for improving convergence, correctness, and load balancing in distributed ML. We propose and showcase a general-purpose scheduler, STRADS, for coordinating distributed updates in ML algorithms, which harnesses the aforementioned opportunities in a systematic way. We provide theoretical guarantees for our scheduler and demonstrate its efficacy versus static block structures on Lasso and Matrix Factorization.
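
    To make the "dependency-aware dynamic block" idea concrete, here is a minimal sketch of selecting a block of nearly uncorrelated coordinates to update in parallel (e.g., for Lasso), prioritized by how promising each coordinate currently looks. This is a simplified illustration of the principle, not STRADS itself; the correlation threshold, the priority rule, and all names are assumptions.

```python
# Hedged sketch: pick a block of coordinates that are (a) high priority and
# (b) weakly correlated, so parallel updates stay close to correct.
import numpy as np

def pick_block(X, priority, block_size, max_corr=0.1):
    """Return up to block_size column indices of X that can be updated together."""
    order = np.argsort(-priority)          # most promising coordinates first
    chosen = []
    for j in order:
        if len(chosen) == block_size:
            break
        # Dependency check: skip j if its column correlates strongly with any chosen column.
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < max_corr for k in chosen):
            chosen.append(int(j))
    return chosen

# Priorities could be, e.g., the magnitude of each coordinate's last update, so
# frequently changing (hard) coordinates are scheduled more often.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
priority = rng.random(50)
print(pick_block(X, priority, block_size=8))
```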

    Secure Cloud-Edge Deployments, with Trust

    Assessing the security level of IoT applications to be deployed to heterogeneous Cloud-Edge infrastructures operated by different providers is a non-trivial task. In this article, we present a methodology that allows expressing security requirements for IoT applications, as well as infrastructure security capabilities, in a simple and declarative manner, and automatically obtaining an explainable assessment of the security level of the possible application deployments. The methodology also considers the impact of trust relations among the different stakeholders using or managing Cloud-Edge infrastructures. A lifelike example is used to showcase the prototyped implementation of the methodology.
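
    As a rough illustration of the requirement/capability matching the abstract describes, the sketch below declares per-component security requirements, per-node capabilities, and a simple trust set, then explains why a deployment passes or fails. All field names, nodes, and the trust model are assumptions for illustration and do not reflect the article's prototype.

```python
# Hedged sketch: declarative security requirements vs. node capabilities,
# with a basic trust check and an explainable report.

app_requirements = {            # per-component security requirements (assumed)
    "sensor-driver": {"encrypted_storage"},
    "analytics": {"encrypted_storage", "access_control", "audit_logging"},
}
node_capabilities = {           # declared capabilities of each Cloud-Edge node (assumed)
    "edge-gw-1": {"provider": "OpA", "caps": {"encrypted_storage"}},
    "cloud-vm-1": {"provider": "OpB",
                   "caps": {"encrypted_storage", "access_control", "audit_logging"}},
}
trusted_providers = {"OpA", "OpB"}   # stakeholders the application operator trusts

def assess(deployment):
    """deployment: component -> node. Returns (ok, list of explanations)."""
    report = []
    for comp, node in deployment.items():
        info = node_capabilities[node]
        missing = app_requirements[comp] - info["caps"]
        if info["provider"] not in trusted_providers:
            report.append(f"{comp} on {node}: provider {info['provider']} is not trusted")
        if missing:
            report.append(f"{comp} on {node}: missing capabilities {sorted(missing)}")
    return (not report), report

print(assess({"sensor-driver": "edge-gw-1", "analytics": "cloud-vm-1"}))
```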

    Performance Analysis of OpenAirInterface System Emulation

    With the rapid growth of mobile networks, the radio access network becomes more and more costly to deploy, operate, maintain and upgrade. The most effective answer to this problem lies in the centralization and virtualization of the eNodeBs. This solution is known as Cloud RAN and is one of the key topics in the development of fifth generation networks. Within this context, OpenAirInterface is a promising emulation tool that can be used for prototyping innovative scheduling algorithms, making the most of the new architecture. In this work, we first describe the emulation environment of OpenAirInterface and its scheduling framework, and we use it to implement two MAC schedulers. Moreover, we validate these schedulers and perform a thorough profiling of OpenAirInterface, in terms of both memory occupancy and execution time. Our results show that OpenAirInterface can be effectively used for prototyping scheduling algorithms in emulated LTE networks.
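
    The abstract does not say which two MAC schedulers were implemented. As a generic illustration of the kind of per-TTI resource allocation logic one might prototype in such an emulator, here is a small round-robin sketch; it is not OpenAirInterface code, and the UE backlog model and resource-block accounting are assumptions.

```python
# Hedged sketch of a round-robin MAC scheduler allocating resource blocks
# (RBs) for one TTI across UEs with pending data. Purely illustrative.

def round_robin_tti(ue_backlog, rbs_per_tti, start_ue=0):
    """Allocate RBs for one TTI, cycling through UEs that still have backlog."""
    n = len(ue_backlog)
    alloc = [0] * n
    ue = start_ue
    for _ in range(rbs_per_tti):
        for k in range(n):
            cand = (ue + k) % n
            if ue_backlog[cand] > 0:
                alloc[cand] += 1
                ue_backlog[cand] -= 1     # assume one RB drains one unit of backlog
                ue = (cand + 1) % n
                break
        else:
            break                          # no UE has data left this TTI
    return alloc, ue                       # allocation and next starting UE

print(round_robin_tti([3, 0, 5, 2], rbs_per_tti=6))
```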

    Scheduling Under Non-Uniform Job and Machine Delays


    epcAware: a game-based, energy, performance and cost efficient resource management technique for multi-access edge computing

    The Internet of Things (IoT) is producing an extraordinary volume of data daily, and it is possible that the data may become useless while on its way to the cloud for analysis, due to longer distances and delays. Fog/edge computing is a new model for analyzing and acting on time-sensitive data (real-time applications) at the network edge, adjacent to where it is produced. The model sends only selected data to the cloud for analysis and long-term storage. Furthermore, cloud services provided by large companies, such as Google, can also be localized to minimize the response time and increase service agility. This could be accomplished by deploying small-scale datacenters (referred to as cloudlets) where needed, closer to customers (IoT devices) and connected to a centralised cloud through networks, which together form a multi-access edge cloud (MEC). The MEC setup involves three different parties, i.e. service providers (IaaS), application providers (SaaS) and network providers (NaaS), which might have different goals, making resource management a difficult job. In the literature, various resource management techniques have been suggested in the context of what kind of services they should host and how the available resources should be allocated to customers' applications, particularly if mobility is involved. However, the existing literature considers the resource management problem with respect to a single party. In this paper, we address resource management with respect to all three parties, i.e. IaaS, SaaS and NaaS, and suggest a game-theoretic resource management technique that minimises infrastructure energy consumption and costs while ensuring application performance. Our empirical evaluation, using real workload traces from Google's cluster, suggests that our approach could reduce energy consumption by up to 11.95% and user costs by approximately 17.86%, with negligible loss in performance. Moreover, IaaS can reduce energy bills by up to 20.27% and NaaS can increase their cost savings by up to 18.52% compared to other methods.
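
    The paper formulates placement as a game between the IaaS, SaaS and NaaS parties; a full game-theoretic model is beyond an abstract, so the sketch below only illustrates the underlying trade-off with a weighted energy/cost score under a latency SLA. All hosts, numbers and weights are invented for illustration and are not the paper's model.

```python
# Hedged sketch: choose between a nearby cloudlet and the central cloud by
# trading off energy and cost, subject to the application's latency budget.

hosts = [
    {"name": "cloudlet-1", "energy_w": 45.0, "cost_per_h": 0.08, "latency_ms": 5},
    {"name": "cloud-dc",   "energy_w": 30.0, "cost_per_h": 0.05, "latency_ms": 60},
]

def place(app_latency_budget_ms, w_energy=0.5, w_cost=0.5):
    """Pick the feasible host (meets the latency SLA) with the lowest weighted score."""
    feasible = [h for h in hosts if h["latency_ms"] <= app_latency_budget_ms]
    if not feasible:
        return None
    return min(feasible,
               key=lambda h: w_energy * h["energy_w"] + w_cost * h["cost_per_h"])

print(place(app_latency_budget_ms=20))   # latency-sensitive app -> cloudlet-1
print(place(app_latency_budget_ms=100))  # tolerant app -> cheaper central cloud
```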