9 research outputs found

    Fast DRL-based scheduler configuration tuning for reducing tail latency in edge-cloud jobs

    No full text
    Edge-cloud applications have rapidly become prevalent in recent years and pose the challenge of using both resource-constrained edge devices and elastic cloud resources under dynamic workloads. Efficient resource allocation for edge-cloud jobs via cluster schedulers (e.g. the Kubernetes/Volcano schedulers) is essential to guarantee their performance, e.g. tail latency, and such allocation is sensitive to scheduler configurations such as the applied scheduling algorithm and the task restart/discard policy. Deep reinforcement learning (DRL) is increasingly applied to optimize scheduling decisions. However, DRL faces the conundrum of achieving high rewards only after a dauntingly long training time (e.g. hours or days), making it difficult to tune scheduler configurations online in accordance with dynamically changing edge-cloud workloads and resources. To address this issue, this paper proposes EdgeTuner, a fast scheduler configuration tuning approach that efficiently leverages DRL to reduce the tail latency of edge-cloud jobs. The enabling feature of EdgeTuner is to effectively simulate the execution of edge-cloud jobs under different scheduler configurations and thus quickly estimate these configurations' influence on job performance. The simulation results allow EdgeTuner to train a DRL agent in time to properly tune scheduler configurations in dynamic edge-cloud environments. We implement EdgeTuner in both the Kubernetes and Volcano schedulers and extensively evaluate it on real workloads driven by Alibaba production traces. Our results show that EdgeTuner outperforms prevailing scheduling algorithms by achieving much lower tail latency while accelerating DRL training speed by an average of 151.63x.
    Distributed System
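    A minimal sketch of the core idea described above: a simulator estimates tail latency for candidate scheduler configurations, and a simple agent learns which configuration to apply. This is not EdgeTuner's implementation; `simulate_tail_latency`, the configuration list, and the epsilon-greedy learner are hypothetical stand-ins for the paper's simulator and DRL agent.

    ```python
    # Sketch (assumed, not the paper's code): learn the configuration with the
    # lowest simulated tail latency using an epsilon-greedy value estimator.
    import random

    CONFIGS = [
        {"algorithm": "fifo",     "restart_policy": "restart"},
        {"algorithm": "fair",     "restart_policy": "discard"},
        {"algorithm": "bin-pack", "restart_policy": "restart"},
    ]

    def simulate_tail_latency(config, workload_seed):
        """Stub simulator: returns a fake 95th-percentile job latency (seconds)."""
        rng = random.Random(hash((workload_seed, config["algorithm"])))
        base = {"fifo": 12.0, "fair": 8.0, "bin-pack": 6.0}[config["algorithm"]]
        penalty = 1.5 if config["restart_policy"] == "restart" else 0.0
        return base + penalty + rng.uniform(0.0, 2.0)

    def train_agent(episodes=500, epsilon=0.1):
        """Epsilon-greedy value estimates over configurations (reward = -latency)."""
        values = [0.0] * len(CONFIGS)
        counts = [0] * len(CONFIGS)
        for ep in range(episodes):
            if random.random() < epsilon:
                idx = random.randrange(len(CONFIGS))                         # explore
            else:
                idx = max(range(len(CONFIGS)), key=lambda i: values[i])      # exploit
            reward = -simulate_tail_latency(CONFIGS[idx], workload_seed=ep)
            counts[idx] += 1
            values[idx] += (reward - values[idx]) / counts[idx]              # running mean
        return CONFIGS[max(range(len(CONFIGS)), key=lambda i: values[i])]

    if __name__ == "__main__":
        print("best configuration:", train_agent())
    ```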

    EdgeVisionBench: A Benchmark of Evolving Input Domains for Vision Applications at Edge

    No full text
    Vision applications powered by deep neural networks (DNNs) are widely deployed on edge devices and solve the learning tasks of incoming data streams whose class labels and input features continuously evolve, a phenomenon known as domain shift. Despite its prominent presence in real-world edge scenarios, the existing benchmarks used by domain adaptation methods overlook evolving domains and underrepresent their shifts in label and feature distributions. To address this gap, we present EdgeVisionBench, a benchmark that generates evolving domains of various types and reflects the realistic label and feature shifts encountered by edge-based vision applications. To facilitate evaluating domain adaptation methods on edge devices, we provide an open-source package that automates workload generation, contains popular DNN models and compression techniques, and standardizes evaluations with interactive interfaces. Code and datasets are available at https://github.com/LINC-BIT/EdgeVisionBench.
    Data-Intensive System
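    To illustrate what an "evolving domain" means here, the sketch below generates a synthetic stream whose label distribution and feature statistics drift between domains. It is an assumed toy generator, not the EdgeVisionBench package or its API.

    ```python
    # Sketch (assumed, not EdgeVisionBench's code): a data stream whose label
    # distribution and feature statistics shift from one domain to the next.
    import numpy as np

    def make_evolving_stream(n_domains=3, samples_per_domain=100, n_classes=5, dim=16, seed=0):
        rng = np.random.default_rng(seed)
        stream = []
        for d in range(n_domains):
            label_probs = rng.dirichlet(np.ones(n_classes))     # label shift per domain
            mean_shift, scale = d * 0.5, 1.0 + 0.2 * d          # feature shift per domain
            labels = rng.choice(n_classes, size=samples_per_domain, p=label_probs)
            feats = rng.normal(mean_shift, scale, size=(samples_per_domain, dim))
            stream.append((feats, labels))
        return stream

    for i, (x, y) in enumerate(make_evolving_stream()):
        print(f"domain {i}: feature mean {x.mean():.2f}, label histogram {np.bincount(y)}")
    ```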

    Workload-Adaptive Configuration Tuning for Hierarchical Cloud Schedulers

    No full text
    Cluster schedulers provide flexible resource sharing mechanisms for best-effort cloud jobs, which constitute the majority of jobs in modern datacenters. Properly tuning a scheduler's configurations is key to these jobs' performance because it determines how resources are allocated among them. Today's cloud scheduling systems usually rely on cluster operators to set the configuration and thus overlook the potential performance improvement of optimally configuring the scheduler according to heterogeneous and dynamic cloud workloads. In this paper, we introduce AdaptiveConfig, a run-time configurator for cluster schedulers that automatically adapts to the changing workload and resource status in two steps. First, a comparison approach estimates jobs' performance under different configurations and diverse scheduling scenarios. The key idea is to transform a scheduler's resource allocation mechanism and its variable influence factors (configurations, scheduling constraints, available resources, and workload status) into business rules and facts in a rule engine, thereby reasoning about these correlated factors when comparing job performance. Second, a workload-adaptive optimizer transforms the cluster-level search of the huge configuration space into an equivalent dynamic programming problem that can be efficiently solved at scale. We implement AdaptiveConfig on the popular YARN Capacity and Fair schedulers and demonstrate its effectiveness using real-world Facebook and Google workloads, i.e., it successfully finds the best configurations for most scheduling scenarios and considerably reduces latencies, by a factor of two, with low optimization time.
    Distributed System
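    The following sketch shows one way the second step could look: a knapsack-style dynamic program that picks one configuration per queue so that estimated latency is minimized under a cluster-wide capacity budget. The per-option (capacity, latency) pairs stand in for the rule-engine estimates; this is an assumed formulation, not AdaptiveConfig's actual optimizer.

    ```python
    # Sketch (assumed, not AdaptiveConfig's code): choose one configuration per
    # queue minimizing total estimated latency within a capacity budget.
    def tune_configs(queue_options, total_capacity):
        """queue_options[q] = list of (capacity_units, estimated_latency)."""
        INF = float("inf")
        best = [0.0] + [INF] * total_capacity    # best[c] = min latency using c units
        choice = [[None] * (total_capacity + 1) for _ in queue_options]
        for q, options in enumerate(queue_options):
            new_best = [INF] * (total_capacity + 1)
            for used in range(total_capacity + 1):
                if best[used] == INF:
                    continue
                for opt_idx, (units, latency) in enumerate(options):
                    c = used + units
                    if c <= total_capacity and best[used] + latency < new_best[c]:
                        new_best[c] = best[used] + latency
                        choice[q][c] = (opt_idx, used)
            best = new_best
        # Backtrack from the cheapest feasible total capacity usage.
        c = min(range(total_capacity + 1), key=lambda i: best[i])
        picks = [None] * len(queue_options)
        for q in reversed(range(len(queue_options))):
            opt_idx, prev = choice[q][c]
            picks[q] = opt_idx
            c = prev
        return picks

    queues = [[(2, 30.0), (4, 18.0)],   # queue 0: small vs. large capacity share
              [(3, 25.0), (5, 12.0)]]   # queue 1
    print(tune_configs(queues, total_capacity=8))   # -> [0, 1] for this budget
    ```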

    FedKNOW: Federated Continual Learning with Signature Task Knowledge Integration at Edge

    No full text
    Deep neural networks (DNNs) have been ubiquitously adopted in the internet of things and are becoming an integral part of our daily life. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of continually retraining themselves according to the tasks arriving at different edge devices. Federated continual learning is a promising technique that offers a partial solution but has yet to overcome the following difficulties: significant accuracy loss due to limited on-device processing, negative knowledge transfer caused by the limited communication of non-IID data, and limited scalability in the number of tasks and edge devices. In this paper, we propose FedKNOW, an accurate and scalable federated continual learning framework, built around a novel concept of signature task knowledge. FedKNOW is a client-side solution that continuously extracts and integrates the knowledge of signature tasks, i.e., the tasks that most strongly influence the current task. Each FedKNOW client is composed of a knowledge extractor, a gradient restorer and, most importantly, a gradient integrator. When training on a new task, the gradient integrator prevents catastrophic forgetting and mitigates negative knowledge transfer by effectively combining signature tasks identified from past local tasks and from other clients' current tasks via the global model. We implement FedKNOW in PyTorch and extensively evaluate it against state-of-the-art techniques using popular federated continual learning benchmarks. Extensive evaluation results on heterogeneous edge devices show that FedKNOW improves model accuracy by 63.24% without increasing model training time, reduces communication cost by 34.28%, and achieves further improvements under difficult scenarios such as large numbers of tasks or clients and the training of complex networks.
    Data-Intensive System
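    A minimal sketch of what a gradient integrator of this kind could do, assuming an A-GEM-style projection rule rather than FedKNOW's exact mechanism: when the current task's gradient conflicts with a stored signature-task gradient (negative dot product), the conflicting component is projected away before the update.

    ```python
    # Sketch (assumed projection rule, not FedKNOW's exact integrator).
    import numpy as np

    def integrate_gradient(current_grad, signature_grads):
        g = current_grad.astype(float).copy()
        for ref in signature_grads:
            dot = g @ ref
            if dot < 0:                       # would increase loss on the signature task
                g -= dot / (ref @ ref) * ref  # remove the conflicting component
        return g

    current = np.array([1.0, -2.0, 0.5])
    signatures = [np.array([0.0, 1.0, 0.0]),   # stored from a past local task
                  np.array([1.0, 0.0, 1.0])]   # received from another client
    print(integrate_gradient(current, signatures))   # -> [1. 0. 0.5]
    ```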

    Lightweight and Accurate DNN-Based Anomaly Detection at Edge

    No full text
    Deep neural networks (DNNs) have shown significant success in various anomaly detection applications such as smart surveillance and industrial quality control. It is increasingly important to detect anomalies directly on edge devices, because of high responsiveness requirements and tight latency constraints. The accuracy of DNN-based solutions relies on large model capacity and thus long training and inference times, making them inapplicable on resource-constrained edge devices. It is hence imperative to scale DNN model sizes according to run-time system requirements, i.e., meeting deadlines with minimal accuracy loss, which depends heavily on the platform and the real-time system status. Existing scaling techniques either take a long training time to pre-generate scaling options or disturb the unsteady training process of anomaly detection DNNs, lacking adaptability to heterogeneous edge systems and incurring low inference accuracy. In this article, we present LightDNN, which scales DNN models for anomaly detection applications at the edge, featuring high detection accuracy with lightweight training and inference. To this end, LightDNN quickly extracts and compresses blocks in a DNN, and provides a large scaling space (e.g., 1 million options) by dynamically combining these compressed blocks online. At run time, LightDNN predicts the DNN's inference latency according to the monitored system status, and optimizes the combination of blocks to maximize accuracy under deadline constraints. We implement and extensively evaluate LightDNN on both CPU and GPU edge platforms using 8 popular anomaly detection workloads. Comparative experiments with state-of-the-art methods show that our approach provides 145.8 to 0.56 trillion times more scaling options without increasing training and inference overheads, thus achieving up to a 15.05% increase in accuracy under the same deadlines.
    Distributed System
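    The sketch below illustrates the online combination step under a deadline, using assumed numbers and a simple greedy heuristic rather than LightDNN's actual optimizer: each block has several compressed variants with predicted latency and accuracy; starting from the most accurate variant per block, the block whose downgrade saves the most latency per unit of accuracy lost is swapped until the deadline is met.

    ```python
    # Sketch (assumed data and heuristic, not LightDNN's code).
    def pick_variants(blocks, deadline_ms):
        # blocks[b] = variants (latency_ms, accuracy) sorted from most to least accurate.
        choice = [0] * len(blocks)
        def total_latency():
            return sum(blocks[b][choice[b]][0] for b in range(len(blocks)))
        while total_latency() > deadline_ms:
            best = None
            for b in range(len(blocks)):
                if choice[b] + 1 >= len(blocks[b]):
                    continue
                lat_now, acc_now = blocks[b][choice[b]]
                lat_next, acc_next = blocks[b][choice[b] + 1]
                ratio = (lat_now - lat_next) / max(acc_now - acc_next, 1e-9)
                if best is None or ratio > best[0]:
                    best = (ratio, b)
            if best is None:
                break                      # nothing left to downgrade; deadline infeasible
            choice[best[1]] += 1
        return choice, total_latency()

    blocks = [[(40.0, 0.95), (25.0, 0.93), (15.0, 0.88)],   # block 0 variants
              [(30.0, 0.90), (18.0, 0.89)]]                 # block 1 variants
    print(pick_variants(blocks, deadline_ms=50.0))          # -> ([1, 1], 43.0)
    ```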

    EdgeTuner: Fast Scheduling Algorithm Tuning for Dynamic Edge-Cloud Workloads and Resources

    No full text
    Edge-cloud jobs are rapidly becoming prevalent in many application domains, posing the challenge of using both resource-constrained edge devices and elastic cloud resources. Efficient resource allocation for such jobs via scheduling algorithms is essential to guarantee their performance, e.g. latency. Deep reinforcement learning (DRL) is increasingly adopted to make scheduling decisions but faces the conundrum of achieving high rewards at a low training overhead. It is unknown whether DRL can be applied to tune, in a timely manner, the scheduling algorithms adopted in response to fast-changing workloads and resources. In this paper, we propose EdgeTuner to effectively leverage DRL to select scheduling algorithms online for edge-cloud jobs. The enabling features of EdgeTuner are a sophisticated DRL model that captures the complex dynamics of edge-cloud jobs/tasks and an effective simulator that emulates the response times of short-running jobs in accordance with dynamically changing scheduling algorithms. EdgeTuner trains DRL agents offline by directly interacting with the simulator. We implement EdgeTuner on the Kubernetes scheduler and extensively evaluate it on a Kubernetes cluster testbed driven by production traces. Our results show that EdgeTuner outperforms prevailing scheduling algorithms by achieving significantly lower job response times while accelerating DRL training speed by more than 180x.
    Distributed System
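    To make the offline-simulation idea concrete, the sketch below estimates average job response times on a small cluster under two candidate scheduling policies, so an agent trained offline could compare them without touching the real Kubernetes scheduler. The job trace, the single-node model, and the crude non-preemptive "shortest job first" ordering are all assumptions for illustration, not the paper's simulator.

    ```python
    # Sketch (assumed model, not EdgeTuner's simulator): compare average response
    # times under FIFO vs. a crude non-preemptive shortest-job-first ordering.
    import heapq

    def simulate(jobs, nodes, policy):
        """jobs: list of (arrival_time, duration); policy: 'fifo' or 'sjf'."""
        if policy == "sjf":
            order = sorted(jobs, key=lambda j: (j[0], j[1]))   # break arrival ties by duration
        else:
            order = sorted(jobs, key=lambda j: j[0])
        free_at = [0.0] * nodes              # when each node becomes idle
        heapq.heapify(free_at)
        response = []
        for arrival, duration in order:
            start = max(arrival, heapq.heappop(free_at))
            heapq.heappush(free_at, start + duration)
            response.append(start + duration - arrival)
        return sum(response) / len(response)

    jobs = [(0.0, 10.0), (1.0, 2.0), (1.0, 1.0), (2.0, 4.0)]
    for policy in ("fifo", "sjf"):
        print(policy, simulate(jobs, nodes=1, policy=policy))
    ```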

    SlimML: Removing Non-critical Input Data in Large-scale Iterative Machine Learning

    No full text
    The core of many large-scale machine learning (ML) applications, such as neural networks (NNs), support vector machines (SVMs), and convolutional neural networks (CNNs), is the training algorithm that iteratively updates model parameters by processing massive datasets. In the plethora of studies aiming at accelerating ML, e.g. via data parallelization and parameter servers, the prevalent assumption is that all data points are equally relevant to model parameter updating. In this article, we challenge this assumption by proposing a criterion to measure a data point's effect on model parameter updating, and experimentally demonstrate that the majority of data points are non-critical in the training process. We develop a slim learning framework, termed SlimML, which trains ML models only on the critical data and thus significantly improves training performance. To this end, SlimML efficiently leverages a small number of aggregated data points per iteration to approximate the criticalness of the original input data instances. The proposed approach can be used by changing a few lines of code in a standard stochastic gradient descent (SGD) procedure, and we demonstrate experimentally, on NN regression, SVM classification, and CNN training, that for large datasets it accelerates the model training process by an average of 3.61 times while only incurring accuracy losses of 0.37 percent.
    Distributed System
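    A crude approximation of the idea, for illustration only (not SlimML's code): per epoch, a handful of aggregated representatives (here, per-bin mean points) are scored by their gradient magnitude, and SGD then runs only on data assigned to the highest-scoring bins. The binning, scoring rule, and linear-regression task are assumptions.

    ```python
    # Sketch (assumed, not SlimML's implementation): SGD restricted to data whose
    # aggregated representative has a large gradient magnitude.
    import numpy as np

    def slim_sgd(X, y, n_clusters=10, keep_ratio=0.3, lr=0.01, epochs=20, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        # Cheap "clustering": bin points by their first feature.
        edges = np.quantile(X[:, 0], np.linspace(0, 1, n_clusters + 1)[1:-1])
        bins = np.digitize(X[:, 0], edges)
        for _ in range(epochs):
            # Score each bin by the squared-loss gradient magnitude at its mean point.
            scores = []
            for c in range(n_clusters):
                idx = np.where(bins == c)[0]
                if len(idx) == 0:
                    scores.append(0.0)
                    continue
                xm, ym = X[idx].mean(axis=0), y[idx].mean()
                scores.append(abs(xm @ w - ym) * np.linalg.norm(xm))
            keep = np.argsort(scores)[-max(1, int(keep_ratio * n_clusters)):]
            critical = np.where(np.isin(bins, keep))[0]
            for i in rng.permutation(critical):
                grad = (X[i] @ w - y[i]) * X[i]     # per-point squared-loss gradient
                w -= lr * grad
        return w

    X = np.random.default_rng(1).normal(size=(2000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    print(np.round(slim_sgd(X, y), 2))
    ```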

    Accurate Differentially Private Deep Learning on the Edge

    No full text
    Deep learning (DL) models are increasingly built on federated edge participants holding local data. To enable insight extraction without the risk of information leakage, DL training is usually combined with differential privacy (DP). The core theme is to trade off learning accuracy by adding statistically calibrated noise, particularly to the local gradients of edge learners, during model training. However, this privacy guarantee unfortunately degrades model accuracy, due to the edge learners' local noise and the global noise aggregated at the central server. Existing DP frameworks for the edge focus on local noise calibration via gradient clipping techniques, overlooking the heterogeneity and dynamic changes of local gradients and their aggregated impact on accuracy. In this article, we present a systematic analysis that unveils the influential factors capable of mitigating local and aggregated noise, and design PrivateDL to leverage these factors in noise calibration so as to improve model accuracy while fulfilling the privacy guarantee. PrivateDL features: (i) sampling-based sensitivity estimation for local noise calibration and (ii) the combination of large batch sizes and critical data identification in global training. We implement PrivateDL on the popular Laplace/Gaussian DP mechanisms and demonstrate its effectiveness using Intel BigDL workloads, i.e., it considerably improves model accuracy by up to 5x compared against existing DP frameworks.
    Distributed System
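    A minimal sketch of sampling-based noise calibration with the Gaussian mechanism, under assumptions that differ from PrivateDL's actual procedure: the clipping bound is estimated from a sample of per-example gradient norms, gradients are clipped to it, and Gaussian noise scaled to that bound is added before the averaged gradient leaves the edge learner.

    ```python
    # Sketch (assumed calibration, not PrivateDL's code).
    import numpy as np

    def private_gradient(per_example_grads, sample_size=32, noise_multiplier=1.0, seed=0):
        rng = np.random.default_rng(seed)
        n = len(per_example_grads)
        # Sampling-based sensitivity estimate: a high quantile of sampled gradient norms.
        sample = rng.choice(n, size=min(sample_size, n), replace=False)
        clip = np.quantile([np.linalg.norm(per_example_grads[i]) for i in sample], 0.9)
        clipped = []
        for g in per_example_grads:
            norm = np.linalg.norm(g)
            clipped.append(g * min(1.0, clip / max(norm, 1e-12)))
        avg = np.mean(clipped, axis=0)
        # Gaussian mechanism: noise stddev proportional to clip / batch size.
        noise = rng.normal(0.0, noise_multiplier * clip / n, size=avg.shape)
        return avg + noise

    grads = np.random.default_rng(2).normal(size=(128, 10))   # fake per-example gradients
    print(np.round(private_gradient(grads), 3))
    ```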

    Federated Learning With Heterogeneity-Aware Probabilistic Synchronous Parallel on Edge

    No full text
    With the massive amount of data generated by mobile devices and the increasing computing power of edge devices, the paradigm of federated learning has attracted great momentum. In federated learning, distributed and heterogeneous nodes collaborate to learn model parameters. However, while providing benefits such as privacy by design and reduced latency, heterogeneous networks present challenges to the synchronisation methods, or barrier control methods, used in training, regarding system progress, model convergence, etc. The design of these barrier mechanisms is critical for the performance and scalability of federated learning systems. We propose a new barrier control technique called Probabilistic Synchronous Parallel (PSP). In contrast to existing mechanisms, it introduces a sampling primitive that composes with existing barrier control mechanisms to produce a family of mechanisms with improved convergence speed and scalability. Our proposal is supported by a convergence analysis of the PSP-based SGD algorithm. In practice, we also propose heuristic techniques that further improve the efficiency of PSP. We evaluate the performance of the proposed methods using the federated-learning-specific FEMNIST dataset. The evaluation results show that PSP can effectively achieve a good balance between system efficiency and model accuracy, mitigating the challenge of heterogeneity in federated learning.
    Distributed System
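    A minimal sketch of the sampling primitive composed with a bounded-staleness barrier, under assumed semantics (not the paper's implementation): a worker may proceed if, within a random sample of peers, no sampled peer lags more than `staleness` steps behind it.

    ```python
    # Sketch (assumed semantics): PSP-style sampling composed with bounded staleness.
    import random

    def psp_can_proceed(worker_steps, me, sample_size=3, staleness=2, rng=random):
        peers = [w for w in range(len(worker_steps)) if w != me]
        sample = rng.sample(peers, min(sample_size, len(peers)))
        slowest = min(worker_steps[w] for w in sample)
        return worker_steps[me] - slowest <= staleness

    steps = [10, 9, 4, 10, 8]          # current iteration count of each worker
    random.seed(0)
    for w in range(len(steps)):
        print(f"worker {w} (step {steps[w]}): proceed = {psp_can_proceed(steps, w)}")
    ```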