26 research outputs found

    Container deployment strategy for edge networking

    Get PDF
    The edge computing paradigm has been proposed to support latency-sensitive applications, such as Augmented Reality (AR)/Virtual Reality (VR) and online gaming, by placing computing resources close to where they are most demanded, at the edge of the network. Many solutions have been proposed to deploy virtual resources as close as possible to consumers using virtual machines and containers. However, the most popular container orchestration tools, e.g., Docker Swarm and Kubernetes, do not take locality into account during deployment, resulting in poor location choices at the edge of the network. In this paper, we propose an edge deployment strategy that tackles the orchestrator's lack of locality awareness. In this strategy, the orchestrator collects latency information and real-time resource consumption from the current container deployments, providing a bird's-eye view of the most demanded locations and the best places to deploy in order to cover the largest number of clients. We evaluated the proposed model using 16 AWS regions across the globe and compared it to the standard deployment strategies. The experimental results show that our edge strategy reduces the average latency between the serving containers and their clients by up to 4 times compared to the standard deployment algorithms. © 2019 Association for Computing Machinery. Peer reviewed.
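To make the idea of locality-aware placement concrete, here is a minimal sketch, not the paper's actual algorithm: given measured client-to-region latencies, per-zone client demand, and per-region free capacity (all hypothetical inputs), pick the region that minimizes the demand-weighted average latency while still having room for the container.

```python
# Hedged sketch of a locality-aware placement choice (illustrative only).
# All inputs (region names, latency numbers, capacities) are hypothetical.

from typing import Dict, Optional


def choose_region(latency_ms: Dict[str, Dict[str, float]],
                  demand: Dict[str, int],
                  free_cpu: Dict[str, float],
                  required_cpu: float) -> Optional[str]:
    """Pick the region with the lowest demand-weighted average latency
    among regions that still have enough free CPU for the container."""
    total_clients = sum(demand.values())
    best_region, best_score = None, float("inf")
    for region, lat in latency_ms.items():
        if free_cpu.get(region, 0.0) < required_cpu:
            continue  # not enough capacity left in this region
        # Demand-weighted average latency from this region to all client zones.
        score = sum(lat.get(zone, float("inf")) * n
                    for zone, n in demand.items()) / total_clients
        if score < best_score:
            best_region, best_score = region, score
    return best_region


if __name__ == "__main__":
    latency_ms = {"eu-west-1": {"paris": 12, "tokyo": 210},
                  "ap-northeast-1": {"paris": 230, "tokyo": 8}}
    demand = {"paris": 40, "tokyo": 260}
    free_cpu = {"eu-west-1": 3.0, "ap-northeast-1": 2.0}
    print(choose_region(latency_ms, demand, free_cpu, required_cpu=1.0))
    # -> "ap-northeast-1": most clients are in Tokyo in this toy example
```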

    Independent tasks on 2 resources with co-scheduling effects

    Get PDF
    Concurrent kernel execution is a relatively new feature in modern GPUs, designed to improve hardware utilization and overall system throughput. However, the decision on the simultaneous execution of tasks is performed by the hardware with a leftover policy, which assigns as many resources as possible to one task and then assigns the remaining resources to the next task. This can lead to an inefficient use of resources. In this work, we tackle the problem of co-scheduling for GPUs with and without preemption, focusing on determining the kernel submission order that reduces the number of preemptions and the kernels' makespan, respectively. We propose a graph-based theoretical model to build preemptive and non-preemptive schedules. We show that the optimal preemptive makespan can be computed by solving a Linear Program in polynomial time, and we propose an algorithm based on this solution that minimizes the number of preemptions. We also propose an algorithm that transforms a preemptive solution of optimal makespan into a non-preemptive solution with the smallest possible preemption overhead. We show, however, that finding the minimal number of preemptions among all preemptive solutions of optimal makespan is an NP-hard problem, and that computing the optimal non-preemptive schedule is also NP-hard. In addition, we study the non-preemptive problem without first searching for a good preemptive solution, and present a Mixed Integer Linear Program solution to this problem. We performed experiments on real-world GPU applications; our approach achieves the optimal makespan by preempting 6 to 9% of the tasks. Our non-preemptive approach, on the other hand, obtains makespans within 2.5% of the optimal preemptive schedules, while previous approaches exceed the preemptive makespan by 5 to 12%.
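As a hedged illustration of the kind of Linear Program involved (a generic configuration LP under an assumed co-scheduling speed factor, not the paper's exact model), one variable can give the time spent in each feasible kernel combination, with a constraint that every kernel receives its full amount of work; the makespan is then the total time across all combinations.

```python
# Illustrative configuration LP for preemptive co-scheduling of kernels on a
# GPU that can host at most two kernels at a time. Work sizes and the
# co-scheduling speed factor below are made-up numbers, not measured data.

from itertools import combinations
from scipy.optimize import linprog

work = [10.0, 8.0, 6.0]   # work of each kernel when running alone at rate 1
co_rate = 0.7             # assumed execution rate when two kernels share the GPU

n = len(work)
configs = [(i,) for i in range(n)] + list(combinations(range(n), 2))

# Variable x[cfg] = time spent running configuration cfg.
# Constraint per kernel i: sum over configs containing i of rate_i(cfg) * x[cfg] >= work[i].
A_ub, b_ub = [], []
for i in range(n):
    row = []
    for cfg in configs:
        if i not in cfg:
            row.append(0.0)
        else:
            row.append(-(1.0 if len(cfg) == 1 else co_rate))  # negate for <= form
    A_ub.append(row)
    b_ub.append(-work[i])

res = linprog(c=[1.0] * len(configs), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * len(configs), method="highs")

print("optimal preemptive makespan:", round(res.fun, 3))
for cfg, t in zip(configs, res.x):
    if t > 1e-9:
        print("run kernels", cfg, "together for", round(t, 3), "time units")
```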

    A checkpointing mechanism for GPU intensive HPC applications

    Get PDF
    Please refer to the PDF. Funding: James Watt Scholarship; Engineering and Physical Sciences Research Council (EPSRC) grants EP/N028201/1 and EP/L00058X/

    Building Computing-As-A-Service Mobile Cloud System

    Get PDF
    The last five years have witnessed the proliferation of smart mobile devices, the explosion of various mobile applications, and the rapid adoption of cloud computing in business, governmental, and educational IT deployments. There is also a growing trend of combining mobile computing and cloud computing into a new popular computing paradigm. This thesis envisions a future of mobile computing that is primarily shaped by three trends. First, servers in the cloud equipped with high-speed multi-core processors have become mainstream; meanwhile, ARM-based servers are becoming increasingly popular, and virtualization on ARM systems is gaining wide attention. Second, high-speed Internet has become pervasive and highly available, so mobile devices are able to connect to the cloud anytime and anywhere. Third, cloud computing is reshaping the way computing resources are used: the classic pay/scale-as-you-go model allows hardware resources to be optimally allocated and well managed. These three trends lend credence to a new mobile computing model that combines the resource-rich cloud with less powerful mobile devices. In this model, mobile devices run a core virtualization hypervisor with virtualized phone instances, allowing for pervasive access to more powerful, highly available virtual phone clones in the cloud. The centralized cloud, powered by rich computing and memory resources, hosts the virtual phone clones and repeatedly synchronizes data changes with the virtual phone instances running on mobile devices. Users can flexibly isolate different computing environments. In this dissertation, we explored the opportunity of leveraging cloud resources for mobile computing for the purposes of energy saving, performance augmentation, and secure computing environment isolation. We proposed a framework that allows mobile users to seamlessly leverage the cloud to augment the computing capability of mobile devices and also makes it simpler for application developers to run their smartphone applications in the cloud without tedious application partitioning. This framework was built with virtualization on both the server side and mobile devices. It has three building blocks: agile virtual machine deployment, efficient virtual resource management, and seamless mobile augmentation. We presented the design, implementation, and evaluation of these three components and demonstrated the feasibility of the proposed mobile cloud model.

    Towards An Efficient Cloud Computing System: Data Management, Resource Allocation and Job Scheduling

    Get PDF
    Cloud computing is an emerging technology in distributed computing, and it has proved to be an effective infrastructure for providing services to users. The cloud is developing day by day and faces many challenges. One of the challenges is to build a cost-effective data management system that can ensure high data availability while maintaining consistency. Another challenge is efficient resource allocation, which ensures high resource utilization and high SLO availability. Scheduling, referring to the set of policies that control the order of the work performed by a computer system, for high throughput is another challenge. In this dissertation, we study how to manage data and improve data availability while reducing cost (i.e., consistency maintenance cost and storage cost); how to efficiently manage resources for processing jobs and increase resource utilization with high SLO availability; and how to design an efficient scheduling algorithm that provides high throughput and low overhead while satisfying the demands on job completion times.

    Replication is a common approach to enhance data availability in cloud storage systems. Previously proposed replication schemes cannot effectively handle both correlated and non-correlated machine failures while increasing data availability with limited resources. The schemes for correlated machine failures must create a constant number of replicas for each data object, which neglects diverse data popularities and cannot use the resources to maximize the expected data availability. These schemes also neglect the consistency maintenance cost and the storage cost caused by replication. It is critical for cloud providers to maximize data availability, and hence minimize SLA (Service Level Agreement) violations, while minimizing the cost caused by replication in order to maximize revenue. In this dissertation, we build a nonlinear programming model that maximizes data availability under both types of failures and minimizes the cost caused by replication. Based on the model's solution for the replication degree of each data object, we propose a low-cost multi-failure resilient replication scheme (MRR). MRR can effectively handle both correlated and non-correlated machine failures, considers data popularities to enhance data availability, and also tries to minimize consistency maintenance and storage cost.

    In current clouds, providers still need to reserve resources to allow users to scale on demand. The capacity offered by cloud offerings comes in the form of pre-defined virtual machine (VM) configurations. This incurs resource wastage and results in low resource utilization when users actually consume much less resource than the VM capacity. Existing works either reallocate the unused resources with no Service Level Objectives (SLOs) for availability (availability here refers to the probability that an allocated resource remains operational and accessible during the validity of the contract [CarvalhoCirne14]), or consider SLOs when reallocating the unused resources only for long-running service jobs. The latter approach increases the allocated resources whenever it detects that the SLO is violated in order to achieve the SLO in the long term, neglecting the frequent fluctuations of jobs' resource requirements in real-time applications, especially for short-term jobs that require fast responses and quick resource allocation decisions. Thus, it cannot fully utilize the resources, because it cannot quickly adjust the resource allocation strategy to the fluctuations of jobs' resource requirements. Moreover, the previous opportunistic resource allocation approach aims at providing long-term availability SLOs with good QoS for long-running jobs, ensuring that jobs finish within weeks or months by providing slightly degraded resources with moderate availability guarantees; however, it ignores deadline constraints in defining Quality of Service (QoS) for short-lived jobs that require online responses in real-time applications, and thus it cannot truly guarantee QoS and long-term availability SLOs. To overcome these drawbacks, we explicitly consider the fluctuations of unused resources caused by bursts in jobs' resource demands and present a cooperative opportunistic resource provisioning (CORP) scheme to dynamically allocate resources to jobs. CORP leverages the complementarity of jobs' requirements on different resource types and uses job packing to reduce resource wastage and increase resource utilization.

    An increasing number of large-scale data analytics frameworks move towards larger degrees of parallelism aiming at high throughput. Scheduling, which assigns tasks to workers, and preemption, which suspends low-priority tasks to run high-priority tasks, are two important functions in such frameworks. There are many existing works on scheduling and preemption in the literature that aim at high throughput. However, previous works do not substantially consider task dependency when increasing throughput in scheduling or preemption, and considering dependency is crucial to increasing the overall throughput. Besides, extensive task evictions for preemption increase context switches, which may decrease throughput. To address these problems, we propose an efficient scheduling system, Dependency-aware Scheduling and Preemption (DSP), to achieve high throughput in scheduling and preemption. First, we build a mathematical model that minimizes the makespan with consideration of task dependency, and derive the target workers for tasks that minimize the makespan; second, we utilize task dependency information to determine tasks' priorities for preemption; finally, we present a probabilistic preemption scheme to reduce the number of preemptions while satisfying the demands on job completion times. We conduct trace-driven simulations on a real cluster and real-world experiments on Amazon S3/EC2 to demonstrate the efficiency and effectiveness of our proposed systems in comparison with other systems. The experimental results show the superior performance of our proposed systems.

    In the future, we will further consider data update frequency to reduce consistency maintenance cost, and we will consider the effects of nodes joining and leaving. We will also consider the energy consumption of machines and design an optimal replication scheme that improves data availability while saving power. For resource allocation, we will consider using a greedy approach for deep learning to reduce the computation overhead caused by the deep neural network, and we will additionally consider the heterogeneity of jobs (i.e., short jobs and long jobs) and use a hybrid resource allocation strategy to provide SLO availability customization for different job types while increasing resource utilization. For scheduling, we aim to handle tasks with partial dependencies and worker failures, and to make DSP fully distributed to increase its scalability. Finally, we plan to use different workloads and real-world experiments to fully test the performance of our methods and make our preliminary system design more mature.
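As a toy illustration of the availability-versus-cost trade-off behind such replication schemes (independent failures only, with made-up failure probabilities and targets; not the MRR model itself), the expected availability of an object with r replicas under an independent machine failure probability p is 1 - p^r, so one can pick the smallest degree that meets a per-object availability target, giving popular objects a higher target than cold ones.

```python
# Toy illustration of choosing a replication degree: the smallest r whose
# expected availability under independent machine failures meets a target.
# The failure probability and target below are hypothetical.

def availability(p_fail: float, replicas: int) -> float:
    """Probability that at least one replica survives, assuming independent failures."""
    return 1.0 - p_fail ** replicas


def min_replication_degree(p_fail: float, target: float, max_replicas: int = 10) -> int:
    for r in range(1, max_replicas + 1):
        if availability(p_fail, r) >= target:
            return r
    return max_replicas


if __name__ == "__main__":
    p_fail, target = 0.05, 0.99999          # hypothetical numbers
    r = min_replication_degree(p_fail, target)
    print(f"replicas needed: {r}, availability: {availability(p_fail, r):.7f}")
    # Hot objects could be given a higher target and cold objects a lower one,
    # which is the kind of popularity-aware trade-off the dissertation optimizes.
```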

    Runtime scheduling and updating for deep learning applications

    Get PDF
    Recent decades have witnessed the breakthrough of deep learning algorithms, which have been widely used in many areas. Typically, deployment of deep learning applications consists of compute-intensive training and latency-sensitive inference. To support deep learning applications, enterprises build large-scale computing clusters composed of expensive accelerators, such as GPUs, FPGAs or other domain-specific ASICs. However, it is challenging for deep learning applications to achieve high resource utilization and maintain high accuracy in the face of dynamic workloads. On the one hand, the workload of deep learning tasks always changes over time, which leads to a gap between the required resources and statically allocated resources. On the other hand, the distribution of input data may also change over time, and the accuracy of inference could decrease before updating the model. In this thesis, we present a new deep learning system architecture which can schedule and update deep learning applications at runtime to efficiently handle dynamic workloads. We identify and study three key components. (i) PipeSwitch: A deep learning system that allows multiple deep learning applications to time-share the same GPU with the entire GPU memory and millisecond-scale switching overhead. PipeSwitch enables unused cycles of inference applications to be dynamically filled by training or other inference applications. With PipeSwitch, GPU utilization can be significantly improved without sacrificing service level objectives. (ii) DistMind: A disaggregated deep learning system that enables efficient multiplexing of deep learning applications with near-optimal resource utilization. DistMind decouples compute from host memory, and exposes the abstractions of a GPU pool and a memory pool, each of which can be independently provisioned and dynamically allocated to deep learning tasks. (iii) RegexNet: A payload-based, automated, reactive recovery system for web services under regular expression denial of service attacks. RegexNet adopts a deep learning model, which is updated constantly in a feedback loop during runtime, to classify payloads of upcoming HTTP requests. We have built system prototypes for these components, and integrated them with existing software. Our evaluation on a variety of environments and configurations shows the benefits of our solution

    Power-Aware Job Dispatching in High Performance Computing Systems

    Get PDF
    This work deals with the power-aware job dispatching problem in supercomputers; broadly speaking, dispatching consists of assigning finite-capacity resources to a set of activities, here with a special concern for power- and energy-efficient solutions. We introduce novel optimization approaches to address its multiple aspects. The proposed techniques have a broad application range but are aimed at the field of High Performance Computing (HPC) systems. Devising a power-aware HPC job dispatcher is a complex task in which contrasting goals must be satisfied. Furthermore, the online nature of the problem requires that solutions be computed in real time within stringent limits. This aspect historically discouraged the use of exact methods and favoured instead the adoption of heuristic techniques. The application of optimization approaches to the dispatching task is still a largely unexplored area of research and can drastically improve the performance of HPC systems. In this work we tackle the job dispatching problem on a real HPC machine, the Eurora supercomputer hosted at the Cineca research center in Bologna. We propose a Constraint Programming (CP) model that outperforms the dispatching software currently in use. An essential element for taking power-aware decisions during the job dispatching phase is the ability to estimate jobs' power consumption before their execution. To this end, we applied Machine Learning techniques to create a prediction model that was trained and tested on the Eurora supercomputer, showing high prediction accuracy. Finally, we develop a power-aware solution for the same target machine and devise different approaches that solve the dispatching problem while keeping the power consumption of the whole system under a given threshold. We propose a heuristic technique and a CP/heuristic hybrid method, both able to solve practical-size instances and to outperform the current state-of-the-art techniques.
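To convey the flavour of power-capped dispatching, here is a minimal sketch with hypothetical job and node fields and made-up predicted power values (not the actual Eurora dispatcher): walk the waiting queue in priority order and start a job only if enough cores are free and the predicted system power stays under the cap.

```python
# Hedged sketch of one power-capped greedy dispatching step. All fields
# (core counts, predicted power, cap) are hypothetical.

from dataclasses import dataclass
from typing import List


@dataclass
class Job:
    name: str
    cores: int
    predicted_power_w: float   # e.g. output of an ML power-prediction model


def dispatch(queue: List[Job], free_cores: int,
             current_power_w: float, power_cap_w: float) -> List[Job]:
    started = []
    for job in queue:                       # queue assumed already priority-ordered
        fits_cores = job.cores <= free_cores
        fits_power = current_power_w + job.predicted_power_w <= power_cap_w
        if fits_cores and fits_power:
            started.append(job)
            free_cores -= job.cores
            current_power_w += job.predicted_power_w
    return started


if __name__ == "__main__":
    queue = [Job("sim-A", 16, 900.0), Job("sim-B", 8, 700.0), Job("viz-C", 4, 150.0)]
    print([j.name for j in dispatch(queue, free_cores=24,
                                    current_power_w=2000.0, power_cap_w=3100.0)])
    # -> ['sim-A', 'viz-C']: sim-B is skipped because it would exceed the power cap
```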

    Advances in Grid Computing

    Get PDF
    This book approaches grid computing with a perspective on the latest achievements in the field, providing an insight into current research trends and advances, and presenting a large range of innovative research papers. The topics covered include resource and data management, grid architectures and development, and grid-enabled applications. New ideas employing heuristic methods from swarm intelligence or genetic algorithms, as well as quantum encryption, are considered in order to address two main aspects of grid computing: resource management and data management. The book also covers aspects of grid computing that concern architecture and development, and includes a diverse range of grid computing applications, such as a possible human grid computing system, simulation of the fusion reaction, ubiquitous healthcare service provisioning, and complex water systems.

    Design of robust scheduling methodologies for high performance computing

    Get PDF
    Scientific applications are often large, complex, computationally intensive, and irregular. Loops are often an abundant source of parallelism in scientific applications. Due to the ever-increasing computational needs of scientific applications, high performance computing (HPC) systems have become larger and more complex, offering increased parallelism at multiple hardware levels. Load imbalance, caused by irregular computational load per task and unpredictable computing system characteristics (system variability), often degrades the performance of applications. In addition, perturbations, such as reduced computing power, network latency, reduced resource availability, or failures, can severely impact performance. System variability and perturbations are only expected to increase in future extreme-scale computing systems; extrapolating the current failure rate to Exascale would result in a failure every 20 minutes, and such failure rates and perturbations would render the computing systems unusable. This doctoral thesis improves the performance of computationally intensive scientific applications on HPC systems via robust load balancing. Robust scheduling achieves and maintains a load-balanced execution under unpredictable application and system characteristics.

    A number of dynamic loop self-scheduling (DLS) techniques were introduced and successfully used in scientific applications between the 1980s and 2000s; as originally introduced, these DLS techniques are not fault-tolerant. In this thesis, we identify three major research questions for achieving robust scheduling: (1) How can we ensure that the DLS techniques employed in scientific applications today adhere to their original design goals and specifications? (2) How can we select a DLS technique that will achieve improved performance under perturbations? (3) How can we tolerate perturbations during execution and maintain a load-balanced execution on HPC systems?

    To answer the first question, we reproduced the original experiments that introduced the DLS techniques to verify their present implementations. Simulation is used to reproduce experiments on systems from the past, and realistic simulation yields analysis and conclusions similar to those drawn from native results. To this end, we devised an approach for bridging the native and simulative executions of parallel applications on HPC systems; this simulation approach is used to reproduce scheduling experiments on past and present systems to verify the implementation of DLS techniques. Given the multiple levels of parallelism offered by present HPC systems, we analyzed the load imbalance in scientific applications from computer vision, astrophysics, and mathematical kernels at both the thread and process levels. This analysis revealed a significant interplay between thread-level and process-level load balancing: dynamic load balancing at the thread level propagates to the process level and vice versa, but the best application performance is only achieved by two-level dynamic load balancing. Next, we examined the performance of applications under perturbations. We found that the most robust DLS technique does not always deliver the best performance under various perturbations; the most efficient DLS technique changes with the application, the system, or the perturbations during execution. This signifies the algorithm selection problem in DLS. We leveraged realistic simulations to address this algorithm selection problem via a simulation-assisted approach (SimAS), which answers the second question: SimAS dynamically selects the DLS techniques that improve performance depending on the application, the system, and the perturbations during execution.

    To answer the third question, we introduced a robust dynamic load balancing (rDLB) approach for the robust self-scheduling of scientific applications under failures. rDLB proactively reschedules already allocated tasks and requires no detection of perturbations. It tolerates up to P − 1 processor failures (where P is the number of processors allocated to the application) and boosts the flexibility of applications against nonfatal perturbations, such as reduced availability of resources.

    This thesis is the first to provide insights into the interplay between thread- and process-level dynamic load balancing in scientific applications. The verified DLS techniques, SimAS, and rDLB are integrated into an MPI-based dynamic load balancing library (DLS4LB), which supports thirteen DLS techniques, for the robust dynamic load balancing of scientific applications on HPC systems. Using the methods devised in this thesis, we improved the performance of scientific applications by up to 21% via two-level dynamic load balancing. Under perturbations, we enhanced their performance by a factor of 7 and their flexibility by a factor of 30. This thesis opens up horizons for understanding the interplay of load balancing between various levels of software parallelism and lays the groundwork for robust multilevel scheduling for the upcoming Exascale HPC systems and beyond.
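For background on what a dynamic loop self-scheduling technique does, here is a minimal sketch of guided self-scheduling (GSS), one of the classic DLS techniques: each idle worker grabs a chunk equal to the remaining iterations divided by the number of workers, so chunks shrink as the loop drains. This is a simplified, single-process simulation of the chunk calculation, not the DLS4LB library.

```python
# Simplified simulation of guided self-scheduling (GSS):
# chunk size = ceil(remaining / P). Real DLS hands out chunks concurrently
# to idle workers; here we only trace the chunk sequence.

from math import ceil


def gss_chunks(total_iterations: int, workers: int):
    """Yield the sequence of chunk sizes GSS would hand out."""
    remaining = total_iterations
    while remaining > 0:
        chunk = max(1, ceil(remaining / workers))
        yield chunk
        remaining -= chunk


if __name__ == "__main__":
    print(list(gss_chunks(total_iterations=100, workers=4)))
    # -> [25, 19, 14, 11, 8, 6, 5, 3, 3, 2, 1, 1, 1, 1]
```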