    Grid Computing: Experiment Management, Tool Integration, and Scientific Workflows

    An Architecture to Stimulate Behavioral Development of Academic Cloud Users

    Academic cloud infrastructures are constructed and maintained so that they minimally constrain their users. Since they are free of charge and do not limit usage patterns, academics have developed behavior that jeopardizes fair and flexible resource provisioning. To improve efficiency, related work either explicitly limits user access to resources or introduces automatic rationing techniques. Surprisingly, these approaches disregard the root cause, namely the user behavior itself. This article compares academic cloud user behavior to its commercial equivalent. We deduce that academics should behave like commercial cloud users to relieve resource provisioning. To encourage commercial-like behavior, we propose an architectural extension to existing academic infrastructure clouds. First, every user's energy consumption and efficiency are monitored. Then, leaderboards based on energy efficiency are used to ignite competition among academics and reveal their worst practices. Since leaderboards alone are not sufficient to completely change user behavior, we also introduce engaging options that encourage academics to delay resource requests and to prefer resources more suitable for the infrastructure's internal provisioning. Finally, we evaluate our extensions via a simulation using real-life academic resource request traces. We show a potential resource utilization reduction by a factor of up to 2.6 while maintaining the unlimited nature of academic clouds.
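
    As a rough illustration of the leaderboard idea, the sketch below ranks users by useful work delivered per unit of energy. It is a minimal Python sketch under our own assumptions: the efficiency metric (CPU-hours per kWh) and the record fields are illustrative placeholders, not the paper's actual definitions.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    user: str
    energy_kwh: float        # energy consumed by the user's VMs (hypothetical metric)
    useful_cpu_hours: float  # CPU hours spent on actual computation (hypothetical)

def leaderboard(records: list[UsageRecord]) -> list[tuple[str, float]]:
    """Rank users by energy efficiency (useful work per kWh), best first."""
    totals: dict[str, list[float]] = {}
    for r in records:
        t = totals.setdefault(r.user, [0.0, 0.0])
        t[0] += r.energy_kwh
        t[1] += r.useful_cpu_hours
    scores = {user: (work / energy if energy > 0 else 0.0)
              for user, (energy, work) in totals.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    records = [
        UsageRecord("alice", energy_kwh=12.0, useful_cpu_hours=90.0),
        UsageRecord("bob", energy_kwh=30.0, useful_cpu_hours=95.0),
    ]
    for rank, (user, score) in enumerate(leaderboard(records), start=1):
        print(f"{rank}. {user}: {score:.2f} CPU-hours/kWh")
```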

    HPS-HDS: High Performance Scheduling for Heterogeneous Distributed Systems

    Heterogeneous Distributed Systems (HDS) are often characterized by a variety of resources that may or may not be coupled with specific platforms or environments. Cluster Computing, Grid Computing, Peer-to-Peer Computing, Cloud Computing, and Ubiquitous Computing are all systems of this type: each involves elements of heterogeneity and a large variety of tools and software to manage them. As computing and data storage needs grow exponentially in HDS, increasing the size of data centers brings important diseconomies of scale. In this context, major solutions for scalability, mobility, reliability, fault tolerance, and security are required to achieve high performance. Moreover, HDS are highly dynamic in their structure: user requests must be honored as contractual agreements (SLAs) and quality of service (QoS) must be ensured, so new algorithms for event and task scheduling and new methods for resource management should be designed to increase the performance of such systems. In this special issue, the accepted papers address advances in scheduling algorithms, energy-aware models, self-organizing resource management, data-aware service allocation, Big Data management and processing, and performance analysis and optimization.

    An Efficient Online Prediction of Host Workloads Using Pruned GRU Neural Nets

    Host load prediction is essential for dynamic resource scaling and job scheduling in a cloud computing environment. In this context, workload prediction is challenging for several reasons. First, it must be accurate to enable precise scheduling decisions. Second, it must be fast so that scheduling happens at the right time. Third, a model must be able to account for new workload patterns so that it performs well on both recent and older patterns. Failing to make an accurate and fast prediction, or being unable to predict new usage patterns, can lead to severe outcomes such as service level agreement (SLA) violations. Our research trains a fast model, based on the gated recurrent unit (GRU), with the ability to adapt online and thereby mitigate these issues. We take a multivariate approach, using several features such as memory usage, CPU usage, disk I/O usage, and disk space, to make accurate predictions. Moreover, we predict multiple steps ahead, which is essential for making scheduling decisions in advance. Furthermore, we use two pruning methods, L1-norm and random pruning, to produce a sparse model for faster forecasts. Finally, online learning is used to create a model that can adapt over time to new workload patterns.
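
    The following sketch illustrates the kind of model the abstract describes: a multivariate GRU predicting several steps ahead, sparsified with PyTorch's built-in L1-norm pruning, and updated online with single gradient steps. The hidden size, horizon, window length, and pruning amount are assumed values, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

N_FEATURES = 4  # memory, CPU, disk I/O, disk space (as in the abstract)
HORIZON = 6     # number of future steps to predict (assumed value)

class HostLoadGRU(nn.Module):
    """Multivariate, multi-step host-load predictor based on a GRU."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.gru = nn.GRU(N_FEATURES, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, HORIZON)  # e.g. next HORIZON CPU-load values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])  # (batch, HORIZON)

model = HostLoadGRU()

# L1-norm pruning: zero the 50% of input/recurrent weights with smallest magnitude
# (prune.random_unstructured would give the paper's other, random variant).
for name in ("weight_ih_l0", "weight_hh_l0"):
    prune.l1_unstructured(model.gru, name=name, amount=0.5)

# Online adaptation: one gradient step per newly observed window.
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

window = torch.randn(1, 32, N_FEATURES)  # last 32 samples of the 4 features
target = torch.randn(1, HORIZON)         # the next HORIZON load values
opt.zero_grad()
loss = loss_fn(model(window), target)
loss.backward()
opt.step()
```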

    Decentralized Machine Learning for Intelligent Health Care Systems on the Computing Continuum

    The introduction of electronic personal health records (EHR) enables nationwide information exchange and curation among different health care systems. However, current EHR systems neither provide transparent means for diagnosis support and medical research nor utilize the omnipresent data produced by personal medical devices. Besides, EHR systems are centrally orchestrated, which could potentially lead to a single point of failure. Therefore, in this article, we explore novel approaches for decentralizing machine learning over distributed ledgers to create intelligent EHR systems that can utilize information from personal medical devices for improved knowledge extraction. Consequently, we propose and evaluate a conceptual EHR that enables anonymous predictive analysis across multiple medical institutions. The evaluation results indicate that the decentralized EHR can be deployed over the computing continuum with machine learning time reduced by up to 60% and consensus latency below 8 seconds.
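
    The decentralization idea can be pictured with a federated-style training loop: each institution computes a model update on its private records, and only the updates are shared and aggregated. This is a simplified sketch under our own assumptions; the paper's distributed-ledger consensus and anonymization layers are abstracted away, and the logistic-regression task and data shapes are placeholders.

```python
import numpy as np

def local_update(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of logistic regression on an institution's private data."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def aggregate(updates: list[np.ndarray], sizes: list[int]) -> np.ndarray:
    """Size-weighted average of the institutions' model updates (FedAvg-style)."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
global_w = np.zeros(8)
# Three institutions with synthetic private datasets (placeholders).
institutions = [(rng.normal(size=(100, 8)), rng.integers(0, 2, 100).astype(float))
                for _ in range(3)]

for _round in range(20):
    updates = [local_update(global_w.copy(), X, y) for X, y in institutions]
    global_w = aggregate(updates, [len(y) for _, y in institutions])
```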

    Simulation of a workflow execution as a real Cloud by adding noise

    Cloud computing provides a cheap and elastic platform for executing large scientific workflow applications, but it raises two challenges for predicting the makespan (total execution time): the performance instability of Cloud instances and the varying schedules produced by dynamic schedulers. Estimating the makespan is necessary for IT managers to calculate the cost of execution, and Cloud simulators can be used for this purpose. However, an ideal simulated environment produces the same output for the same workflow schedule and input parameters, and thus cannot reproduce the Cloud's variable behavior. In this paper, we define a model and a methodology for adding noise to the simulation so that its behavior matches the real Cloud's. We propose several metrics that model the Cloud's fluctuating behavior and show that, once they are injected into the simulator, it behaves much closer to the real Cloud. Instead of naively using a normal distribution parameterized by the mean and standard deviation of the workflow tasks' runtimes, we inject two noise components into the tasks' runtimes: the noisiness of tasks within a workflow (defined as the average runtime deviation) and the noisiness provoked by the environment over the whole workflow (defined as the average environmental deviation). To measure the quality of the simulation, we introduce an inaccuracy parameter that quantifies the relative difference between the simulated and measured values. A series of experiments with different workflows and Cloud resources was conducted to evaluate our model and methodology. The results show that the inaccuracy of the makespan's mean value was reduced by up to a factor of 59 compared to naively using the normal distribution. Additionally, we analyse the impact of particular workflow and Cloud parameters and find that, regardless of the workflow, the Cloud's performance instability is simulated more accurately for the small instance type (inaccuracy of up to 11.5%) than for the medium one (inaccuracy of up to 35%). Since our approach requires collecting data by executing the workflow in the Cloud in order to learn its behavior, we conduct a comprehensive sensitivity analysis. We determine the minimum amount of data that must be collected, or the minimum number of test cases that must be repeated per experiment, to keep the inaccuracy of our noising parameters below 12%. The sensitivity analysis also shows that the correctness of our model is independent of the workflow's parallel section size, which reduces the number of experiments required and clarifies the model's dependence on Cloud resource and workflow parameters. Overall, we show that the inaccuracy of the naive approach can be reduced using only 40% of the executions per experiment in the learning phase (in our case, 20 executions per experiment instead of 50) and only half of all experiments, i.e., down to 20% of the total, or 120 test cases instead of 600.
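
    The two noise components can be sketched as follows: one Gaussian draw per workflow execution models the environmental deviation, and one draw per task models the within-workflow runtime deviation. This is a minimal sketch under assumed names; it sums runtimes serially, whereas the real simulator would apply the actual schedule.

```python
import random

def simulate_makespan(base_runtimes: list[float],
                      task_dev: float,  # average runtime deviation (per task)
                      env_dev: float,   # average environmental deviation (per run)
                      rng: random.Random) -> float:
    """Perturb ideal task runtimes with the two noise components."""
    env_noise = rng.gauss(0.0, env_dev)  # one draw for the whole workflow execution
    total = 0.0
    for runtime in base_runtimes:
        task_noise = rng.gauss(0.0, task_dev)  # fresh draw per task
        total += max(0.0, runtime * (1.0 + task_noise + env_noise))
    return total  # serial execution assumed for brevity

def inaccuracy(simulated: float, measured: float) -> float:
    """Relative difference between simulated and measured makespan."""
    return abs(simulated - measured) / measured
```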

    A Workload-Aware Energy Model for Virtual Machine Migration

    Energy consumption has become a significant issue for data centres. Assessing their consumption requires precise and detailed models. In recent years, many models have been proposed, but most of them either do not consider the energy consumption related to virtual machine migration or do not consider the variation of the workload on (1) the virtual machines (VMs) and (2) the physical machines hosting the VMs. In this paper, we show that omitting migration and workload variation from the models can lead to misleading consumption estimates. We then propose a new model for data centre energy consumption that takes the previously omitted parameters into account and provides accurate energy consumption predictions for paravirtualised virtual machines running on homogeneous hosts. The new model's accuracy is evaluated with a comprehensive set of operational scenarios. Using these scenarios, we present a comparative analysis of our model against similar state-of-the-art models for the energy consumption of VM migration, showing an improvement of up to 24% in prediction accuracy.
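
    A workload-aware model of this kind typically combines a utilization-dependent host power term with an explicit migration term. The sketch below uses the widely used linear power model and charges migration by transfer duration; all coefficients are placeholders, not the paper's fitted parameters.

```python
def host_power(p_idle_w: float, p_max_w: float, cpu_util: float) -> float:
    """Linear power model: idle power plus a utilization-proportional share."""
    return p_idle_w + (p_max_w - p_idle_w) * cpu_util

def migration_energy(vm_memory_mb: float, bandwidth_mbps: float,
                     migration_power_w: float = 30.0,
                     dirty_rate: float = 0.2) -> float:
    """Energy of one live migration, driven by the volume of data moved.
    dirty_rate crudely inflates the volume to account for pre-copy rounds."""
    transferred_mb = vm_memory_mb * (1.0 + dirty_rate)
    duration_s = transferred_mb * 8 / bandwidth_mbps  # megabits over Mbps -> seconds
    return migration_power_w * duration_s             # joules = watts * seconds
```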

    PIASA: A power and interference aware resource management strategy for heterogeneous workloads in cloud data centers

    Cloud data centers have been progressively adopted in different scenarios, as reflected in the execution of heterogeneous applications with diverse workloads and diverse quality of service (QoS) requirements. Virtual machine (VM) technology eases resource management in physical servers and helps cloud providers achieve goals such as optimizing energy consumption. However, the performance of an application running inside a VM is not guaranteed, due to the interference among co-hosted workloads sharing the same physical resources. Moreover, the different types of co-hosted applications with diverse QoS requirements, together with the dynamic behavior of the cloud, make efficient resource provisioning even more difficult and a challenging problem for cloud data centers. In this paper, we address the problem of resource allocation within a data center that runs different types of application workloads, particularly CPU- and network-intensive applications. To address these challenges, we propose an interference- and power-aware management mechanism that combines a performance deviation estimator and a scheduling algorithm to guide resource allocation in virtualized environments. We conduct simulations by injecting synthetic workloads whose characteristics follow the latest version of the Google Cloud tracelogs. The results indicate that our performance-enforcing strategy is able to fulfill contracted SLAs in real-world environments while reducing energy costs by as much as 21%.
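
    The combination of a performance deviation estimator and a scheduling algorithm can be pictured as a placement step that scores every candidate host by predicted interference and added power draw, then picks the cheapest. This sketch is our own simplification: the contention proxy, the linear power model, and the weighting factor stand in for the paper's actual estimator and algorithm.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_util: float        # current CPU utilization, 0..1
    net_util: float        # current network utilization, 0..1
    p_idle_w: float = 100.0  # placeholder wattage
    p_max_w: float = 250.0   # placeholder wattage

def deviation_estimate(host: Host, cpu_demand: float, net_demand: float) -> float:
    """Crude interference proxy: contention grows once combined demand
    oversubscribes a resource. Stands in for the paper's estimator."""
    return (max(0.0, host.cpu_util + cpu_demand - 1.0)
            + max(0.0, host.net_util + net_demand - 1.0))

def power_increase(host: Host, cpu_demand: float) -> float:
    """Extra power drawn if the workload is placed here (linear model)."""
    return (host.p_max_w - host.p_idle_w) * cpu_demand

def place(hosts: list[Host], cpu_demand: float, net_demand: float,
          alpha: float = 0.5) -> Host:
    """Pick the host minimizing a weighted sum of interference and power cost."""
    def cost(h: Host) -> float:
        return (alpha * deviation_estimate(h, cpu_demand, net_demand)
                + (1 - alpha) * power_increase(h, cpu_demand) / h.p_max_w)
    return min(hosts, key=cost)

hosts = [Host("h1", cpu_util=0.7, net_util=0.2),
         Host("h2", cpu_util=0.3, net_util=0.6)]
best = place(hosts, cpu_demand=0.4, net_demand=0.1)
print(best.name)
```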