
    TOWARDS AUTONOMIC COST-AWARE ALLOCATION OF CLOUD RESOURCES

    While clouds conceptually facilitate very fine-grained resource provisioning, information systems that can fully leverage this potential remain an open research problem. This is due to factors such as significant reconfiguration lead times and non-trivial dependencies between software and hardware resources. In this work we address these factors explicitly and introduce an accurate workload forecasting model, based on the Fourier transform and stochastic processes, paired with an adaptive provisioning framework. By automatically identifying the key characteristics of the workload process and estimating the residual variation, our model forecasts the workload process in the near future with very high accuracy. Our preliminary experimental results show great promise. When evaluated empirically on a real Wikipedia trace, our resource provisioning framework successfully uses the workload forecast module to achieve superior resource utilization efficiency while continuously satisfying service level objectives. More generally, this work corroborates the potential of holistic cloud management approaches that fuse domain-specific solutions from areas such as workload prediction, autonomic system management, and empirical analysis.
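The Fourier-based forecasting idea sketched in this abstract can be illustrated roughly as follows: keep the dominant frequency components of the observed workload and evaluate the resulting sum of sinusoids beyond the observation window. This is a minimal sketch under assumptions of my own (the function name, the top-k component selection, and the omission of the stochastic residual model are not from the paper):

```python
import numpy as np

def forecast_workload(history, horizon, k=3):
    """Forecast future workload by keeping the k dominant frequency
    components of the observed series and extrapolating them.

    Simplified sketch: ignores the stochastic residual term and assumes
    the history is long enough to resolve its periodicities.
    """
    history = np.asarray(history, dtype=float)
    n = len(history)
    spectrum = np.fft.rfft(history)
    freqs = np.fft.rfftfreq(n)  # cycles per sample
    # Indices of the k strongest periodic components (DC term excluded).
    keep = np.argsort(np.abs(spectrum[1:]))[-k:] + 1
    t = np.arange(n, n + horizon)
    # Start from the mean level, then add the retained sinusoids.
    forecast = np.full(horizon, np.abs(spectrum[0]) / n)
    for i in keep:
        amp = 2.0 * np.abs(spectrum[i]) / n  # doubling is slightly off at Nyquist
        forecast += amp * np.cos(2 * np.pi * freqs[i] * t + np.angle(spectrum[i]))
    return forecast
```

On a strongly periodic trace (e.g., a daily load cycle), this extrapolation continues the cycle past the last observation, which is the starting point the paper then refines with residual-variation estimates.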

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Edge computing is promoted as a way to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are resource-constrained to varying degrees. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current work within the field of edge computing. We then review a wide range of recent articles and categorize relevant aspects along four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions.
    Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue "Mobile Edge Computing" of the Wireless Communications and Mobile Computing journal.

    Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World

    This report documents the program and the outcomes of GI-Dagstuhl Seminar 16394, "Software Performance Engineering in the DevOps World". The seminar addressed the problem of performance-aware DevOps. Both DevOps and performance engineering have been growing trends over the past one to two years, in no small part due to the rising importance of identifying performance anomalies in the operations (Ops) of cloud and big data systems and feeding these back to development (Dev). So far, however, the research community has treated software engineering, performance engineering, and cloud computing mostly as individual research areas. We aimed to identify opportunities for cross-community collaboration and to set the path for long-lasting collaborations towards performance-aware DevOps. The main goal of the seminar was to bring together young researchers (PhD students in a later stage of their PhD, as well as postdocs or junior professors) in the areas of (i) software engineering, (ii) performance engineering, and (iii) cloud computing and big data to present their current research projects, exchange experience and expertise, discuss research challenges, and develop ideas for future collaborations.

    Cloud Workload Prediction by Means of Simulations

    Clouds hide the complexity of maintaining a physical infrastructure, but with a disadvantage: they also hide their internal workings. Should users need to know about these details, e.g., to increase the reliability or performance of their applications, they would need to detect slight behavioural changes in the underlying system. Existing solutions for such purposes offer limited capabilities. This paper proposes a technique for predicting background workload by means of simulations that provide knowledge of the underlying clouds to support activities like cloud orchestration or workflow enactment. We propose using these predictions to select more suitable execution environments for scientific workflows, and we validate the proposed prediction approach with a biochemical application.
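The underlying detection idea, inferring background load from the slowdown of observed task runtimes relative to a simulation of an unloaded cloud, can be illustrated with a minimal heuristic. The function name and the ratio-based estimate are assumptions for illustration; the paper's actual simulation-based predictor is considerably more detailed:

```python
def background_load_factor(observed_runtimes, simulated_runtimes):
    """Estimate background load as the mean slowdown of observed task
    runtimes relative to an idle-cloud simulation of the same tasks.

    A factor well above 1.0 hints at contention from background
    workload on the underlying infrastructure. (Illustrative
    heuristic, not the paper's simulator.)
    """
    ratios = [obs / sim for obs, sim in zip(observed_runtimes, simulated_runtimes)]
    return sum(ratios) / len(ratios)
```

For example, tasks that the simulation predicts at 10 s each but that actually take 11 to 13 s yield a load factor of about 1.2, a signal that a different execution environment might serve the workflow better.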

    Strategic Decision Support for Smart-Leasing Infrastructure-as-a-Service

    In this work we formulate strategic decision models describing when, and how many, reserved instances should be bought when outsourcing workload to an IaaS provider. Current IaaS providers offer various pricing options for leasing computing resources. When decision makers are faced with this choice, and most importantly with uneven workloads, the decision of when and with which type of computing resource to work is no longer trivial. We present case studies taken from the online services industry, present solution models for the various use case problems, and compare them. Following a thorough numerical analysis using both real and augmented workload traces in simulations, we found that it is cost efficient to (1) hold a balanced portfolio of resource options and (2) avoid commitments in the form of upfront payments when faced with uncertainty. Compared to a simple IaaS benchmark, this allows cutting costs by 20%.
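The core trade-off behind such reserved-versus-on-demand decisions can be sketched with a simple break-even calculation. All function names and rates below are illustrative assumptions, not the paper's actual decision models:

```python
def break_even_hours(on_demand_rate, upfront, reserved_rate):
    """Usage hours above which one reserved instance beats on-demand:
    upfront + h * reserved_rate < h * on_demand_rate
      =>  h > upfront / (on_demand_rate - reserved_rate)
    """
    return upfront / (on_demand_rate - reserved_rate)

def cheapest_plan(expected_hours, on_demand_rate, upfront, reserved_rate):
    """Pick the cheaper of pure on-demand use and a single reserved
    instance for an expected usage level (illustrative rates)."""
    on_demand_cost = expected_hours * on_demand_rate
    reserved_cost = upfront + expected_hours * reserved_rate
    if reserved_cost < on_demand_cost:
        return "reserved", reserved_cost
    return "on-demand", on_demand_cost
```

With an uneven workload, the expected hours per instance are uncertain, which is exactly why the paper finds upfront commitments unattractive under uncertainty: below the break-even point the upfront payment is sunk cost.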

    Taming Energy Costs of Large Enterprise Systems Through Adaptive Provisioning

    One of the most pressing concerns in modern datacenter management is the rising cost of operation. Reducing variable expenses such as energy cost has therefore become a top priority. However, reducing energy cost in large distributed enterprise systems remains an open research topic. These systems are commonly subjected to highly volatile workload processes and characterized by complex performance dependencies. This paper explicitly addresses this challenge and presents a novel approach to Taming Energy Costs of Large Enterprise Systems (Tecless). Our adaptive provisioning methodology combines a low-level technical perspective on distributed systems with a high-level treatment of workload processes. More concretely, Tecless fuses an empirical bottleneck detection model with a statistical workload prediction model. Our methodology forecasts the system load online, which enables on-demand infrastructure adaptation while continuously guaranteeing quality of service. In our analysis we show that predicting future workload allows adaptive provisioning with a power saving potential of up to 25 percent of the total energy cost.
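A minimal sketch of forecast-driven adaptive provisioning: size the active server pool to the predicted load plus a safety headroom (the quality-of-service margin), so that surplus machines can be powered down to save energy. The functions and the fixed-headroom policy are illustrative assumptions, not the Tecless methodology itself:

```python
import math

def servers_needed(forecast_load, capacity_per_server, headroom=0.2):
    """Minimum servers to carry a forecast load while keeping a safety
    headroom; machines beyond this count can be powered down."""
    required = forecast_load * (1 + headroom) / capacity_per_server
    return max(1, math.ceil(required))

def provisioning_plan(forecast, capacity_per_server, headroom=0.2):
    """Map a workload forecast (one value per interval) to the number
    of active servers per interval."""
    return [servers_needed(load, capacity_per_server, headroom) for load in forecast]
```

For a forecast of 100, 450, and 800 requests/s against servers handling 100 requests/s each, this yields 2, 6, and 10 active servers; the gap between the peak-sized static pool and these counts is where the energy savings come from.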