
    AWS PredSpot: Machine Learning for Predicting the Price of Spot Instances in AWS Cloud

    Elastic Cloud Compute (EC2) is one of the most well-known services provided by Amazon for provisioning cloud computing resources, also known as instances. Besides the classical on-demand scheme, where users purchase compute capacity at a fixed cost, EC2 supports so-called spot instances, which are offered under a bidding scheme through which users can save up to 90% of the cost of the on-demand instance. EC2 spot instances can be a useful alternative for attaining an important reduction in infrastructure cost, but designing bidding policies can be a difficult task, since bidding below their price will either prevent users from provisioning instances or cause them to lose those they already own. To this end, accurate forecasting of spot instance prices is of outstanding interest for designing working bidding policies. In this paper, we propose the use of different machine learning techniques to estimate the future price of EC2 spot instances. These include linear, ridge and lasso regressions, multilayer perceptrons, K-nearest neighbors, extra trees and random forests. The obtained performance varies significantly between instance types, and root mean squared errors range from values very close to zero up to values over 60 for some of the most expensive instances. Still, we can see that for most instances, forecasting performance is remarkably good, encouraging further research in this field of study.
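    As a rough illustration of the kind of pipeline the abstract describes (not the paper's actual data or feature engineering), the sketch below fits two of the listed regressors to a synthetic spot-price series using lagged prices as inputs; all series, lag counts and hyperparameters here are invented placeholders.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical hourly spot prices: slow daily cycle plus noise.
prices = 0.05 + 0.01 * np.sin(np.arange(500) / 24) + rng.normal(0, 0.002, 500)

# Lagged prices as features: predict the next price from the previous k.
k = 24
X = np.column_stack([prices[i:len(prices) - k + i] for i in range(k)])
y = prices[k:]
split = int(0.8 * len(y))
X_train, X_test, y_train, y_test = X[:split], X[split:], y[:split], y[split:]

for model in (Ridge(alpha=1.0),
              RandomForestRegressor(n_estimators=50, random_state=0)):
    model.fit(X_train, y_train)
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"{type(model).__name__}: RMSE = {rmse:.4f}")
```

    The paper evaluates several more model families (lasso, multilayer perceptrons, K-nearest neighbors, extra trees); the structure above extends to them by adding entries to the model tuple.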

    Reducing the price of resource provisioning using EC2 spot instances with prediction models

    The increasing demand for computing resources has boosted the use of cloud computing providers. This has raised a new dimension in which the connections between resource usage and costs have to be considered from an organizational perspective. As part of its EC2 service, Amazon introduced spot instances (SI) as a cheap public infrastructure, but at the price of not ensuring reliability of the service. Under the Amazon SI model, hired instances can be abruptly terminated by the service provider when necessary. The interface for managing SI is based on a bidding strategy that depends on non-public Amazon pricing strategies, which makes it complicated for users to apply any scheduling or resource provisioning strategy based on such (cheaper) resources. Although it is believed that the use of the EC2 SI infrastructure can reduce costs for final users, a thorough review of the literature concludes that its characteristics and possibilities have not yet been deeply explored. In this work we present a framework for the analysis of the EC2 SI infrastructure that uses the price history of such resources to classify the SI availability zones and then generate price prediction models adapted to each class. The proposed models are validated through a formal experimentation process. As a result, these models are applied to generate resource provisioning plans that attain the optimal price when using the SI infrastructure in a real scenario. Finally, the recent changes that Amazon has introduced in the SI model, and how this work can adapt to them, are discussed.
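    The classify-then-predict idea can be sketched as follows; the zone count, summary features and number of classes are invented for illustration and are not the paper's actual methodology.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Fake price histories for 12 availability zones (rows) over 200 hours.
histories = rng.gamma(2.0, 0.02, size=(12, 200))

# Summarize each zone by mean price, volatility, and spike frequency.
features = np.column_stack([
    histories.mean(axis=1),
    histories.std(axis=1),
    (histories > 0.1).mean(axis=1),
])

# Group zones into classes; a separate prediction model would then be
# fitted per class, as the abstract describes.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print("zone classes:", labels)
```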

    Transiency-driven Resource Management for Cloud Computing Platforms

    Modern distributed server applications are hosted on enterprise or cloud data centers that provide computing, storage, and networking capabilities to these applications. These applications are built on the implicit assumption that the underlying servers will be stable and normally available, barring occasional faults. In many emerging scenarios, however, data centers and clouds only provide transient, rather than continuous, availability of their servers. Transiency in modern distributed systems arises in many contexts, such as green data centers powered by intermittent renewable sources, and cloud platforms that provide lower-cost transient servers which can be unilaterally revoked by the cloud operator. Transient computing resources are increasingly important, and existing fault-tolerance and resource management techniques are inadequate for transient servers because applications typically assume continuous resource availability. This thesis presents research in distributed systems design that treats transiency as a first-class design principle. I show that combining transiency-specific fault-tolerance mechanisms with resource management policies suited to application characteristics and requirements can yield significant cost and performance benefits. These mechanisms and policies have been implemented and prototyped as part of software systems, which allow a wide range of applications, such as interactive services and distributed data processing, to be deployed on transient servers, and can reduce cloud computing costs by up to 90%. This thesis makes contributions to four areas of computer systems research: transiency-specific fault-tolerance, resource allocation, abstractions, and resource reclamation. To reduce the impact of transient server revocations, I develop two fault-tolerance techniques that are tailored to transient server characteristics and application requirements.
    For interactive applications, I build a derivative cloud platform that masks revocations by transparently moving application state between servers of different types. Similarly, for distributed data processing applications, I investigate the use of application-level periodic checkpointing to reduce the performance impact of server revocations. For managing and reducing the risk of server revocations, I investigate the use of server portfolios that allow transient resource allocation to be tailored to application requirements. Finally, I investigate how resource providers (such as cloud platforms) can provide transient resource availability without revocation, by looking into alternative resource reclamation techniques. I develop resource deflation, wherein a server's resources are fractionally reclaimed, allowing the application to continue execution, albeit with fewer resources. Resource deflation generalizes revocation, and the deflation mechanisms and cluster-wide policies can yield both high cluster utilization and low application performance degradation.
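    The abstract does not state how checkpoint intervals are chosen; a common first-order heuristic for periodic checkpointing under random failures (or, here, revocations) is Young's approximation, sketched below with hypothetical numbers.

```python
import math

def young_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order approximation of the optimal checkpoint
    interval: sqrt(2 * C * MTBF), where C is the time to take one
    checkpoint and MTBF the mean time between revocations."""
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

# Hypothetical numbers: a 30 s checkpoint and a revocation every 6 hours.
interval = young_interval(30, 6 * 3600)
print(f"checkpoint every {interval / 60:.1f} minutes")
```

    The intuition: checkpointing too often wastes time on checkpoints, too rarely loses more work per revocation; the square-root balances the two costs.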

    Notes on Cloud computing principles

    This letter provides a review of fundamental distributed systems and economic cloud computing principles. These principles are frequently deployed in their respective fields, but their inter-dependencies are often neglected. Given that cloud computing is first and foremost a new business model, a new model to sell computational resources, the understanding of these concepts is facilitated by treating them in unison. Here, we review some of the most important concepts and how they relate to each other.

    Improving Resource Efficiency in Cloud Computing

    Customers in the cloud computing market are heterogeneous in several aspects, e.g., willingness to pay and performance requirements. By taking advantage of the trade-offs created by these heterogeneities, the service provider can realize a more efficient system. This thesis is concerned with methods to improve the utilization of cloud infrastructure resources, and with the role of pricing in realizing those improvements and leveraging heterogeneity. Towards improving utilization, we explore methods to optimize network usage through traffic engineering. In particular, we introduce a novel optimization framework to decrease the bandwidth required by inter-data center networks through traffic scheduling and shaping, and then propose algorithms to improve network utilization based on the analytical results derived from the optimization. When considering pricing, we focus on elucidating the conditions under which providing a mix of services can increase a service provider's revenue. Specifically, we characterize the conditions under which providing a "delayed" service can result in higher revenue for the service provider, and then offer guidelines for both users and providers.

    Some Essays on models in the Bond and Energy Markets

    The term structure of interest rates plays a fundamental role as an indicator of economy and market trends, as well as a supporting tool for macroeconomic strategies, investment choices or hedging practices. Therefore, the availability of proper techniques to model and predict its dynamics is of crucial importance for players in the financial markets. Along this path, the dissertation initially examined the reliability of parametric and neural network models to fit and predict the term structure of interest rates in emerging markets, focusing on the Brazilian, Russian, Indian, Chinese and South African (BRICS) bond markets. The focus on the BRICS is straightforward: the dynamics of their term structures make the application of consolidated yield curve models tricky. In this respect, BRICS yield curves act as stress testers. The study then examined how to apply the above-cited models to energy derivatives, focusing attention on Natural Gas and Electricity futures, motivated by the similarities between these markets. The research was carried out using ad hoc routines, such as the R package "DeRezende.Ferreira", developed by the candidate and now freely downloadable from the Comprehensive R Archive Network (CRAN) repository*, as well as by means of code written in MatLab 2021a - 2022a and Python (3.10.10) using the open-source Keras (2.4.3) library with TensorFlow (2.4.0) as backend. The dissertation consists of four chapters based on published and/or under-submission materials. Chapter 1 is an excerpt of the paper • Castello, O.; Resta, M. Modeling the Yield Curve of BRICS Countries: Parametric vs. Machine Learning Techniques. Risks 2022. The work first offers a comprehensive analysis of the BRICS bond market and then investigates and compares the abilities of the parametric Five–Factor De Rezende–Ferreira model and Feed–Forward Neural Networks to fit the yield curves.
    Chapter 2 is again focused on the BRICS market but investigates a methodology to identify optimal time–varying parameters for parametric yield curve models. The work then investigates the ability of this method for both in–sample fitting and out–of–sample prediction. Various forecasting methods are examined: the Univariate Autoregressive process AR(1), TBATS, and the Autoregressive Integrated Moving Average (ARIMA) combined with Nonlinear Autoregressive Neural Networks (NAR–NNs). Chapter 3 studies the term structure dynamics in the Natural Gas futures market. This chapter represents an extension of the paper • Castello, O., Resta, M. (2022). Modeling and Forecasting Natural Gas Futures Prices Dynamics: An Integrated Approach. In: Corazza, M., Perna, C., Pizzi, C., Sibillo, M. (eds) Mathematical and Statistical Methods for Actuarial Sciences and Finance. MAF 2022. After showing that the natural gas and bond markets share similar stylized facts, we exploit these findings to examine whether techniques conventionally employed in the bond market can also be used effectively for accurate in–sample fitting and out–of–sample forecasting. We first worked in–sample and compared the performance of three models: the Four–Factor Dynamic Nelson–Siegel–Svensson (4F–DNSS), the Five–Factor Dynamic De Rezende–Ferreira (5F–DRF) and the B–Spline. Then, we turned our attention to forecasting and explored the effectiveness of a hybrid methodology relying on the joint use of 4F–DNSS, 5F–DRF and B–Splines with Nonlinear Autoregressive Neural Networks (NAR–NNs). The empirical study was carried out using Dutch Title Transfer Facility (TTF) daily futures prices in the period from January 2011 to June 2022, which also included recent market turmoil, to validate the overall effectiveness of the framework. Chapter 4 analyzes the predictability of the electricity futures price term structure with Artificial Neural Networks.
    Price time series and futures curves are characterized by high volatility, which is a direct consequence of inelastic demand and of the non–storable nature of the underlying commodity. We analyzed the forecasting power of several neural network models, including Nonlinear Autoregressive (NAR–NNs), NAR with Exogenous Inputs (NARX–NNs), Long Short–Term Memory (LSTM–NNs) and Encoder–Decoder Long Short–Term Memory Neural Networks (ED–LSTM–NNs). We carried out an extensive study of the models' predictive capabilities in both univariate and multivariate settings. Additionally, we explored whether incorporating various exogenous components, such as Carbon Emission Certificate (CO2) spot prices as well as Natural Gas and Coal futures prices, can lead to improvements in the models' performance. Data from the European Energy Exchange (EEX) power market were adopted to test the models. Chapter 4 concludes. ____________________________ * https://cran.r-project.org/web/packages/DeRezende.Ferreira/index.htm
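    To make the parametric-fit step concrete, here is a sketch using the classic three-factor Nelson–Siegel curve (the chapters use richer four- and five-factor variants); the maturities and yields are synthetic, not BRICS or TTF data.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Three-factor Nelson-Siegel yield at maturity tau (years):
    level (beta0), slope (beta1) and curvature (beta2) loadings
    governed by the decay parameter lam."""
    x = tau / lam
    slope = (1 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30])
true = nelson_siegel(maturities, 0.045, -0.02, 0.01, 2.0)
yields = true + np.random.default_rng(1).normal(0, 1e-4, true.shape)

# Nonlinear least-squares fit of the four parameters to observed yields.
params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[0.04, -0.01, 0.0, 1.5], maxfev=10000)
print("fitted (beta0, beta1, beta2, lambda):", params)
```

    In the hybrid methodology the abstract describes, the fitted factor series (rather than raw prices) would then feed a neural network forecaster.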

    Growth through servitization: drivers, enablers, processes and impact (SSC2014)


    Financial aspects of cloud computing business models

    The purpose of the study was to explore financial aspects of cloud computing business models from the information technology (IT) services provider's perspective. The financial aspects were divided into the revenue model and related pricing mechanisms, and the cost structure and related cost accounting mechanisms, according to business model ontology. Cloud computing is a new computing paradigm and the latest megatrend in the IT industry, developed as a result of the convergence of numerous new and existing technologies. It is characterized by the provision of rapidly scalable and measurable IT capabilities as a service, on an on-demand and self-service basis, over the network, from a common resource pool. The study was carried out as a single case study in a global company offering IT services to large enterprises and public organizations and currently preparing to introduce its own cloud services. Ten semi-structured interviews were conducted with managers of the case company to explore the financial aspects of cloud services. Qualitative data analysis was employed for processing and summarizing the findings. The findings of the study suggested that each cloud service should have a distinct business model. The business model is a mediating construct that translates the new technology into the service's value proposition. The business model also defines the appropriate pricing and cost accounting mechanisms for a service. The business models are based on the services provider's position in the cloud computing value chain. A cloud computing business logic framework was created to illustrate the interaction between the value chain, business models and their elements. The key cost types of services do not necessarily change much with cloud computing. Cloud computing still has the potential to significantly reduce the services provider's costs through reengineering of the production architecture. A cloud computing cost accounting model was created to illustrate how production costs should be aggregated and distributed.
    Pricing of services changes with cloud computing, and pay-per-use and subscription-based pricing mechanisms are the most typical for cloud services. Pricing should be based on the customer's perceived value instead of the production costs of services. A generic cloud computing pricing mechanism that combines pay-per-use and subscription mechanisms was created to better balance risk sharing between the services provider and the customer. The main contributions of the study were the establishment of a services-provider focus in the cloud computing literature and the discussion of financial aspects of cloud computing.
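    The abstract does not detail the combined mechanism's exact form; one minimal interpretation of a subscription tariff with pay-per-use overage, with invented parameter names and numbers, is:

```python
def hybrid_charge(usage_units: float, base_fee: float,
                  included_units: float, unit_price: float) -> float:
    """Hypothetical hybrid tariff: a flat subscription fee covers a usage
    allowance; consumption beyond the allowance is billed per unit."""
    overage = max(0.0, usage_units - included_units)
    return base_fee + overage * unit_price

# 1200 units against a 1000-unit allowance: 100 + 200 * 0.05 = 110.0
print(hybrid_charge(1200, base_fee=100.0, included_units=1000, unit_price=0.05))
```

    Such a tariff shares risk in both directions: the provider is guaranteed the base fee, while the customer's marginal cost stays bounded by the per-unit rate.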

    Innovation in construction techniques for tall buildings

    The skylines of many ‘world cities’ are defined and punctuated by tall buildings. The drivers for such dominant skylines range from land scarcity and social needs, high real estate values, and commercial opportunity and corporate demand, through to metropolitan signposting. This fascination with tall buildings started with the patrician families who created the 11th-century skyline of San Gimignano by building seventy tower-houses (some up to 50 m tall) as symbols of their wealth and power. This was most famously followed in the late 19th century by the Manhattan skyline, then Dubai building the world’s tallest building, then China completing some eighty tall buildings in the last 5 years, then the UK building Europe’s tallest tower, the Shard, and finally back to Dubai, planning a kilometre-tall tower, potentially realising Ludwig Mies van der Rohe’s ‘Impossible Dream’ of the 1920s and Frank Lloyd Wright’s 1956 ‘Mile High Illinois’. This ambition to build higher and higher continues to challenge the architects, engineers and builders of tall buildings and is expected to continue into the future. The tall building format is clearly here to stay. [Continues.]