
    Demand response approaches in a research project versus a real business

    Demand response through Demand Aggregation is part of the energy transition towards a green and distributed system. Although the market is open in most European countries, its practical implementation has not yet been very successful. In the last decade, research has presented different options for dealing with demand response and aggregation. This paper compares the benefits and limitations of strategies implemented from a research perspective with the strategy followed by a recently created company, to identify which advances in research are currently useful from a business perspective. The study presents a novel decision matrix to evaluate demand response strategies. Results show that there are technical limitations in current Energy Management Systems that need to be taken into account when developing demand aggregation platforms. In addition, the study highlights the importance of proposing a simple and scalable solution that allows consumers to participate actively in electricity markets and supports a successful business model. This research has been supported by the Horizon 2020 research and innovation programme of the European Union under grant agreement no. 731211 SABINA. C. Corchero's work is supported by grant IJCI-2015-26650 (MICINN). All researchers have been partially supported by the Generalitat de Catalunya, Spain (2017 SGR 1219). L. Canals Casals thanks the national project IAQ4EDU (PID2020-117366RB-100) for the opportunity to continue his work in this field.
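
    A minimal sketch of the kind of weighted decision matrix the abstract describes, assuming hypothetical criteria, weights, and scores; none of these values or strategy names come from the paper:

```python
# Hypothetical weighted decision matrix for comparing demand response strategies.
# Criteria, weights, and 1-5 scores are illustrative placeholders, not values from the paper.

criteria = {"scalability": 0.3, "consumer_simplicity": 0.3,
            "EMS_compatibility": 0.2, "revenue_potential": 0.2}

strategies = {
    "research_prototype": {"scalability": 3, "consumer_simplicity": 2,
                           "EMS_compatibility": 2, "revenue_potential": 4},
    "business_platform":  {"scalability": 4, "consumer_simplicity": 5,
                           "EMS_compatibility": 4, "revenue_potential": 3},
}

def weighted_score(scores: dict, weights: dict) -> float:
    """Sum of each criterion score multiplied by its weight."""
    return sum(scores[c] * w for c, w in weights.items())

for name, scores in strategies.items():
    print(f"{name}: {weighted_score(scores, criteria):.2f}")
```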

    Pricing the Cloud: An Auction Approach

    Cloud computing has changed the processing and service modes of information and communication technology and has affected the transformation, upgrading, and innovation of IT-related industry systems. The rapid development of cloud computing in business practice has spawned a whole new interdisciplinary field, providing opportunities and challenges for business management research. One of the critical factors affecting cloud computing is how to price cloud services. An appropriate pricing strategy has important practical implications for stakeholders, especially providers and customers. This study addresses and discusses research findings on cloud computing pricing strategies, such as fixed pricing, bidding pricing, and dynamic pricing. Another key factor for cloud computing is Quality of Service (QoS), including availability, reliability, latency, security, throughput, capacity, scalability, and elasticity. Cloud providers seek to improve QoS to attract more potential customers, while customers look for services that match their QoS requirements without exceeding their budget constraints. Building on existing studies, a hybrid QoS-based pricing mechanism, consisting of a subscription component and a dynamic auction design, is proposed and illustrated for cloud services. The results indicate that our hybrid pricing mechanism has the potential to allocate available cloud resources more effectively, increasing revenues for providers and reducing expenses for customers in practice.
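
    A minimal sketch of a hybrid mechanism in the spirit the abstract describes: subscribers pay a fixed rate, and leftover capacity is sold through a sealed-bid uniform-price (second-price style) auction. All prices, capacities, bids, and the specific auction format are illustrative assumptions rather than the paper's exact design:

```python
# Hypothetical hybrid pricing: fixed-rate subscriptions plus an auction for leftover capacity.
# Numbers and the uniform second-price format are illustrative assumptions.

SUBSCRIPTION_PRICE = 0.05   # $ per instance-hour for subscribed customers
TOTAL_CAPACITY = 100        # instance-hours available in this period
subscribed_demand = 98      # instance-hours already committed to subscribers

def run_auction(bids, capacity):
    """Allocate `capacity` single-unit bids to the highest bidders; every winner
    pays the highest rejected bid (uniform-price second-price auction)."""
    ranked = sorted(bids, key=lambda b: b[1], reverse=True)   # (customer, bid) pairs
    winners, losers = ranked[:capacity], ranked[capacity:]
    clearing_price = losers[0][1] if losers else 0.0          # highest losing bid
    return [(customer, clearing_price) for customer, _ in winners]

spot_capacity = TOTAL_CAPACITY - subscribed_demand
bids = [("c1", 0.09), ("c2", 0.04), ("c3", 0.07), ("c4", 0.03)]
print("subscription revenue:", SUBSCRIPTION_PRICE * subscribed_demand)
print("auction allocations:", run_auction(bids, spot_capacity))
```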

    Transiency-driven Resource Management for Cloud Computing Platforms

    Modern distributed server applications are hosted on enterprise or cloud data centers that provide computing, storage, and networking capabilities to these applications. These applications are built on the implicit assumption that the underlying servers will be stable and normally available, barring occasional faults. In many emerging scenarios, however, data centers and clouds only provide transient, rather than continuous, availability of their servers. Transiency in modern distributed systems arises in many contexts, such as green data centers powered by intermittent renewable sources, and cloud platforms that provide lower-cost transient servers which can be unilaterally revoked by the cloud operator. Transient computing resources are increasingly important, and existing fault-tolerance and resource management techniques are inadequate for transient servers because applications typically assume continuous resource availability. This thesis presents research in distributed systems design that treats transiency as a first-class design principle. I show that combining transiency-specific fault-tolerance mechanisms with resource management policies suited to application characteristics and requirements can yield significant cost and performance benefits. These mechanisms and policies have been implemented and prototyped as part of software systems, which allow a wide range of applications, such as interactive services and distributed data processing, to be deployed on transient servers, and can reduce cloud computing costs by up to 90%. This thesis makes contributions to four areas of computer systems research: transiency-specific fault-tolerance, resource allocation, abstractions, and resource reclamation. For reducing the impact of transient server revocations, I develop two fault-tolerance techniques that are tailored to transient server characteristics and application requirements. For interactive applications, I build a derivative cloud platform that masks revocations by transparently moving application state between servers of different types. Similarly, for distributed data processing applications, I investigate the use of application-level periodic checkpointing to reduce the performance impact of server revocations. For managing and reducing the risk of server revocations, I investigate the use of server portfolios that allow transient resource allocation to be tailored to application requirements. Finally, I investigate how resource providers (such as cloud platforms) can provide transient resource availability without revocation, by looking into alternative resource reclamation techniques. I develop resource deflation, wherein a server's resources are fractionally reclaimed, allowing the application to continue execution, albeit with fewer resources. Resource deflation generalizes revocation, and the deflation mechanisms and cluster-wide policies can yield both high cluster utilization and low application performance degradation.
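
    The periodic-checkpointing idea can be made concrete with the classic Young/Daly approximation for the checkpoint interval. The checkpoint cost and mean time between revocations below are made-up numbers, and the formula is the textbook approximation rather than the specific policy developed in the thesis:

```python
# Young/Daly approximation: the checkpoint interval that roughly minimizes expected
# lost work is sqrt(2 * checkpoint_cost * mean_time_between_failures).
# The numbers below are illustrative, not measurements from the thesis.
import math

checkpoint_cost_s = 30                        # seconds to write one checkpoint
mean_time_between_revocations_s = 2 * 3600    # transient server revoked every ~2 hours on average

interval_s = math.sqrt(2 * checkpoint_cost_s * mean_time_between_revocations_s)

# Rough overhead estimate: checkpointing cost plus expected recomputation after a revocation.
overhead = checkpoint_cost_s / interval_s + interval_s / (2 * mean_time_between_revocations_s)
print(f"checkpoint every ~{interval_s / 60:.1f} min, expected overhead ~{overhead:.1%}")
```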

    Decision-making under uncertainty in short-term electricity markets

    In the course of the energy transition, the share of electricity generation from renewable energy sources in Germany has increased significantly in recent years and will continue to rise. Fluctuating renewables like wind and solar in particular bring more uncertainty and volatility to the electricity system. As markets determine the unit commitment in systems with self-dispatch, many changes have been made to the design of electricity markets to meet the new challenges, and a trend towards real-time trading can be observed. Short-term electricity markets are becoming more important and are seen as suitable for efficient resource allocation. It is therefore essential for market participants to develop strategies for trading electricity and flexibility in these segments. The research conducted in this thesis aims to enable better decisions in short-term electricity markets. To achieve this, a multitude of quantitative methods is developed and applied: (a) forecasting methods based on econometrics and machine learning, (b) methods for stochastic modeling of time series, (c) scenario generation and reduction methods, and (d) stochastic programming methods. Most significantly, two- and three-stage stochastic optimization problems are formulated to derive optimal trading decisions and unit commitment in the context of short-term electricity markets. The problem formulations adequately account for the sequential structure, the characteristics, and the technical requirements of the different market segments, as well as the available information regarding uncertain generation volumes and prices. The thesis contains three case studies focusing on the German electricity markets. The results confirm that, based on appropriate representations of the uncertainty of market prices and renewable generation, the optimization approaches yield sound trading strategies across multiple revenue streams, with which market participants can effectively balance the inevitable trade-off between expected profit and associated risk. By considering coherent risk metrics and flexibly adaptable risk attitudes, the trading strategies substantially reduce risk with only moderate losses in expected profit. These results are significant, as improving the trading decisions that determine the allocation of resources in the electricity system plays a key role in coping with the uncertainty from renewables and hence contributes to the ultimate success of the energy transition.
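
    A minimal, hypothetical illustration of the expected-profit versus risk trade-off behind such trading strategies: a day-ahead commitment is chosen by grid search over a few made-up wind and imbalance-price scenarios, blending expected profit with a CVaR-style tail average. The prices, scenarios, and single-product setting are assumptions for illustration, not the models or data used in the thesis:

```python
# Toy risk-averse day-ahead commitment under joint wind/price uncertainty.
# All prices, scenarios, and the single-market setting are illustrative assumptions.

P_DA = 50.0                     # day-ahead price in EUR/MWh, known when committing
scenarios = [                   # (probability, wind output in MWh, imbalance price in EUR/MWh)
    (0.3, 80, 90.0),            # low wind tends to coincide with high imbalance prices
    (0.4, 100, 50.0),
    (0.3, 120, 20.0),
]
ALPHA, BETA = 0.3, 0.5          # CVaR tail mass and risk-aversion weight

def profit(q, wind, p_imb):
    """Sell q MWh day-ahead; the deviation (wind - q) is settled at the imbalance price."""
    return P_DA * q + p_imb * (wind - q)

def expected_and_cvar(q):
    profs = sorted((profit(q, w, p), pr) for pr, w, p in scenarios)   # worst outcomes first
    exp = sum(pi * pr for pi, pr in profs)
    tail, mass = 0.0, 0.0
    for pi, pr in profs:        # average over the worst ALPHA probability mass
        take = min(pr, ALPHA - mass)
        tail, mass = tail + pi * take, mass + take
        if mass >= ALPHA:
            break
    return exp, tail / ALPHA

def objective(q):
    exp, cvar = expected_and_cvar(q)
    return (1 - BETA) * exp + BETA * cvar

best = max(range(40, 121, 10), key=objective)
exp, cvar = expected_and_cvar(best)
print(f"commit {best} MWh day-ahead -> E[profit]={exp:.0f} EUR, CVaR={cvar:.0f} EUR")
```

    With BETA set to zero the search picks the risk-neutral commitment; increasing BETA shifts the commitment towards the level that protects the low-wind, high-price tail, which is the trade-off the abstract refers to.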

    System Support for Managing Risk in Cloud Computing Platforms

    Cloud platforms sell computing to applications for a price. However, by precisely defining and controlling the service-level characteristics of cloud servers, they expose applications to a number of implicit risks throughout the application's lifecycle. For example, a user's request for a server may be denied, leading to rejection risk; an allocated resource may be withdrawn, resulting in revocation risk; an acquired cloud server's price may rise relative to others, causing price risk; and a cloud server's performance may vary due to external factors, triggering valuation risk. Though these risks are implicit, the costs they impose on applications are not. While some risks exist in all Infrastructure-as-a-Service offerings, they are most pronounced in an emerging category called transient cloud servers. Since transient servers are carved out of instantaneous idle cloud capacity, they exhibit two distinct features: (i) revocations that are intentional, frequent, and come with advance warning, and (ii) prices that are low on average but vary across time and location. Thus, despite enabling inexpensive access to at-scale computing, transient cloud servers expose applications to risks at a scale unseen in past platforms. Unfortunately, the current generation of system software is not designed to handle these risks, which in turn results in inconsistent performance, unexpected failures, missed savings, and slower adoption. In this dissertation, we elevate risk management to a first-class system design principle. Our goal is to identify the risks, quantify their costs, and explicitly manage them for applications deployed on cloud platforms. Towards that goal, we adapt and extend concepts from finance and economics to propose a new system design approach called financializing cloud computing. By treating cloud resources as investments, and by quantifying the cost of their risks, financialization enables system software to manage risk-reward trade-offs explicitly and autonomously. We demonstrate the utility of our approach via four contributions: (i) mitigating revocation risk with insurance policies, (ii) reducing price risk through active trading, (iii) eliminating uncertainty risk by index tracking, and (iv) minimizing a server's valuation risk via asset pricing. We conclude by observing that diversity and asymmetry in the creation and consumption of cloud compute resources are on the rise, and that financialization can be effectively employed to manage the resulting complexity and risks.
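
    The idea of treating revocation risk as an insurable event can be sketched with a back-of-the-envelope premium calculation. The revocation rate, prices, and simple loss model below are hypothetical and are not the dissertation's actual policy:

```python
# Back-of-the-envelope premium for insuring a transient server against revocation.
# Rates, prices, and the loss model are illustrative assumptions, not the dissertation's policy.
import math

ON_DEMAND_PRICE = 0.10      # $ per hour for a stable on-demand server
TRANSIENT_PRICE = 0.03      # $ per hour for the equivalent transient server
REVOCATION_RATE = 1 / 24    # expected revocations per hour (memoryless assumption)
JOB_HOURS = 6               # running time of the insured job
LOST_WORK_FRACTION = 0.5    # on revocation, assume half the job must be redone on demand

p_revoked = 1 - math.exp(-REVOCATION_RATE * JOB_HOURS)        # P(at least one revocation)
expected_loss = p_revoked * LOST_WORK_FRACTION * JOB_HOURS * ON_DEMAND_PRICE
fair_premium = expected_loss                                   # actuarially fair, no loading

insured_cost = TRANSIENT_PRICE * JOB_HOURS + fair_premium
print(f"P(revocation)={p_revoked:.0%}, fair premium=${fair_premium:.3f}, "
      f"insured transient cost=${insured_cost:.2f} vs on-demand ${ON_DEMAND_PRICE * JOB_HOURS:.2f}")
```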

    Sanchari: Moving Up the Value Chain Through Telecommunication Services

    The year 2009 was a critical year in the development of Sanchari, a state-owned telecommunication infrastructure (TI) service provider in India. Over the past few years, Sanchari had successfully developed and delivered on-demand infrastructure services to customers in the state of Karnataka, India. Sanchari's management team wanted to move the business up the value chain to take advantage of the rapidly growing telecommunication industry in India. In mid-2009, Sanchari was approached by the state government of Karnataka to lead the development of a state-wide area network (SWAN) under the e-government initiative. This e-government project could give Sanchari an opportunity to move up the value chain. Sanchari needed to decide whether to take sole responsibility for the project as the government's agent or to form a partnership with a private company to execute it. This decision, however, would depend on whether Sanchari wanted to develop into an infrastructure or software service provider or maintain its status quo as a TI service provider in the long term. The teaching case presents a challenging decision-making situation for students and urges them to analyze the benefits and risks of moving up the telecommunication value chain.

    HEPCloud, a New Paradigm for HEP Facilities: CMS Amazon Web Services Investigation

    Historically, high energy physics computing has been performed on large purpose-built computing systems. These began as single-site compute facilities, but have evolved into the distributed computing grids used today. Recently, there has been an exponential increase in the capacity and capability of commercial clouds. Cloud resources are highly virtualized and intended to be flexibly deployable for a variety of computing tasks. There is growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this paper, we discuss results from the CMS experiment using the Fermilab HEPCloud facility, which utilized both local Fermilab resources and virtual machines in the Amazon Web Services Elastic Compute Cloud. We discuss the planning, technical challenges, and lessons learned in performing physics workflows on a large-scale set of virtualized resources. In addition, we discuss the economics and operational efficiencies of executing workflows both in the cloud and on dedicated resources.