2,989 research outputs found

    Rational bidding using reinforcement learning: an application in automated resource allocation

    The application of autonomous agents to the provisioning and usage of computational resources is an attractive research field. Various methods and technologies from artificial intelligence, statistics and economics come together to i) achieve autonomic provisioning and usage of computational resources, ii) devise competitive bidding strategies for widely used market mechanisms and iii) incentivize consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a framework for supporting consumers and providers in technical and economic preference elicitation and the generation of bids. Secondly, we introduce a consumer-side reinforcement learning bidding strategy which enables rational behavior in the generation and selection of bids. Thirdly, we evaluate and compare this bidding strategy against a truth-telling bidding strategy for two kinds of market mechanisms – one centralized and one decentralized.

    Q-Strategy: A Bidding Strategy for Market-Based Allocation of Grid Services

    The application of autonomous agents to the provisioning and usage of computational services is an attractive research field. Various methods and technologies from artificial intelligence, statistics and economics come together to i) achieve autonomic provisioning and usage of Grid services, ii) devise competitive bidding strategies for widely used market mechanisms and iii) incentivize consumers and providers to use such market-based systems. The contributions of the paper are threefold. First, we present a bidding agent framework for implementing artificial bidding agents, supporting consumers and providers in technical and economic preference elicitation as well as automated bid generation for the requesting and provisioning of Grid services. Secondly, we introduce a novel consumer-side bidding strategy which enables goal-oriented, strategic behavior in the generation and submission of consumer service requests and the selection of provider offers. Thirdly, we evaluate and compare the Q-strategy, implemented within the presented framework, against the Truth-Telling bidding strategy in three mechanisms – a centralized CDA, a decentralized on-line machine scheduling mechanism and a FIFO-scheduling mechanism.
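
    The Q-Strategy described in the two abstracts above is built on Q-learning. As a rough illustration of how such a consumer-side bidding strategy can work, the sketch below pairs epsilon-greedy bid selection with a one-step Q-learning update; the bid discretization, state encoding, reward definition and parameter values are illustrative assumptions, not details taken from the papers.

```python
import random
from collections import defaultdict

# A minimal sketch of a Q-learning bidding strategy in the spirit of the
# Q-Strategy above. The bid levels, state encoding, reward definition and
# parameters are illustrative assumptions, not details from the papers.

class QBidder:
    def __init__(self, bid_levels, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)    # Q[(state, bid)] -> value estimate
        self.bid_levels = bid_levels   # discretized candidate bid prices
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_bid(self, state):
        # Epsilon-greedy: occasionally explore, otherwise exploit.
        if random.random() < self.epsilon:
            return random.choice(self.bid_levels)
        return max(self.bid_levels, key=lambda b: self.q[(state, b)])

    def update(self, state, bid, reward, next_state):
        # One-step Q-learning update on the observed market outcome.
        best_next = max(self.q[(next_state, b)] for b in self.bid_levels)
        target = reward + self.gamma * best_next
        self.q[(state, bid)] += self.alpha * (target - self.q[(state, bid)])

# Usage stub: valuation 10.0; utility = valuation - price if the bid clears.
bidder = QBidder(bid_levels=[2.0, 4.0, 6.0, 8.0, 10.0])
state, clearing_price = "low_demand", 6.0
bid = bidder.choose_bid(state)
reward = (10.0 - clearing_price) if bid >= clearing_price else 0.0
bidder.update(state, bid, reward, state)
```

    In this framing, the Truth-Telling baseline from both abstracts corresponds to always bidding the valuation (10.0 here), which makes the comparison between the two strategies straightforward to set up.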

    A survey on intelligent computation offloading and pricing strategy in UAV-Enabled MEC network: Challenges and research directions

    The resource constraints of edge servers make it difficult to serve a large number of Mobile Devices’ (MDs) requests simultaneously. The Mobile Network Operator (MNO) must therefore decide how to delegate MD queries to its Mobile Edge Computing (MEC) server in order to maximize the overall benefit of admitted requests with varying latency needs. Unmanned Aerial Vehicles (UAVs) and Artificial Intelligence (AI) can increase MNO performance thanks to the flexible deployment and high mobility of UAVs and the efficiency of AI algorithms. There is a trade-off between the cost incurred by the MD and the profit received by the MNO. Intelligent computation offloading to UAV-enabled MEC, meanwhile, is a promising way to bridge the gap between MDs' limited processing resources and the high computing demands of upcoming applications. This study surveys research on the benefits of the computation offloading process in the UAV-MEC network, as well as the intelligent models used for computation offloading in such networks. In addition, this article examines several intelligent pricing techniques for different structures in the UAV-MEC network. Finally, this work highlights important open research issues and future research directions for AI in computation offloading and for applying intelligent pricing strategies in the UAV-MEC network.

    Automated Bidding in Computing Service Markets. Strategies, Architectures, Protocols

    This dissertation contributes to research on Computational Mechanism Design by providing novel theoretical and software models: a bidding strategy called Q-Strategy, which automates bidding processes in imperfect-information markets; a software framework for realizing agents and bidding strategies called BidGenerator; and a communication protocol called MX/CS for expressing and exchanging economic and technical information in a market-based scheduling system.

    Iris: Deep Reinforcement Learning Driven Shared Spectrum Access Architecture for Indoor Neutral-Host Small Cells

    We consider indoor mobile access, a vital use case for current and future mobile networks. For this key use case, we outline a vision that combines a neutral-host based shared small-cell infrastructure with a common pool of spectrum for dynamic sharing as a way forward to proliferate indoor small-cell deployments and open up the mobile operator ecosystem. Towards this vision, we focus on the challenges of managing access to shared spectrum (e.g., the 3.5 GHz US CBRS spectrum). We propose Iris, a practical shared spectrum access architecture for indoor neutral-host small cells. At the core of Iris is a deep reinforcement learning based dynamic pricing mechanism that efficiently mediates access to shared spectrum for diverse operators in a way that provides incentives for operators and the neutral-host alike. We then present the Iris system architecture, which embeds this dynamic pricing mechanism alongside cloud-RAN and RAN slicing design principles in a practical neutral-host design tailored for the indoor small-cell environment. Using a prototype implementation of the Iris system, we present extensive experimental evaluation results that not only offer insight into the Iris dynamic pricing process and its superiority over alternative approaches but also demonstrate its deployment feasibility.
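
    To make the pricing feedback loop in Iris concrete, here is a deliberately simplified stand-in. Iris itself uses a deep reinforcement learning policy; this sketch substitutes a tabular Q-learner over discrete price levels, with a made-up operator demand curve, purely to show how observed demand and revenue feed back into the next price decision.

```python
import random
from collections import defaultdict

# Toy stand-in for a dynamic spectrum pricing loop. The tabular Q-learner,
# the demand model and all constants below are illustrative assumptions;
# they are not the Iris mechanism itself.

PRICES = [1, 2, 3, 4, 5]               # discrete price levels per spectrum unit
q = defaultdict(float)                 # Q[(state, price)] -> value estimate
alpha, gamma, eps = 0.1, 0.9, 0.1
capacity = 10                          # shared spectrum units available

def demand(price):
    # Hypothetical operator demand curve: higher price, fewer requests.
    return max(0, 12 - 2 * price + random.randint(-1, 1))

state = "idle"
for step in range(1000):
    price = (random.choice(PRICES) if random.random() < eps
             else max(PRICES, key=lambda p: q[(state, p)]))
    served = min(demand(price), capacity)
    reward = price * served            # neutral-host revenue this round
    next_state = "busy" if served == capacity else "idle"
    best_next = max(q[(next_state, p)] for p in PRICES)
    q[(state, price)] += alpha * (reward + gamma * best_next - q[(state, price)])
    state = next_state
```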

    Context-aware distribution of fog applications using deep reinforcement learning

    Fog computing is an emerging paradigm that aims to meet the increasing computation demands arising from the billions of devices connected to the Internet. Offloading services of an application from the Cloud to the edge of the network can improve the overall latency of the application, since data can be processed closer to user devices. Diverse Fog nodes, ranging from Wi-Fi routers to mini-clouds with varying resource capabilities, make it challenging to determine which services of an application should be offloaded. In this paper, a context-aware mechanism for distributing applications across the Cloud and the Fog is proposed. The mechanism dynamically generates (re)deployment plans for the application to maximise its performance efficiency by taking operational conditions, such as hardware utilisation and network state, and running costs into account. The mechanism relies on deep Q-networks to generate a distribution plan without prior knowledge of the available resources on the Fog node, the network condition, or the application. The feasibility of the proposed context-aware distribution mechanism is demonstrated on two use-cases, namely a face detection application and a location-based mobile game. Compared to the static distribution approach used in existing research, dynamic distribution increases utility by 50% and 20% for the two use-cases respectively.
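
    Since the mechanism relies on deep Q-networks, a minimal sketch of that core loop is given below, written in PyTorch. The 4-dimensional context vector, the two actions (keep a service in the Cloud vs. offload it to a Fog node) and the reward are illustrative assumptions; the paper's actual state, action and reward definitions are richer.

```python
import random
import torch
import torch.nn as nn

# Minimal deep Q-network sketch for a (re)deployment decision. The context
# dimensions (e.g. CPU load, memory, bandwidth, cost) and the two actions
# (0 = keep in Cloud, 1 = offload to Fog) are assumptions for illustration.

STATE_DIM, N_ACTIONS = 4, 2
policy = nn.Sequential(nn.Linear(STATE_DIM, 32), nn.ReLU(),
                       nn.Linear(32, N_ACTIONS))
optim = torch.optim.Adam(policy.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.1

def act(state):
    # Epsilon-greedy action selection over the two deployment choices.
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return policy(state).argmax().item()

def learn(state, action, reward, next_state):
    # One-step temporal-difference update toward the Bellman target.
    with torch.no_grad():
        target = reward + gamma * policy(next_state).max()
    pred = policy(state)[action]
    loss = nn.functional.mse_loss(pred, target)
    optim.zero_grad()
    loss.backward()
    optim.step()

# One hypothetical step: context observed, service offloaded, latency improves.
s = torch.rand(STATE_DIM)
a = act(s)
learn(s, a, reward=1.0, next_state=torch.rand(STATE_DIM))
```

    A production version would add the usual DQN machinery (replay buffer, target network, batched updates); this sketch keeps only the decision and update steps the abstract describes.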

    Privacy-Aware Load Balancing in Fog Networks: A Reinforcement Learning Approach

    Fog Computing has emerged as a solution to support the growing demands of real-time Internet of Things (IoT) applications, which require high availability of these distributed services. Intelligent workload distribution algorithms are needed to maximize the utilization of Fog resources while minimizing the time required to process workloads. Such load balancing algorithms are critical in dynamic environments with heterogeneous resources and workload requirements along with unpredictable traffic demands. In this paper, load balancing is provided using a Reinforcement Learning (RL) algorithm, which optimizes system performance by minimizing the waiting delay of IoT workloads. Unlike previous studies, the proposed solution does not require load and resource information from Fog nodes, which makes the algorithm dynamically adaptable to possible environment changes over time. This also makes the algorithm aware of the privacy requirements of Fog service providers, who might wish to hide such information to prevent competing providers from calculating better pricing strategies. The proposed algorithm is interactively evaluated on a Discrete-event Simulator (DES) to mimic a practical deployment of the solution in real environments. In addition, we evaluate the algorithm's ability to generalize to simulations longer than those it was trained on, which, to the best of our knowledge, has not been explored before. The results in this paper show how the proposed approach outperforms baseline load balancing methods under different workload generation rates.
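
    A key point of the abstract above is that the balancer never queries Fog nodes for load or resource information. The sketch below captures that constraint in the simplest possible form: a bandit-style learner that ranks nodes purely by the waiting delays it observes. The node names, the exponential delay model and the incremental-mean update are assumptions for illustration, not the paper's algorithm.

```python
import random

# Privacy-aware load balancing stub: no node ever reports load or capacity;
# the balancer learns only from the waiting delays it observes. Node names
# and the simulated delay distributions are made up for illustration.

NODES = ["fog-a", "fog-b", "fog-c"]
value = {n: 0.0 for n in NODES}        # running estimate of -delay per node
counts = {n: 0 for n in NODES}
eps = 0.1

def pick_node():
    # Explore occasionally; otherwise pick the node with lowest observed delay.
    if random.random() < eps:
        return random.choice(NODES)
    return max(NODES, key=lambda n: value[n])

def record(node, delay):
    # Incremental mean update; the reward is the negative waiting delay.
    counts[node] += 1
    value[node] += (-delay - value[node]) / counts[node]

for workload in range(1000):
    node = pick_node()
    observed_delay = random.expovariate({"fog-a": 1.0, "fog-b": 2.0,
                                         "fog-c": 0.5}[node])
    record(node, observed_delay)
```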

    Cloud Provisioning and Management with Deep Reinforcement Learning

    The first web applications appeared in the early 1990s. These applications were hosted entirely in house by the companies that developed them. In the mid 2000s the concept of a digital cloud was introduced by Google's then-CEO Eric Schmidt. Today most companies at least partially host their applications on proprietary servers at data centers or on commercial clouds like Amazon Web Services (AWS) or Heroku. This arrangement seems like a straightforward win-win for both parties: the customer is relieved of the hassle of maintaining a live server for their applications, and the cloud gets the customer’s business. However, running a cloud or data center can be expensive. A large amount of electricity is used to power the blades that inhabit the racks of the data center, as well as the air-conditioning that prevents those blades and their human operators from overheating. Further complications arise if a customer hosts in multiple locations. Where should incoming jobs be allocated to? What is the minimum number of machines that can run at each location while ensuring profitability and customer satisfaction? The goal of this paper is to answer those questions using deep reinforcement learning. A collection of DRL agents will take data from an artificial cloud environment and decide where to send tasks as well as how to provision resources. These agents fall into two categories: task scheduling and resource provisioning. We compare the results of each of these agents and record how they work with each other. This paper shows the feasibility of implementing DRL solutions for task scheduling and resource provisioning problems.
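
    The paper's two agent categories can be sketched as two learners sharing an environment: one routes incoming tasks across locations, the other scales the machine count at each location. The stub below uses tabular Q-learning as a stand-in for the DRL agents; the sites, state encodings, actions and rewards are all illustrative assumptions.

```python
import random
from collections import defaultdict

# Illustrative stand-in for the paper's two DRL agent roles: a scheduler that
# routes tasks across locations and a provisioner that scales machines there.
# Tabular Q-learning replaces deep RL; sites, states and rewards are made up.

SITES = ["us-east", "eu-west"]
DELTAS = [-1, 0, 1]                    # provisioner: remove/keep/add a machine
sched_q = defaultdict(float)           # Q[(backlog_state, site)]
prov_q = defaultdict(float)            # Q[((site, machine_count), delta)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def choose(q, state, actions):
    # Epsilon-greedy over a small discrete action set.
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: q[(state, a)])

def update(q, state, action, reward, next_state, actions):
    # One-step Q-learning update shared by both agents.
    best = max(q[(next_state, a)] for a in actions)
    q[(state, action)] += alpha * (reward + gamma * best - q[(state, action)])

machines = {s: 2 for s in SITES}
for _ in range(1000):
    backlog = random.choice(["high", "low"])
    site = choose(sched_q, backlog, SITES)
    # Scheduler reward: jobs finish faster when the chosen site has capacity.
    update(sched_q, backlog, site, 1.0 if machines[site] > 1 else -1.0,
           backlog, SITES)
    prev = (site, machines[site])
    delta = choose(prov_q, prev, DELTAS)
    machines[site] = max(1, machines[site] + delta)
    # Provisioner reward: value of serving demand minus per-machine power cost.
    prov_r = (2.0 if backlog == "high" else 0.5) - 0.5 * machines[site]
    update(prov_q, prev, delta, prov_r, (site, machines[site]), DELTAS)
```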