
    Modeling Profit of Sliced 5G Networks for Advanced Network Resource Management and Slice Implementation

    The core innovation in future 5G cellular networks, network slicing, aims at providing a flexible and efficient framework for network organization and resource management. The revolutionary slice-based network architecture makes most current network cost models obsolete, as they estimate expenditures in a static manner. In this paper, a novel methodology is proposed in which a value chain in sliced networks is presented. Based on the proposed value chain, the profits generated by different slices are analyzed, and the task of network resource management is modeled as a multi-objective optimization problem. Under strong assumptions, this optimization problem is first analyzed in a simple, ideal scenario. By removing the assumptions step by step, realistic but more complex use cases are approached. Through this progressive analysis, technical challenges in slice implementation and network optimization are investigated under different scenarios. For each challenge, potentially available solutions are suggested and likely applications are discussed.
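    As a minimal illustration of the kind of formulation the abstract describes (not the paper's actual model), the sketch below scalarizes a multi-objective slice-profit problem with fixed weights and solves it under a shared capacity constraint; the profit shape, cost parameters, and weights are all assumed for the example.

        # Hypothetical multi-objective slice-profit allocation, scalarized by weights.
        import numpy as np
        from scipy.optimize import minimize

        CAPACITY = 100.0                            # total shared resource budget (assumed)
        revenue_gain = np.array([8.0, 5.0, 3.0])    # assumed per-slice revenue scales
        unit_cost = np.array([0.05, 0.03, 0.02])    # assumed per-unit resource costs
        weights = np.array([0.5, 0.3, 0.2])         # scalarization weights (one Pareto point)

        def slice_profit(r):
            # Concave revenue minus linear cost, per slice (an assumed profit shape).
            return revenue_gain * np.log1p(r) - unit_cost * r

        def neg_weighted_profit(r):
            # Weighted-sum scalarization of the multi-objective problem (minimized).
            return -np.dot(weights, slice_profit(r))

        res = minimize(
            neg_weighted_profit,
            x0=np.full(3, CAPACITY / 3),                 # start from an even split
            bounds=[(0.0, CAPACITY)] * 3,
            constraints=[{"type": "ineq",                # sum of allocations <= capacity
                          "fun": lambda r: CAPACITY - r.sum()}],
        )
        print("allocation:", res.x.round(2), "profit per slice:", slice_profit(res.x).round(2))

    Sweeping the weights over the simplex would trace different trade-offs between slices, which is one simple way to explore the Pareto front of such a problem.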

    Deep Reinforcement Learning from Self-Play in Imperfect-Information Games

    Many real-world applications can be described as large-scale games of imperfect information. To deal with these challenging domains, prior work has focused on computing Nash equilibria in a handcrafted abstraction of the domain. In this paper we introduce the first scalable end-to-end approach to learning approximate Nash equilibria without prior domain knowledge. Our method combines fictitious self-play with deep reinforcement learning. When applied to Leduc poker, Neural Fictitious Self-Play (NFSP) approached a Nash equilibrium, whereas common reinforcement learning methods diverged. In Limit Texas Hold'em, a poker game of real-world scale, NFSP learnt a strategy that approached the performance of state-of-the-art, superhuman algorithms based on significant domain expertise.
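    The sketch below is a schematic, much simplified Python rendering of the agent structure this abstract describes: one network learns an approximate best response with Q-learning, a second network imitates the agent's own best-response behaviour to represent the average strategy, and actions are drawn from a mixture of the two controlled by an anticipatory parameter. Layer sizes, buffer sizes and hyperparameters are assumptions, and the training loops are omitted.

        # Schematic NFSP-style agent: best-response (RL) network + average-policy (SL) network.
        import random
        from collections import deque

        import numpy as np
        import torch
        import torch.nn as nn

        class MLP(nn.Module):
            def __init__(self, n_obs, n_actions):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(),
                                         nn.Linear(64, n_actions))

            def forward(self, x):
                return self.net(x)

        class NFSPAgent:
            def __init__(self, n_obs, n_actions, eta=0.1, eps=0.1):
                self.q_net = MLP(n_obs, n_actions)      # approximate best response (trained by RL)
                self.avg_net = MLP(n_obs, n_actions)    # average policy (trained by supervised learning)
                self.rl_memory = deque(maxlen=200_000)  # circular buffer of transitions for RL
                self.sl_memory = []                     # reservoir of (obs, action) pairs for SL
                self.eta, self.eps, self.n_actions = eta, eps, n_actions

            def act(self, obs):
                obs_t = torch.as_tensor(obs, dtype=torch.float32)
                if random.random() < self.eta:
                    # Play an epsilon-greedy best response and record it for the average policy.
                    if random.random() < self.eps:
                        action = random.randrange(self.n_actions)
                    else:
                        action = int(self.q_net(obs_t).argmax())
                    self.sl_memory.append((obs, action))
                else:
                    # Play the average policy: sample from the supervised network's distribution.
                    probs = torch.softmax(self.avg_net(obs_t), dim=-1).detach().numpy()
                    probs = probs / probs.sum()          # guard against float32 rounding
                    action = int(np.random.choice(self.n_actions, p=probs))
                return action

    In a full training loop, transitions would be pushed into rl_memory for Q-learning updates, and the (obs, action) pairs in sl_memory would be fit with a cross-entropy loss, which is what drives the average policy toward the approximate Nash strategy.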

    Ms Pac-Man versus Ghost Team CEC 2011 competition

    Games provide an ideal test bed for computational intelligence, and significant progress has been made in recent years, most notably in games such as Go, where the level of play is now competitive with expert human play on smaller boards. Recently, a significantly more complex class of games has received increasing attention: real-time video games. These games pose many new challenges, including strict time constraints, simultaneous moves and open-endedness. Unlike in traditional board games, computational play is generally unable to compete with human players. One driving force in improving the overall performance of artificial intelligence players is game competitions, where practitioners may evaluate and compare their methods against those submitted by others, and possibly against human players as well. In this paper we introduce a new competition based on the popular arcade video game Ms Pac-Man: Ms Pac-Man versus Ghost Team. The competition, to be held for the first time at the Congress on Evolutionary Computation 2011, allows participants to develop controllers for either the Ms Pac-Man agent or for the Ghost Team. Unlike previous Ms Pac-Man competitions that relied on screen capture, the controllers now interface directly with the game engine. The paper introduces the competition, including a review of previous work as well as a discussion of several aspects of setting up the game competition itself. © 2011 IEEE
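    Purely as an illustration of the shift away from screen capture (the competition framework's real interface is not reproduced here, and the class, method and move names below are hypothetical), a direct engine interface typically hands the controller the current game state each tick and expects a move back within a strict time budget:

        # Hypothetical controller talking directly to the game engine, not to pixels.
        import random

        class RandomPacManController:
            """Toy controller: returns a legal move for the current game state."""

            def get_move(self, game_state, time_due_ms):
                # A real entry would spend the time budget (time_due_ms) on search or a
                # learned policy; this toy version just picks a random legal move.
                return random.choice(game_state.legal_moves())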