1,135 research outputs found

    Deep Reinforcement Learning for Resource Management in Network Slicing

    Full text link
    Network slicing has emerged as a new business opportunity for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that keep resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which learns by interacting with the environment, trying alternative actions and reinforcing those that produce more rewarding outcomes, is assumed to be a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate its application to typical resource management problems in network slicing scenarios, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we discuss the challenges of applying DRL to network slicing from a general perspective. Comment: The manuscript has been accepted by IEEE Access in Nov. 201
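    The interaction loop described above (observe per-slice demand, try an allocation, reinforce rewarding outcomes) can be illustrated with a toy DQN-style agent. This is a minimal sketch under assumed state, action, and reward definitions (per-slice demand, candidate bandwidth splits, fraction of demand served), not the paper's actual formulation:

```python
# Minimal DQN-style loop for radio resource slicing (illustrative assumptions only).
import random
import torch
import torch.nn as nn

N_SLICES = 3
ACTIONS = [(6, 2, 2), (4, 4, 2), (4, 2, 4), (2, 4, 4)]  # candidate bandwidth splits (MHz)

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(N_SLICES, 64), nn.ReLU(),
                                 nn.Linear(64, len(ACTIONS)))
    def forward(self, x):
        return self.net(x)

def reward(demand, alloc):
    # Toy reward: fraction of total demand served across slices.
    return sum(min(d, a) for d, a in zip(demand, alloc)) / sum(demand)

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
eps, gamma = 0.1, 0.9
state = torch.rand(N_SLICES) * 5               # random per-slice demand (toy environment)

for step in range(1000):
    q = qnet(state)
    a = random.randrange(len(ACTIONS)) if random.random() < eps else int(q.argmax())
    r = reward(state.tolist(), ACTIONS[a])
    next_state = torch.rand(N_SLICES) * 5      # demands evolve randomly in this toy env
    with torch.no_grad():
        target = r + gamma * qnet(next_state).max()
    loss = (qnet(state)[a] - target) ** 2      # one-step TD error, squared
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```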

    A Framework for the Mobile Web of Things

    Get PDF
    The Internet has evolved over the years into the Internet of Things (IoT), the next step in its evolution. IoT is not a single technology; it enables devices of all kinds, from computers, mobile phones, cars, and appliances to animals and virtual sensors, to connect and interact with each other over the Internet without constant human intervention. Mobile devices such as the smartphone and tablet PC have become essential to everyday life, and their extended capabilities have motivated research on the mobile Internet of Things. Although recent smartphones offer powerful processors and high-speed 3G/4G mobile Internet connectivity, sustained use of these capabilities quickly drains the device's battery. This thesis presents an energy-efficient, lightweight mobile Web service provisioning framework for mobile sensing that uses protocols designed for the constrained IoT environment; such lightweight protocols provide an energy-efficient way to communicate. Finally, the thesis examines in detail the energy conservation achieved by the developed mobile Web service provisioning framework. Several case studies using the proposed framework were implemented on real devices and thoroughly tested as a proof of concept. https://www.ester.ee/record=b522498
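    As a rough illustration of how a lightweight protocol can reduce the cost of provisioning sensor data from a phone, the sketch below publishes duty-cycled readings over MQTT. MQTT is assumed here only as an example of a constrained-environment protocol; the broker address, topic, and payload format are placeholders, not the thesis framework's actual API:

```python
# Publishing phone sensor readings over MQTT (illustrative placeholders only).
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()                    # paho-mqtt 1.x style constructor
client.connect("broker.example.org", 1883, keepalive=60)
client.loop_start()                       # background network thread

for _ in range(10):
    reading = {"sensor": "accelerometer", "ts": time.time(), "value": [0.1, 0.0, 9.8]}
    # QoS 0 keeps the radio active for the shortest time, saving battery.
    client.publish("devices/phone-1/sensors", json.dumps(reading), qos=0)
    time.sleep(5)                         # duty-cycled sampling to conserve energy

client.loop_stop()
client.disconnect()
```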

    A machine learning resource allocation solution to improve video quality in remote education

    Get PDF
    The current global pandemic crisis has unquestionably disrupted the higher education sector, forcing educational institutions to rapidly embrace technology-enhanced learning. However, the COVID-19 containment measures that forced people to work or stay at home have led to a significant increase in Internet traffic that puts tremendous pressure on the underlying network infrastructure. This negatively affects content delivery and consequently the user-perceived quality, especially for video-based services. Focusing on this problem, this paper proposes a machine learning-based resource allocation solution that improves the quality of video services for an increased number of viewers. The solution is deployed and tested in an educational context, demonstrating its benefit in terms of major quality of service parameters for various video content in comparison with the existing state of the art. Moreover, a discussion of how the technology helps mitigate the effects of massively increasing Internet traffic on video quality in an educational context is also presented.
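    To make the idea concrete, the sketch below shows one assumed form such an ML-driven allocator could take: a regressor trained on synthetic (viewers, hour) to throughput samples predicts the per-client bandwidth, and the allocator picks the highest video bitrate that fits. The features, bitrate ladder, and data are illustrative, not the paper's model or dataset:

```python
# A regressor-driven per-viewer bitrate allocator (illustrative assumptions only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic history: [concurrent_viewers, hour_of_day] -> measured Mbps per client.
X = np.column_stack([rng.integers(10, 500, 2000), rng.integers(0, 24, 2000)])
y = 40.0 / np.sqrt(X[:, 0]) + rng.normal(0, 0.3, 2000)   # toy congestion model

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

LADDER_MBPS = [0.6, 1.2, 2.5, 5.0, 8.0]   # typical adaptive-bitrate ladder rungs

def allocate(viewers: int, hour: int) -> float:
    """Pick the highest ladder rung below the predicted per-client throughput."""
    predicted = model.predict([[viewers, hour]])[0]
    fitting = [b for b in LADDER_MBPS if b <= predicted]
    return fitting[-1] if fitting else LADDER_MBPS[0]

print(allocate(viewers=300, hour=10))      # e.g. 1.2 (Mbps) under heavy load
```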

    SDN-enabled Resource Provisioning Framework for Geo-Distributed Streaming Analytics

    Get PDF
    Geographically distributed (geo-distributed) datacenters for stream data processing typically comprise multiple edge and core datacenters connected through a Wide-Area Network (WAN), with a master node responsible for allocating tasks to worker nodes. Since WAN links significantly impact the performance of distributed task execution, the existing task assignment approach is unsuitable for distributed stream data processing with low latency and high throughput demands. In this paper, we propose SAFA, a resource provisioning framework based on the Software-Defined Networking (SDN) concept, in which an SDN controller is responsible for monitoring the WAN, selecting an appropriate subset of worker nodes, and assigning tasks to the designated worker nodes. We implemented the data plane of the framework in P4 and the control plane components in Python. We tested the performance of the proposed system on Apache Spark, Apache Storm, and Apache Flink using the Yahoo! streaming benchmark on a set of custom topologies. The results of the experiments validate that the proposed approach is viable for distributed stream processing and confirm that it can improve the processing time of incoming events by at least 1.64× over current stream processing systems.
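    A simplified, assumed version of the control-plane decision (pick the lowest-latency worker sites until the job's task slots are covered) is sketched below in Python; the site names, RTT values, and greedy rule are illustrative, not SAFA's actual implementation:

```python
# Greedy latency-aware worker selection for a geo-distributed stream job (illustrative).
from dataclasses import dataclass

@dataclass
class Worker:
    site: str
    wan_rtt_ms: float   # WAN latency as measured by the SDN controller
    free_slots: int

def select_workers(workers, needed_slots):
    """Take the lowest-RTT sites until enough task slots are covered."""
    chosen, slots = [], 0
    for w in sorted(workers, key=lambda w: w.wan_rtt_ms):
        chosen.append(w)
        slots += w.free_slots
        if slots >= needed_slots:
            break
    return chosen

workers = [Worker("edge-eu", 12.0, 4), Worker("edge-us", 95.0, 8), Worker("core-dc", 30.0, 16)]
for w in select_workers(workers, needed_slots=12):
    print(f"assign tasks to {w.site} ({w.free_slots} slots, {w.wan_rtt_ms} ms)")
```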

    GAN-powered Deep Distributional Reinforcement Learning for Resource Management in Network Slicing

    Full text link
    Network slicing is a key technology in 5G communication systems. Its purpose is to dynamically and efficiently allocate resources for diversified services with distinct requirements over a common underlying physical infrastructure. Therein, demand-aware resource allocation is of significant importance to network slicing. In this paper, we consider a scenario with several slices in a radio access network whose base stations share the same physical resources (e.g., bandwidth or slots). We leverage deep reinforcement learning (DRL) to solve this problem by treating the varying service demands as the environment state and the allocated resources as the environment action. In order to reduce the effects of the randomness and noise embedded in the received service level agreement (SLA) satisfaction ratio (SSR) and spectrum efficiency (SE), we first propose a generative adversarial network-powered deep distributional Q network (GAN-DDQN) to learn the action-value distribution by minimizing the discrepancy between the estimated and target action-value distributions. We also put forward a reward-clipping mechanism to stabilize GAN-DDQN training against the effects of widely-spanning utility values. Moreover, we further develop Dueling GAN-DDQN, which uses a specially designed dueling generator to learn the action-value distribution by estimating the state-value distribution and the action advantage function. Finally, we verify the performance of the proposed GAN-DDQN and Dueling GAN-DDQN algorithms through extensive simulations.
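    Two of the ingredients named above, a generator producing samples of the action-value distribution and reward clipping against widely-spanning utilities, can be sketched as follows. Network sizes, the clipping range, and greedy action selection by the sample mean are assumptions, and the GAN training loop itself is omitted:

```python
# Sampling an action-value distribution from a generator, plus reward clipping (sketch).
import torch
import torch.nn as nn

N_STATE, N_ACTIONS, N_SAMPLES = 4, 3, 32

class Generator(nn.Module):
    """Maps (state, noise) to N_SAMPLES return samples for every action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_STATE + N_SAMPLES, 128), nn.ReLU(),
            nn.Linear(128, N_ACTIONS * N_SAMPLES))
    def forward(self, state, noise):
        x = torch.cat([state, noise], dim=-1)
        return self.net(x).view(-1, N_ACTIONS, N_SAMPLES)

def clip_reward(r, low=-1.0, high=1.0):
    # Clipping keeps utility values in a narrow band so training targets stay stable.
    return max(low, min(high, r))

gen = Generator()
state = torch.rand(1, N_STATE)            # e.g. per-slice demand indicators
noise = torch.rand(1, N_SAMPLES)
samples = gen(state, noise)               # shape: (1, N_ACTIONS, N_SAMPLES)
q_mean = samples.mean(dim=-1)             # expected return per action
action = int(q_mean.argmax(dim=-1))       # greedy action w.r.t. the sample mean
print(action, clip_reward(3.7))
```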