9 research outputs found
A Light-weight Content Distribution Scheme for Cooperative Caching in Telco-CDNs
A key technique to curb the rapid growth of video-on-demand traffic is a cooperative caching strategy that aggregates multiple cache storages. Many internet service providers have considered deploying cache servers on their networks to reduce this traffic. Existing schemes typically recalculate a sub-optimal allocation of content caches periodically; however, such approaches incur a large computational overhead that cannot be amortized when content popularity changes frequently. This paper proposes a light-weight cooperative caching scheme that obtains a sub-optimal distribution of contents by focusing on their popularity. This is made possible by attaching color tags to both cache servers and contents. In addition, we propose a hybrid caching strategy based on the Least Frequently Used (LFU) and Least Recently Used (LRU) schemes, which manages contents efficiently even under frequent popularity changes. Evaluation results showed that our light-weight scheme considerably reduces traffic, approaching a sub-optimal result, and that this gain is obtained with a computational overhead of just a few seconds. The results also showed that the hybrid caching strategy can follow rapid variations in popularity: while a single LFU strategy drops the hit ratio by 13.9% under rapid popularity changes, our hybrid strategy limits the degradation to only 2.3%.
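One way to picture the hybrid LFU/LRU strategy is a cache whose storage is split into two regions: a large LFU region for persistently popular items and a small LRU region that absorbs sudden popularity shifts. The class below is an illustrative sketch under assumed rules; the split sizes, admission policy, and rebuild step are not the paper's actual design:

```python
from collections import Counter, OrderedDict

class HybridCache:
    """Sketch of a hybrid LFU/LRU cache: an LFU region holds the
    top-frequency items, an LRU region holds recently requested ones.
    Region sizes and the rebuild rule are illustrative assumptions."""

    def __init__(self, lfu_size, lru_size):
        self.lfu_size, self.lru_size = lfu_size, lru_size
        self.freq = Counter()          # access counts for LFU decisions
        self.lfu = set()               # LFU region contents
        self.lru = OrderedDict()       # LRU region, most recent last

    def access(self, item):
        self.freq[item] += 1
        hit = item in self.lfu or item in self.lru
        if item in self.lru:
            self.lru.move_to_end(item)  # refresh recency
        elif item not in self.lfu:
            # Admit the new item into the LRU region, evicting the oldest.
            self.lru[item] = True
            if len(self.lru) > self.lru_size:
                self.lru.popitem(last=False)
        # Rebuild the LFU region from the current top-frequency items.
        self.lfu = {c for c, _ in self.freq.most_common(self.lfu_size)}
        return hit

cache = HybridCache(lfu_size=3, lru_size=2)
for item in ["a", "a", "a", "b", "b", "c", "a", "d", "d"]:
    cache.access(item)
print(sorted(cache.lfu))
```

The LFU region preserves the traffic-reduction benefit for stably popular content, while a newly trending item ("d" above) is served from the LRU region before its frequency count catches up.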
A control and management architecture supporting autonomic NFV services
The proposed control, orchestration and management (COM) architecture is presented from a high-level point of view; it enables the dynamic provisioning of services such as network data connectivity or generic network slicing instances based on virtual network functions (VNFs). The COM is based on Software Defined Networking (SDN) principles and is hierarchical, with a dedicated controller per technology domain. Alongside the SDN control plane for provisioning connectivity, an ETSI NFV management and orchestration system is responsible for instantiating Network Services, understood in this context as interconnected VNFs. A key, novel component of the COM architecture is the monitoring and data analytics (MDA) system, which collects monitoring data from the network, datacenters and applications; its outputs can be used to proactively reconfigure resources, adapting to future conditions such as load or degradations. To illustrate the COM architecture, a use case of a Content Delivery Network service that takes advantage of the MDA's ability to collect and deliver monitoring data is experimentally demonstrated.
Big Data-backed video distribution in the telecom cloud
Telecom operators are starting the deployment of Content Delivery Networks (CDN) to better control and manage video contents injected into the network. Cache nodes placed close to end users can manage contents and adapt them to users' devices, while reducing video traffic in the core. By adopting the standardized MPEG-DASH technique, video contents can be delivered over HTTP. Thus, HTTP servers can be used to serve contents, while packagers running as software can prepare live contents. This paves the way for virtualizing the CDN function. In this paper, a CDN manager is proposed to adapt the virtualized CDN function to current and future demand. A Big Data architecture, fulfilling the ETSI NFV guidelines, allows controlling virtualized components while collecting and pre-processing data. Optimization problems minimize CDN costs while ensuring the highest quality. Re-optimization is triggered by threshold violations; data stream mining sketches transform collected data into modeled data, and statistical linear regression and machine learning techniques are proposed to produce estimations of future scenarios. Exhaustive simulation over a realistic scenario reveals remarkable cost reductions from dynamically reconfiguring the CDN.
Popularity-Based Adaptive Content Delivery Scheme with In-Network Caching
To cope with the increasing popularity of video streaming services over the Internet, recent research activities have addressed the locality of content delivery at the network edge by introducing a storage module into a router. To employ in-network caching and persistent request routing, this paper introduces a hybrid content delivery network (CDN) system combining novel content routers in an underlay with a traditional CDN server in an overlay. This system first selects the most suitable delivery scheme (multicast or broadcast) for the content in question and then allocates an appropriate number of channels based on the content's popularity. The proposed scheme aims to minimize traffic volume and achieve optimal delivery cost, since the most popular content is delivered through broadcast channels and the least popular through multicast channels. The performance of the adaptive scheme is evaluated and compared against both pure multicast and pure broadcast schemes in terms of the optimal in-network caching size and the number of unicast channels in a content router, demonstrating the impact of the proposed scheme.
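The popularity-based split between broadcast and multicast can be sketched as a ranking rule: the most popular contents occupy the available broadcast channels, the rest fall back to multicast. The ranking rule, names, and channel budget below are illustrative assumptions, not the paper's exact allocation algorithm:

```python
def assign_delivery_scheme(popularities, broadcast_channels):
    """Rank contents by popularity; the top-ranked ones get dedicated
    broadcast channels, the remainder are served via multicast."""
    ranked = sorted(popularities, key=popularities.get, reverse=True)
    return {
        content: ("broadcast" if rank < broadcast_channels else "multicast")
        for rank, content in enumerate(ranked)
    }

# Hypothetical request shares for four contents.
popularity = {"news": 0.45, "drama": 0.30, "sports": 0.15, "docs": 0.10}
print(assign_delivery_scheme(popularity, broadcast_channels=2))
```

The intuition matches the abstract: a broadcast channel amortizes its fixed cost only when demand is high, so it is reserved for the head of the popularity distribution.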
Echo State Networks for Proactive Caching in Cloud-Based Radio Access Networks with Mobile Users
In this paper, the problem of proactive caching is studied for cloud radio access networks (CRANs). In the studied model, the baseband units (BBUs) can predict the content request distribution and mobility pattern of each user and determine which content to cache at remote radio heads and BBUs. This problem is formulated as an optimization problem that jointly incorporates backhaul and fronthaul loads and content caching. To solve this problem, an algorithm that combines the machine learning framework of echo state networks (ESNs) with sublinear algorithms is proposed. Using ESNs, the BBUs can predict each user's content request distribution and mobility pattern while having only limited information on the network's and users' state. In order to predict each user's periodic mobility pattern with minimal complexity, the memory capacity of the corresponding ESN is derived for a periodic input. This memory capacity is shown to be able to record the maximum amount of user information for the proposed ESN model. Then, a sublinear algorithm is proposed to determine which content to cache while using only limited content request distribution samples. Simulation results using real data from Youku and the Beijing University of Posts and Telecommunications show that the proposed approach yields significant gains in terms of sum effective capacity, reaching up to 27.8% and 30.7%, respectively, compared to random caching with clustering and random caching without clustering. Comment: Accepted in the IEEE Transactions on Wireless Communications.
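The sublinear flavor of the caching decision, estimating the request distribution from a small random sample of requests rather than the full log, can be sketched as follows. The sample size, selection rule and synthetic data are illustrative assumptions, not the paper's algorithm:

```python
import random
from collections import Counter

def cache_from_samples(request_log, sample_size, cache_slots, seed=0):
    """Estimate the content request distribution from a small random
    sample of the full request log (in the spirit of a sublinear
    algorithm) and cache the apparent top items."""
    rng = random.Random(seed)
    sample = rng.sample(request_log, sample_size)
    counts = Counter(sample)
    return [content for content, _ in counts.most_common(cache_slots)]

# Synthetic log: one show dominates requests, with a long tail of clips.
log = ["hit-show"] * 600 + ["movie"] * 250 + [f"clip{i}" for i in range(150)]
print(cache_from_samples(log, sample_size=50, cache_slots=2))
```

Because heavy hitters remain heavy in a uniform sample, inspecting 50 of 1000 requests is typically enough to identify the top contents, which is the point of using limited distribution samples.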
Distributed Cooperative Cache Control Using Access-Variation Prediction in Video Delivery Networks
With the spread of video delivery services and the move of terrestrial broadcasting to the Internet, large video files are exchanged over the network; noting that roughly 80% of Internet traffic consists of video files, this work aims to reduce that traffic and thereby lighten the load on origin servers. A prior distributed cooperative caching scheme based on color-tag information does not assume large shifts in user request trends, and in simulation experiments modeled on a published survey of IPTV services its traffic-reduction effect degrades. A content-popularity prediction method adapted from stock-price trend forecasting has examined traffic reduction on a single cache server, but its extension to distributed cooperative caching has not been studied. This work predicts the future request trends of a video delivery service and examines two proactive cache placement methods to reduce traffic. The first predicts the skew of content requests and applies an effective cache placement accordingly; observing that the temporal variation of request skew follows recurring patterns, it configures an effective cache placement for each time slot. Simulation experiments on a Japanese network topology confirmed up to a 58% reduction in video traffic compared with prior work; however, when the number of user viewing requests was added to the evaluation, traffic in high-demand time slots could not be reduced substantially. The second method predicts the popularity of new content from broadcasters' experience and distributes it in advance to cache servers running color-tag-based distributed cooperative cache control. Because this method reduces traffic efficiently, it could cut traffic during prime time, when user requests reach roughly 40 times the late-night level, reducing peak video traffic by about 15-30%. University of Electro-Communications, 201
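The first method above, choosing a cache placement per time slot from recurring request patterns, can be sketched as precomputing, for each hour, the contents worth caching there. The slot granularity, data shapes and names are illustrative assumptions:

```python
from collections import Counter, defaultdict

def placements_by_time_slot(request_history, cache_slots):
    """Group historical (hour, content) requests by time slot and
    precompute each slot's most-requested contents as its placement."""
    per_slot = defaultdict(Counter)
    for hour, content in request_history:
        per_slot[hour][content] += 1
    return {
        hour: [c for c, _ in counts.most_common(cache_slots)]
        for hour, counts in per_slot.items()
    }

# Daytime users mostly watch news; evening users mostly watch dramas.
history = [(9, "news")] * 5 + [(9, "drama")] * 1 + \
          [(21, "drama")] * 6 + [(21, "anime")] * 3
plan = placements_by_time_slot(history, cache_slots=1)
print(plan)
```

A cache server would then switch its contents at each slot boundary, exploiting the patterned skew of requests instead of reacting to it after the fact.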
A Study on a Lightweight Distributed Cooperative Caching Infrastructure for Video Delivery Services
With the spread of on-demand video (Video-on-Demand; VoD) services that let users watch videos at any time, Internet traffic is growing rapidly: it is expected to triple within five years, and video delivery was predicted to account for more than 80% of Internet traffic by 2020. The growth can be absorbed by adding or upgrading routers, switches and other equipment, but the continuous expansion this requires is uneconomical. VoD providers therefore typically outsource the delivery of large content to Content Delivery Network (CDN) operators, who build large cache networks by placing cache servers close to users worldwide. Each cache server copies content as it traverses the network and reduces traffic by reusing it; since video content is rarely updated once uploaded, cache servers can eliminate duplicate transfers and reduce the number of exchanges with distant origin servers, efficiently reducing Internet traffic. However, cache capacity is limited and video content is added continuously, so holding all content on a single cache server is unrealistic. Moreover, because CDN operators can deploy cache servers only at a limited set of locations, they cannot reduce the traffic at the cache servers or inside the Internet Service Provider (ISP) networks that carry the paths to them, which leads to congestion and traffic growth. CDN operators try to expand the effective cache capacity, reduce traffic and balance load by sharing cached content among multiple servers; this is realized through traffic engineering that controls data transfer paths, but lacking knowledge of the ISP's physical topology and link bandwidths, CDN operators find it hard to coordinate cache servers efficiently. ISPs have therefore begun to deploy cache servers in their own networks and coordinate them through traffic engineering, building distributed cooperative cache networks; with both the network and the cache servers managed by a single operator, traffic can be reduced efficiently. Such an ISP-managed in-network cache is called a Telco-CDN. Recent work has proposed ways to manage Telco-CDN cache servers efficiently, typically by setting rules on which contents each server holds so that different servers store different contents, expanding the effective cache capacity and reducing traffic. However, because such methods ignore per-content access frequency, load concentrates on the few servers holding the most popular content. Other work formulates content placement as an optimization problem to realize cooperative caching with a high traffic-reduction effect; but solving it takes a long time, while VoD access patterns change by roughly 20-40% within an hour, so by the time the computation finishes the placement has diverged from the optimum and the traffic reduction degrades. This thesis proposes two cache control algorithms that cache VoD access trends efficiently and combines them to reduce traffic. First, a hybrid cache algorithm combines two different caching algorithms: to mix them in the network, each cache server's storage is partitioned between the two, caching rapidly changing video accesses efficiently and sustaining a high traffic reduction. A Least Frequently Used (LFU)-based region holding frequently accessed content yields a high traffic reduction, while a Least Recently Used (LRU)-based region that preferentially holds recently accessed content follows abrupt changes in access trends. Second, a distributed cooperative cache control method based on color tags efficiently controls content placement across the cache network: color tags are assigned to both contents and cache servers, and a content may be cached only where the colors match, spreading contents across servers and expanding the effective cache capacity. Concretely, color tags are applied to the LFU region of the hybrid cache, which serves as a large cooperative area, while the small LRU region caches contents regardless of tags so as to follow changes in video access trends. Assigning more colors to more frequently accessed contents shortens the hop count from users, efficiently reducing traffic not only at content delivery servers but also inside the ISP network. A lightweight management scheme for color-tag information and a routing algorithm that exploits it are also proposed, achieving a high traffic reduction with a lightweight computational overhead. Evaluations of the hybrid cache algorithm showed that even when newly popular content is added, the LFU-based region achieves a high traffic reduction while the LRU-based region maintains it. The color-tag-based distributed cooperative caching algorithm achieves a traffic reduction close to the near-optimal control computed with a genetic algorithm, with a much smaller computational overhead, and the color-tag-aware routing algorithm achieves a 31.9% traffic reduction compared with shortest-path routing. University of Electro-Communications, 201
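The color-tag mechanism above can be sketched as follows: each server carries one color, more popular content is assigned more colors, and a server may cache a content only when the colors match. The color-assignment rule and all names are illustrative assumptions, not the thesis's exact scheme:

```python
def colors_for_content(rank, total_colors):
    """Popularity rank 0 is the most popular content; it receives every
    color, while less popular contents receive fewer (at least one).
    More colors means more servers may cache it, shrinking hop counts."""
    return set(range(max(1, total_colors - rank)))

def may_cache(server_color, content_colors):
    """A server caches a content only if its color is in the
    content's color set."""
    return server_color in content_colors

servers = {"s0": 0, "s1": 1, "s2": 2}          # one color per server
contents = {c: colors_for_content(r, 3)
            for r, c in enumerate(["top-hit", "mid", "niche"])}

# Which contents each server is allowed to cache under the tag rule.
placement = {s: [c for c in contents if may_cache(color, contents[c])]
             for s, color in servers.items()}
print(placement)
```

Because the color check is a constant-time set lookup, each server decides locally whether to cache a passing content, which is what keeps the scheme's computational overhead small compared with solving a global placement problem.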
Geographically Distributed Database Management at the Cloud's Edge
Request latency resulting from the geographic separation between clients and remote application servers is a challenge for cloud-hosted web and mobile applications. Numerous studies have shown the importance of low latency to the end user experience. Small response time increases on the order of a few hundred milliseconds directly translate to reduced user satisfaction and loss of revenue that persist even after a low latency environment is restored. One way to address this challenge in geo-distributed settings is to push all or part of the application, along with the data it requires, to the edge of the cloud - closer to application clients. This thesis explores the idea of taking advantage of clients' proximity to the edge of the network in order to reduce request latencies.
SpearDB is a prototype replicated distributed database system that operates in a star network topology, with a core site and a large number of edge sites that are close to clients. Clients access the nearest edge, which holds replicas of locally relevant portions of the database. SpearDB's edge sites coordinate through the core to provide a global transactional consistency guarantee (parallel snapshot isolation, or PSI), while handling as much work locally as possible. SpearDB provides full general-purpose transactional semantics with ACID guarantees. Experiments show that SpearDB is effective at reducing workload latencies for applications whose access patterns are geographically localizable. Many applications fit this criterion: bulletin boards (e.g., Craigslist, Kijiji), local commerce or services (e.g., Groupon, Uber), booking and ticketing (e.g., OpenTable, StubHub), location-based services (mapping, directions, augmented reality), local news outlets, and client-centric services (e-mail, RSS feeds, gaming). SpearDB introduces protocols for executing application transactions in a geo-distributed setting under strong consistency guarantees. These protocols automatically hide the complexity, as well as much of the latency, introduced by geo-distribution from applications.
The effectiveness of SpearDB depends on the placement of primary and secondary replicas at core and edge sites. The secondary replica placement problem is shown to be NP-hard. Several algorithms for automatic data partitioning and replication are presented to provide approximate solutions. These algorithms work in a geo-distributed core-edge setting under partial replication; their goal is to bring data closer to clients in order to lower request latencies. Experimental comparisons of the resulting placements' latency impact show good results. Surprisingly, however, the placements produced by the simplest of the proposed algorithms are comparable in quality to those produced by more complex approaches.
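Since secondary replica placement is NP-hard, approximate algorithms are used, and the abstract notes that simple heuristics perform surprisingly well. A minimal greedy sketch under assumed inputs (per-edge-site access counts for each data partition and a replica budget per partition) might look like this; it is a simplification for illustration, not the thesis's actual method:

```python
def greedy_replica_placement(access_counts, budget):
    """For each data partition, place secondary replicas at the edge
    sites that access it most, up to a per-partition budget; remaining
    sites read through the core."""
    placement = {}
    for partition, per_site in access_counts.items():
        ranked = sorted(per_site, key=per_site.get, reverse=True)
        placement[partition] = ranked[:budget]
    return placement

# Hypothetical access counts to each partition, per edge site.
accesses = {
    "listings-nyc": {"edge-nyc": 900, "edge-sf": 40, "edge-lon": 10},
    "listings-lon": {"edge-lon": 700, "edge-nyc": 60, "edge-sf": 20},
}
print(greedy_replica_placement(accesses, budget=1))
```

For geographically localizable workloads like the ones listed above, most accesses to a partition come from one site, so even this simple ranking places replicas where they eliminate the bulk of the wide-area round trips.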