
    Multicast routing from a set of data centers in elastic optical networks

    This paper introduces the Multi-Server Multicast (MSM) approach for Content Delivery Networks (CDNs) delivering services offered by a set of Data Centers (DCs). All DCs offer the same services. The network is an Elastic Optical Network (EON) and, for good performance, routing is performed directly at the optical layer. Optical switches have heterogeneous capacities; that is, light splitting is not available in all switches. Moreover, frequency slot conversion is not possible in any of them. We account for the degradation that optical signals suffer both in the splitting nodes and across fiber links to compute their transmission reach. The optimal solution of the MSM problem is a set of light-hierarchies. Such a multicast route contains a light trail from one of the DCs to each of the destinations that respects the optical constraints while optimizing a given objective. Finding such a structure is often an NP-hard problem. Light-hierarchies initiated from different DCs permit delivering the multicast session to all end-users with better utilization of the optical resources, while also reducing multicast session latencies, as content can be delivered from DCs closer to end-users. We propose an Integer Linear Programming (ILP) formulation to optimally decide which light-hierarchies should be set up. Simulation results illustrate the benefits of MSM in two reference backbone networks.
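    The core multi-server benefit (each destination served from the nearest of several identical DCs) can be sketched with a toy hop-count model. This is an illustrative simplification under assumed names (`hop_distance`, `assign_sources`), not the paper's ILP, which additionally models splitting capability and transmission reach:

    ```python
    from collections import deque

    def hop_distance(adj, src):
        """BFS hop distances from src in an unweighted graph."""
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def assign_sources(adj, dcs, destinations):
        """Serve each destination from its closest DC; with several DCs
        hosting the same content, paths (and latencies) shrink compared
        to a single-source multicast."""
        dist_from = {dc: hop_distance(adj, dc) for dc in dcs}
        return {d: min(dcs, key=lambda dc: dist_from[dc].get(d, float("inf")))
                for d in destinations}
    ```

    On a 6-node ring with DCs at nodes 0 and 3, destinations 1, 2, and 4 are served as {1: 0, 2: 3, 4: 3}: each end-user is reached from the nearer source.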

    Distributed resource allocation for data center networks: a hierarchical game approach

    The increasing demand for data computing and storage in cloud-based services motivates the development and deployment of large-scale data centers. This paper studies the resource allocation problem for the data center networking system in which multiple data center operators (DCOs) simultaneously serve multiple service subscribers (SSs). We formulate a hierarchical game to analyze this system, where the DCOs and the SSs are regarded as the leaders and followers, respectively. In the proposed game, each SS selects the serving DCO with its preferred price and purchases the optimal amount of resources for the SS's computing requirements. Based on the responses of the SSs and of the other DCOs, each DCO sets its resource price so as to maximize its profit. When coordination among DCOs is weak, we model the DCOs as noncooperative with each other and propose a sub-gradient algorithm for the DCOs to approach a sub-optimal solution of the game. When all DCOs are sufficiently coordinated, we formulate a coalition game among all DCOs and apply Kalai-Smorodinsky bargaining as a resource division approach to achieve high utilities. Both solutions constitute a Stackelberg equilibrium. The simulation results verify the performance improvement provided by our proposed approaches.
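    The leader-follower pricing loop can be sketched for a single DCO and a single SS with linear demand. The demand curve, cost parameters, and function names are assumptions for illustration; the paper's sub-gradient algorithm operates on the full multi-DCO, multi-SS game:

    ```python
    def demand(price, a=10.0, b=1.0):
        """Follower's (SS's) best response: demand falls as price rises."""
        return max(0.0, (a - price) / b)

    def profit(price, cost=2.0):
        """Leader's (DCO's) profit at a given resource price."""
        return (price - cost) * demand(price)

    def subgradient_price(p0=3.0, step=0.05, iters=200, eps=1e-4):
        """Sub-gradient ascent on the leader's profit, estimating the
        (sub)gradient with a central finite difference."""
        p = p0
        for _ in range(iters):
            g = (profit(p + eps) - profit(p - eps)) / (2 * eps)
            p += step * g
        return p
    ```

    For these toy parameters the iteration converges to the Stackelberg price p = (a + cost) / 2 = 6: the leader anticipates the follower's best response when setting the price.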

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider across a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems. Accepted for publication in IEEE Communications Surveys and Tutorials.
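    One of the load-balancing and multipathing mechanisms this survey covers, ECMP-style flow hashing, can be sketched in a few lines. The 5-tuple fields and path names are hypothetical; real switches hash in hardware, but the invariant is the same:

    ```python
    import hashlib

    def pick_path(flow, paths):
        """ECMP-style hashing: every packet of a flow maps to the same
        path (avoiding reordering), while distinct flows spread across
        the equal-cost paths."""
        key = "|".join(map(str, flow)).encode()
        digest = hashlib.sha256(key).digest()
        return paths[int.from_bytes(digest[:8], "big") % len(paths)]

    # Hypothetical 5-tuple: src IP, dst IP, protocol, src port, dst port.
    flow = ("10.0.0.1", "10.0.1.9", 6, 40000, 443)
    spine = pick_path(flow, ["spine-1", "spine-2", "spine-3", "spine-4"])
    ```

    Hashing keeps per-flow packet order without per-flow state; its known weakness, also discussed in the traffic-control literature, is that a few long-running "elephant" flows can collide on one path.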

    Data and computer center prediction of usage and cost: an interpretable machine learning approach

    In recent years, cloud computing usage has increased considerably and, nowadays, it is the backbone of many emerging applications. However, behind cloud structures there are physical infrastructures (data centers) that are difficult to manage due to unpredictable utilization patterns. To address the constraints of reactive auto-scaling, data centers are widely adopting predictive cloud resource management mechanisms. However, predictive methods rely on application workloads and are typically pre-optimized for specific patterns, which can cause under- or over-provisioning of resources. Accurate workload forecasts are necessary to gain efficiency, save money, and provide clients with better and faster services. Working with real data from a Portuguese bank, we propose the Ensemble Adaptive Model with Drift detector (EAMDrift). This novel method combines forecasts from multiple individual predictors, weighting each individual model's prediction according to a performance metric. EAMDrift automatically retrains when needed and identifies the most appropriate models to use at each moment through interpretable mechanisms. We tested the methodology on a real data problem, studying the influence of external signals (mass and social media) on data center workloads. As we are working with real data from a bank, we hypothesize that users can increase or decrease the usage of some applications depending on external factors such as controversies or news about the economy. For this study, EAMDrift was designed to accept multiple past covariates. We evaluated EAMDrift on different workloads and compared the results with several baseline models. The experimental evaluation shows that EAMDrift outperforms the individual baseline models by 15% to 25%. Compared to the best black-box ensemble model, our model has a comparable error (higher by 1-3%).
Thus, this work suggests that interpretable models are a viable solution for data center workload prediction.
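    The performance-weighted combination described above can be sketched as follows. The inverse-error weighting scheme and all names are assumptions for illustration, not the authors' EAMDrift code, which additionally handles drift detection and retraining:

    ```python
    def ensemble_forecast(predictions, recent_errors, eps=1e-9):
        """Weight each model inversely to its recent error and return
        the weighted combination plus the weights themselves, which
        make the ensemble's choices inspectable (interpretable)."""
        inverse = [1.0 / (err + eps) for err in recent_errors]
        total = sum(inverse)
        weights = [w / total for w in inverse]
        forecast = sum(w * p for w, p in zip(weights, predictions))
        return forecast, weights
    ```

    Returning the weights alongside the forecast is what makes the scheme interpretable: at any moment one can see which individual model the ensemble currently trusts most.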

    Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art

    Software-Defined Networking (SDN) is an evolutionary networking paradigm that has been adopted by large network and cloud providers, among them the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires a lot of time, along with considerable financial resources and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is a hybrid networking environment (a.k.a. Hybrid SDN, or hSDN) in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has been seen as a viable networking solution for a diverse range of businesses and organizations. Accordingly, the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.

    Towards an efficient indexing and searching model for service discovery in a decentralised environment.

    Given the growth and outreach of new information, communication, computing, and electronic technologies, the amount of data has increased explosively in recent years. Centralised systems suffer limitations in dealing with this growth because all data are stored in central data centres. Thus, decentralised systems are receiving more attention and increasing in popularity. Moreover, efficient service discovery mechanisms have naturally become an essential component of both large-scale and small-scale decentralised systems. This research study is aimed at modelling a novel, efficient indexing and searching model for service discovery in decentralised environments comprising numerous repositories with massive numbers of stored services. The main contributions of this research study can be summarised in three components: a novel distributed multilevel indexing model, an optimised searching algorithm, and a new simulation environment. Indexing models are widely used for efficient service discovery; for instance, the inverted index is one of the popular indexing models used for service retrieval in consistent repositories. However, the inverted index inevitably contains redundancies, which makes the service discovery and retrieval process significantly time-consuming. This thesis proposes a novel distributed multilevel indexing model (DM-index), which offers an efficient solution for service discovery and retrieval in distributed service repositories comprising massive numbers of stored services. The architecture of the proposed indexing model encompasses four hierarchical levels to eliminate redundant information in service repositories, to narrow the search space, and to reduce the number of services traversed during discovery. Distributed Hash Tables have been widely used to provide data lookup services with logarithmic message costs while requiring only limited amounts of routing state to be maintained.
This thesis develops an optimised searching algorithm, named Double-layer No-redundancy Enhanced Bi-direction Chord (DNEB-Chord), to handle retrieval requests in distributed destination repositories efficiently. The DNEB-Chord algorithm achieves faster routing with a double-layer routing mechanism and an optimal routing index. The efficiency of the developed indexing and searching model is evaluated through theoretical analysis and experimental evaluation in a newly developed simulation environment, named Distributed Multilevel Bi-direction Simulator (DMBSim), which can be used as a cost-efficient tool for exploring various service configurations, user retrieval requirements, and other parameter settings. Both the theoretical validation and the experimental evaluations demonstrate that the service discovery efficiency of the DM-index outperforms the sequential index and inverted index configurations. Furthermore, the experimental results demonstrate that the DNEB-Chord algorithm performs better than Chord in terms of reducing the incurred hop counts. Finally, simulation results demonstrate that the proposed indexing and searching model can achieve better service discovery performance in large-scale decentralised environments comprising numerous repositories with massive numbers of stored services.
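    The logarithmic hop counts that Chord-style routing provides, and that DNEB-Chord improves on, can be illustrated on a fully populated identifier ring. This is a textbook Chord sketch under assumed names, not the thesis's DNEB-Chord algorithm:

    ```python
    def chord_hops(start, key, m):
        """Greedy finger routing on a full 2^m Chord ring: each hop
        jumps the largest power of two that does not pass the key, so
        the hop count equals the number of 1-bits in the clockwise
        distance and is bounded by m = log2(ring size)."""
        space = 2 ** m
        node, hops = start, 0
        while node != key:
            dist = (key - node) % space
            node = (node + (1 << (dist.bit_length() - 1))) % space
            hops += 1
        return hops
    ```

    For example, looking up key 7 from node 0 on a 32-id ring takes 3 hops (jumps of 4, 2, 1), and no lookup on a 256-id ring exceeds 8 hops. Bi-directional variants such as DNEB-Chord reduce these counts further by also routing counter-clockwise.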

    Flow Delegation: Flow Table Capacity Bottleneck Mitigation for Software-defined Networks

    This dissertation introduces flow delegation, a novel concept for dealing with flow table capacity bottlenecks in Software-defined Networks (SDNs). Such bottlenecks occur when SDN switches provide insufficient flow table capacity, which can lead to performance degradation and/or network failures. Flow delegation addresses this well-known problem by automatically relocating flow rules from a bottlenecked switch to neighboring switches with spare capacity. Unlike existing work, this new approach can be used on demand in a transparent fashion, i.e., without changes to the network applications or other parts of the infrastructure. The thesis presents a system design and architecture capable of dealing with the numerous practical challenges associated with flow delegation, introduces suitable algorithms to efficiently mitigate bottlenecks taking future knowledge and multiple objectives into account, and studies the feasibility, performance, overhead, and scalability of the new approach across different scenarios.
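    The basic relocation step can be sketched with a greedy heuristic. The data model (rules as (id, traffic) pairs, neighbors as spare-slot counts) and the cold-rules-first policy are assumptions for illustration; the dissertation's algorithms additionally use future knowledge and multiple objectives:

    ```python
    def delegate_flows(table, capacity, neighbors):
        """Greedy sketch of flow delegation: when a switch's flow table
        exceeds its capacity, relocate the lowest-traffic rules to the
        neighboring switch with the most spare capacity."""
        spare = dict(neighbors)          # don't mutate the caller's view
        overflow = len(table) - capacity
        if overflow <= 0:
            return list(table), {}
        # Evict the coldest rules first so hot rules stay on the direct path.
        cold_first = sorted(table, key=lambda rule: rule[1])
        kept, delegated = list(table), {}
        for rule_id, _traffic in cold_first[:overflow]:
            target = max(spare, key=spare.get)
            if spare[target] <= 0:
                break                    # no spare capacity anywhere
            spare[target] -= 1
            delegated.setdefault(target, []).append(rule_id)
            kept = [r for r in kept if r[0] != rule_id]
        return kept, delegated
    ```

    Evicting cold rules first keeps the bulk of the traffic on the direct path, so the detour through a neighbor only affects a small fraction of packets.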

    Transparenz aus Kundensicht: Bausteine zum Monitoring von Cloud-Umgebungen (Transparency from the customer's perspective: building blocks for monitoring cloud environments)
