
    HFOS<sub>L</sub>: hyper-scale fast optical switch-based data center network with L-level sub-network

    The ever-expanding growth of internet traffic demands the deployment of massive Data Center Networks (DCNs) supporting high-performance communications. Optical switching is being studied as a promising approach to fulfill the surging requirements of large-scale data centers. Tree-based optical topologies limit the scalability of the interconnected network due to the limited port count of optical switches and the lack of optical buffers. Alternatively, the buffer-less Fast Optical Switch (FOS) was proposed to realize nanosecond switching in optical DCNs. Although FOSs provide nanosecond optical switching, they still suffer from port-count limitations when scaling the DCN. To address the issue of scaling DCNs to more than two million servers, we propose the hyper-scale FOS-based L-level DCN (HFOS<sub>L</sub>), which is capable of building large networks with small-radix switches. Numerical analysis shows that L = 4 is the optimal level for HFOS<sub>L</sub> to obtain the lowest cost and power consumption. Specifically, at a network size of 160,000 servers, HFOS<sub>4</sub> saves 36.2% in cost compared with the 2-level FOS-based DCN, while achieving a 60% improvement in cost and a 26.7% improvement in power consumption compared with Fat tree. Moreover, a wide range of simulations and analyses demonstrates that HFOS<sub>4</sub> outperforms state-of-the-art FOS-based DCNs by up to 40% in end-to-end latency at a DCN size of 81,920 servers.
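    As a back-of-the-envelope illustration of the radix limitation the abstract describes (the specific radix values below are assumed for illustration, not taken from the paper): a classic three-tier Fat tree built from k-port switches supports k^3/4 servers, so reaching two million servers requires a very large switch radix, whereas stacking L levels of small-radix switches multiplies port counts instead.

    ```python
    # Illustrative scaling arithmetic for a three-tier Fat tree
    # (k^3/4 servers with k-port switches); not the paper's exact model.

    def fat_tree_servers(k: int) -> int:
        """Servers supported by a classic three-tier Fat tree with k-port switches."""
        return k ** 3 // 4

    def min_fat_tree_radix(target_servers: int) -> int:
        """Smallest even switch radix whose Fat tree reaches `target_servers`."""
        k = 2
        while fat_tree_servers(k) < target_servers:
            k += 2
        return k

    if __name__ == "__main__":
        print(fat_tree_servers(48))            # 27648 servers with 48-port switches
        print(min_fat_tree_radix(2_000_000))   # 200-port switches for two million servers
    ```

    The jump from 48 to 200 ports per switch is what motivates multi-level designs built from small-radix elements.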


    Building a Digital Twin for network optimization using graph neural networks

    Network modeling is a critical component of Quality of Service (QoS) optimization. Current networks implement Service Level Agreements (SLAs) through careful configuration of both routing and queue-scheduling policies. However, existing modeling techniques cannot produce accurate estimates of relevant SLA metrics, such as delay or jitter, in networks with complex QoS-aware queueing policies (e.g., strict priority, Weighted Fair Queueing, Deficit Round Robin). Recently, Graph Neural Networks (GNNs) have become a powerful tool for modeling networks, since they are specifically designed to work with graph-structured data. In this paper, we propose a GNN-based network model able to understand the complex relationship between the queueing policy (scheduling algorithm and queue sizes), the network topology, the routing configuration, and the input traffic matrix. We call our model TwinNet, a Digital Twin that can accurately estimate relevant SLA metrics for network optimization. TwinNet can generalize across its input parameters, operating successfully in topologies, routing, and queueing configurations never seen during training. We evaluate TwinNet over a wide variety of scenarios with synthetic traffic and validate it with real traffic traces. Our results show that TwinNet provides accurate estimates of end-to-end path delays in 106 unseen real-world topologies under different queueing configurations, with a Mean Absolute Percentage Error (MAPE) of 3.8%, and a MAPE of 6.3% when evaluated on a real testbed. We also showcase the potential of the proposed model for SLA-driven network optimization and what-if analysis.
    This publication is part of the Spanish I+D+i project TRAINER-A (ref. PID2020-118011GB-C21), funded by MCIN/AEI/10.13039/501100011033, Spain. This work is also partially funded by the Catalan Institution for Research and Advanced Studies (ICREA), Spain, and the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia, Spain, and the European Social Fund. Peer reviewed. Postprint (published version).
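    The core operation that lets a GNN exploit graph-structured network data is neighbour-to-neighbour message passing. The sketch below is a generic, minimal illustration of one such step (the aggregation function, feature encoding, and readout are assumptions here; TwinNet's actual architecture is defined in the paper).

    ```python
    # Minimal message-passing sketch: each node's state is updated from
    # the mean of its neighbours' states, propagating topology information.
    # Illustrative only; real GNNs use learned update functions.

    def message_passing_step(adj, states):
        """One synchronous update: new_state[v] = mean of v's neighbours' states."""
        new_states = {}
        for v, neighbours in adj.items():
            if neighbours:
                msgs = [states[u] for u in neighbours]
                new_states[v] = sum(msgs) / len(msgs)
            else:
                new_states[v] = states[v]   # isolated node keeps its state
        return new_states

    if __name__ == "__main__":
        adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
        states = {"a": 1.0, "b": 0.0, "c": 3.0}
        print(message_passing_step(adj, states))  # {'a': 0.0, 'b': 2.0, 'c': 0.0}
    ```

    Stacking several such steps lets information from a queueing policy at one node influence delay estimates several hops away.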

    NFV Platforms: Taxonomy, Design Choices and Future Challenges

    Due to the intrinsically inefficient service provisioning in traditional networks, Network Function Virtualization (NFV) keeps gaining attention from both industry and academia. By replacing purpose-built, expensive, proprietary network equipment with software network functions consolidated on commodity hardware, NFV envisions a shift towards a more agile and open service-provisioning paradigm. During the last few years, a large number of NFV platforms have been implemented in production environments, where they face critical challenges in the development, deployment, and management of Virtual Network Functions (VNFs). Nonetheless, just like any complex system, such platforms commonly consist of numerous software and hardware components and usually incorporate disparate design choices based on distinct motivations or use cases. This broad collection of intricate alternatives makes it extremely difficult for network operators to make proper choices. Although numerous efforts have been devoted to investigating different aspects of NFV, none of them specifically focused on NFV platforms or attempted to explore their design space. In this paper, we present a comprehensive survey of NFV platform design. Our study solely targets existing NFV platform implementations. We begin with a top-down architectural view of the standard reference NFV platform and present our taxonomy of existing NFV platforms based on the features they provide across a typical network-function life cycle. Then we thoroughly explore the design space and elaborate on the implementation choices each platform opts for. We also envision future challenges for NFV platform design in the upcoming 5G era. We believe that our study gives a detailed guideline for network operators or service providers to choose the most appropriate NFV platform based on their respective requirements. Our work also provides guidelines for implementing new NFV platforms.

    Results and achievements of the ALLIANCE Project: New network solutions for 5G and beyond

    Leaving the current 4th generation of mobile communications behind, 5G will represent a disruptive paradigm shift integrating 5G Radio Access Networks (RANs), ultra-high-capacity access/metro/core optical networks, and intra-datacentre (DC) network and computational resources into a single converged 5G network infrastructure. The present paper overviews the main achievements of the ALLIANCE project. This project ambitiously aims at architecting a converged 5G-enabled network infrastructure satisfying those needs to effectively realise the envisioned upcoming Digital Society. In particular, we present two networking solutions for 5G and beyond 5G (B5G): Software Defined Networking/Network Function Virtualisation (SDN/NFV) on top of an ultra-high-capacity, spatially and spectrally flexible all-optical network infrastructure, and the clean-slate Recursive Inter-Network Architecture (RINA) over packet networks, including access, metro, core and DC segments. The common umbrella of all these solutions is the Knowledge-Defined Networking (KDN)-based orchestration layer which, by implementing Artificial Intelligence (AI) techniques, enables optimal end-to-end service provisioning. Finally, the cross-layer manager of the ALLIANCE architecture includes two novel elements, namely the monitoring element, which provides network and user data in real time to the KDN, and the blockchain-based trust element, which is in charge of exchanging reliable and trusted information with external domains.
    This work has been partially funded by the Spanish Ministry of Economy and Competitiveness under contract FEDER TEC2017-90034-C2 (ALLIANCE project) and by the Generalitat de Catalunya under contracts 2017SGR-1037 and 2017SGR-605. Peer reviewed. Postprint (published version).

    Understanding (Mis)Behavior on the EOSIO Blockchain

    © 2020 Copyright is held by the owner/author(s). EOSIO has become one of the most popular blockchain platforms since its mainnet launch in June 2018. In contrast to traditional PoW-based systems (e.g., Bitcoin and Ethereum), which are limited by low throughput, EOSIO is the first high-throughput Delegated Proof of Stake system to be widely adopted by many decentralized applications. Although EOSIO has millions of accounts and billions of transactions, little is known about its ecosystem, especially in relation to security and fraud. In this paper, we perform a large-scale measurement study of the EOSIO blockchain and its associated DApps. We gather a large-scale dataset of EOSIO and characterize activities including money transfers, account creation and contract invocation. Using our insights, we then develop techniques to automatically detect bots and fraudulent activity. We discover thousands of bot accounts (over 30% of the accounts on the platform) and a number of real-world attacks (301 attack accounts). By the time of our study, 80 of the attack accounts we identified had been confirmed by DApp teams, causing total losses of 828,824 EOS tokens (roughly $2.6 million).
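    One simple signal that can separate bots from human accounts in transaction data is timing regularity: automated accounts often transact at near-constant intervals. The heuristic below is a hypothetical sketch along those lines (the threshold and the detector itself are assumptions here, not the paper's actual detection techniques).

    ```python
    # Hypothetical bot-detection heuristic: flag accounts whose
    # inter-transaction gaps barely vary. Illustrative only.

    from statistics import pstdev

    def looks_like_bot(timestamps, max_jitter=1.0):
        """True if the account's inter-transaction gaps are nearly constant.

        timestamps: sorted transaction times in seconds.
        max_jitter: assumed tolerance (std. dev. of gaps, in seconds).
        """
        gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
        if len(gaps) < 2:
            return False          # too little activity to judge
        return pstdev(gaps) <= max_jitter

    if __name__ == "__main__":
        bot = [0, 60, 120, 180, 240]       # one transfer every 60 s
        human = [0, 45, 300, 330, 2000]    # irregular activity
        print(looks_like_bot(bot), looks_like_bot(human))  # True False
    ```

    Real detection would combine several such features (transfer graphs, counterparties, amounts) rather than timing alone.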

    BECA: A Blockchain-Based Edge Computing Architecture for Internet of Things Systems

    The scale of Internet of Things (IoT) systems has expanded in recent times and, in tandem with this, IoT solutions have developed symbiotic relationships with technologies such as edge computing. IoT has leveraged edge computing to improve the capabilities of IoT solutions, facilitating quick data retrieval, low-latency response, and advanced computation, among others. However, in contrast with the benefits offered by edge computing, there are several drawbacks, such as centralized data storage, data ownership, privacy, data auditability, and security, which concern the IoT community. This study leveraged blockchain's inherent capabilities, including distributed storage, non-repudiation, privacy, security, and immutability, to provide a novel, advanced edge computing architecture for IoT systems. Specifically, this blockchain-based edge computing architecture addresses centralized data storage, data auditability, privacy, data ownership, and security. Following implementation, the solution was evaluated to quantify performance in terms of response time and resource utilization. The results show the viability of the proposed and implemented architecture, characterized by improved privacy, device data ownership, security, and data auditability while implementing decentralized storage.
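    The immutability and auditability properties the architecture relies on come from hash chaining: each record embeds the hash of its predecessor, so altering any earlier record invalidates every later link. A minimal sketch of that mechanism (illustrative only; the implemented system uses a full blockchain, and the field names here are assumptions):

    ```python
    # Minimal hash-chain sketch of blockchain-style tamper evidence.

    import hashlib
    import json

    def _digest(payload, prev_hash):
        """Deterministic SHA-256 over the record contents and predecessor hash."""
        blob = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

    def add_block(chain, payload):
        """Append a record that commits to the previous record's hash."""
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        chain.append({"payload": payload, "prev": prev_hash,
                      "hash": _digest(payload, prev_hash)})
        return chain

    def verify(chain):
        """Recompute every hash; any edited record breaks the chain."""
        for i, block in enumerate(chain):
            prev_hash = chain[i - 1]["hash"] if i else "0" * 64
            if block["prev"] != prev_hash or block["hash"] != _digest(block["payload"], prev_hash):
                return False
        return True

    if __name__ == "__main__":
        chain = []
        add_block(chain, {"device": "sensor-1", "reading": 21.5})
        add_block(chain, {"device": "sensor-1", "reading": 22.0})
        print(verify(chain))                    # True: intact chain verifies
        chain[0]["payload"]["reading"] = 99.9   # tamper with an early record
        print(verify(chain))                    # False: tampering is detected
    ```

    This is what makes device data auditable: any party holding the chain can detect retroactive edits without trusting the storage provider.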

    Is machine learning ready for traffic engineering optimization?

    Traffic Engineering (TE) is a basic building block of the Internet. In this paper, we analyze whether modern Machine Learning (ML) methods are ready to be used for TE optimization. We address this open question through a comparative analysis between the state of the art in ML and the state of the art in TE. To this end, we first present a novel distributed system for TE that leverages the latest advancements in ML. Our system implements a novel architecture that combines Multi-Agent Reinforcement Learning (MARL) and Graph Neural Networks (GNN) to minimize network congestion. In our evaluation, we compare our MARL+GNN system with DEFO, a network optimizer based on Constraint Programming that represents the state of the art in TE. Our experimental results show that the proposed MARL+GNN solution achieves performance equivalent to DEFO in a wide variety of network scenarios, including three real-world network topologies. At the same time, we show that MARL+GNN achieves significant reductions in execution time (from the scale of minutes with DEFO to a few seconds with our solution).
    This work was supported by the Spanish MINECO under contract TEC2017-90034-C2-1-R (ALLIANCE), the Catalan Institution for Research and Advanced Studies (ICREA), and the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia, as well as the European Social Fund. Peer reviewed. Postprint (author's final draft).
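    A common congestion objective for TE optimizers of this kind is minimising the maximum link utilisation; the exact objective used here is defined in the paper, so the metric below is an assumed, illustrative choice. Evaluating it for a fixed routing is straightforward:

    ```python
    # Illustrative congestion metric: maximum link utilisation for a
    # given routing. The topology, flows, and demands are made up.

    def max_link_utilisation(link_capacity, path_of, demands):
        """Max load/capacity over all links, given per-flow paths and demands."""
        load = {link: 0.0 for link in link_capacity}
        for flow, demand in demands.items():
            for link in path_of[flow]:      # demand is carried on every link of its path
                load[link] += demand
        return max(load[l] / link_capacity[l] for l in link_capacity)

    if __name__ == "__main__":
        capacity = {("a", "b"): 10.0, ("b", "c"): 10.0, ("a", "c"): 10.0}
        paths = {"f1": [("a", "b"), ("b", "c")], "f2": [("a", "c")]}
        demands = {"f1": 6.0, "f2": 3.0}
        print(max_link_utilisation(capacity, paths, demands))  # 0.6
    ```

    A TE optimizer, whether Constraint Programming or MARL-based, searches over the `path_of` assignment to drive this value down.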