
    Enabling multipath optical routing with hybrid differential delay compensation

    Historically, Internet traffic has been routed over the shortest path. That was convenient for best-effort data traffic, but it is not always suitable today, when applications can require more bandwidth than is available on a single link, even one provided by an optical wavelength channel. Multi-path (MP) routing is a network functionality that provides more capacity, reduces the probability of link congestion, and increases the availability of the transport service. This paper elaborates on techniques to mitigate differential delay in all-optical networks, recognized as the main problem of MP routing. It shows how hybrid differential delay compensation (H-DDC) can greatly reduce the use of expensive reconstruction buffers in all-optical networks implementing MP optical routing. A novel mixed-integer linear programming formulation is proposed for the wavelength + H-DDC assignment problem, in which distributed fiber delay lines (FDLs) are combined with electronic reconstruction buffers collocated at optical regeneration points. Numerical results based on commercially available (and rack-mountable) FDLs demonstrate the effectiveness of H-DDC in medium-size transport networks.
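
    As a rough illustration of the hybrid idea only (not the paper's MILP formulation), the sketch below splits a given differential delay into coarse compensation from discrete fiber delay lines and a residual absorbed by an electronic reconstruction buffer; the delay values and the FDL granularity are assumptions made for the example.

```python
# Illustrative sketch, not the paper's model: split a differential delay between
# discrete fiber delay lines (coarse, optical) and an electronic reconstruction
# buffer (fine-grained). The FDL unit size and path delays below are assumed.

def compensate(differential_delay_us: float, fdl_unit_us: float = 50.0):
    """Return (number of FDL units, residual electronic buffering in microseconds)."""
    fdl_units = int(differential_delay_us // fdl_unit_us)          # coarse optical compensation
    residual_us = differential_delay_us - fdl_units * fdl_unit_us  # buffered electronically
    return fdl_units, residual_us

# Example: two hypothetical paths whose propagation delays differ by about 1 ms
path_delays_us = (4200.0, 5230.0)
diff_us = max(path_delays_us) - min(path_delays_us)
fdls, buffer_us = compensate(diff_us)
print(f"differential delay {diff_us} us -> {fdls} FDL units + {buffer_us} us electronic buffer")
```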

    Delay Aware Survivable Routing with Network Coding in Software Defined Networks

    It has been demonstrated in transport networks that network (diversity) coding can provide sufficient redundancy to ensure instantaneous recovery from single-link failures while reaching near-optimal bandwidth efficiency. However, the resulting multi-path routing problem has so far ignored end-to-end delays. Yet even in a European-scale network, the delay difference between paths has a severe effect on the Quality of Service of applications such as video streaming. In this paper we therefore thoroughly investigate survivable routing in Software Defined Networks (SDNs), adding several delay bounds to the bandwidth-cost minimization problem. We build on the fact that, if the user data is split into at most two parts, the minimum-cost coding solution has a well-defined acyclic structure of subsequent paths and disjoint path-pairs between the communication end-points. Complexity analysis and integer linear programs are provided to solve these delay-aware survivable routing problems in SDNs.
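
    To make the delay-aware selection concrete, here is a minimal sketch with assumed inputs (not the paper's ILP): among precomputed candidate disjoint path-pairs, it picks the cheapest pair whose end-to-end delays and delay difference respect given bounds.

```python
# Illustrative sketch: delay-bounded selection of a disjoint path pair.
# Candidate pairs, costs, delays and bounds below are assumptions for the example.
from typing import List, Tuple

Path = Tuple[float, float]  # (bandwidth cost, end-to-end delay in ms)

def select_pair(candidates: List[Tuple[Path, Path]],
                delay_bound_ms: float,
                diff_bound_ms: float):
    """Return the cheapest admissible (path1, path2) pair and its cost, or (None, inf)."""
    best, best_cost = None, float("inf")
    for p1, p2 in candidates:
        if max(p1[1], p2[1]) > delay_bound_ms:   # per-path end-to-end delay bound
            continue
        if abs(p1[1] - p2[1]) > diff_bound_ms:   # bound on the delay difference of the pair
            continue
        cost = p1[0] + p2[0]                     # total bandwidth cost of the coded pair
        if cost < best_cost:
            best, best_cost = (p1, p2), cost
    return best, best_cost

# Hypothetical candidates, each a disjoint path pair given as ((cost, delay), (cost, delay))
pairs = [((3.0, 12.0), (4.0, 25.0)), ((5.0, 14.0), (5.0, 16.0))]
print(select_pair(pairs, delay_bound_ms=20.0, diff_bound_ms=5.0))
```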

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources that today's cloud computing needs. A typical datacenter is made up of thousands of servers connected by a large network and usually managed by a single operator. To provide quality access to the variety of applications and services hosted in datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, and the general traffic control challenges and objectives in datacenters. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions, which is virtually impossible given the massive body of existing research. We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various aspects of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of the paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
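
    As a small, hedged illustration of one mechanism the survey touches on (ECMP-style multipathing), the sketch below hashes a flow's five-tuple onto one of several equal-cost paths so that all packets of a flow follow the same path; the field names and path count are assumptions for the example.

```python
# Illustrative sketch of ECMP-style flow hashing (not taken from the paper):
# a stable hash of the five-tuple keeps a flow on one of num_paths equal-cost uplinks.
import hashlib

def ecmp_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              proto: str, num_paths: int) -> int:
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_paths  # stable per-flow path index

# Hypothetical flow mapped onto one of four equal-cost paths
print(ecmp_path("10.0.0.1", "10.0.1.7", 40521, 443, "tcp", num_paths=4))
```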

    Low-latency Networking: Where Latency Lurks and How to Tame It

    While the current generation of mobile and fixed communication networks has been standardized for mobile broadband services, the next generation is driven by the vision of the Internet of Things and mission-critical communication services requiring latencies on the order of milliseconds or below. However, these new, stringent requirements have a large technical impact on the design of all layers of the communication protocol stack. The cross-layer interactions are complex due to the multiple design principles and technologies that shape each layer's design and fundamental performance limits. We will only be able to develop low-latency networks if we address these complex interactions from the new point of view of sub-millisecond latency. In this article, we propose a holistic analysis and classification of the main design principles and enabling technologies that will make it possible to deploy low-latency wireless communication networks. We argue that these design principles and enabling technologies must be carefully orchestrated to meet the stringent requirements and to manage the inherent trade-offs between low latency and traditional performance metrics. We also review ongoing standardization activities in prominent standards associations and discuss open problems for future research.
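
    As a back-of-the-envelope companion (a generic per-hop decomposition with assumed numbers, not the article's own analysis), the sketch below sums transmission, propagation, processing and queueing delay for a single hop to show how quickly a millisecond budget is consumed.

```python
# Illustrative sketch: a generic per-hop latency decomposition with assumed values.
def hop_latency_us(pkt_bits: float, link_rate_bps: float, dist_m: float,
                   processing_us: float, queueing_us: float) -> float:
    transmission = pkt_bits / link_rate_bps * 1e6   # serialization delay
    propagation = dist_m / 2e8 * 1e6                # ~2e8 m/s signal speed in fiber
    return transmission + propagation + processing_us + queueing_us

# A 1500-byte packet over one 10 Gb/s, 10 km hop with assumed node delays
print(f"{hop_latency_us(1500 * 8, 10e9, 10e3, processing_us=5.0, queueing_us=20.0):.1f} us")
```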

    A Comprehensive Survey of the Tactile Internet: State of the art and Research Directions

    The Internet has made several giant leaps over the years: from a fixed to a mobile Internet, then to the Internet of Things, and now to the Tactile Internet. The Tactile Internet goes far beyond data, audio, and video delivery over fixed and mobile networks, and even beyond enabling communication and collaboration among things. It is expected to enable haptic communication and allow skill-set delivery over networks. Some examples of potential applications are tele-surgery, vehicle fleets, augmented reality, and industrial process automation. Several papers already cover many of the Tactile Internet's concepts and technologies, such as haptic codecs, applications, and supporting technologies. However, none of them offers a comprehensive survey of the Tactile Internet, including its architectures and algorithms, and none provides a systematic and critical review of the existing solutions. To address these lacunae, we provide a comprehensive survey of the architectures and algorithms proposed to date for the Tactile Internet. In addition, we critically review them against a well-defined set of requirements and discuss some of the lessons learned as well as the most promising research directions.

    Machine Learning Meets Communication Networks: Current Trends and Future Challenges

    The growing network density and the unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard to different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described for: i) the three layers, i.e., the physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV); a brief overview of ML-based network security is also included. Important future research challenges are identified and presented to help spur further research in key areas in this direction.

    Non-Terrestrial Networks in the 6G Era: Challenges and Opportunities

    Many organizations recognize non-terrestrial networks (NTNs) as a key component for providing cost-effective, high-capacity connectivity in future 6th generation (6G) wireless networks. Despite this promise, many questions remain to be answered for proper network design, including those associated with latency and coverage constraints. In this paper, after reviewing research activities on NTNs, we present the characteristics and enabling technologies of NTNs in the 6G landscape and shed light on the challenges in the field that are still open for future research. As a case study, we evaluate the performance of an NTN scenario in which satellites use millimeter-wave (mmWave) frequencies to provide access connectivity to on-the-ground mobile terminals, as a function of different networking configurations.
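
    For a feel of the numbers involved (illustrative values, not the paper's evaluation), the sketch below computes the free-space path loss and one-way propagation delay of a hypothetical LEO satellite link at a mmWave carrier; the altitude and carrier frequency are assumptions.

```python
# Illustrative sketch: basic link quantities for an assumed LEO mmWave access link.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

def propagation_delay_ms(distance_m: float) -> float:
    return distance_m / 299_792_458.0 * 1e3

altitude_m = 600e3   # assumed LEO altitude (nadir distance)
carrier_hz = 28e9    # assumed mmWave carrier frequency
print(f"FSPL: {fspl_db(altitude_m, carrier_hz):.1f} dB, "
      f"one-way delay: {propagation_delay_ms(altitude_m):.2f} ms")
```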

    On the Orchestration and Provisioning of NFV-enabled Multicast Services

    The paradigm of network function virtualization (NFV), with the support of software-defined networking, has emerged as a prominent approach to foster innovation in the networking field and to reduce the complexity involved in managing modern-day conventional networks. Before NFV, functions that manipulate packet headers and the context of a traffic flow used to be implemented at fixed locations in the network substrate, inside proprietary physical devices (commonly called middleboxes). With NFV, such functions are softwarized and virtualized, so they can be deployed on commodity servers on demand. Hence, the provisioning of a network service becomes more agile and abstract, giving rise to next-generation service-customized networks that have the potential to meet new demands and use cases. In this thesis, we focus on three complementary research problems essential to the orchestration and provisioning of NFV-enabled multicast network services. An NFV-enabled multicast service connects a source with a set of destinations; it specifies a set of NFs that should be executed along the routes chosen from the source to the destinations, with resource and ordering requirements that must be satisfied in the wired core network. In Problem I, we investigate a static joint traffic routing and virtual NF placement framework for accommodating multicast services over the network substrate. We develop optimal formulations and efficient heuristic algorithms that jointly handle the static embedding of one or multiple service requests with single-path and multipath routing. In Problem II, we study the online orchestration of NFV-enabled network services. We consider both unicast and multicast NFV-enabled services with mandatory and best-effort NF types: mandatory NFs are strictly necessary for the correctness of a network service, whereas best-effort NFs are preferable yet not necessary. Correspondingly, we propose a primal-dual online approximation algorithm that allocates both processing and transmission resources to maximize a profit function proportional to the throughput. The online algorithm acts as a joint admission mechanism and an online composition, routing, and NF placement framework. In the core network, traffic patterns exhibit time-varying characteristics that can be cumbersome to model. Therefore, in Problem III, we develop a dynamic provisioning approach that allocates processing and transmission resources according to the traffic pattern of the embedded network service using deep reinforcement learning (RL). Notably, we devise a model-assisted exploration procedure to improve the efficiency and consistency of the deep RL algorithm.
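
    As a toy illustration of the placement aspect only (a naive greedy heuristic, not the thesis's optimal formulations, primal-dual algorithm, or RL approach), the sketch below places an ordered chain of NFs on the nodes of one route while respecting per-node processing capacity and NF ordering; all inputs are assumed.

```python
# Illustrative sketch: naive in-order NF chain placement along one assumed route.
def place_chain(route, nf_chain, node_capacity):
    """route: list of node ids from source to a destination;
    nf_chain: ordered list of (nf_name, cpu_demand);
    node_capacity: dict node -> remaining CPU units.
    Returns {nf_name: node} or None if the chain does not fit in order."""
    placement, i = {}, 0
    for node in route:                       # walk the route in order to preserve NF ordering
        while i < len(nf_chain) and node_capacity.get(node, 0) >= nf_chain[i][1]:
            name, demand = nf_chain[i]
            node_capacity[node] -= demand    # consume processing resources at this node
            placement[name] = node
            i += 1
    return placement if i == len(nf_chain) else None

# Hypothetical route, NF chain, and node capacities
route = ["s", "a", "b", "d1"]
chain = [("firewall", 2), ("nat", 1), ("transcoder", 3)]
caps = {"s": 0, "a": 3, "b": 4, "d1": 0}
print(place_chain(route, chain, caps))
```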