
    Feedback for increased robustness of forwarding graphs in the cloud

    Cloud computing technology provides the means to share physical resources among multiple users and data center tenants by exposing them as virtual resources. There is a strong industrial drive to use similar technology and concepts to provide timing-sensitive services. One such domain is a chain of connected virtual network functions, where the capacity of each function can be scaled up and down by adding or removing virtual resources. In this work, we develop a model of such a service chain and pose the dynamic allocation of resources as an optimization problem. We design and present a set of strategies that allow virtual network nodes to be controlled in an optimal fashion subject to latency and buffer constraints. Furthermore, we derive a feedback law for dynamically adjusting the amount of resources given to each function in order to ensure that the system remains in the desired state even in the presence of modeling errors or stochastic input.
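    To make the scaling idea concrete, the following minimal Python sketch shows a proportional feedback law of the kind described above, adjusting the number of instances of each function from its buffer occupancy. The parameter names and values are illustrative assumptions, not the paper's actual controller.

```python
# Hypothetical sketch of a per-function feedback law for a service chain.
# Names (target_queue, gain, service_rate) are illustrative assumptions,
# not taken from the paper.

def feedback_allocation(queue_lengths, arrival_rate, service_rate,
                        target_queue, gain):
    """Return the number of instances to run for each function.

    queue_lengths : current buffer occupancy per function
    arrival_rate  : measured packet arrival rate at the chain input
    service_rate  : packets/s one instance of each function can process
    target_queue  : desired buffer occupancy per function
    gain          : proportional feedback gain
    """
    allocations = []
    for q, mu in zip(queue_lengths, service_rate):
        # Nominal term: enough instances to absorb the measured load.
        nominal = arrival_rate / mu
        # Feedback term: add capacity when the buffer exceeds its target.
        correction = gain * (q - target_queue) / mu
        allocations.append(max(1, round(nominal + correction)))
    return allocations

# Example: a 3-function chain under a 900 packets/s load.
print(feedback_allocation(queue_lengths=[120, 40, 10],
                          arrival_rate=900.0,
                          service_rate=[300.0, 450.0, 600.0],
                          target_queue=50,
                          gain=0.5))
```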

    Dynamic control of NFV forwarding graphs with end-to-end deadline constraints

    There is a strong industrial drive to use cloud computing technologies and concepts for providing timing-sensitive services in the networking domain, since this would allow physical resources to be shared among multiple users, increasing elasticity and reducing costs. In this work, we develop a mathematical model for user-stateless virtual network functions forming a forwarding graph. The model captures uncertainties in the performance of these virtual resources as well as the time overhead needed to instantiate them. The model is used to derive a service controller for horizontal scaling of the virtual resources, as well as an admission controller that guarantees that packets exiting the forwarding graph meet their end-to-end deadline. The Automatic Service and Admission Controller (AutoSAC) developed in this work uses feedback and feedforward control, making it robust against uncertainties of the underlying infrastructure, and it reacts quickly to changes in the input.
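    As an illustration of the admission-control idea, the following sketch admits a packet only if a simple backlog-based delay estimate fits within its end-to-end deadline. The delay model and names are assumptions and not AutoSAC's actual algorithm.

```python
# Minimal sketch of a deadline-based admission check in the spirit of the
# abstract; the per-hop delay estimate used here is an assumption.

def admit(packet_deadline, backlogs, capacities):
    """Admit a packet only if the sum of predicted per-hop waiting times
    fits within its end-to-end deadline.

    packet_deadline : allowed end-to-end latency in seconds
    backlogs        : queued packets at each function in the graph
    capacities      : current aggregate service rate (packets/s) per function
    """
    predicted_delay = sum(b / c for b, c in zip(backlogs, capacities))
    return predicted_delay <= packet_deadline

# Example: 3 hops, 50 ms end-to-end deadline.
print(admit(0.050, backlogs=[30, 10, 5], capacities=[2000.0, 1500.0, 3000.0]))
```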

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks that connect geographically dispersed datacenters, which have been receiving increasing attention recently and pose interesting and novel research problems. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.
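    As a concrete example of one of the traffic-shaping mechanisms surveyed, the sketch below implements a basic token-bucket shaper; the rate and burst parameters are illustrative assumptions.

```python
# Illustrative token-bucket shaper, one of the traffic-shaping mechanisms
# covered by the survey; parameter names and values are assumptions.

import time

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # maximum burst size in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        """Return True if the packet conforms to the configured rate."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=15_000)  # 10 Mbit/s
print(shaper.allow(1500))   # first MTU-sized packet fits in the burst
```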

    Resource Tuned Optimal Random Network Coding for Single Hop Multicast future 5G Networks

    Optimal random network coding reduces the complexity of computing coding coefficients and encoded packets; the coefficients are chosen so that minimal transmission bandwidth suffices to deliver them to the destinations, decoding can begin as soon as encoded packets start arriving at the destination, and the decoding process itself has lower computational complexity. In traditional random network coding, by contrast, decoding is possible only after all encoded packets have been received. Optimal random network coding also reduces the cost of computation. In this research work, the size of the coding coefficient matrix is determined by the layer size, which defines the number of symbols or packets involved in the coding process. The coefficient matrix elements are defined so that coding and decoding require a minimal number of additions and multiplications: sparseness is introduced into the coding coefficients, and a systematic sparse structure yields a lower-triangular coefficient matrix that also enables partial decoding. For optimal use of computational resources, a windowing size tuned to the unoccupied resource budget, such as available memory, is used to define the size of the coefficient matrix.
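    The following toy sketch illustrates the lower-triangular coefficient idea over GF(2): each encoded packet mixes only symbols up to its own index, so a receiver can decode progressively by forward substitution as packets arrive. The field choice, packet layout, and matrix construction are simplifying assumptions rather than the paper's exact scheme.

```python
# Toy illustration over GF(2) of a sparse lower-triangular coefficient matrix:
# packet i mixes only symbols 0..i, so symbol i is recoverable as soon as
# packet i arrives (partial decoding by forward substitution).

import random

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(symbols):
    """Encode with a sparse lower-triangular coefficient matrix over GF(2)."""
    coeffs, packets = [], []
    for i in range(len(symbols)):
        # Diagonal entry is always 1; earlier symbols are mixed in sparsely.
        row = [1 if j == i else random.randint(0, 1) for j in range(i + 1)]
        pkt = bytes(len(symbols[0]))
        for j, c in enumerate(row):
            if c:
                pkt = xor(pkt, symbols[j])
        coeffs.append(row)
        packets.append(pkt)
    return coeffs, packets

def decode(coeffs, packets):
    """Forward substitution: strip already-decoded symbols from each packet."""
    decoded = []
    for i, (row, pkt) in enumerate(zip(coeffs, packets)):
        for j in range(i):
            if row[j]:
                pkt = xor(pkt, decoded[j])
        decoded.append(pkt)          # diagonal coefficient is 1
    return decoded

data = [b"sym0", b"sym1", b"sym2", b"sym3"]
c, p = encode(data)
print(decode(c, p) == data)   # True, and decoding works packet by packet
```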

    ANCHOR: logically-centralized security for Software-Defined Networks

    While the centralization of SDN brought advantages such as a faster pace of innovation, it also disrupted some of the natural defenses of traditional architectures against different threats. The literature on SDN has mostly been concerned with the functional side, despite some specific works concerning non-functional properties like 'security' or 'dependability'. Though addressing the latter in an ad hoc, piecemeal way may work, it will most likely lead to efficiency and effectiveness problems. We claim that the enforcement of non-functional properties as a pillar of SDN robustness calls for a systemic approach. As a general concept, we propose ANCHOR, a subsystem architecture that promotes the logical centralization of non-functional properties. To show the effectiveness of the concept, we focus on 'security' in this paper: we identify the current security gaps in SDNs and we populate the architecture middleware with the appropriate security mechanisms, in a global and consistent manner. Essential security mechanisms provided by ANCHOR include reliable entropy and resilient pseudo-random generators, and protocols for secure registration and association of SDN devices. We claim and justify in the paper that centralizing such mechanisms is key to their effectiveness, by allowing us to: define and enforce global policies for those properties; reduce the complexity of controllers and forwarding devices; ensure higher levels of robustness for critical services; foster interoperability of the non-functional property enforcement mechanisms; and promote the security and resilience of the architecture itself. We discuss design and implementation aspects, and we prove and evaluate our algorithms and mechanisms, including the formalisation of the main protocols and the verification of their core security properties using the Tamarin prover. Comment: 42 pages, 4 figures, 3 tables, 5 algorithms, 139 references.
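    As an illustration of the kind of secure device association a logically centralized anchor could mediate, the sketch below shows a simple HMAC-based challenge-response registration; the message format and key-provisioning step are assumptions and not ANCHOR's actual protocol.

```python
# Hedged sketch of a challenge-response device association; this is an
# illustrative assumption, not ANCHOR's formalised protocol.

import hmac, hashlib, secrets

def anchor_challenge():
    """The anchor issues a fresh random nonce to a registering device."""
    return secrets.token_bytes(32)

def device_response(shared_key, device_id, nonce):
    """The device proves possession of its pre-shared key."""
    return hmac.new(shared_key, device_id + nonce, hashlib.sha256).digest()

def anchor_verify(shared_key, device_id, nonce, response):
    expected = hmac.new(shared_key, device_id + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = secrets.token_bytes(32)           # provisioned out of band (assumption)
nonce = anchor_challenge()
resp = device_response(key, b"switch-42", nonce)
print(anchor_verify(key, b"switch-42", nonce, resp))   # True
```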

    Congestion avoidance in overlay networks through multipath routing

    Overlay networks relying on traditional multicast routing approaches use only a single path between a sender and a receiver. This path is selected based on latency, with the goal of achieving fast delivery. Content is routed through links with low latency, ignoring slower links of the network, which remain unused. With the increasing size of content on the Internet, this leads to congestion: messages are dropped and have to be retransmitted. A multicast multipath congestion-avoidance routing scheme that uses multiple bottleneck-disjoint paths between senders and receivers was developed, together with a linear programming model of the network that distributes messages intelligently across these paths according to two goals: minimum network usage and load balancing. The former aims to use as few links as possible to perform routing, while the latter spreads messages across as many links as possible, evenly distributing the traffic. Another technique, called message splitting, was also used. This allows nodes to send a single copy of a message with multiple receivers, which is then duplicated by a node closer to the receivers and sent along separate paths only when required. The model considers all of the messages in the network and is a global optimisation. Nevertheless, it can be solved quickly for large networks and workloads, with the cost of routing remaining almost entirely the cost of finding multiple paths between senders and receivers. The Gurobi linear programming solver was used to find solutions to the model. This routing approach was implemented in the NS-3 network simulator. The work is presented as a messaging middleware scheme, which can be applied to any overlay messaging network.
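    The load-balancing objective can be illustrated with a small linear program that splits one sender's traffic across bottleneck-disjoint paths so that the worst path utilisation is minimised. The thesis solves a full network-wide model with Gurobi; the three-path instance and SciPy solver below are illustrative assumptions.

```python
# Hedged sketch of the load-balancing objective as a small linear program;
# this is not the thesis's full global model.

from scipy.optimize import linprog

demand = 80.0                         # Mbit/s to route from sender to receivers
path_capacity = [100.0, 60.0, 40.0]   # bottleneck capacity of each disjoint path

n = len(path_capacity)
# Variables: x_0..x_{n-1} = traffic fraction per path, plus t = max utilisation.
c = [0.0] * n + [1.0]                       # minimise t
A_ub = [[demand / path_capacity[i] if j == i else 0.0 for j in range(n)] + [-1.0]
        for i in range(n)]                  # demand * x_i / cap_i <= t
b_ub = [0.0] * n
A_eq = [[1.0] * n + [0.0]]                  # fractions sum to one
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + 1))
print(res.x[:n], "max utilisation:", res.x[n])   # splits 0.5 / 0.3 / 0.2
```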