
    A novel load-balancing scheme for cellular-WLAN heterogeneous systems with cell-breathing technique

    This paper proposes a novel load-balancing scheme for an operator-deployed cellular-wireless local area network (WLAN) heterogeneous network (HetNet), where user association is controlled by applying a cell-breathing technique to the WLAN network. The scheme eliminates complex coordination and additional signaling overheads between the users and the network by allowing the users to simply associate with the available WLAN networks, as in the traditional WLAN-first association, without making complex association decisions. Thus, the scheme can be easily implemented in an existing operator-deployed cellular-WLAN HetNet. Its performance is evaluated in terms of load distribution between the cellular and WLAN networks, user fairness, and system throughput; the results demonstrate the superiority of the proposed scheme in load distribution and user fairness while optimizing the system throughput. In addition, a cellular-WLAN interworking architecture and signaling procedures are proposed for implementing the proposed load-balancing scheme in an operator-deployed cellular-WLAN HetNet.
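    The abstract does not give the exact control rule, so the sketch below is only a minimal illustration of the cell-breathing idea, with thresholds, step size, and power bounds chosen as assumptions: the WLAN access point lowers its transmit power when overloaded, so that edge users fall back to the cellular network, and raises it again when underloaded.

```python
# Hedged sketch of a cell-breathing control loop for a WLAN access point.
# Thresholds, step size, and power bounds are illustrative assumptions,
# not values from the paper.

def adjust_tx_power(power_dbm: float, load_ratio: float,
                    high_load: float = 0.8, low_load: float = 0.4,
                    step_db: float = 1.0,
                    min_dbm: float = 5.0, max_dbm: float = 20.0) -> float:
    """Shrink WLAN coverage when the AP is overloaded, grow it when underloaded.

    load_ratio is the fraction of AP capacity in use (0.0-1.0).  Edge users
    re-associate with the cellular network when the WLAN cell "breathes in",
    which balances load without any per-user signaling.
    """
    if load_ratio > high_load:
        power_dbm -= step_db      # breathe in: offload edge users to cellular
    elif load_ratio < low_load:
        power_dbm += step_db      # breathe out: attract more users to WLAN
    return max(min_dbm, min(max_dbm, power_dbm))

# Example: an AP at 15 dBm running at 90% load reduces its power by 1 dB.
print(adjust_tx_power(15.0, 0.9))  # -> 14.0
```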

    Traffic engineering in ambient networks: challenges and approaches

    The focus of this paper is on traffic engineering in ambient networks. We describe and categorize different alternatives for making routing more adaptive to the current traffic situation and discuss the challenges that ambient networks pose for traffic engineering methods. One of the main objectives of traffic engineering is to avoid congestion by controlling and optimising the routing function, or in short, to put the traffic where the capacity is. The main challenge for traffic engineering in ambient networks is to cope with the dynamics of both topology and traffic demands. Mechanisms are needed that can handle traffic load dynamics in scenarios with sudden changes in traffic demand and dynamically distribute traffic to benefit from the available resources. The trade-offs between optimality, stability, and signaling overhead that are important for traffic engineering methods in the fixed Internet become even more critical in a dynamic ambient environment.

    Network overload avoidance by traffic engineering and content caching

    The Internet traffic volume continues to grow at a great rate, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching. The thesis studies access patterns for TV and video and the potential for caching. The investigation is done both by simulation and by analysis of logs from a large TV-on-Demand system over four months. The results show that a small set of programs accounts for a large fraction of the requests and that a comparatively small local cache can significantly reduce peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system depends strongly on the content type. For traffic engineering, the objective is to avoid congestion in the network and to make better use of the available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands. This thesis proposes L-balanced routing, which routes traffic on the shortest paths possible while ensuring that no link is utilised above a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS, and show that the search and the resulting weight settings work well in real network scenarios.
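    The thesis's weight-search heuristic for OSPF/IS-IS is not reproduced here; the sketch below only illustrates the L-balanced constraint under assumed inputs (a networkx graph with 'capacity' and 'weight' edge attributes, and a greedy placement of demands): each demand is routed on the shortest path whose links all stay at or below the utilisation level L.

```python
# Hedged sketch of L-balanced routing (illustrative only, not the thesis's
# algorithm): place demands greedily on shortest paths while keeping every
# link's utilisation at or below the level L.
import networkx as nx

def l_balanced_route(G: nx.DiGraph, demands, L: float = 0.7):
    """demands: list of (src, dst, volume); edges need 'capacity' and 'weight'."""
    load = {e: 0.0 for e in G.edges}
    routing = {}
    for src, dst, vol in demands:
        # Keep only links that can absorb the demand without exceeding L.
        feasible = [(u, v) for u, v in G.edges
                    if load[(u, v)] + vol <= L * G[u][v]["capacity"]]
        H = G.edge_subgraph(feasible)
        try:
            path = nx.shortest_path(H, src, dst, weight="weight")
        except (nx.NetworkXNoPath, nx.NodeNotFound):
            routing[(src, dst)] = None      # no L-feasible path: demand blocked
            continue
        for u, v in zip(path, path[1:]):
            load[(u, v)] += vol
        routing[(src, dst)] = path
    return routing, load
```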

    A Novel Placement Algorithm for the Controllers of the Virtual Networks (COVN) in SD-WAN with Multiple VNs

    The escalation of communication demands and the emergence of new telecommunication concepts such as 5G cellular systems and smart cities require the consolidation of a flexible and manageable backbone network. These requirements motivated the design of a new placement algorithm for the Controllers of Virtual Networks (COVN), because SDN and network virtualisation techniques (NFV and NV) are integrated to produce multiple virtual networks running on a single SD-WAN infrastructure, which serves as the new backbone. One of the significant challenges of SD-WAN is determining the number and locations of its controllers so as to optimise network latency and reliability. This problem has been investigated and addressed by several controller placement algorithms, but their focus is only on physical controllers. The advent of sliced SD-WAN introduces a new challenge: the SD-WAN controllers (physical controllers/hosted servers) must run multiple controller instances (virtual controllers), with every virtual network managed by its own virtual controllers. This calls for an algorithm that determines the number and positions of the physical and virtual controllers of multiple virtual SD-WANs. According to the literature review and to the best of the author's knowledge, this problem has neither been examined nor solved. To address it, a novel COVN placement algorithm is designed that first computes the placement of the physical controllers and then calculates the controller placement of every virtual SD-WAN independently, taking the placements of the other virtual SD-WANs into consideration. Unlike previous placement algorithms, COVN placement does not partition the SD-WAN when placing the physical controllers; instead, it identifies the nodes with optimal reliability and latency to all switches of the network, and then partitions every VN separately to create its independent controller placement. COVN placement optimises reliability and latency according to the desired weights, maintains load balancing and optimal resource utilisation, and supports recovery from controller failure. The algorithm is evaluated intensively using the purpose-built COVN simulator and the Mininet emulator. The results indicate that COVN placement achieves the required optimisations and can compute the controller placement for a large network (754 switches) in a very small computation time (49.53 s). Compared with the POCO algorithm, COVN placement provides about 30.76% better reliability at the cost of about 1.38% higher latency, and it further surpasses POCO by constructing clusters balanced according to switch loads and by offering a more efficient placement for recovering from controller failure.
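    As a rough illustration of the node-scoring step described above (not the COVN algorithm itself), the sketch below ranks candidate controller locations by a weighted combination of a latency proxy and a reliability proxy; the weights, the proxies, and all names are assumptions.

```python
# Hedged sketch: score every node as a controller candidate by weighted
# latency and reliability proxies and return the k best.  The proxies and
# weights are illustrative assumptions, not those used by COVN.
import networkx as nx

def rank_controller_candidates(G: nx.Graph, w_latency: float = 0.5,
                               w_reliability: float = 0.5, k: int = 3):
    """Latency proxy: average hop distance from the node to all switches.
    Reliability proxy: normalised node degree (better-connected nodes tolerate
    more link failures).  A lower score is better."""
    n = G.number_of_nodes()
    max_degree = max(d for _, d in G.degree())
    scores = {}
    for node in G.nodes:
        dists = nx.single_source_shortest_path_length(G, node)
        avg_latency = sum(dists.values()) / (n - 1)
        reliability = G.degree(node) / max_degree
        scores[node] = w_latency * avg_latency - w_reliability * reliability
    return sorted(scores, key=scores.get)[:k]
```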

    Simple and stable dynamic traffic engineering for provider scale Ethernet

    Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering. The high speeds and decreasing costs of Ethernet solutions have motivated providers' interest in using Ethernet as the link-layer technology in their backbone and aggregation networks. Provider-scale Ethernet offers further advantages, providing not only an easy-to-manage solution for multicast traffic but also transparent interconnection between clients' LANs. These Ethernet deployments face altogether different design issues, requiring support for a significantly higher number of hosts. This support relies on hierarchization, separating the address and virtual-network spaces of customers and providers. In addition, large-scale Ethernet solutions need to guarantee forwarding optimality, which can be achieved using traffic engineering approaches. Traffic engineering defines the set of engineering methods and techniques used to optimize the flow of network traffic. Static traffic engineering approaches enjoy widespread use in provider networks, but their performance is greatly penalized by sudden load variations. Dynamic traffic engineering, on the other hand, is tailored to adapt to load changes; however, providers are skeptical of adopting dynamic approaches, as these induce problems such as routing instability and, as a result, decreased network performance. This dissertation presents a Simple and Stable Dynamic Traffic Engineering framework (SSD-TE), which addresses these concerns in a provider-scale Ethernet scenario. The validation results show that SSD-TE achieves performance better than or equal to that of static traffic engineering approaches, while remaining both stable and responsive to load variations.

    TCP flow aware adaptive path switching in DiffServ enabled MPLS networks

    We propose an adaptive flow-level multi-path routing-based traffic engineering solution for an IP backbone network carrying TCP/IP traffic. Incoming TCP flows are switched between two explicitly routed paths, namely the primary and secondary paths (PP and SP), for resilience and potential goodput improvement at the TCP layer. In the proposed architecture, PPs receive preferential treatment over SPs using differentiated services mechanisms. The reason for this choice is not service differentiation but coping with the detrimental knock-on effect stemming from the use of the longer SP, an effect well known in conventional network load-balancing algorithms. Moreover, both paths are congestion-controlled using Explicit Congestion Notification marking at the core and Additive Increase Multiplicative Decrease rate adjustment at the ingress nodes. The delay difference between the PP and SP is estimated using two per-egress rate-controlling buffers maintained at the ingress nodes, one for each path, and this delay difference is used to determine the path over which a new TCP flow will be routed. We perform extensive simulations using ns-2 to demonstrate the viability of the proposed distributed adaptive multi-path routing method in terms of per-flow TCP goodput. The proposed solution consistently outperforms the single-path routing policy and provides substantial per-flow goodput gains under poor PP conditions. Moreover, the highest goodput improvements under the proposed scheme are achieved by flows that receive the lowest goodputs with single-path routing, while the performance of flows with high goodputs under single-path routing does not deteriorate with the proposed path-switching technique. Copyright © 2011 John Wiley & Sons, Ltd.
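    A minimal sketch of the ingress-side path-selection rule described above, with assumed parameters (the fixed SP penalty and the buffer-based delay estimate are illustrative, not taken from the paper): a new flow is routed over the secondary path only if its estimated queueing delay, plus a penalty for the longer route, is still below that of the primary path.

```python
# Hedged sketch of per-flow path selection at an ingress node.  The delay
# estimate and the SP penalty are illustrative assumptions.

def queue_delay_s(backlog_bytes: float, drain_rate_bps: float) -> float:
    """Approximate queueing delay of a rate-controlled buffer, in seconds."""
    return 8.0 * backlog_bytes / drain_rate_bps

def choose_path_for_new_flow(pp_backlog: float, pp_rate: float,
                             sp_backlog: float, sp_rate: float,
                             sp_penalty_s: float = 0.005) -> str:
    """Prefer the primary path (PP) unless the secondary path (SP) is faster
    even after a fixed penalty for its longer route (the knock-on effect
    mentioned in the abstract)."""
    pp_delay = queue_delay_s(pp_backlog, pp_rate)
    sp_delay = queue_delay_s(sp_backlog, sp_rate) + sp_penalty_s
    return "SP" if sp_delay < pp_delay else "PP"

# Example: a congested PP buffer pushes the new flow onto the SP.
print(choose_path_for_new_flow(pp_backlog=500_000, pp_rate=10e6,
                               sp_backlog=50_000, sp_rate=10e6))   # -> SP
```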

    A traffic engineering system for DiffServ/MPLS networks

    This thesis presents an approach to traffic engineering that uses DiffServ and MPLS technologies to provide QoS guarantees over an IP network. The specific problem addressed is how best to route traffic within the network such that the demands can be carried with the requisite QoS while balancing the load on the network. A traffic engineering algorithm that determines QoS-guaranteed label-switched paths (LSPs) between specified ingress-egress pairs is proposed, and a system that uses such an algorithm is outlined. The algorithm solves the QoS routing problem of finding a path subject to a number of constraints (delay, jitter, loss) while trying to make the best use of network resources. The key component of the system is a central resource manager responsible for monitoring and managing resources within the network and for making all decisions to route traffic according to QoS requirements. The algorithm for determining QoS-constrained routes is based on the notion of effective bandwidth and on cost functions for load balancing. A network simulation of the proposed system is presented and the simulation results are discussed.
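    A hedged sketch of the constrained route selection idea (the cost function, the attribute names, and the use of networkx are assumptions rather than the thesis's algorithm): links without enough residual bandwidth for the demand's effective bandwidth are pruned, the remaining links are priced with a convex load-balancing cost, and the cheapest path that also meets the delay bound is returned.

```python
# Hedged sketch of QoS-constrained LSP selection.  Edge attributes 'used',
# 'capacity', and 'delay' and the cost function are illustrative assumptions.
import networkx as nx

def link_cost(used_bw: float, capacity: float) -> float:
    """Convex load-balancing cost: cheap when lightly loaded, steep near saturation."""
    utilisation = min(used_bw / capacity, 0.999)
    return 1.0 / (1.0 - utilisation)

def find_qos_lsp(G: nx.DiGraph, src, dst, demand_bw: float, max_delay: float):
    """Return the cheapest path with enough residual bandwidth that meets the
    end-to-end delay bound, or None if no such path exists."""
    feasible = [(u, v) for u, v, d in G.edges(data=True)
                if d["used"] + demand_bw <= d["capacity"]]
    H = G.edge_subgraph(feasible).copy()
    if src not in H or dst not in H:
        return None
    for u, v, d in H.edges(data=True):
        d["cost"] = link_cost(d["used"] + demand_bw, d["capacity"])
    try:
        # Candidate paths in order of increasing cost; accept the first one
        # that also satisfies the delay constraint.
        for path in nx.shortest_simple_paths(H, src, dst, weight="cost"):
            if sum(H[u][v]["delay"] for u, v in zip(path, path[1:])) <= max_delay:
                return path
    except nx.NetworkXNoPath:
        pass
    return None
```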

    Auto-bandwidth control in dynamically reconfigured hybrid-SDN MPLS networks

    This work is motivated by the steady evolution of bandwidth-demanding technology, which requires operators, now and even more so in the future, to use expensive infrastructure smartly in order to maximise its utilisation in a very competitive environment. In this thesis, a traffic engineering control loop is proposed that dynamically adjusts the bandwidth and routes of Multi-Protocol Label Switching (MPLS) tunnels in response to changes in traffic demand. Available bandwidth is shifted to where the demand is, and where the demand has dropped, unused allocated bandwidth is returned to the network. An MPLS network enhanced with Software-Defined Networking (SDN) features is implemented. This technology, known as hybrid SDN, combines the programmability features of SDN with the robust MPLS label-switched-path features and with the traffic engineering enhancements introduced by routing protocols such as Border Gateway Protocol-Traffic Engineering (BGP-TE) and Open Shortest Path First-Traffic Engineering (OSPF-TE). The implemented mixed-integer linear programming formulation, which uses minimisation of maximum link utilisation and minimum link cost as objective functions, combined with the programmability of the hybrid SDN network, accommodates source-to-destination demand fluctuations. A key driver of this research is the programmability of the MPLS network, enhanced by the contributions of SDN controller technology. The centralised view of the network provides the network state information needed to drive the mathematical modelling of the network. The path computation element further enables control of the label-switched paths' bandwidths, which are adjusted based on the current demand and the optimisation method used. The hose model is used to specify a range of traffic conditions; its most important benefit is the flexibility it allows in how the traffic matrix can change, provided the aggregate traffic demand does not exceed the hose maximum bandwidth specification. To this end, reserved hose bandwidth can be released to the core network to service demands from other sites.
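    A compact sketch of the min-max link utilisation objective mentioned above, written as a path-based linear program over precomputed candidate paths (the use of PuLP and all names are assumptions; the thesis's MILP and its hose-model constraints are richer than this).

```python
# Hedged sketch: split each demand over its candidate paths so that the worst
# link utilisation is minimised.  A simplified LP, not the thesis's MILP.
from pulp import LpMinimize, LpProblem, LpVariable, lpSum, value

def min_max_utilisation(demands, candidate_paths, capacity):
    """demands: {name: volume}; candidate_paths: {name: [list of links]};
    capacity: {link: capacity}.  Returns the optimal worst-case utilisation
    and the traffic split over the candidate paths."""
    prob = LpProblem("auto_bandwidth", LpMinimize)
    u_max = LpVariable("u_max", lowBound=0)
    x = {(d, i): LpVariable(f"x_{d}_{i}", lowBound=0, upBound=1)
         for d in demands for i in range(len(candidate_paths[d]))}
    prob += u_max                                     # objective: worst utilisation
    for d in demands:                                 # every demand fully routed
        prob += lpSum(x[d, i] for i in range(len(candidate_paths[d]))) == 1
    for link, cap in capacity.items():                # load <= u_max * capacity
        prob += lpSum(demands[d] * x[d, i]
                      for d in demands
                      for i, p in enumerate(candidate_paths[d]) if link in p) <= u_max * cap
    prob.solve()
    return value(u_max), {k: v.value() for k, v in x.items()}
```

    Re-running such an optimisation periodically, with demands taken from the controller's centralised network view, is one plausible way to realise the control loop the abstract describes.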