
    Evaluating several implementations for the AS Minimum Bandwidth Egress Link Scheduler

    Abstract. The provision of QoS is taken into account in the definition of new network technologies such as Advanced Switching (AS). AS is a new fabric-interconnect technology that further enhances the capabilities of PCI Express, the next PCI generation. In this paper we discuss the aspects that must be considered when implementing a specific mechanism for the AS minimum bandwidth egress link scheduler, or just MinBW scheduler. We also propose several implementations for this scheduler, analyze their computational complexity, and compare their performance by simulation. The main aspect that differentiates AS from other interconnection technologies, and that must be taken into account when implementing the MinBW scheduler, is that both the link-level flow control and the scheduling are performed at the Virtual Channel (VC) level. This means that the scheduler must be able to enable or disable the selection of a given VC based on the flow control information.
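    As an illustration of the constraint described above, the following minimal sketch (not the AS-specified mechanism) shows a deficit-round-robin-like egress arbiter in which a VC may be selected only when it is both backlogged and holds link-level flow-control credits; the class layout, credit model and quanta are assumptions made for the example.

```python
from collections import deque

class VirtualChannel:
    def __init__(self, quantum):
        self.queue = deque()     # queued packet sizes for this VC, in bytes
        self.quantum = quantum   # bytes of credit added per round (sets the minimum bandwidth share)
        self.deficit = 0         # accumulated transmission credit, in bytes
        self.fc_credits = 0      # link-level flow-control credits advertised by the receiver

    def eligible(self):
        # A VC may be selected only if it is backlogged AND the receiver has
        # granted enough flow-control credit for its head-of-line packet.
        return bool(self.queue) and self.fc_credits >= self.queue[0]

def minbw_like_round(vcs):
    """One round of a DRR-like egress arbiter gated by per-VC flow control.

    Returns a list of (vc_index, packet_size) in transmission order.
    """
    sent = []
    for i, vc in enumerate(vcs):
        if not vc.eligible():
            continue              # masked out: empty or credit-starved VCs are skipped
        vc.deficit += vc.quantum
        while vc.queue and vc.deficit >= vc.queue[0] and vc.fc_credits >= vc.queue[0]:
            pkt = vc.queue.popleft()
            vc.deficit -= pkt
            vc.fc_credits -= pkt  # credits are consumed as the packet leaves the egress port
            sent.append((i, pkt))
        if not vc.queue:
            vc.deficit = 0        # an idle VC must not hoard credit
    return sent
```

    Masking credit-starved VCs out of the arbitration, rather than letting them block the round, is what lets the egress link keep serving the remaining VCs.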

    Improving the Flexibility of the Deficit Table Scheduler

    Abstract. A key component for networks with Quality of Service (QoS) support is the egress link scheduler. Table-based schedulers are simple to implement and can offer good latency bounds. Some of the latest network technology proposals, such as Advanced Switching and InfiniBand, define one of these schedulers in their specifications. However, these schedulers do not work properly with variable packet sizes and face the problem of coupled bandwidth and latency assignments. We have proposed a new table-based scheduler, the Deficit Table (DTable) scheduler, that works properly with variable packet sizes. Moreover, we have proposed a methodology to configure this table-based scheduler that partially decouples the bandwidth and latency assignments. In this paper we propose a method to improve the flexibility of the decoupling methodology. Moreover, we compare the latency performance of this strategy with two well-known scheduling algorithms: the Self-Clocked Weighted Fair Queuing (SCFQ) and Deficit Round Robin (DRR) algorithms.
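    The table-based idea can be conveyed with a minimal sketch (a simplification, not the published DTable algorithm): each table entry names a flow and carries a byte quantum, and unused service accumulates as a deficit so that variable-size packets are handled correctly.

```python
from collections import deque

def table_scheduler_pass(table, queues, deficits):
    """One cycle through an arbitration table.

    table    : list of (flow_id, entry_quantum_bytes) pairs; how often a flow
               appears shapes its latency, the quanta shape its bandwidth.
    queues   : dict flow_id -> deque of packet sizes (bytes)
    deficits : dict flow_id -> accumulated unused credit (bytes)
    Returns the packets sent as (flow_id, packet_size) tuples.
    """
    sent = []
    for flow, quantum in table:
        deficits[flow] += quantum
        q = queues[flow]
        while q and deficits[flow] >= q[0]:
            pkt = q.popleft()
            deficits[flow] -= pkt
            sent.append((flow, pkt))
        if not q:
            deficits[flow] = 0   # do not accumulate credit while idle
    return sent
```

    Under this arrangement the number of table entries assigned to a flow constrains its latency (the distance between consecutive services), while the quanta attached to those entries set its bandwidth share; configuring the two aspects separately is what the decoupling methodology referred to above exploits.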

    Performance Evaluation of MPLS in a Virtualized Service Provider Core (with/without Class of Service)

    The last decade has witnessed a major change in the types of traffic traversing the Internet. With the development of real-time applications, traditional IP networks face several challenges, including delay, increased costs for the service provider and the customer, limited scalability, separate infrastructure costs, and high administrative overheads to manage large networks. To combat these challenges, researchers have steered towards alternative solutions. Over recent years, a number of virtualized platforms and solutions have been introduced in the networking industry; virtual load balancers, virtual firewalls, virtual routers, and virtual intrusion detection and prevention systems are just a few examples from the Network Function Virtualization world. Service providers are trying to find solutions that reduce operational expenses while meeting the growing bandwidth demands of their customers. The main aim of this thesis is to evaluate the performance of voice, data and video traffic in a virtualized service provider core. Observations are made on how these traffic types perform on congested versus uncongested links and how Quality of Service treats traffic in a virtualized service provider core, using Round Trip Time as the performance metric. The thesis also examines whether resiliency features such as Fast Reroute provide an additional advantage in failover scenarios within virtualized service provider cores. Juniper Networks vSRX instances are used as the virtual routers of the virtualized service provider core. Twenty-four tests are carried out to gain a better understanding of how real-time applications and resiliency methods perform in virtualized networks. It is observed that a trade-off exists when introducing QoS on congested primary and secondary label switched paths: with Quality of Service enabled, more packets are dropped, but in-profile traffic benefits from lower Round Trip Times; with Quality of Service disabled, more traffic is admitted, but bandwidth contention between the three traffic classes leads to higher Round Trip Times. The true benefit of QoS is seen in traffic congestion scenarios. The test bed built in this thesis shows that Fast Reroute does not add a significant benefit in reducing packet loss during failover between primary and secondary paths, although in certain scenarios it does seem to reduce packet loss, specifically for data traffic.
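    The QoS trade-off summarized above can be illustrated with a toy model (a hedged sketch, not a reproduction of the thesis measurements; the arrival probabilities, policer depth and buffer depth are arbitrary assumptions): a single congested egress link served slot by slot, once with a strict-priority, policed queue for voice and once with a single shared FIFO.

```python
import random
from collections import deque

def simulate(qos_enabled, slots=20000, seed=1):
    """Toy slot-based model of one congested egress link (one packet served per slot).

    'voice', 'video' and 'data' packets arrive with the probabilities below
    (offered load > 1, so the link is congested). With QoS on, voice is
    policed into a short strict-priority queue; everything else shares a
    tail-drop FIFO. With QoS off, all classes share the FIFO.
    Returns per-class (mean queueing delay in slots, drops).
    """
    rng = random.Random(seed)
    arrival_p = {"voice": 0.2, "video": 0.4, "data": 0.5}   # sums to 1.1 -> persistent congestion
    prio_depth, fifo_depth = 8, 200                         # assumed policer and buffer depths
    prio, fifo = deque(), deque()
    delay = {c: 0 for c in arrival_p}
    served = {c: 0 for c in arrival_p}
    drops = {c: 0 for c in arrival_p}

    for t in range(slots):
        for cls, p in arrival_p.items():
            if rng.random() >= p:
                continue
            if qos_enabled and cls == "voice":
                q, depth = prio, prio_depth
            else:
                q, depth = fifo, fifo_depth
            if len(q) < depth:
                q.append((t, cls))
            else:
                drops[cls] += 1
        for q in (prio, fifo):          # strict priority: serve the priority queue first
            if q:
                t0, cls = q.popleft()
                delay[cls] += t - t0
                served[cls] += 1
                break

    return {c: (delay[c] / max(served[c], 1), drops[c]) for c in arrival_p}

# Comparing simulate(True) and simulate(False) shows the qualitative trade-off:
# with QoS on, voice sees a short, bounded delay while the best-effort classes
# absorb the congestion (long queueing delay and tail drops); with QoS off,
# all three classes share the FIFO's delay and loss.
```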

    Providing guaranteed QoS in the hose-modeled VPN

    With the development of the Internet, Internet service providers (ISPs) are required to offer revenue-generating and value-added services instead of only providing bandwidth and access services. The Virtual Private Network (VPN) is one of the most important value-added services for ISPs. The classical VPN service is provided by implementing layer 2 technologies, either Frame Relay (FR) or Asynchronous Transfer Mode (ATM). With FR or ATM, virtual circuits are created before data delivery. Since bandwidth and buffers are reserved, the QoS requirements can be naturally guaranteed. In the past few years, layer 3 VPN technologies have been widely deployed due to their desirable flexibility, scalability and simplicity. Layer 3 VPNs are built upon IP tunnels, e.g., by using PPTP, L2TP or IPSec. Since IP is best-effort in nature, QoS requirements cannot be guaranteed in layer 3 VPNs. In fact, the layer 3 VPN service can only provide secure connectivity, i.e., protecting and authenticating IP packets between gateways or hosts in a VPN. Without doubt, with more voice, audio and video applications being used on the Internet, the provision of QoS is one of the most important parts of the emerging services provided by ISPs. An intriguing question is: Is it possible to obtain the best of both layer 2 and layer 3 VPNs? Is it possible to provide guaranteed or predictable QoS, as in layer 2 VPNs, while maintaining the flexibility and simplicity of layer 3 VPNs? This question is the starting point of this study. The recently proposed hose model for VPNs possesses desirable properties in terms of flexibility, scalability and multiplexing gain. However, classic fair bandwidth allocation schemes and weighted fair queuing schemes suffer from low overall utilization in this model. A new fluid model for the provider-provisioned virtual private network (PPVPN) is proposed in this dissertation. Based on the proposed model, an idealized fluid bandwidth allocation scheme is developed. This scheme is proven, analytically, to have the following properties: 1) it maximizes the overall throughput of the VPN without compromising fairness; 2) it provides a mechanism that enables VPN customers to allocate bandwidth according to their requirements by assigning different weights to different hose flows, and thus obtain predictable QoS performance; and 3) it improves the overall throughput of the ISP's network. To approximate the idealized fluid scheme in the real world, the 2-dimensional deficit round robin (2-D DRR and 2-D DRR+) schemes are proposed. The integration of the proposed schemes with best-effort traffic within the framework of virtual-router-based VPNs is also investigated. The 2-D DRR and 2-D DRR+ schemes can be extended to multi-dimensional schemes for applications that require a hierarchical scheduling architecture. To enhance scalability, a more scalable non-per-flow-based scheme for output-queued switches is developed as well, and the integration of this scheme within the framework of MPLS VPNs and its application to multicast traffic is discussed. The performance and properties of these schemes are analyzed.
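    A minimal sketch of the two-dimensional idea (an illustration only, not the dissertation's exact 2-D DRR/2-D DRR+ schemes) is a two-level deficit round robin: an outer round over the VPNs (hoses) sharing an egress link and an inner round over each hose's flows, with each level carrying its own quantum and deficit.

```python
from collections import deque

class Flow:
    def __init__(self, quantum):
        self.queue = deque()     # packet sizes in bytes
        self.quantum = quantum   # per-flow weight within its VPN
        self.deficit = 0

class Vpn:
    def __init__(self, quantum, flows):
        self.quantum = quantum   # per-VPN (hose) weight on the shared link
        self.deficit = 0
        self.flows = flows

def two_level_drr_round(vpns):
    """One outer round: every backlogged VPN receives vpn.quantum bytes of service,
    which its backlogged flows share DRR-style according to their own quanta."""
    sent = []
    for vpn in vpns:
        if not any(f.queue for f in vpn.flows):
            vpn.deficit = 0
            continue
        vpn.deficit += vpn.quantum
        for f in vpn.flows:
            if not f.queue:
                f.deficit = 0
                continue
            f.deficit += f.quantum
            while f.queue and f.deficit >= f.queue[0] and vpn.deficit >= f.queue[0]:
                pkt = f.queue.popleft()
                f.deficit -= pkt
                vpn.deficit -= pkt
                sent.append(pkt)
    return sent
```

    The outer quanta divide the link among hoses (preserving the hose model's aggregate guarantees), while the inner quanta let each customer weight its own hose flows, which is the kind of predictable per-customer allocation the dissertation targets.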

    Modelling and simulation of network equipment for Industry 4.0

    Currently, the industrial sector is increasingly opting for digital technologies in order to automate all of its processes. This development stems from notions such as Industry 4.0, which redefines the way these systems are designed. Structurally, all the components of these systems are connected in a complex network known as the Industrial Internet of Things. Certain requirements regarding industrial communication networks arise from this concept; among them, the need to ensure real-time communications, as well as support for dynamic resource management, is extremely relevant. Several research lines have pursued the development of network technologies capable of meeting such requirements. One of these solutions is the Hard Real-Time Ethernet Switch (HaRTES), an Ethernet switch with support for real-time communications and dynamic resource management, requirements imposed by Industry 4.0. The process of designing and implementing industrial networks can, however, be quite time-consuming and costly. These aspects impose limitations on testing large networks, whose level of complexity is higher and which require more hardware. Network simulators help overcome such restrictions by providing tools that facilitate the development of new protocols and the evaluation of communication networks. In the scope of this dissertation, a HaRTES switch model was developed in the OMNeT++ simulation environment. In order to demonstrate a solution that can be employed in industrial real-time networks, this dissertation presents the fundamental aspects of the implemented model as well as a set of experiments that compare it with an existing laboratory prototype, with the objective of validating the implementation.
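    As a sketch of the kind of dynamic resource management such a switch must perform (a simplification under assumed parameters, not the HaRTES admission test), a new periodic real-time stream would be admitted on an egress port only if the port's total utilisation, candidate included, stays below a configured bound.

```python
def admit_stream(existing, candidate, link_bps, max_utilisation=0.75):
    """existing / candidate: (bytes_per_message, period_seconds) tuples.

    Accept the new real-time stream only if the egress port's total
    utilisation, including the candidate, stays under the configured bound.
    The 0.75 bound is an illustrative assumption, not a HaRTES parameter.
    """
    def util(stream):
        size, period = stream
        return (size * 8) / (link_bps * period)

    total = sum(util(s) for s in existing) + util(candidate)
    return total <= max_utilisation

# Example: one 1500-byte stream every 2 ms plus a candidate 1000-byte stream
# every 1 ms on a 100 Mb/s port -> utilisation 0.14, so the stream is admitted.
print(admit_stream([(1500, 0.002)], (1000, 0.001), link_bps=100e6))
```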

    A Study of Application-awareness in Software-defined Data Center Networks

    A data center (DC) has been a fundamental infrastructure for academia and industry for many years. Applications in DCs have diverse communication requirements, which places huge demands on data center network (DCN) control frameworks (CFs) for coordinating communication traffic. Simultaneously satisfying all demands is difficult and inefficient using existing traditional network devices and protocols. Recently, agile software-defined networking (SDN) has been introduced to DCNs to speed up the development of DCNCFs. Application-awareness preserves the application semantics, including the collective goals of communications. Previous works have illustrated that application-aware DCNCFs can allocate network resources much more efficiently by explicitly considering application needs. An application-aware software-defined DCNCF (SDDCNCF) operating at the level of transfer application tasks is designed for OpenFlow software-defined DCNs (SDDCNs) for big data exchange. The SDDCNCF achieves application-aware load balancing, short average transfer application task completion time, and high link utilization. It is immediately deployable on SDDCNs consisting of OpenFlow 1.3 switches. The Big Data Research Integration with Cyberinfrastructure for LSU (BIC-LSU) project adopts the SDDCNCF to construct a 40 Gb/s high-speed storage area network to efficiently transfer big data and accelerate big data related research at Louisiana State University. Building on the success of BIC-LSU, a coflow-level application-aware SDDCNCF for OpenFlow-based storage area networks, MinCOF, is designed. MinCOF incorporates all desirable features of existing coflow scheduling and routing frameworks and requires minimal changes on hosts. To avoid the architectural limitations of the OpenFlow SDN implementation, a coflow-level application-aware SDDCNCF using a fast packet processing library, Coflourish, is designed. Coflourish exploits congestion feedback assistance from SDN switches in the DCN to schedule coflows and can smoothly co-exist with arbitrary applications in a shared DCN. Coflourish is implemented using the fast packet processing library on an SDN switch, Open vSwitch with DPDK. Simulation and experiment results indicate that Coflourish effectively shortens average application completion time.
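    To illustrate what coflow-level awareness buys (a hedged sketch of a common heuristic, not MinCOF's or Coflourish's actual algorithms), the fragment below orders flows by their parent coflow using a smallest-remaining-coflow-first rule, so every flow inherits the priority of the coflow it belongs to rather than being scheduled in isolation.

```python
def coflow_priorities(coflows):
    """coflows: dict coflow_id -> list of remaining flow sizes (bytes).

    Returns a flow service order under a smallest-remaining-coflow-first
    heuristic: all flows of the coflow with the least remaining bytes are
    served first, which tends to shorten average coflow completion time.
    """
    order = sorted(coflows, key=lambda cid: sum(coflows[cid]))
    schedule = []
    for rank, cid in enumerate(order):
        for flow_size in coflows[cid]:
            schedule.append((rank, cid, flow_size))   # rank doubles as the priority class
    return schedule

# Example: the two flows of coflow "b" (600 bytes total) are scheduled ahead
# of the single 1000-byte flow of coflow "a".
print(coflow_priorities({"a": [1000], "b": [200, 400]}))
```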