    Performance Management in ATM Networks

    ATM is representative of the connection-oriented resource provisioning class of protocols. The ATM network is expected to provide end-to-end QoS guarantees to connections in the form of bounds on delays, errors and/or losses. Performance management involves measurement of QoS parameters and application of control measures (if required) to improve the QoS provided to connections or to improve the resource utilization at switches. QoS provisioning is very important for real-time connections, in which losses are irrecoverable and delays cause interruptions in service. The QoS of connections on a node is a direct function of the queueing and scheduling on the switch. Most scheduling architectures provide static allocation of resources (scheduling priority, maximum buffer) at connection setup time. End-to-end bounds are obtainable for some schedulers; however, these are precluded for heterogeneously composed networks. The resource allocation does not adapt to the QoS provided on connections in real time. In addition, mechanisms to measure the QoS of a connection in real time are scarce. In this thesis, a novel framework for performance management is proposed. It provides QoS guarantees to real-time connections. It comprises in-service QoS monitoring mechanisms, a hierarchical scheduling algorithm based on dynamic priorities that adapt to measurements, and methods to tune the schedulers at individual nodes based on the end-to-end measurements. Also, a novel scheduler is introduced for scheduling maximum-delay-sensitive traffic. The worst-case analysis for leaky-bucket-constrained traffic arrivals is presented for this scheduler. This scheduler is also implemented on a switch and its practical aspects are analyzed. In order to understand the implementability of complex scheduling mechanisms, a comprehensive survey of the state-of-the-art technology used in the industry is performed. The thesis also introduces a method of measuring the one-way delay and jitter of a connection using in-service monitoring by special cells.
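
The worst-case analysis mentioned above concerns leaky-bucket-constrained arrivals. As a minimal illustration of the kind of bound involved (assuming a generic rate-latency server model rather than the specific scheduler introduced in the thesis), the classic network-calculus result gives a worst-case delay of latency + burst / service rate:

```python
def worst_case_delay(sigma, rho, rate, latency):
    """Worst-case delay for (sigma, rho) leaky-bucket traffic served by a
    rate-latency server with service rate `rate` and latency `latency`.
    Classic network-calculus bound: D <= latency + sigma / rate,
    valid only when the flow is stable (rho <= rate)."""
    if rho > rate:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return latency + sigma / rate

# Example with made-up numbers: 2 kbit burst, 1 Mbit/s sustained rate,
# 10 Mbit/s service rate, 0.5 ms scheduler latency
print(worst_case_delay(sigma=2_000, rho=1_000_000,
                       rate=10_000_000, latency=0.0005))   # 0.0007 s
```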

    Real-Time Communication in Packet-Switched Networks

    The dramatically increased bandwidths and processing capabilities of future high-speed networks make possible many distributed real-time applications, such as sensor-based applications and multimedia services. Since these applications will have traffic characteristics and performance requirements that differ dramatically from those of current data-oriented applications, new communication network architectures and protocols will be required. In this paper we discuss the performance requirements and traffic characteristics of various real-time applications, survey recent developments in the areas of network architecture and protocols for supporting real-time services, and develop frameworks in which these, and future, research efforts can be considered.

    Real-time communications over switched Ethernet supporting dynamic QoS management

    PhD in Informatics Engineering (Doutoramento em Engenharia Informática). During the last decade we have witnessed a growing use of embedded systems to support process control, robotic systems, transportation systems and vehicles, and even home automation and household appliances. Many of these applications are critical with respect to the safety of people and property and require a high level of determinism regarding the instants at which their tasks execute. Furthermore, the deployment of these systems may be subject to structural constraints, demanding or benefiting from a distributed configuration with several spatially separated computing subsystems. These subsystems, although spatially separated, are cooperative and depend on a communication infrastructure to achieve the application goals; consequently, the transactions carried out over this infrastructure are also subject to the temporal constraints defined by the application. The applications that run on these distributed systems, called networked embedded systems (NES), can be highly complex and heterogeneous, involving different types of interactions with different requirements and properties. One example of this heterogeneity is the activation model of the communication between subsystems, which may be triggered periodically according to a global time base (time-triggered), as with the flows of distributed control systems, or triggered as a consequence of asynchronous application events (event-triggered). Regardless of the traffic characteristics or the activation model, it is extremely important that the communication platform provides guarantees that the application requirements are met while allowing a simple integration of the various traffic types. Another property that is emerging and gaining importance within NES is flexibility. This property is driven by the need to reduce the installation, maintenance and operation costs of these systems. To this end, the system is given the ability to adapt the service provided to the application to its instantaneous requirements, following the evolution of the system and providing a better and more rational use of the available resources. However, greater operational flexibility is also synonymous with greater complexity, derived from the need to perform dynamic resource allocation, which in turn consumes additional resources in the system. The possibility of dynamically modifying the characteristics of the system also brings greater complexity to the design and specification phase: increasing the number of supported degrees of freedom enlarges the state space of the system, making pre-analysis harder. To contain this increase in complexity, models are needed that represent the dynamics of the system and provide an optimized and fair management of the resources based on quality-of-service (QoS) parameters. It is our thesis that the properties of flexibility, timeliness and dynamic QoS management can be integrated in a switched Ethernet (SE) network, taking advantage of its low cost, high bandwidth and easy deployment.
This dissertation proposes a protocol, Flexible Time-Triggered communication over Switched Ethernet (FTT-SE), that supports the desired properties and overcomes the limitations of SE networks for real-time applications, such as the use of FIFO queues, the small number of priority levels and the limited capability for managing individual streams. The protocol is based on the FTT paradigm, which generically defines the architecture of a protocol stack on top of the medium access of a shared network, thereby imposing temporal determinism together with the capability for dynamic reconfiguration and adaptation of the network. Several models for distributing the network bandwidth according to the QoS level specified by each service using the network are also presented. This dissertation sets out the motivation for the creation of the FTT-SE protocol, presents a description of it and analyses some of its most relevant properties. QoS distribution models are also presented and compared. Finally, two application cases are presented that support the validity of the thesis stated above. During the last decade we have witnessed a massive deployment of embedded systems across a wide range of applications, from industrial automation to process control, avionics, cars or even robotics. Many of these applications have an inherently high level of criticality, having to perform tasks within tight temporal constraints. Additionally, the configuration of such systems is often distributed, with several computing nodes that rely on a communication infrastructure to cooperate and achieve the application's global goals. Therefore, the communications are also subject to the same temporal constraints set by the application requirements. Many applications relying on such networked embedded systems (NES) are complex and heterogeneous, comprising different activities with different requirements and properties. For example, the communication between subsystems may follow a strict temporal synchronization with respect to a global time base (time-triggered), as in a distributed feedback control loop, or it may be issued asynchronously upon the occurrence of events (event-triggered). Regardless of the traffic characteristics and activation model, it is of paramount importance to have a communication framework that provides seamless integration of heterogeneous traffic sources while guaranteeing the application requirements. Another property that has been emerging as important for NES design and operation is flexibility. The need to reduce installation and operational costs while facilitating maintenance is promoting a more rational use of the available resources at run time, exploring the ability to tune service parameters as the system evolves. However, such operational flexibility comes at the cost of increased system complexity to handle the dynamic resource management, which in turn demands the allocation of additional system resources. Moreover, the capacity to dynamically modify the system properties also causes higher complexity when designing and specifying the system, since the operational state space increases with the degrees of flexibility of the system. Therefore, in order to bound this complexity, appropriate operational models are needed to handle the system dynamics and carry out an efficient and fair resource management strategy based on quality-of-service (QoS) metrics.
This thesis states that the properties of flexibility and timeliness needed for dynamic QoS management can be provided to switched Ethernet based systems. Switched Ethernet, although initially designed for general-purpose Internet access and file transfers, is becoming widely used in NES-based applications. However, COTS switched Ethernet is insufficient with regard to real-time predictability and to supporting the aforementioned properties, due to the use of FIFO queues, too few priority levels and limited stream-level management capabilities. In this dissertation we propose a protocol to overcome those limitations, namely Flexible Time-Triggered communication over Switched Ethernet (FTT-SE). The protocol is based on the FTT paradigm, which generically defines a protocol architecture suitable for enforcing real-time determinism on a communication network while supporting the desired flexibility properties. This dissertation addresses the motivation for FTT-SE, describing the protocol as well as its schedulability analysis. It additionally covers the topic of resource distribution, where several distribution models are proposed to manage the resource capacity among the competing services while considering the QoS-level requirements of each service. A couple of application cases are shown that support the aforementioned thesis.
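
The abstract refers to models that distribute the network capacity among competing services according to their QoS levels. The sketch below is a purely illustrative, hypothetical allocation rule (minimum share plus a weighted split of the remainder, capped at each stream's maximum); it is not the distribution model defined by FTT-SE:

```python
def distribute_bandwidth(streams, capacity):
    """Toy QoS-driven bandwidth distribution (illustrative only, not FTT-SE's
    actual algorithm): each stream gets its minimum share, and the leftover
    capacity is split in proportion to QoS weights, capped at each maximum.
    `streams` is a list of dicts with 'min', 'max' and 'weight' keys."""
    alloc = {i: s["min"] for i, s in enumerate(streams)}
    leftover = capacity - sum(alloc.values())
    if leftover < 0:
        raise ValueError("minimum requirements exceed capacity")
    active = [i for i, s in enumerate(streams) if s["max"] > s["min"]]
    while leftover > 1e-9 and active:
        total_weight = sum(streams[i]["weight"] for i in active)
        for i in active:
            share = leftover * streams[i]["weight"] / total_weight
            alloc[i] = min(streams[i]["max"], alloc[i] + share)
        leftover = capacity - sum(alloc.values())
        active = [i for i in active if streams[i]["max"] - alloc[i] > 1e-9]
    return alloc

streams = [{"min": 1.0, "max": 5.0, "weight": 2},
           {"min": 2.0, "max": 3.0, "weight": 1}]
print(distribute_bandwidth(streams, capacity=6.0))   # {0: 3.0, 1: 3.0}
```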

    Resource management in in-home digital networks using Dantzig-Wolfe decomposition

    In an in-home digital network, the various digital consumer-electronics devices in the home, such as a set-top box, TV screen or hard disk, are connected to one another. This enables new applications, such as watching a movie anywhere in the house at any desired moment without knowing exactly where the movie is stored. These new applications, however, lead to new resource-management problems, whose goal is to use the resources, such as processors, storage devices and communication links, as efficiently and effectively as possible. In this thesis we consider a single bus (communication link) with limited bandwidth to which several devices are connected. Between each device and the bus there is a buffer of limited capacity. Furthermore, a set of video streams is given, where each stream must be transmitted over the bus from its sending device to its receiving device. For each stream we want to reserve a fixed share of the bandwidth and of the buffers involved. We distinguish between two types of streams, namely fully specified streams and leaky-bucket-regulated streams. For a fully specified stream we know exactly how much data is offered to and requested from the buffers of its sending and receiving device, and when. For a leaky-bucket-regulated stream we only know the parameters of the leaky buckets that regulate the data supply of the stream; with these parameters we can give an upper bound on the data supply during any possible time interval. First we define the Multiple Streams Smoothing Problem (MSSP). In an instance of MSSP, a set of fully specified streams is given, together with the bandwidth of the bus and the sizes of the various buffers. For each stream, a fixed share of the bandwidth and of the buffer sizes must be determined, as well as a transmission schedule with which all data of the stream can be sent on time. We model MSSP as a linear programming problem and show how Dantzig-Wolfe decomposition can be applied to it. This leads to a master problem and, for each stream, a subproblem. The subproblem for a stream consists of minimizing the cost of the reserved bandwidth and buffer sizes, where the cost coefficients follow from the optimized master problem. For every possible combination of positive cost coefficients we describe an efficient method for determining an optimal solution to this subproblem. For minimizing only the bandwidth, or only the buffer size of one of the two buffers, we adapt existing methods. For minimizing both buffer sizes we show that an optimal solution is obtained by first minimizing the more expensive buffer and then the cheaper one. For trading off the bandwidth against one buffer size we describe a specific exchange method. For trading off the bandwidth against both buffer sizes we first reduce the subproblem to finding the minimum of a piecewise-linear, convex function of the bandwidth, and then describe two efficient search methods to determine the minimum of this function together with the corresponding bandwidth and buffer sizes. Using experimental results, we give an indication of the computation time and of the utilization achieved by the resulting bandwidth and buffer reservations for problems of realistic size. For the leaky-bucket-regulated streams we define the Multiple Leaky-Bucket Streams Smoothing Problem (MLBSSP). In an instance of MLBSSP, a set of leaky-bucket-regulated streams is given, for which a fixed share of the bandwidth and the buffer sizes must be determined, as well as transmission strategies with which all data can be sent on time. We also model MLBSSP as a linear programming problem. Furthermore, we show that MLBSSP can be reduced to MSSP by using the upper bound on the data supply as the actual data supply of each stream. This upper bound has a few specific characteristics, namely concavity and piecewise linearity, which we use to solve the subproblems for leaky-bucket-regulated streams even more efficiently. To this end we derive four new necessary and sufficient conditions for the bandwidth and buffer reservations of a stream. Using these conditions, the time needed to solve a subproblem depends linearly on the number of leaky buckets instead of on the length of a stream, as it does for fully specified streams; a solution can now be determined within a fraction of a second. To carry out experiments with this method for MLBSSP, we generate several leaky-bucket descriptions for each fully specified stream used in the MSSP results. For streams described by their maximum required number of leaky buckets, the results of these experiments are equal to the results for the fully specified streams. Besides the off-line variants of MSSP and MLBSSP mentioned above, we also consider on-line variants of these problems. In the on-line variants, the start times of the streams are unknown and the characteristics of a stream only become known at the moment it wants to start. A solution to an on-line variant can be determined by using the method for the off-line problem to compute new bandwidth and buffer reservations each time a new stream starts. If the reservations of existing streams may then be adjusted, solving the subproblems for these streams must take into account the total amount of data that has already been transmitted. Further additions to the off-line method that we consider, and that can lead to a larger number of admitted streams, are objective functions such as minimizing the total reserved bandwidth or the buffer size of a specific buffer. We also show how the maximum relative resource reservation can be minimized. Finally, we describe an approach for transmitting the data of a stream in which data is only removed from the buffers when this is necessary to make room for newly supplied data. Numerical experiments show that several of these modifications can indeed lead to better results; in these experiments, the number of admitted streams for an on-line variant with certain additions is as high as for the off-line variant.
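
As noted above, the upper bound on the data supplied by a leaky-bucket-regulated stream is concave and piecewise linear. A minimal sketch of that envelope, assuming each bucket is described by a (burst, rate) pair, is:

```python
def arrival_envelope(buckets, t):
    """Upper bound on the amount of data a leaky-bucket-regulated stream can
    supply in any interval of length t. Each bucket is a (burst, rate) pair;
    the combined bound is the minimum of the individual lines, which is
    concave and piecewise linear in t."""
    return min(burst + rate * t for burst, rate in buckets)

# Example with illustrative values: one bucket limits the burst,
# the other the sustained rate (bytes and bytes/s)
buckets = [(500.0, 100.0), (2000.0, 20.0)]
print(arrival_envelope(buckets, 5.0))    # min(1000, 2100) = 1000 bytes
print(arrival_envelope(buckets, 60.0))   # min(6500, 3200) = 3200 bytes
```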

    Delay-aware Link Scheduling and Routing in Wireless Mesh Networks

    Resource allocation is a critical task in computer networks because of their capital-intensive nature. In this thesis we apply operations research tools and technologies to model, solve and analyze resource allocation problems in computer networks with real-time traffic. We first study Wireless Mesh Networks, addressing the problem of link scheduling with end-to-end delay constraints. Exploiting results obtained with the Network Calculus framework, we formulate the problem as an integer non-linear optimization problem. We show that the feasibility of a link schedule does depend on the aggregation framework. We also address the problem of jointly solving the routing and link scheduling problem optimally, taking end-to-end delay guarantees into account, and provide guidelines and heuristics. As a second contribution, we propose a time-division approach for CSMA MAC protocols in the context of 802.11 WLANs. By grouping wireless clients and scheduling time slots to these groups, not only can the delay of packet transmission be decreased, but the goodput of multiple WLANs can also be largely increased. Finally, we address a resource allocation problem in wired networks for guaranteed-delay traffic engineering. We formulate and solve the problem under different latency models. Global optimization lets feasible schedules be computed for instances where local resource allocation schemes would fail; we show that this is the case even with a case-study network, and at surprisingly low average loads.
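
For intuition on the delay constraints used in such formulations, the following sketch computes a standard network-calculus end-to-end bound for a leaky-bucket-constrained flow crossing a chain of rate-latency servers. It is only a simplified stand-in for the latency models actually analysed in the thesis, and the hop parameters are made-up values:

```python
def end_to_end_delay_bound(sigma, rho, hops):
    """End-to-end delay bound for a (sigma, rho)-constrained flow crossing a
    sequence of rate-latency servers, using the network-calculus concatenation
    result: D <= sigma / min(rates) + sum(latencies), valid when the flow is
    stable on every link. `hops` is a list of (rate, latency) pairs."""
    min_rate = min(rate for rate, _ in hops)
    if rho > min_rate:
        raise ValueError("flow is unstable on at least one link")
    return sigma / min_rate + sum(latency for _, latency in hops)

# Three-hop example: sigma in bits, rates in bit/s, latencies in seconds
print(end_to_end_delay_bound(sigma=4_000, rho=1e6,
                             hops=[(10e6, 0.001), (8e6, 0.002), (10e6, 0.0015)]))
# 4000/8e6 + 0.0045 = 0.005 s
```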

    Dynamic Resource Management of Network-on-Chip Platforms for Multi-stream Video Processing

    This thesis considers resource management in the context of parallel multiple video stream decoding on multicore/many-core platforms. Such platforms have tens or hundreds of on-chip processing elements which are connected via a Network-on-Chip (NoC). Inefficient task allocation configurations can negatively affect the communication cost and resource contention in the platform, leading to predictability and performance issues. Efficient resource management for large-scale complex workloads is considered a challenging research problem, especially when applications such as video streaming and decoding have dynamic and unpredictable workload characteristics. For these types of applications, runtime heuristic-based task mapping techniques are required. As the application and platform size increase, decentralised resource management techniques are more desirable to overcome the reliability and performance bottlenecks of centralised management. In this work, several heuristic-based runtime resource management techniques targeting real-time video decoding workloads are proposed. Firstly, two admission control approaches are proposed: one fully deterministic and highly predictable, the other heuristic-based, balancing predictability and performance. Secondly, a pair of runtime task mapping schemes is presented; these schemes make use of limited known application properties, communication cost and blocking-aware heuristics. Combined with the proposed deterministic admission controller, these techniques can provide strict timing guarantees for hard real-time streams whilst improving resource usage. The third contribution in this thesis is a distributed, bio-inspired, low-overhead task re-allocation technique, which is used to further improve the timeliness and workload distribution of admitted soft real-time streams. Finally, this thesis explores parallelisation and resource management issues surrounding soft real-time video streams that have been encoded using complex encoding tools and modern codecs such as High Efficiency Video Coding (HEVC). Properties of real streams and decoding trace data are analysed to statistically model and generate synthetic HEVC video decoding workloads. These workloads are shown to have complex and varying task dependency structures and resource requirements. To address these challenges, two novel runtime task clustering and mapping techniques for Tile-parallel HEVC decoding are proposed. These strategies consider the workload communication-to-computation ratio and stream-specific characteristics to balance predictability improvement and communication energy reduction. Lastly, several task-to-memory-controller port assignment schemes are explored to alleviate performance bottlenecks resulting from memory traffic contention.
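
As an illustration of what a runtime, communication-aware task mapping heuristic can look like, here is a generic nearest-neighbour placement on a mesh NoC. It is an assumed, simplified heuristic, not the specific blocking-aware or clustering schemes proposed in the thesis:

```python
def map_tasks(tasks, mesh_w, mesh_h):
    """Toy communication-aware task-to-core mapping for a mesh NoC
    (illustrative heuristic only). `tasks` is a list of
    (task_id, [(predecessor_id, volume), ...]) processed in arrival order;
    each task is placed on the free core that minimises the volume-weighted
    Manhattan distance to its already-mapped predecessors."""
    free = {(x, y) for x in range(mesh_w) for y in range(mesh_h)}
    placement = {}
    for tid, preds in tasks:
        def cost(core):
            return sum(v * (abs(core[0] - placement[p][0]) +
                            abs(core[1] - placement[p][1]))
                       for p, v in preds if p in placement)
        best = min(free, key=cost)      # greedy choice among free cores
        placement[tid] = best
        free.remove(best)
    return placement

# Example: a three-task pipeline mapped onto a 2x2 mesh
print(map_tasks([("t0", []), ("t1", [("t0", 4)]), ("t2", [("t1", 2)])], 2, 2))
```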

    Enhancement of The IEEE 802.15.4 Standard By Energy Efficient Cluster Scheduling

    The IEEE 802.15.4 network is gaining popularity due to its wide range of applications in industry and day-to-day life. Energy conservation in IEEE 802.15.4 nodes is always a concern for designers, as the lifetime of a network depends mainly on minimizing the energy consumption in the nodes. For ZigBee cluster-tree networks, the existing literature does not provide a combined solution for co-channel interference and power-efficient scheduling. In addition, no technique that prevents network collisions has been provided, and delay and reliability issues are not addressed in QoS-aware routing. Congestion is another major challenge in IEEE 802.15.4 networks, which also have difficulty admitting real-time flows. The aim of the present research is to overcome these issues by designing energy-efficient cluster scheduling and interference mitigation, a QoS-aware inter-cluster routing protocol, and adaptive data rate control for a clustered architecture in IEEE 802.15.4 networks. To address energy efficiency and network collisions, energy-efficient cluster scheduling and interference mitigation for the IEEE 802.15.4 network is proposed. It uses a time-division cluster scheduling technique that offers energy efficiency in the cluster-tree network. In addition, an interference mitigation technique is demonstrated which detects and mitigates channel interference based on packet-error detection and repeated channel-handoff command transmission. For the delay and reliability issues in cluster networks, a QoS-aware inter-cluster routing protocol for IEEE 802.15.4 networks is proposed. It consists of modules such as a reliability module, a packet classifier, a Hello protocol module and a routing service module. Using the packet classifier, packets are classified into data and Hello packets, and the data packets are further classified based on priority. A neighbour table is constructed by the Hello protocol module to maintain reliability information about neighbouring nodes, and a routing table is built by the routing service module. The delay along a route is controlled by a delay metric, which is the sum of queuing delay and transmission delay. To address congestion and the admission of real-time flows, adaptive data rate control for a clustered architecture in IEEE 802.15.4 networks is proposed. A network device is designed to regulate its data rate adaptively using a feedback message, the Congestion Notification Field (CNF), in the beacon frame received from the receiver side; the device controls or changes its data rate based on the CNF value. In addition, scalability is considered by modifying encoding parameters using Particle Swarm Optimization (PSO) to balance the target output rate for supporting high data rates. Simulation results show that the proposed techniques significantly reduce energy consumption (by 17%) and network collisions, enhance performance, mitigate the effect of congestion, and admit real-time flows.
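
To make the adaptive data rate control idea concrete, the sketch below shows one plausible control law driven by the Congestion Notification Field. The actual CNF encoding and rate-adjustment rule of the proposed scheme are not given in the abstract, so the AIMD-style behaviour and the parameter values here are assumptions:

```python
def adapt_data_rate(current_rate, cnf, min_rate=250, max_rate=250_000,
                    backoff=0.5, step=1_000):
    """Toy additive-increase / multiplicative-decrease rate adaptation driven
    by the Congestion Notification Field (CNF) carried in the coordinator's
    beacon. A non-zero CNF is treated as 'congested' (back off) and zero as
    'clear' (probe upwards); rates are clamped to [min_rate, max_rate]."""
    if cnf:                               # congestion reported by the receiver
        new_rate = current_rate * backoff
    else:                                 # no congestion, increase gently
        new_rate = current_rate + step
    return max(min_rate, min(max_rate, new_rate))

# A device reacting to three consecutive beacons: clear, congested, clear
rate = 20_000
for cnf in (0, 1, 0):
    rate = adapt_data_rate(rate, cnf)
    print(rate)        # 21000, 10500.0, 11500.0
```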

    Packet scheduling in satellite HSDPA networks.

    Thesis (M.Sc.Eng.)-University of KwaZulu-Natal, Durban, 2010. The continuous growth in wireless networks is not showing any sign of slowing down as new services, new technologies and new mobile users continue to emerge. Satellite networks are expected to complement terrestrial networks and be a valid option for providing broadband communications services to both fixed and mobile users in scenarios where terrestrial networks cannot be used for reasons of technical and economic viability. In current emerging satellite networks, where different users have varying traffic demands ranging from multimedia and voice to data, and where capacity is limited, Radio Resource Management (RRM) is considered one of the most significant and challenging aspects needed to provide acceptable quality of service that will meet the requirements of the different mobile users. This dissertation considers packet scheduling in the Satellite High Speed Downlink Packet Access (S-HSDPA) network. Its main focus is to propose a new cross-layer-designed packet scheduling scheme, one of the functions of RRM, called the Queue Aware Channel Based (QACB) scheduler. The proposed scheduler, which attempts to sustain the quality-of-service requirements of different traffic requests, improves system performance compared to existing schedulers. The performance comparison in terms of throughput, delay and fairness is determined through simulations; these metrics have been chosen because they are three major performance indices used in wireless communications. Due to the long propagation delay in HSDPA via a GEO satellite, there is a misalignment between the instantaneous channel condition of the mobile user and the one reported to the base station (Node B) in S-HSDPA. This affects the effectiveness of channel-based packet schedulers and leads to either underutilization of resources or loss of packets. Hence, this dissertation investigates the effect of introducing a Signal-to-Noise Ratio (SNR) margin, used to mitigate the effect of the long propagation delay on the performance of S-HSDPA, and determines the appropriate SNR margin for achieving the best performance, using both a semi-analytical and a simulation approach. The results show that an SNR margin of 1.5 dB produces the best performance. Finally, the dissertation investigates the effect of the different Radio Link Control (RLC) transmission modes, Acknowledged Mode (AM) and Unacknowledged Mode (UM), on different traffic types and schedulers in S-HSDPA. The Proportional Fair (PF) scheduler and the proposed QACB scheduler are considered in this investigation. The results show that traffic types are sensitive to the RLC transmission mode and that the QACB scheduler provides better performance than the PF scheduler in both RLC modes considered.
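
The abstract does not give the QACB metric itself, so the following is only a hedged sketch of a queue-aware, channel-based selection rule with an SNR margin, using a Shannon-style rate estimate; the function names, bandwidth and margin values are illustrative assumptions rather than the dissertation's scheduler:

```python
import math

def achievable_rate(snr_db, bandwidth_hz=5e6):
    """Shannon-style rate estimate from a reported SNR (illustrative only)."""
    return bandwidth_hz * math.log2(1 + 10 ** (snr_db / 10))

def pick_user(users, snr_margin_db=1.5):
    """Toy queue-aware, channel-based selection in the spirit of the QACB
    idea described above. Each user is (user_id, reported_snr_db, queue_bytes);
    the reported SNR is discounted by a margin to absorb outdated channel
    reports caused by the long GEO propagation delay."""
    return max(users,
               key=lambda u: achievable_rate(u[1] - snr_margin_db) * u[2])[0]

# The backlogged user with a decent (margin-adjusted) channel wins
print(pick_user([("ue1", 12.0, 3_000), ("ue2", 7.0, 20_000), ("ue3", 15.0, 500)]))
# -> "ue2"
```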

    Architecture for Guaranteed Delay Service in High Speed Networks

    The increasing importance of network connections coupled with the lack of abundant link capacity suggests that the day when service guarantees are required by individual connections is not far off. In this dissertation we describe a networking architecture that can efficiently provide end-to-end delay guarantees on a per-connection basis. In order to provide any kind of service guarantee it is imperative for the source traffic to be accurately characterized at the ingress to the network. Furthermore, this characterization should be enforceable through the use of a traffic shaper (or similar device). We go one step further and assume an extensive use of traffic shapers at each of the network elements. Reshaping makes the traffic at each node more predictable and therefore simplifies the task of providing efficient delay guarantees to individual connections. The use of per-connection reshapers to regulate traffic at each hop in the network is referred to as a Rate Controlled Service (RCS) discipline. By exploiting some properties of traffic shapers we demonstrate how the per-hop reshaping does not increase the bound on the end-to-end delay experienced by a connection. In particular, we show that an appropriate choice of traffic shaper parameters enables the RCS discipline to provide better end-to-end delay guarantees than any other service discipline known today. The RCS discipline can provide efficient end-to-end delay guarantees to a connection; however, by definition it is not work-conserving. This fact may increase the average delay that is observed by a connection even if there is no congestion in the network. We outline a mechanism by which an RCS discipline can be modified to be work-conserving without sacrificing the efficient end-to-end delay guarantees that can be provided to individual connections. Using the notion of service curves to bound the service process at each network element, we are able to provide an upper bound on the buffers required to ensure zero loss at the network element. Finally, we examine how the RCS discipline can be used in the context of the Guaranteed Services specification that is currently in the process of being standardized by the Internet Engineering Task Force.
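
Since the RCS discipline relies on per-connection reshaping at every hop, a minimal token-bucket shaper sketch may help fix ideas. It is a generic (sigma, rho) regulator under assumed parameters, not the dissertation's exact reshaper:

```python
class TokenBucketShaper:
    """Minimal (sigma, rho) traffic shaper of the kind assumed at each hop by
    a Rate Controlled Service discipline: a packet is released only once
    enough tokens have accumulated, so the output never exceeds the declared
    envelope. Illustrative sketch only."""
    def __init__(self, sigma, rho):
        self.sigma = sigma          # bucket depth (bytes)
        self.rho = rho              # token rate (bytes/s)
        self.tokens = sigma
        self.last = 0.0

    def release_time(self, arrival_time, size):
        """Earliest time a packet of `size` bytes may be forwarded."""
        # refill tokens for the elapsed interval, capped at the bucket depth
        self.tokens = min(self.sigma,
                          self.tokens + (arrival_time - self.last) * self.rho)
        self.last = arrival_time
        if self.tokens >= size:
            self.tokens -= size
            return arrival_time                 # conforming: send immediately
        wait = (size - self.tokens) / self.rho  # wait for the missing tokens
        self.tokens = 0.0
        self.last += wait
        return self.last

shaper = TokenBucketShaper(sigma=1500, rho=125_000)  # 1-packet burst, 1 Mbit/s
print(shaper.release_time(0.0, 1500))   # 0.0   (covered by the burst allowance)
print(shaper.release_time(0.0, 1500))   # 0.012 (must wait for 1500 bytes of tokens)
```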