
    Traffic Engineering in G-MPLS networks with QoS guarantees

    In this paper a new Traffic Engineering (TE) scheme to efficiently route sub-wavelength requests with different QoS requirements is proposed for G-MPLS networks. In most previous studies on TE based on dynamic traffic grooming, the objective was to minimize the rejection probability while respecting the constraints of the optical node architecture, but without considering service differentiation. In practice, some high-priority (HP) connections can instead be characterized by specific constraints on the maximum tolerable end-to-end delay and packet-loss ratio. The proposed solution consists of a distributed two-stage scheme: each time a new request arrives, an on-line dynamic grooming scheme finds a route which fulfills the QoS requirements. If an HP request is blocked at the ingress router, a preemption algorithm is executed locally in order to create room for this traffic. The proposed preemption mechanism minimizes the network disruption, both in terms of the number of rerouted low-priority connections and newly set-up lightpaths, and in terms of signaling complexity. Extensive simulation experiments are performed to demonstrate the efficiency of our scheme.
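The two-stage idea can be sketched at a single link: first try to admit the request into the free capacity, and only if that fails preempt as few low-priority (LP) connections as possible to create room. This is a minimal illustrative model, not the paper's distributed, route-level scheme; all names and the largest-first preemption policy are assumptions.

```python
def admit(request_bw, free_bw, lp_conns):
    """Two-stage admission at one link (illustrative sketch).

    Stage 1: admit into free capacity if possible.
    Stage 2: otherwise preempt LP connections, largest first,
    to minimize the number of rerouted LP connections.
    Returns (remaining free bandwidth, preempted ids), or None
    if the request cannot be admitted even after preemption.
    """
    preempted = []
    # Stage 2 runs only while the request still does not fit.
    for cid, bw in sorted(lp_conns.items(), key=lambda kv: -kv[1]):
        if request_bw <= free_bw:
            break
        free_bw += bw
        preempted.append(cid)
    if request_bw > free_bw:
        return None
    return free_bw - request_bw, preempted
```

For example, a 5-unit HP request against 3 free units and LP connections of sizes 1, 4, and 2 preempts only the 4-unit connection.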

    Rerouting Technique for Faster Restoration of Preempted Calls

    In a communication network where resources are shared between instantaneous request (IR) and book-ahead (BA) connections, activation of future BA connections causes preemption of many on-going IR connections upon resource scarcity. A solution to this problem is to reroute the preempted calls via alternative feasible paths, which often does not ensure acceptably low disruption of service. In this paper, a new rerouting strategy is proposed that uses the destination node to initiate the rerouting and thereby reduces the rerouting time, which ultimately improves the service disruption time. Simulations on a widely used network topology suggest that the proposed rerouting scheme achieves a higher successful-rerouting rate with lower service disruption time, without compromising other network performance metrics such as utilization and call blocking rate.
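The intuition that the choice of initiating node matters can be shown with a toy latency model: rerouting time is roughly the notification delay from the preemption point to the initiating node plus a one-way setup message along the new path. This model and its parameters are assumptions for illustration, not the paper's analysis.

```python
def rerouting_time(hops_to_initiator, new_path_hops, per_hop_ms=2.0):
    """Toy model of rerouting time (illustrative assumption):
    notification to the initiating node + one-way setup along
    the new path, each costing per_hop_ms per hop."""
    return (hops_to_initiator + new_path_hops) * per_hop_ms
```

Under this model, when the preemption point lies closer to the destination than to the source, having the destination initiate rerouting yields a shorter service disruption.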

    RMD-QOSM: The NSIS Quality-of-Service Model for Resource Management in Diffserv

    This document describes a Next Steps in Signaling (NSIS) Quality-of-Service (QoS) Model for networks that use the Resource Management in Diffserv (RMD) concept. RMD is a technique for adding admission control and preemption functions to Differentiated Services (Diffserv) networks. The RMD QoS Model allows devices external to the RMD network to signal reservation requests to Edge nodes in the RMD network. The RMD Ingress Edge nodes classify the incoming flows into traffic classes and signal resource requests for the corresponding traffic class along the data path to the Egress Edge nodes for each flow. Egress nodes reconstitute the original requests and continue forwarding them along the data path towards the final destination. In addition, RMD defines notification functions to indicate overload situations within the domain to the Edge nodes.
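The ingress-side step of this model, classifying per-flow requests into traffic classes before signalling, can be sketched as a simple aggregation. The flow record layout and field names here are assumptions for illustration, not structures defined by the RMD specification.

```python
from collections import defaultdict

def aggregate_requests(flows):
    """Group per-flow reservation requests into per-class totals,
    as an RMD ingress edge node would before signalling resource
    requests along the data path (illustrative sketch).

    Each flow is a dict with an assumed 'dscp' class label and a
    requested rate in 'kbps'."""
    per_class = defaultdict(int)
    for flow in flows:
        per_class[flow["dscp"]] += flow["kbps"]
    return dict(per_class)
```

Interior nodes then only need to track one reservation state per class rather than per flow, which is the scalability point of the RMD concept.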

    End to End Inter-domain Quality of Service Provisioning


    Real-time bandwidth encapsulation for IP/MPLS Protection Switching

    Bandwidth reservation and bandwidth allocation are needed to guarantee the protection of voice traffic during network failure. Since voice calls have a time constraint of 50 ms within which the traffic must be recovered, a real-time bandwidth management scheme is required. Such a bandwidth allocation scheme that prioritizes voice traffic will ensure that the voice traffic is guaranteed the necessary bandwidth during the network failure. Additionally, a mechanism is also required to provide bandwidth to voice traffic when the reserved bandwidth is insufficient to accommodate it. This mechanism must be able to utilise the working bandwidth, or bandwidth reserved for lower-priority applications, and allocate it to the voice traffic when a network failure occurs.
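The allocation logic described above can be sketched as: on failure, serve the displaced voice demand from the reserved protection pool first, then take bandwidth from lower-priority allocations only to cover the shortfall. This is a minimal sketch under assumed data structures, not the paper's scheme.

```python
def protect_voice(voice_bw, reserved_bw, lp_allocations):
    """On network failure, cover the voice demand from reserved
    bandwidth first; preempt lower-priority allocations (in the
    given order) only for the remaining shortfall.

    lp_allocations: list of (name, bandwidth) pairs (assumed layout).
    Returns the list of (name, amount_taken) preemptions."""
    shortfall = voice_bw - reserved_bw
    preempted = []
    if shortfall <= 0:
        return preempted  # reserved pool suffices, nothing preempted
    for name, bw in lp_allocations:
        if shortfall <= 0:
            break
        take = min(bw, shortfall)  # take no more than needed
        shortfall -= take
        preempted.append((name, take))
    if shortfall > 0:
        raise RuntimeError("insufficient bandwidth even after preemption")
    return preempted
```

Because the computation is a single pass over the lower-priority allocations, it is cheap enough to run inside a sub-50 ms restoration budget.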

    A GMPLS/OBS network architecture enabling QoS-aware end-to-end burst transport

    This paper introduces a Generalized Multi-Protocol Label Switching (GMPLS)-enabled Optical Burst Switched (OBS) network architecture featuring end-to-end QoS-aware burst transport services. This is achieved by setting up burst Label Switched Paths (LSPs) properly dimensioned to match specific burst drop probability requirements. These burst LSPs are used for specific guaranteed QoS levels, whereas the remaining network capacity can be left for best-effort burst support. Aiming to ensure the requested burst drop probability figures even under bursty traffic patterns, burst LSPs' performance is continuously monitored. Therefore, GMPLS-driven capacity reconfigurations can be dynamically triggered whenever unfavorable network conditions are detected. Throughout the paper, the GMPLS/OBS architecture is first detailed, followed by the presentation of the optimized methods used for the initial burst LSP dimensioning. The successful network performance is finally illustrated by simulations on several network scenarios.
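Dimensioning a channel group to a target drop probability is classically done with the Erlang B formula: grow the capacity until the blocking probability falls below the target. The paper uses its own optimized dimensioning methods; the sketch below is only the textbook Erlang B approach, shown to make the "dimensioned to match a drop probability requirement" step concrete.

```python
def erlang_b(servers, load):
    """Erlang B blocking probability for `servers` channels offered
    `load` Erlangs, via the standard numerically stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = load * b / (n + load * b)
    return b

def dimension_lsp(load, target_drop):
    """Smallest channel count whose Erlang B blocking probability
    does not exceed target_drop (illustrative dimensioning)."""
    n = 1
    while erlang_b(n, load) > target_drop:
        n += 1
    return n
```

For example, an offered load of 10 Erlangs with a 1% drop-probability target requires 18 channels.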

    An Intelligent Model To Control Preemption Rate Of Instantaneous Request Calls In Networks With Book-Ahead Reservation

    Resource sharing between book-ahead (BA) and instantaneous request (IR) reservation often results in a high preemption rate of on-going IR calls. A high IR call preemption rate causes interruption to service continuity, which is considered detrimental in a QoS-enabled network. A number of call admission control models have been proposed in the literature to reduce the preemption rate of on-going IR calls. Many of these models use a tuning parameter to achieve a certain level of preemption rate. This paper presents an artificial neural network (ANN) model to dynamically control the preemption rate of on-going calls in a QoS-enabled network. The model maps network traffic parameters and the desired level of preemption rate into an appropriate tuning parameter. Once trained, this model can be used to automatically estimate the tuning parameter value necessary to achieve the desired level of preemption rate. Simulation results show that the preemption rate attained by the model closely matches the target rate.
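The mapping the ANN learns, from a desired preemption rate to the tuning-parameter value that achieves it, can be illustrated without a neural network by inverting a monotone rate model with bisection. The toy model and the assumption that the preemption rate decreases monotonically in the tuning parameter are illustrative stand-ins, not the paper's ANN.

```python
def tune(target_rate, rate_model, lo=0.0, hi=1.0, tol=1e-4):
    """Find the tuning-parameter value whose predicted preemption
    rate matches target_rate, by bisection on a rate_model assumed
    monotonically decreasing in the parameter (illustrative; the
    paper trains an ANN to learn this mapping offline)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if rate_model(mid) > target_rate:
            lo = mid  # rate still too high: push the parameter up
        else:
            hi = mid
    return (lo + hi) / 2
```

The advantage of the trained ANN over such a search is that it answers in one forward pass from current traffic parameters, with no online trial-and-error against the live network.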

    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and penalties in case QoS is not achieved. To avoid such penalties while operating the infrastructure with minimal energy and resource wastage, constant monitoring and adaptation of the infrastructure are needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of physical resources in a cloud infrastructure, to better accommodate the demand on QoS through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases (QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement) and discuss the research challenges and opportunities in this emerging area. Keynote Paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India.
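The energy-efficient VM placement use case is commonly approached as bin packing: consolidate VMs onto as few hosts as possible so the remaining hosts can be powered down. The first-fit-decreasing heuristic below is a generic sketch of that idea, under a single-dimensional capacity assumption; it is not the paper's algorithm.

```python
def place_vms(vm_demands, host_capacity):
    """First-fit-decreasing VM placement (illustrative sketch):
    sort VMs by demand, place each on the first active host with
    enough residual capacity, opening a new host only when needed.

    vm_demands: {vm_name: demand} with a single assumed resource
    dimension. Returns ({vm_name: host_index}, hosts_used)."""
    hosts = []       # residual capacity of each active host
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:
            # no active host fits: power on a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)
```

Packing four VMs with demands 6, 5, 4, and 3 onto capacity-10 hosts uses two hosts instead of four, which is where the energy saving comes from.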