Performance modelling of wormhole-routed hypercubes with bursty traffic and finite buffers
An open queueing network model (QNM) is proposed for wormhole-routed hypercubes with finite
buffers and deterministic routing subject to a compound Poisson arrival process (CPP) with geometrically
distributed batches or, equivalently, a generalised exponential (GE) interarrival time distribution. The GE/G/1/K
queue and appropriate GE-type flow formulae are adopted, as cost-effective building blocks, in a queue-by-queue
decomposition of the entire network. Consequently, analytic expressions for the channel holding time, buffering
delay, contention blocking and mean message latency are determined. The validity of the analytic approximations
is demonstrated against results obtained through simulation experiments. Moreover, it is shown that wormhole-routed hypercubes suffer progressive performance degradation with increasing traffic variability (burstiness).
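The generalised exponential (GE) interarrival process used as the traffic model above can be sampled as a two-point mixture: with probability 1 - tau the interarrival time is zero (a batch arrival), otherwise it is exponential, with tau chosen so that the target mean and squared coefficient of variation (SCV) are matched. The sketch below is illustrative only; the parameter names `lam` and `scv` are my own, not the paper's notation.

```python
import random

def ge_interarrival(lam, scv, rng=random):
    """Sample one GE interarrival time with mean 1/lam and squared
    coefficient of variation `scv` (scv >= 1 models bursty traffic).

    With probability 1 - tau the gap is zero (cells arrive in a batch);
    otherwise it is exponential with rate tau * lam, where
    tau = 2 / (scv + 1).
    """
    tau = 2.0 / (scv + 1.0)
    if rng.random() >= tau:
        return 0.0                      # zero gap: part of a batch
    return rng.expovariate(tau * lam)   # exponential tail of the GE

# quick check that the empirical mean and SCV match the targets
rng = random.Random(42)
samples = [ge_interarrival(1.0, 4.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
print(round(mean, 2), round(var / mean**2, 1))  # ~1.0 and ~4.0
```

Batches arise naturally here: a run of zero gaps corresponds to a geometrically distributed batch, which is the CPP/GE equivalence the abstract relies on.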
An Adaptive Scheme for Admission Control in ATM Networks
This paper presents a real-time front-end admission control scheme for ATM networks. A call management scheme is presented which uses the burstiness associated with traffic sources in a heterogeneous ATM environment to effect dynamic assignment of bandwidth. In the proposed scheme, call acceptance is based on an on-line evaluation of the upper bound on cell loss probability, which is derived from the estimated distribution of the number of calls arriving. Using this scheme, the negotiated quality of service will be assured when there is no estimation error. The control mechanism is effective when the number of calls is large, and tolerates loose bandwidth enforcement and loose policing control. The proposed approach is very effective in the connection-oriented transport of ATM networks, where the decision to admit new traffic is based on the a priori knowledge of the state of the route taken by the traffic.
Analysis of a discrete-time single-server queue with bursty inputs for traffic control in ATM networks
Due to the large number of bursty traffic sources that an ATM network is expected to support, controlling network traffic becomes essential to provide a desirable level of network performance for its users. Admission control and traffic smoothing are among the most promising control techniques for an ATM network. To evaluate the performance of an ATM network when it is subject to admission control or traffic smoothing, we build a discrete-time single-server queueing model where a new call joins the existing calls. In our model, it is assumed that the cell arrivals from a new call follow a general distribution. It is also assumed that the aggregated arrivals of cells from the existing calls form batch arrivals with a general distribution for the batch size and a geometric distribution for the interarrival times of batches. We consider both finite and infinite buffer cases, and analytically obtain the waiting time distribution and cell loss probability for a new call and for existing calls. Our analysis is an exact one. Through numerical examples, we investigate how the network performance depends on the statistics of a new call (burstiness, time that a call stays in the active or inactive state, etc.). We also demonstrate the effectiveness of traffic smoothing in reducing network congestion.
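The batch-arrival part of such a model is easy to explore by simulation, even though the abstract's analysis is exact. The sketch below is a simplified slot-level simulation, not the paper's method: one batch may arrive per slot (geometric interarrival of batches), the batch size is drawn from a stand-in discrete distribution, and the server transmits one cell per slot; all parameter values are illustrative.

```python
import random

def simulate_cell_loss(p_batch, batch_sizes, buffer_size, slots, seed=0):
    """Discrete-time single-server finite-buffer queue with batch arrivals.

    Each slot: with probability p_batch a batch arrives, its size drawn
    uniformly from `batch_sizes` (standing in for a general batch-size
    distribution); the server then transmits one cell.  Cells that do not
    fit in the buffer are lost.  Returns the fraction of cells lost.
    """
    rng = random.Random(seed)
    queue = arrived = lost = 0
    for _ in range(slots):
        if rng.random() < p_batch:
            batch = rng.choice(batch_sizes)
            arrived += batch
            accepted = min(batch, buffer_size - queue)
            lost += batch - accepted
            queue += accepted
        if queue:
            queue -= 1  # one cell served per slot
    return lost / arrived if arrived else 0.0

loss = simulate_cell_loss(p_batch=0.3, batch_sizes=[1, 2, 3, 4],
                          buffer_size=10, slots=200_000)
print(f"estimated cell loss probability: {loss:.4f}")
```

With a mean load of 0.3 x 2.5 = 0.75 cells per slot against a service rate of one cell per slot, loss comes only from batch-induced bursts overflowing the 10-cell buffer, which is exactly the burstiness effect the abstract studies.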
Performance modelling and analysis of software defined networking
Software Defined Networking (SDN) is an emerging architecture for the next-generation Internet, providing unprecedented network programmability to handle the explosive growth of Big Data driven by the popularisation of smart mobile devices and the pervasiveness of content-rich multimedia applications. In order to quantitatively investigate the performance characteristics of SDN networks, several research efforts based on both simulation experiments and analytical modelling have been reported in the current literature. Among those studies, analytical modelling has demonstrated its superiority in terms of cost-effectiveness in the evaluation of large-scale networks. However, for analytical tractability and simplification, existing analytical models are derived based on the unrealistic assumptions that the network traffic follows the Poisson process, which is suitable for modelling non-bursty text data, and that the data plane of SDN can be modelled by one simplified Single Server Single Queue (SSSQ) system. Recent measurement studies have shown that, due to its heavy volume and high velocity, the multimedia big data generated by real-world multimedia applications reveals a bursty and correlated nature in network transmission. With the aim of capturing such features of realistic traffic patterns and obtaining a comprehensive and deeper understanding of the performance behaviour of SDN networks, this paper presents a new analytical model to investigate the performance of SDN in the presence of bursty and correlated arrivals modelled by the Markov Modulated Poisson Process (MMPP). The Quality-of-Service performance metrics, in terms of the average latency and average network throughput of the SDN networks, are derived based on the developed analytical model. To consider the realistic multi-queue system of forwarding elements, a Priority-Queue (PQ) system is adopted to model the SDN data plane.
To address the challenging problem of obtaining the key performance metrics, e.g., the queue length distribution of a PQ system with a given service capacity, a versatile methodology extending the Empty Buffer Approximation (EBA) method is proposed to facilitate the decomposition of such a PQ system into two SSSQ systems. The validity of the proposed model is demonstrated through extensive simulation experiments. To illustrate its application, the developed model is then utilised to study strategies for network configuration and resource allocation in SDN networks.
This work is supported by the EU FP7 "QUICK" Project (Grant No. PIRSES-GA-2013-612652) and the National Natural Science Foundation of China (Grant No. 61303241).
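An MMPP of the kind used above can be simulated by racing two exponentials: in each modulating state, either an arrival occurs or the state switches, with the winner chosen in proportion to the rates. The two-state sketch below is a generic illustration, not the paper's model; the rate values are made up.

```python
import random

def mmpp2_arrivals(lam, r, horizon, seed=0):
    """Generate arrival times from a 2-state Markov Modulated Poisson
    Process over [0, horizon).

    lam = (lam0, lam1): Poisson arrival rates in modulating states 0 and 1.
    r   = (r01, r10):   transition rates out of states 0 and 1.
    """
    rng = random.Random(seed)
    t, state = 0.0, 0
    arrivals = []
    while True:
        rate_arr = lam[state]          # arrival rate in current state
        rate_sw = r[state]             # rate of leaving current state
        t += rng.expovariate(rate_arr + rate_sw)
        if t >= horizon:
            break
        # competing exponentials: next event is an arrival or a switch
        if rng.random() < rate_arr / (rate_arr + rate_sw):
            arrivals.append(t)
        else:
            state = 1 - state
    return arrivals

# a mostly-quiet state punctuated by short high-rate bursts
arr = mmpp2_arrivals(lam=(0.2, 5.0), r=(0.05, 0.5), horizon=10_000.0)
print(len(arr))
```

Long sojourns in the low-rate state with occasional high-rate bursts give the correlated, bursty arrival stream that the Poisson assumption cannot capture.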
Performance Modelling and Resource Allocation of the Emerging Network Architectures for Future Internet
With the rapid development of information and communications technologies, the traditional network architecture has approached its performance limit, and thus is unable to meet the requirements of various resource-hungry applications. Significant infrastructure improvements to the network domain are urgently needed to guarantee continuous network evolution and innovation. To address this important challenge, tremendous research efforts have been made to foster the evolution towards the Future Internet. Long-Term Evolution Advanced (LTE-A), Software Defined Networking (SDN) and Network Function Virtualisation (NFV) have been proposed as key promising network architectures for the Future Internet and have attracted significant attention in the network and telecom community. This research mainly focuses on the performance modelling and resource allocation of these three architectures. The major contributions are three-fold:
1) LTE-A has been proposed by the 3rd Generation Partnership Project (3GPP) as a promising candidate for the evolution of LTE wireless communication. One of the major features of LTE-A is the concept of Carrier Aggregation (CA). CA enables network operators to exploit the fragmented spectrum and increase the peak transmission data rate; however, this technical innovation introduces serious load imbalance in the radio resource allocation of LTE-A. To alleviate this problem, a novel QoS-aware resource allocation scheme, termed the Cross-CC User Migration (CUM) scheme, is proposed in this research to support real-time services, taking into consideration the system throughput, user fairness and QoS constraints.
2) SDN is an emerging technology towards the next-generation Internet. In order to improve the performance of SDN networks, a preemption-based packet-scheduling scheme is first proposed in this research to improve global fairness and reduce the packet loss rate in the SDN data plane. Furthermore, in order to achieve a comprehensive and deeper understanding of the performance behaviour of SDN networks, this work develops two analytical models to investigate the performance of SDN under Poisson and Markov Modulated Poisson Process (MMPP) traffic, respectively.
3) NFV is regarded as a disruptive technology for telecommunication service providers to reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) by decoupling individual network functions from the underlying hardware devices. However, NFV faces the significant challenge of guaranteeing Service-Level Agreements (SLAs) during service provisioning. In order to bridge this gap, a novel comprehensive analytical model based on stochastic network calculus is proposed in this research to investigate the end-to-end performance of NFV networks.
The resource allocation strategies proposed in this study significantly improve network performance in terms of packet loss probability, global allocation fairness and throughput per user in LTE-A and SDN networks; the analytical models designed in this study can accurately predict the performance of SDN and NFV networks. Both theoretical analysis and simulation experiments are conducted to demonstrate the effectiveness of the proposed algorithms and the accuracy of the designed models. In addition, the models are used as practical and cost-effective tools to pinpoint the performance bottlenecks of SDN and NFV networks under various network conditions.
Performance analysis of an ATM network with multimedia traffic: a simulation study
Traffic and congestion control are important in enabling ATM networks to maintain the Quality of Service (QoS) required by end users. A Call Admission Control (CAC) strategy ensures that the network has sufficient resources available at the start of each call, but this does not prevent a traffic source from violating the negotiated contract. A policing strategy (Usage Parameter Control (UPC)) is also required to enforce the negotiated rates for a particular connection and to protect conforming users from network overload.
The aim of this work is to investigate traffic policing and bandwidth management at the User to Network Interface (UNI). A policing function is proposed which is based on the leaky bucket (LB) and which offers improved performance for both real-time (RT) traffic, such as speech and video, and non-real-time (non-RT) traffic, mainly data, by taking into account the QoS requirements. A video cell in violation of the negotiated bit rate causes the remainder of the slice to be discarded. This 'tail clipping' protects the decoder from damaged video slices. Speech cells are coded using a frequency-domain coder, which places the most significant bits of a double speech sample into a high priority cell and the least significant bits into a low priority cell. In the case of congestion, the low priority cell can be discarded with little impact on the intelligibility of the received speech. However, data cells require loss-free delivery and are buffered rather than being discarded or tagged for subsequent deletion. This triple strategy is termed the super leaky bucket (SLB).
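The standard leaky bucket that the SLB builds on can be sketched in a few lines: the bucket drains continuously at the negotiated rate, each arriving cell adds one unit, and a cell that would overflow the bucket is flagged as violating. This is a minimal sketch of the classic LB only, not of the SLB's per-traffic-class actions; the parameter values are illustrative.

```python
def leaky_bucket(cell_times, leak_rate, bucket_limit):
    """Classic leaky-bucket policer.

    The bucket drains continuously at `leak_rate` units per time unit;
    each arriving cell adds one unit.  A cell that would push the level
    above `bucket_limit` is marked as violating the negotiated rate.
    """
    level, last_t = 0.0, 0.0
    verdicts = []
    for t in cell_times:
        level = max(0.0, level - leak_rate * (t - last_t))  # drain
        last_t = t
        if level + 1.0 <= bucket_limit:
            level += 1.0
            verdicts.append("conforming")
        else:
            verdicts.append("violating")
    return verdicts

# a burst of back-to-back cells followed by a well-spaced one
v = leaky_bucket([0.0, 0.1, 0.2, 0.3, 5.0], leak_rate=0.5, bucket_limit=3.0)
print(v)  # the fourth cell of the burst is flagged as violating
```

The SLB then differs in what it does with a violating cell: discard the rest of the video slice, drop only low-priority speech cells, or buffer data cells instead of discarding them.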
Separate queues for RT and non-RT traffic are also proposed at the multiplexer, with non-pre-emptive priority service for RT traffic if the queue exceeds a predetermined threshold. If the RT queue continues to grow beyond a second threshold, then all low priority cells (mainly speech) are discarded. This scheme protects non-RT traffic from being tagged and subsequently discarded, by queueing the cells and also by throttling back non-RT sources during periods of congestion. It also prevents the RT cells from being delayed excessively in the multiplexer queue.
A simulation model has been designed and implemented to test the proposal. Realistic sources have been incorporated into the model to simulate the types of traffic which could be expected on an ATM network.
The results show that the SLB outperforms the standard LB for video cells. The number of cells discarded and the resulting number of damaged video slices are significantly reduced. Dual queues with cyclic service at the multiplexer also reduce the delays experienced by RT cells. The QoS for all categories of traffic is preserved.
Simulation of ATM multiplexer for bursty sources
Asynchronous transfer mode (ATM) is a promising multiplexing and switching technique for implementing an integrated access and transport network, and has been adopted by CCITT as a basis for the future broadband integrated services digital network (BISDN). The ATM technique allows digital communication of any type to share common transmission links and switching devices on a statistical multiplexing basis. Information is transmitted in the form of constant-length cells. In an ATM network, the major causes of performance deterioration are cell loss and cell delay at the buffer queue in the ATM multiplexer. Therefore, this study focuses specifically on the cell loss probability, the cell delay, and the distribution of queue length at the buffer as the performance parameters of an ATM multiplexer.
The performance of an ATM multiplexer is studied whose input consists of the superposition of homogeneous bursty (ON/OFF) sources, i.e., all the superposed sources are bursty sources with the same parameter values. The cell loss probability and the distribution of queue length at the buffer are evaluated under different offered load and buffer size conditions.
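A superposition of homogeneous ON/OFF sources feeding a finite buffer is straightforward to simulate at the slot level. The sketch below is a simplified stand-in for the simulation described here, not a reproduction of it: each source emits one cell per slot while ON, state transitions are geometric, the multiplexer serves one cell per slot, and all parameter values are invented for illustration.

```python
import random

def onoff_superposition_loss(n_sources, p_on, p_off, buffer_size, slots, seed=0):
    """Slot-level simulation of an ATM multiplexer fed by the
    superposition of homogeneous bursty (ON/OFF) sources.

    Each source emits one cell per slot while ON.  Per slot, an OFF
    source turns ON with probability p_on and an ON source turns OFF
    with probability p_off (geometric burst and silence lengths).  The
    multiplexer serves one cell per slot; overflowing cells are lost.
    """
    rng = random.Random(seed)
    on = [False] * n_sources
    queue = arrived = lost = 0
    for _ in range(slots):
        for i in range(n_sources):
            if on[i]:
                if rng.random() < p_off:
                    on[i] = False
            elif rng.random() < p_on:
                on[i] = True
        cells = sum(on)                         # one cell per active source
        arrived += cells
        accepted = min(cells, buffer_size - queue)
        lost += cells - accepted                # buffer overflow
        queue += accepted
        if queue:
            queue -= 1                          # one cell served per slot
    return lost / arrived if arrived else 0.0

loss = onoff_superposition_loss(n_sources=8, p_on=0.05, p_off=0.45,
                                buffer_size=20, slots=100_000)
print(f"cell loss probability: {loss:.4f}")
```

Here each source is ON a fraction p_on / (p_on + p_off) = 0.1 of the time, so the offered load is 0.8 cells per slot; sweeping the load and buffer size reproduces the kind of study the abstract describes.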
An ATM multiplexer with three priority classes is simulated using the priority assignment control method of [15]. With the priority assignment period P and the priority assignment ratio WD of this method defined, the relationship between the traffic balance of the classes and the buffer size of each is studied. The cell loss probability and delay time of each class (with the same sources and with different sources between classes) are evaluated. The results are useful for designing an economical and effective ATM multiplexer.