Statistical multiplexing of video sources for packet switching networks
Communication networks are fast evolving towards truly integrated networks handling all types of traffic, employing integrated switching technologies for voice, video and data. Statistical (asynchronous) time division multiplexing of full-motion video sources is an initial step towards packetized video networks. The main goal is to utilize the common communication channel efficiently without losing quality at the receiver. This work discusses the concept of using statistical multiplexing for packet video communications. The topology of a single internal packet network supporting ISDN services has been adopted. Simulations have been carried out to demonstrate the statistical smoothing effect of packetized video in networks with high-speed links. Results indicate that the channel rate per source decreases exponentially as the number of sources increases. An expression for the average usage time t of the channel has been derived in terms of the channel rate per source and the number of sources multiplexed. The average usage time of the channel is also higher for buffered data than for multiplexed data. The high-speed communication links in the internal network are lightly loaded, which indicates that these links can accommodate more data.
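The statistical smoothing effect described above can be illustrated with a toy simulation (my own sketch, not the model used in the work: on/off sources with hypothetical peak and mean rates stand in for real video traces). As the number of multiplexed sources grows, the channel rate that must be reserved per source to keep overflow rare shrinks towards the mean rate.

```python
import random

def per_source_rate(n_sources, peak=10.0, mean=4.0, overflow_target=0.01,
                    slots=20_000, seed=1):
    """Estimate the channel rate per source needed so that the aggregate
    load of n_sources bursty on/off sources exceeds the channel in at
    most overflow_target of the time slots. Toy model only: each source
    is independently "on" (sending at peak rate) with probability
    mean/peak in every slot."""
    rng = random.Random(seed)
    p_on = mean / peak
    loads = []
    for _ in range(slots):
        on = sum(1 for _ in range(n_sources) if rng.random() < p_on)
        loads.append(on * peak)
    loads.sort()
    # Required capacity = the (1 - overflow_target) quantile of the load.
    capacity = loads[int((1 - overflow_target) * slots) - 1]
    return capacity / n_sources

for n in (1, 4, 16, 64):
    print(n, round(per_source_rate(n), 2))
```

A single source must be provisioned at its peak rate, while a large aggregate needs little more than the mean rate per source, which is the exponential-looking decrease the abstract reports.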
Entropy Maximisation and Queues With or Without Balking. An investigation into the impact of generalised maximum entropy solutions on the study of queues with or without arrival balking and their applications to congestion management in communication networks.
Keywords: Queues, Balking, Maximum Entropy (ME) Principle, Global Balance (GB), Queue Length Distribution (QLD), Generalised Geometric (GGeo), Generalised Exponential (GE), Generalised Discrete Half Normal (GdHN), Congestion Management, Packet Dropping Policy (PDP)
Generalisations to links between discrete least biased (i.e. maximum entropy (ME)) distribution inferences and Markov chains are conjectured towards the performance modelling, analysis and prediction of general, single server queues with or without arrival balking. New ME solutions, namely the generalised discrete Half Normal (GdHN) and truncated GdHN (GdHNT) distributions are characterised, subject to appropriate mean value constraints, for inferences of stationary discrete state probability distributions. Moreover, a closed form global balance (GB) solution is derived for the queue length distribution (QLD) of the M/GE/1/K queue subject to extended Morse balking, characterised by a Poisson prospective arrival process, i.i.d. generalised exponential (GE) service times and finite capacity, K. In this context, based on comprehensive numerical experimentation, the latter GB solution is conjectured to be a special case of the GdHNT ME distribution.
Owing to the appropriate operational properties of the M/GE/1/K queue subject to extended Morse balking, this queueing system is applied as an ME performance model of Internet Protocol (IP)-based communication network nodes featuring static or dynamic packet dropping congestion management schemes. A performance evaluation study in terms of the model's delay is carried out. Subsequently, the QLDs of the GE/GE/1/K censored queue subject to extended Morse balking under three different composite batch balking and batch blocking policies are solved via the technique of GB. Following comprehensive numerical experimentation, the latter QLDs are also conjectured to be special cases of the GdHNT. Limitations of this work and open problems which have arisen are included after the conclusion.
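A flavour of the balance computations behind such queue length distributions can be given by a much simpler stand-in than the thesis's models: an M/M/1/K birth-death queue in which an arrival finding n customers joins with probability 1/(n + 1), a simple Morse-style balking rule. This is a hypothetical simplification (the thesis treats GE service times and extended Morse balking, which this sketch does not), but the detailed-balance recursion it uses is the elementary analogue of a global balance solution.

```python
def qld_mm1k_balking(lam, mu, K):
    """Stationary queue-length distribution of an M/M/1/K queue where an
    arrival finding n customers joins with probability 1/(n + 1).
    Solved by detailed balance: p[n+1] = p[n] * lam * join(n) / mu,
    then normalised."""
    p = [1.0]
    for n in range(K):
        join = 1.0 / (n + 1)            # balking: join prob. decays with n
        p.append(p[-1] * lam * join / mu)
    total = sum(p)
    return [x / total for x in p]

dist = qld_mm1k_balking(lam=0.9, mu=1.0, K=10)
print([round(x, 4) for x in dist[:4]])
mean_q = sum(n * pn for n, pn in enumerate(dist))
print(round(mean_q, 3))
```

With this particular balking rule the distribution turns out to be a truncated Poisson, illustrating how a balking function shapes the stationary QLD.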
Performance analysis of error recovery and congestion control in high-speed networks
In the past few years, Broadband Integrated Services Digital Network (B-ISDN) has received increasing attention as a communication architecture capable of supporting multimedia applications. Among the techniques proposed to implement B-ISDN, Asynchronous Transfer Mode (ATM) is considered to be the most promising transfer technique because of its efficiency and flexibility.

In ATM networks, the performance bottleneck of the network, which was once the channel transmission speed, is shifted to the processing speed at the network switching nodes and the propagation delay of the channel. This shift occurs because the high-speed channel increases both the ratio of processing time to packet transmission time and the ratio of propagation delay to packet transmission time. The increased processing overhead makes it difficult to implement hop-by-hop schemes, which may impose prohibitively high processing at each switching node. The increased propagation delay overhead makes traffic control in ATM a challenge, since a large number of packets can be in transit between two ATM switching nodes. Because of these fundamental changes, control schemes developed for traditional networks may not perform efficiently, and thus new network architectures (congestion control schemes, error control schemes, etc.) are required in ATM networks.

In this dissertation, we first present an extensive survey of various traffic control schemes and network protocols for ATM networks. In this survey, possible traffic control schemes are examined, and problems of those schemes and their possible solutions are presented. Next, we investigate two key research issues in ATM networks (and other types of high-speed networks): the effects of protocol-processing overhead and the efficiency of traffic control schemes. We first investigate the effects of protocol-processing overhead on the performance of error recovery schemes.
Specifically, we investigate the performance trade-offs between link-by-link and edge-to-edge error recovery schemes. Our results show that for a network with high-speed, low-error-rate channels, an edge-to-edge scheme gives a smaller delay than a link-by-link scheme. We then investigate the effectiveness of a priority packet discarding scheme, a congestion control mechanism suitable for high-speed networks. We derive loss probabilities for each stream and investigate the impact of the burstiness of traffic streams on the performance of individual streams.
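The link-by-link versus edge-to-edge trade-off can be sketched with a back-of-the-envelope expected-delay model (my own simplification, not the dissertation's analysis; all parameter values are illustrative). Link-by-link recovery retransmits over one hop at a time but pays protocol processing at every node; edge-to-edge recovery retransmits over the whole path but pays processing only at the edges.

```python
def link_by_link_delay(hops, t_hop, proc, q):
    """Expected delay with per-hop retransmission: each hop with loss
    probability q needs 1/(1-q) transmissions on average, and every
    node adds protocol-processing time proc."""
    per_hop = (t_hop + proc) / (1.0 - q)
    return hops * per_hop

def edge_to_edge_delay(hops, t_hop, proc, q):
    """Expected delay with end-to-end retransmission: a whole-path
    attempt succeeds with probability (1-q)**hops, and processing is
    paid only once per attempt, at the edges."""
    p_path = (1.0 - q) ** hops
    per_try = hops * t_hop + proc
    return per_try / p_path

# High-speed, low-error-rate channels: loss is rare, processing dominates.
args = dict(hops=5, t_hop=1.0, proc=4.0, q=1e-4)
print(round(link_by_link_delay(**args), 3))
print(round(edge_to_edge_delay(**args), 3))
```

At low loss rates the edge-to-edge scheme wins because it avoids per-hop processing, consistent with the result quoted above; at high loss rates the ordering reverses, since whole-path retransmissions become expensive.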
Analytical models for the multiplexing of worst case traffic sources and their application to ATM traffic control.
Performance Modelling and Optimisation of Multi-hop Networks
A major challenge in the design of large-scale networks is to predict and optimise the total time and energy consumption required to deliver a packet from a source node to a destination node. Examples of such complex networks include wireless ad hoc and sensor networks, which need to deal with the effects of node mobility, routing inaccuracies, higher packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the computational limitations of the nodes. They also include more reliable communication environments, such as wired networks, that are susceptible to random failures, security threats and malicious behaviours which compromise their quality of service (QoS) guarantees. In such networks, packets traverse a number of hops that cannot be determined in advance and encounter non-homogeneous network conditions that have been largely ignored in the literature. This thesis examines analytical properties of packet travel in large networks and investigates the implications of some packet coding techniques on both QoS and resource utilisation.

Specifically, we use a mixed jump and diffusion model to represent packet traversal through large networks. The model accounts for network non-homogeneity regarding routing and the loss rate that a packet experiences as it passes successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute the average packet travel time and the energy it consumes. The model is able to capture the effects of increased loss rate in areas remote from the source and destination, a variable rate of advancement towards the destination over the route, as well as defending against malicious packets within a certain distance from the destination. We then consider sending multiple coded packets that follow independent paths to the destination node so as to mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium and obtain the time-dependent properties of the packet's travel process, allowing us to compare the merits and limitations of coding, both in terms of delivery times and energy efficiency. Finally, we propose models that can assist in the analysis and optimisation of the performance of inter-flow network coding (NC). We analyse two queueing models for a router that carries out NC, in addition to its standard packet routing function. The approach is extended to the study of multiple hops, which leads to an optimisation problem that characterises the optimal time that packets should be held back in a router, waiting for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
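The jump-diffusion analysis itself does not fit in a code snippet, but the quantity it computes, the average source-to-destination travel time under random hop-by-hop advancement with distance-dependent loss, can be estimated by a naive Monte Carlo stand-in. Everything below (the random-walk advancement rule, the restart-on-loss behaviour, the parameter values) is an illustrative assumption, not the thesis's model.

```python
import random

def avg_travel_time(distance=20, p_forward=0.7, p_loss_far=0.02,
                    trials=5_000, max_hops=10_000, seed=7):
    """Monte Carlo estimate of the mean number of hops for a packet to
    reach its destination. Each hop moves the packet one unit closer
    (probability p_forward) or one unit away; while it is in the far
    half of the route it may also be lost and resent from the source
    with probability p_loss_far, modelling higher loss in areas remote
    from source and destination. Toy stand-in for the jump-diffusion
    model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, hops = distance, 0
        while pos > 0 and hops < max_hops:
            hops += 1
            if pos > distance // 2 and rng.random() < p_loss_far:
                pos = distance          # lost in a remote area: restart
            elif rng.random() < p_forward:
                pos -= 1
            else:
                pos += 1
        total += hops
    return total / trials

print(round(avg_travel_time(), 1))
```

Raising the remote-area loss rate sharply inflates the mean travel time, which is the kind of non-homogeneous effect the analytical-numerical method is built to quantify.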
The effect of workload dependence in systems: Experimental evaluation, analytic models, and policy development
This dissertation presents an analysis of the performance effects of burstiness (formalized by the autocorrelation function) in multi-tiered systems via a three-pronged approach: experimental measurements, analytic models, and policy development. This analysis considers (a) systems with finite buffers (e.g., systems with admission control that effectively operate as closed systems) and (b) systems with infinite buffers (i.e., systems that operate as open systems).

For multi-tiered systems with a finite buffer size, experimental measurements show that if autocorrelation exists in any tier of a multi-tiered system, then it propagates to all tiers of the system. The presence of autocorrelated flows in all tiers significantly degrades performance. Workload characterization in a real experimental environment driven by the TPC-W benchmark confirms the existence of autocorrelated flows, which originate from the autocorrelated service process of one of the tiers. A simple model is devised that captures the observed behavior. The model is in excellent agreement with experimental measurements and captures the propagation of autocorrelation in the multi-tiered system as well as the resulting performance trends.

For systems with an infinite buffer size, this study focuses on analytic models by proposing and comparing two families of approximations for the departure process of a BMAP/MAP/1 queue that admits batch correlated flows, and whose service time process may be autocorrelated. One approximation is based on the ETAQA methodology for the solution of M/G/1-type processes, and the other arises from lumpability rules.
Formal proofs are provided that both approximations preserve the marginal distribution of the inter-departure times and their initial correlation structures.

This dissertation also demonstrates how knowledge of autocorrelation can be used to effectively improve system performance. D_EQAL, a new load balancing policy for clusters with dependent arrivals, is proposed. D_EQAL separates jobs to servers according to their sizes, as traditional load balancing policies do, but this separation is biased by the effort to reduce performance loss due to autocorrelation in the streams of jobs that are directed to each server. As a result, not all servers are equally utilized (i.e., the load in the system becomes unbalanced), but the performance benefits of this load unbalancing are significant.
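The burstiness measure underlying all of this work, the autocorrelation function, is straightforward to compute from a trace. The sketch below (an illustrative example of the general technique, not code from the dissertation) contrasts a dependent AR(1)-style service process with an independent one; only the dependent trace shows the strong lag-1 autocorrelation that the experiments identify as the source of performance degradation.

```python
import random

def autocorrelation(xs, lag):
    """Sample autocorrelation of a series at the given lag: the
    normalised covariance between x[t] and x[t + lag]."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[t] - mean) * (xs[t + lag] - mean)
              for t in range(n - lag)) / n
    return cov / var

rng = random.Random(0)
# Dependent (autocorrelated) service times: each value carries over
# 80% of the previous one plus a fresh exponential innovation.
dep, x = [], 0.0
for _ in range(10_000):
    x = 0.8 * x + rng.expovariate(1.0)
    dep.append(x)
# Independent service times with the same innovation distribution.
ind = [rng.expovariate(1.0) for _ in range(10_000)]
print(round(autocorrelation(dep, 1), 2))    # clearly positive
print(round(autocorrelation(ind, 1), 2))    # near zero
```

A dispatcher like D_EQAL can use exactly this kind of statistic to decide how strongly to bias its size-based job separation away from a server whose incoming stream is autocorrelated.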