First-Passage Time and Large-Deviation Analysis for Erasure Channels with Memory
This article considers the performance of digital communication systems
transmitting messages over finite-state erasure channels with memory.
Information bits are protected from channel erasures using error-correcting
codes; successful receptions of codewords are acknowledged at the source
through instantaneous feedback. The primary focus of this research is on
delay-sensitive applications, codes with finite block lengths and, necessarily,
non-vanishing probabilities of decoding failure. The contribution of this
article is twofold. A methodology to compute the distribution of the time
required to empty a buffer is introduced. Based on this distribution, the mean
hitting time to an empty queue and delay-violation probabilities for specific
thresholds can be computed explicitly. The proposed techniques apply to
situations where the transmit buffer contains a predetermined number of
information bits at the onset of the data transfer. Furthermore, as additional
performance criteria, large deviation principles are obtained for the empirical
mean service time and the average packet-transmission time associated with the
communication process. This rigorous framework yields a pragmatic methodology
to select code rate and block length for the communication unit as functions of
the service requirements. Examples motivated by practical systems are provided
to further illustrate the applicability of these techniques.
Comment: To appear in IEEE Transactions on Information Theory
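The buffer-emptying (first-passage) time described above can be estimated with a short Monte Carlo sketch. The two-state Gilbert-Elliott channel, its persistence probabilities and per-block decoding probabilities below are hypothetical stand-ins for the paper's finite-state model, not its actual parameters:

```python
import random

# Hypothetical two-state Gilbert-Elliott erasure channel (NOT the paper's
# parameters): per-state persistence and per-codeword decoding probabilities.
P_STAY = {"good": 0.95, "bad": 0.80}
P_DECODE = {"good": 0.90, "bad": 0.30}

def emptying_time(num_codewords, rng):
    """Slots until the transmit buffer empties; instantaneous feedback
    means a failed codeword is simply retransmitted in the next slot."""
    state, remaining, slots = "good", num_codewords, 0
    while remaining > 0:
        slots += 1
        if rng.random() < P_DECODE[state]:
            remaining -= 1                      # codeword ACKed, dequeue it
        if rng.random() > P_STAY[state]:        # channel state transition
            state = "bad" if state == "good" else "good"
    return slots

rng = random.Random(1)
samples = [emptying_time(10, rng) for _ in range(20000)]
mean_T = sum(samples) / len(samples)
viol = sum(t > 25 for t in samples) / len(samples)  # delay-violation probability
print(f"mean emptying time ~ {mean_T:.2f} slots, P(T > 25) ~ {viol:.3f}")
```

The empirical distribution of `samples` also yields the delay-violation probability for any threshold, mirroring the quantities the paper computes explicitly from the analytical distribution.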
Joint rate control and scheduling for providing bounded delay with high efficiency in multihop wireless networks
This thesis considers the problem of supporting traffic with elastic bandwidth requirements and hard end-to-end delay constraints in multi-hop wireless networks, with a focus on source transmission rates and link data rates as the key resource allocation decisions. Specifically, the research objective is to develop a source rate control and scheduling strategy that guarantees bounded average end-to-end queueing delay and maximises the overall utility of all incoming traffic, using the network utility maximisation framework. Network-utility-maximisation-based approaches to supporting delay-sensitive traffic have predominantly relied on either reducing link utilisation or approximating links as M/D/1 queues. Both approaches lead to unpredictable transient behaviour of packet delays and inefficient link utilisation under optimal resource allocation. In contrast, this thesis proposes an approach in which, instead of hard delay constraints based on inaccurate M/D/1 delay estimates, traffic end-to-end delay requirements are guaranteed by proper forms of concave and increasing utility functions of the transmission rates. Specifically, an alternative formulation is presented in which the delay constraint is omitted and each source's utility function is multiplied by a weight factor. The alternative optimisation problem is solved by a distributed scheduling algorithm incorporating a duality-based rate control algorithm at its inner layer, where optimal link prices correlate with average link queueing delays. The proposed approach is then realised by a scheduling algorithm that runs jointly with an integral controller whereby each source regulates the queueing delay on its paths at the desired level, using its utility weight coefficient as the control variable. Since the proposed algorithms are based on solving the alternative concave optimisation problem, they are simple, distributed and lead to maximal link utilisation.
Hence, they avoid the limitations of the previous approaches. The proposed algorithms are shown, using both theoretical analysis and simulation, to achieve asymptotic regulation of end-to-end delay, provided the step size of the proposed integral controller lies within a specified range.
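The price-based inner layer can be illustrated with a toy dual-decomposition loop on a single shared link. This is only a sketch of the general mechanism, not the thesis's algorithm: the weights, capacity and step size are invented, sources use weighted log utilities, and the scalar link price stands in for the average queueing delay:

```python
# Toy dual-decomposition rate control on one shared link (a sketch of the
# general mechanism, not the thesis algorithm). Sources maximise weighted
# log utilities w_i * log(x_i); the link price p plays the role that the
# average queueing delay plays in the thesis. All numbers are invented.
capacity = 10.0
weights = [1.0, 2.0, 3.0]   # hypothetical utility weight coefficients
p, step = 1.0, 0.05

for _ in range(2000):
    rates = [w / p for w in weights]      # x_i = argmax_x w_i*log(x) - p*x
    excess = sum(rates) - capacity
    p = max(1e-6, p + step * excess)      # dual (price) ascent on the constraint

print("rates:", [round(x, 2) for x in rates], "price:", round(p, 3))
```

At the fixed point the link is fully utilised (the rates sum to the capacity) and bandwidth splits in proportion to the weights, which is exactly the lever the thesis's integral controller turns to regulate delay.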
Two extensions of Kingman's GI/G/1 bound
A simple bound for GI/G/1 queues was obtained by Kingman using a discrete martingale transform. We extend this technique to 1) multiclass queues and 2) Markov Additive Processes (MAPs) whose background processes can be time-inhomogeneous or have an uncountable state space. Both extensions are facilitated by a necessary and sufficient ordinary differential equation (ODE) condition for MAPs to admit continuous martingale transforms. Simulations show that the bounds on waiting time distributions are almost exact in heavy traffic, including the cases of 1) heterogeneous input, e.g., mixing Weibull and Erlang-k classes, and 2) Generalized Markovian Arrival Processes, a new class extending the Batch Markovian Arrival Processes to continuous batch sizes.
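Kingman's original bound can be reproduced numerically. The sketch below solves for the decay rate θ* from the condition E[e^(θ(S−A))] = 1 by bisection and checks the resulting exponential bound against a Lindley-recursion simulation; M/M/1 input is used only because θ* = μ − λ is then known in closed form, and the parameter values are arbitrary:

```python
import math, random

# Kingman's bound: P(W > x) <= exp(-theta* x), where theta* is the largest
# theta with E[exp(theta * (S - A))] <= 1. M/M/1 input (lam=0.8, mu=1.0) is
# used here only so the answer theta* = mu - lam = 0.2 is known exactly.
lam, mu = 0.8, 1.0

def mgf_gap(theta):
    # E[e^{theta S}] * E[e^{-theta A}] - 1 for S ~ Exp(mu), A ~ Exp(lam)
    return (mu / (mu - theta)) * (lam / (lam + theta)) - 1.0

lo, hi = 1e-9, mu - 1e-6                  # bisection for the decay rate
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mgf_gap(mid) <= 0 else (lo, mid)
theta_star = lo

# Check against the Lindley recursion W <- max(0, W + S - A)
rng, w, exceed, x, n = random.Random(7), 0.0, 0, 5.0, 200_000
for _ in range(n):
    w = max(0.0, w + rng.expovariate(mu) - rng.expovariate(lam))
    exceed += w > x
print(f"theta* = {theta_star:.4f}, bound = {math.exp(-theta_star * x):.4f}, "
      f"simulated P(W > {x}) = {exceed / n:.4f}")
```

For M/M/1 the exact tail is ρ·e^(−(μ−λ)x), so the bound e^(−(μ−λ)x) is off only by the prefactor ρ, consistent with the near-exactness in heavy traffic reported above.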
On Coding for Reliable Communication over Packet Networks
We present a capacity-achieving coding scheme for unicast or multicast over
lossy packet networks. In the scheme, intermediate nodes perform additional
coding yet do not decode nor even wait for a block of packets before sending
out coded packets. Rather, whenever they have a transmission opportunity, they
send out coded packets formed from random linear combinations of previously
received packets. All coding and decoding operations have polynomial
complexity.
We show that the scheme is capacity-achieving as long as packets received on
a link arrive according to a process that has an average rate. Thus, packet
losses on a link may exhibit correlation in time or with losses on other links.
In the special case of Poisson traffic with i.i.d. losses, we give error
exponents that quantify the rate of decay of the probability of error with
coding delay. Our analysis of the scheme shows that it is not only
capacity-achieving, but that the propagation of packets carrying "innovative"
information follows the propagation of jobs through a queueing network, and
therefore fluid flow models yield good approximations. We consider networks
with both lossy point-to-point and broadcast links, allowing us to model both
wireline and wireless packet networks.
Comment: 33 pages, 6 figures; revised appendix
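The random-linear-combination idea is easy to demonstrate in miniature. The sketch below works over GF(2) rather than the larger fields typically used in practice, so coding and decoding reduce to XOR and Gaussian elimination; packet sizes and counts are arbitrary. A received packet is "innovative" exactly when it contributes a new pivot to the decoder's coefficient matrix:

```python
import random

# Random linear network coding over GF(2) (a miniature; practical systems
# use larger fields). Coding is XOR; the sink decodes by Gaussian
# elimination, and a packet is "innovative" iff it contributes a new pivot.
K = 4
source = [random.Random(i).randbytes(8) for i in range(K)]  # original packets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def coded_packet(rng):
    coeffs = [rng.randint(0, 1) for _ in range(K)]   # random GF(2) combination
    payload = bytes(8)
    for c, pkt in zip(coeffs, source):
        if c:
            payload = xor(payload, pkt)
    return coeffs, payload

rng, rows = random.Random(42), []
while len(rows) < K:                     # collect K innovative packets
    c, pay = coded_packet(rng)
    for rc, rp in rows:                  # forward-reduce against known pivots
        if c[rc.index(1)]:
            c = [a ^ b for a, b in zip(c, rc)]
            pay = xor(pay, rp)
    if any(c):                           # innovative: a new pivot appears
        rows.append((c, pay))

rows.sort(key=lambda r: r[0].index(1))   # pivots are 0..K-1 once rank is full
for i in range(K - 1, -1, -1):           # back-substitution
    ci, payi = rows[i]
    for j in range(i):
        cj, payj = rows[j]
        if cj[ci.index(1)]:
            rows[j] = ([a ^ b for a, b in zip(cj, ci)], xor(payj, payi))
decoded = [pay for _, pay in rows]
print("decoded == source:", decoded == source)
```

Non-innovative packets are simply discarded during forward reduction, which is why intermediate nodes in the scheme can recombine and forward without waiting for whole blocks.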
Markovian Workload Characterization for QoS Prediction in the Cloud.
Resource allocation in the cloud is usually driven by performance predictions, such as estimates of the future incoming load to the servers or of the quality-of-service (QoS) offered by applications to end users. In this context, characterizing web workload fluctuations in an accurate way is fundamental to understand how to provision cloud resources under time-varying traffic intensities. In this paper, we investigate Markovian Arrival Processes (MAPs) and the related MAP/MAP/1 queueing model as a tool for performance prediction of servers deployed in the cloud. MAPs are a special class of Markov models used as a compact description of the time-varying characteristics of workloads. In addition, MAPs can fit heavy-tailed distributions, which are common in HTTP traffic, and can be easily integrated within analytical queueing models to efficiently predict system performance without simulation. By comparison with trace-driven simulation, we observe that existing techniques for MAP parameterization from HTTP log files often lead to inaccurate performance predictions. We then define a maximum likelihood method for fitting MAP parameters based on data commonly available in Apache log files, and a new technique to cope with batch arrivals, which are notoriously difficult to model accurately. Numerical experiments demonstrate the accuracy of our approach for performance prediction of web systems. © 2011 IEEE
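The effect that MAP-style burstiness has on queueing performance can be seen in a small slotted-time sketch. A two-phase modulated Bernoulli stream stands in for the continuous-time MMPP/MAP of the paper (all probabilities are invented), and is compared against an unmodulated stream with the same mean rate feeding the same server:

```python
import random

# Two-phase modulated Bernoulli arrivals (a slotted-time stand-in for the
# continuous-time MMPP/MAP) versus an unmodulated stream of the same mean
# rate, feeding one server. All probabilities below are invented.
p_arr = [0.2, 0.9]        # per-slot arrival probability in phase 0 / phase 1
p_switch = [0.02, 0.05]   # per-slot probability of leaving the current phase
p_srv = 0.6               # per-slot service-completion probability
slots = 200_000

def mean_queue(modulated, rng):
    pi1 = p_switch[0] / (p_switch[0] + p_switch[1])   # stationary P(phase 1)
    mean_p = (1 - pi1) * p_arr[0] + pi1 * p_arr[1]    # matched mean rate
    phase, q, area = 0, 0, 0
    for _ in range(slots):
        p = p_arr[phase] if modulated else mean_p
        q += rng.random() < p
        if q and rng.random() < p_srv:
            q -= 1
        if rng.random() < p_switch[phase]:
            phase = 1 - phase
        area += q
    return area / slots

rng = random.Random(11)
mean_map, mean_iid = mean_queue(True, rng), mean_queue(False, rng)
print(f"mean queue length: modulated {mean_map:.2f} vs unmodulated {mean_iid:.2f}")
```

The modulated stream produces markedly longer queues at the same mean rate, which is why fitting only the mean of an HTTP trace underestimates delays and why the paper fits full MAP parameters instead.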
A hybrid method for performance analysis of G/G/m queueing networks
Open queueing networks are useful for the performance analysis of numerous real systems. Since exact results exist only for a limited class of networks, decomposition methods have been used extensively for the approximate analysis of general networks. This procedure is based on several approximation steps, and the successive approximations can introduce considerable error in the output. In particular, there are no general accurate formulas for computing the mean waiting time and the inter-departure variance in general multiple-server queues. As a result, decomposition methods applied to G/G/m queueing networks yield rough approximations that can deviate significantly from actual performance values. We suggest substituting low-cost simulation estimates for some of the approximate formulae in order to obtain more accurate results while retaining the speed of an analytical method. Numerical experiments are presented to show that the proposed approach provides improved accuracy.
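The hybrid idea can be sketched on a single multi-server node: use the analytic Allen-Cunneen decomposition formula for the mean wait, and obtain a low-cost simulation estimate for comparison where the formula is known to be rough. The parameter values below are arbitrary, and gamma service times give a squared coefficient of variation of 2:

```python
import math, random

# Hybrid sketch on one G/G/m node: the Allen-Cunneen decomposition formula
# for the mean wait next to a low-cost simulation estimate. All parameter
# values are assumed; gamma service times give SCV = 2.
lam, m = 1.8, 2            # arrival rate, number of servers
mean_s, scv_s = 1.0, 2.0   # service-time mean and squared coeff. of variation
scv_a = 1.0                # Poisson arrivals => interarrival SCV = 1
rho = lam * mean_s / m

def erlang_c(m, a):        # delay probability in M/M/m, offered load a
    s = sum(a**k / math.factorial(k) for k in range(m))
    top = a**m / (math.factorial(m) * (1 - a / m))
    return top / (s + top)

wq_mmm = erlang_c(m, lam * mean_s) * mean_s / (m * (1 - rho))
wq_ac = (scv_a + scv_s) / 2 * wq_mmm   # Allen-Cunneen approximation

def sim_wq(n, rng):        # FCFS: each arrival takes the earliest-free server
    free, t, total = [0.0] * m, 0.0, 0.0
    for _ in range(n):
        t += rng.expovariate(lam)
        i = free.index(min(free))
        wait = max(0.0, free[i] - t)
        total += wait
        free[i] = t + wait + rng.gammavariate(1 / scv_s, mean_s * scv_s)
    return total / n

wq_sim = sim_wq(100_000, random.Random(5))
print(f"Allen-Cunneen Wq ~ {wq_ac:.2f}; simulated Wq ~ {wq_sim:.2f}")
```

In a full decomposition method, estimates such as `wq_sim` (and simulated inter-departure variances) would replace the weakest formulae node by node while the rest of the network is still handled analytically.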
Performance Modelling and Resource Allocation of the Emerging Network Architectures for Future Internet
With the rapid development of information and communications technologies, the traditional network architecture has approached its performance limit and is thus unable to meet the requirements of various resource-hungry applications. Significant infrastructure improvements to the network domain are urgently needed to sustain continuous network evolution and innovation. To address this important challenge, tremendous research efforts have been devoted to fostering the evolution towards the Future Internet. Long-Term Evolution Advanced (LTE-A), Software Defined Networking (SDN) and Network Function Virtualisation (NFV) have been proposed as key promising network architectures for the Future Internet and have attracted significant attention in the network and telecom community. This research mainly focuses on the performance modelling and resource allocation of these three architectures. The major contributions are three-fold:
1) LTE-A has been proposed by the 3rd Generation Partnership Project (3GPP) as a promising candidate for the evolution of LTE wireless communication. One of the major features of LTE-A is the concept of Carrier Aggregation (CA). CA enables network operators to exploit fragmented spectrum and increase the peak transmission data rate; however, this technical innovation introduces serious load imbalance in the radio resource allocation of LTE-A. To alleviate this problem, a novel QoS-aware resource allocation scheme, termed the Cross-CC User Migration (CUM) scheme, is proposed in this research to support real-time services, taking into consideration the system throughput, user fairness and QoS constraints.
2) SDN is an emerging technology towards the next-generation Internet. In order to improve the performance of SDN networks, a preemption-based packet-scheduling scheme is first proposed in this research to improve global fairness and reduce the packet loss rate in the SDN data plane. Furthermore, in order to achieve a comprehensive and deeper understanding of the performance behaviour of SDN networks, this work develops two analytical models to investigate the performance of SDN under Poisson and Markov-Modulated Poisson Process (MMPP) traffic, respectively.
3) NFV is regarded as a disruptive technology that enables telecommunication service providers to reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) by decoupling individual network functions from the underlying hardware devices. However, NFV faces the significant challenge of guaranteeing Service-Level Agreements (SLAs) during service provisioning. In order to bridge this gap, a novel comprehensive analytical model based on stochastic network calculus is proposed in this research to investigate the end-to-end performance of NFV networks.
The resource allocation strategies proposed in this study significantly improve network performance in terms of packet loss probability, global allocation fairness and per-user throughput in LTE-A and SDN networks; the analytical models designed in this study can accurately predict the performance of SDN and NFV networks. Both theoretical analysis and simulation experiments are conducted to demonstrate the effectiveness of the proposed algorithms and the accuracy of the designed models. In addition, the models are used as practical and cost-effective tools to pinpoint the performance bottlenecks of SDN and NFV networks under various network conditions.
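For the NFV contribution, the flavour of a network-calculus delay bound can be shown with a deterministic min-plus sketch (the thesis uses stochastic network calculus, which this does not reproduce). Each VNF is modelled as a rate-latency server; a chain of such servers concatenates to the minimum rate and the summed latency, and a token-bucket-constrained flow then has a closed-form delay bound. All numbers are hypothetical:

```python
# Deterministic min-plus network-calculus sketch for a VNF chain (the thesis
# uses stochastic network calculus; this shows only the deterministic core).
# Each VNF is a rate-latency server beta(t) = R * max(t - T, 0); the chain
# concatenates to R = min(R_i), T = sum(T_i). All numbers are hypothetical.
vnfs = [(10.0, 0.002), (8.0, 0.005), (12.0, 0.001)]  # (R_i pkts/ms, T_i ms)
sigma, rho = 20.0, 6.0     # token-bucket flow: burst (pkts), rate (pkts/ms)

R = min(r for r, _ in vnfs)
T = sum(t for _, t in vnfs)
assert rho <= R, "stability: sustained rate must not exceed chain capacity"
delay_bound = T + sigma / R            # worst-case end-to-end delay
backlog_bound = sigma + rho * T        # worst-case backlog in the chain
print(f"delay bound: {delay_bound:.3f} ms, backlog bound: {backlog_bound:.3f} pkts")
```

The stochastic calculus used in the thesis replaces these worst-case curves with probabilistic envelopes, turning the hard bounds above into bounds that hold with a chosen violation probability.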