Robust measurement-based buffer overflow probability estimators for QoS provisioning and traffic anomaly prediction applications
Suitable estimators for a class of Large Deviation approximations of rare
event probabilities based on sample realizations of random processes have been
proposed in our earlier work. These estimators are expressed as non-linear
multi-dimensional optimization problems of a special structure. In this paper,
we develop an algorithm to solve these optimization problems very efficiently
based on their characteristic structure. After discussing the nature of the
objective function and constraint set and their peculiarities, we provide a
formal proof that the developed algorithm is guaranteed to always converge. The
existence of efficient and provably convergent algorithms for solving these
problems is a prerequisite for using the proposed estimators in real-time
problems such as call admission control, adaptive modulation and coding with
QoS constraints, and traffic anomaly detection in high-data-rate communication
networks.
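The abstract does not reproduce the estimators themselves, but the large-deviations idea behind them can be illustrated with a toy measurement-based sketch. Everything below (the function name, block size, and theta grid) is our own illustrative choice: a textbook Chernoff-style overflow estimate, not the paper's optimized estimator.

```python
import numpy as np

def overflow_probability_ld(arrivals, service_rate, buffer, block=50):
    """Toy large-deviations overflow estimate (illustrative only):
    estimate the scaled cumulant generating function Lambda(theta)
    from block sums of the arrival trace, then approximate
    P(queue > buffer) ~ exp(-theta* . buffer), where theta* is the
    largest theta with Lambda(theta) <= theta * service_rate."""
    x = np.asarray(arrivals, dtype=float)
    n = len(x) // block
    sums = x[:n * block].reshape(n, block).sum(axis=1)

    def log_mean_exp(v):
        m = v.max()                      # numerically stable log-mean-exp
        return m + np.log(np.mean(np.exp(v - m)))

    thetas = np.linspace(1e-3, 2.0, 200)
    lam = np.array([log_mean_exp(t * sums) / block for t in thetas])
    feasible = thetas[lam <= thetas * service_rate]
    if feasible.size == 0:
        return 1.0                       # no decay detected: trivial bound
    theta_star = feasible.max()
    return float(np.exp(-theta_star * buffer))
```

Note that the empirical Lambda(theta) is biased low at large theta for short traces, which is one reason robust estimators of the kind the paper proposes are needed in practice.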
On Coding for Reliable Communication over Packet Networks
We present a capacity-achieving coding scheme for unicast or multicast over
lossy packet networks. In the scheme, intermediate nodes perform additional
coding, yet they neither decode nor even wait for a block of packets before
sending out coded packets. Rather, whenever they have a transmission opportunity, they
send out coded packets formed from random linear combinations of previously
received packets. All coding and decoding operations have polynomial
complexity.
We show that the scheme is capacity-achieving as long as packets received on
a link arrive according to a process that has an average rate. Thus, packet
losses on a link may exhibit correlation in time or with losses on other links.
In the special case of Poisson traffic with i.i.d. losses, we give error
exponents that quantify the rate of decay of the probability of error with
coding delay. Our analysis of the scheme shows that it is not only
capacity-achieving, but that the propagation of packets carrying "innovative"
information follows the propagation of jobs through a queueing network, and
therefore fluid flow models yield good approximations. We consider networks
with both lossy point-to-point and broadcast links, allowing us to model both
wireline and wireless packet networks.
Comment: 33 pages, 6 figures; revised appendix
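The packet-level operation the abstract describes, sending random linear combinations and decoding once enough innovative packets have arrived, can be sketched for a single hop. The prime field GF(257) and all names below are our own simplification (practical implementations typically work over GF(2^8)):

```python
import numpy as np

P = 257  # a prime field large enough to hold one byte symbol

def encode(packets, rng):
    """Form one coded packet: a random linear combination (mod P) of the
    source packets, with the coefficient vector carried in the header."""
    coeffs = rng.integers(0, P, size=len(packets))
    payload = (coeffs @ np.array(packets)) % P
    return coeffs, payload

def decode(coded, k):
    """Gaussian elimination mod P over received (coeffs, payload) pairs;
    succeeds once k linearly independent ('innovative') packets arrive."""
    A = np.array([np.concatenate([c, p]) for c, p in coded]) % P
    rows = A.shape[0]
    r = 0
    for col in range(k):
        piv = next((i for i in range(r, rows) if A[i, col]), None)
        if piv is None:
            return None                  # not yet full rank
        A[[r, piv]] = A[[piv, r]]        # swap pivot row into place
        A[r] = (A[r] * pow(int(A[r, col]), P - 2, P)) % P  # normalize pivot
        for i in range(rows):
            if i != r and A[i, col]:
                A[i] = (A[i] - A[i, col] * A[r]) % P
        r += 1
    return A[:k, k:]                     # recovered source packets
```

A received packet is innovative exactly when its coefficient vector lies outside the span of those already received, which is what the rank check in `decode` captures.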
Performance Modelling and Optimisation of Multi-hop Networks
A major challenge in the design of large-scale networks is to predict and optimise the
total time and energy consumption required to deliver a packet from a source node to a
destination node. Examples of such complex networks include wireless ad hoc and sensor
networks which need to deal with the effects of node mobility, routing inaccuracies, higher
packet loss rates, limited or time-varying effective bandwidth, energy constraints, and the
computational limitations of the nodes. They also include more reliable communication
environments, such as wired networks, that are susceptible to random failures, security
threats and malicious behaviours which compromise their quality of service (QoS) guarantees.
In such networks, packets traverse a number of hops that cannot be determined
in advance and encounter non-homogeneous network conditions that have been largely
ignored in the literature. This thesis examines analytical properties of packet travel in
large networks and investigates the implications of some packet coding techniques on both
QoS and resource utilisation.
Specifically, we use a mixed jump and diffusion model to represent packet traversal
through large networks. The model accounts for network non-homogeneity in both
the routing and the loss rate that a packet experiences as it passes through
successive segments of a source-to-destination route. A mixed analytical-numerical method is developed to compute
the average packet travel time and the energy it consumes. The model captures
the effects of an increased loss rate in areas remote from the source and
destination, of a variable rate of advancement towards the destination along
the route, and of defending against malicious packets within a certain distance
of the destination. We then consider sending
multiple coded packets that follow independent paths to the destination node so as to
mitigate the effects of losses and routing inaccuracies. We study a homogeneous medium
and obtain the time-dependent properties of the packet’s travel process, allowing us to
compare the merits and limitations of coding, both in terms of delivery times and energy
efficiency. Finally, we propose models that can assist in the analysis and optimisation
of the performance of inter-flow network coding (NC). We analyse two queueing models
for a router that carries out NC, in addition to its standard packet routing function. The
approach is extended to the study of multiple hops, which leads to an optimisation problem
that characterises the optimal time that packets should be held back in a router, waiting
for coding opportunities to arise, so that the total packet end-to-end delay is minimised.
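The trade-off in the final paragraph, holding packets back to create coding opportunities at the cost of added delay, can be illustrated with a toy simulation. This is our own simplified two-flow XOR relay in the spirit of inter-flow NC, not the thesis's queueing model; all names and parameters are illustrative.

```python
import random

def simulate_xor_router(rate_a, rate_b, hold_time, horizon, seed=1):
    """Toy two-flow XOR coding router: a packet waits up to `hold_time`
    for a partner from the opposite flow; a found pair is XOR-coded and
    sent as one transmission, otherwise the packet goes out uncoded when
    its timer expires. Returns (mean delay, transmissions per packet)."""
    random.seed(seed)
    arrivals = []
    for rate, flow in ((rate_a, 0), (rate_b, 1)):
        t = 0.0
        while True:
            t += random.expovariate(rate)   # Poisson arrivals per flow
            if t >= horizon:
                break
            arrivals.append((t, flow))
    arrivals.sort()
    waiting = [None, None]          # arrival time of the held packet per flow
    delays, transmissions = [], 0
    for t, flow in arrivals:
        # flush any held packet whose hold timer has already expired
        for f in (0, 1):
            if waiting[f] is not None and t - waiting[f] > hold_time:
                delays.append(hold_time)    # sent uncoded at timer expiry
                transmissions += 1
                waiting[f] = None
        other = 1 - flow
        if waiting[other] is not None:      # coding opportunity: XOR, send
            delays.extend([t - waiting[other], 0.0])
            transmissions += 1
            waiting[other] = None
        elif waiting[flow] is None:
            waiting[flow] = t               # hold, waiting for a partner
        else:
            delays.append(0.0)              # hold slot busy: forward uncoded
            transmissions += 1
    for f in (0, 1):                        # drain leftovers at the horizon
        if waiting[f] is not None:
            delays.append(hold_time)
            transmissions += 1
    return sum(delays) / len(delays), transmissions / len(delays)
```

Sweeping `hold_time` trades transmissions per packet against mean holding delay, which is precisely the optimisation problem the thesis formalises.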
Characterizing Heavy-Tailed Distributions Induced by Retransmissions
Consider a generic data unit of random size L that needs to be transmitted
over a channel of unit capacity. The channel availability dynamics is modeled
as an i.i.d. sequence {A, A_i}, i > 0, that is independent of L. During each period
of time that the channel becomes available, say A_i, we attempt to transmit the
data unit. If L < A_i, the transmission is considered successful; otherwise, we
wait for the next available period and attempt to retransmit the data from the
beginning. We investigate the asymptotic properties of the number of
retransmissions N and the total transmission time T until the data is
successfully transmitted. In the context of studying completion times in
systems with failures where jobs restart from the beginning, it was shown that
this model results in power-law and, in general, heavy-tailed delays. The main
objective of this paper is to uncover the detailed structure of this class of
heavy-tailed distributions induced by retransmissions. More precisely, we study
how the functional dependence between P[L>x] and P[A>x] impacts the
distributions of N and T. In particular, we discover several functional
criticality points that separate classes of different functional behavior of
the distribution of N. We also discuss the engineering implications of our
results for communication networks, since retransmission is a fundamental
component of existing network protocols at all communication layers, from the
physical to the application layer.
Comment: 39 pages, 2 figures
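For the exponential case, the power-law behaviour the abstract refers to is easy to reproduce by Monte Carlo. The sketch below is our own illustration (all names and parameters are ours); it uses the fact that, conditional on L, the number of attempts N is geometric with success probability P[A > L].

```python
import numpy as np

def retransmission_attempts(mu, lam, n_samples=200_000, seed=0):
    """Sample the number of retransmission attempts N when the data-unit
    size is L ~ Exp(mu) and channel availability periods are A_i ~ Exp(lam).
    Given L, each attempt succeeds independently with probability
    P[A > L] = exp(-lam * L), so N is geometric with that parameter.
    Known asymptotics: P[N > n] decays as a power law with index mu/lam."""
    rng = np.random.default_rng(seed)
    L = rng.exponential(1.0 / mu, n_samples)
    return rng.geometric(np.exp(-lam * L))

# Estimate the tail index for mu = lam = 1; integrating out L gives the
# exact tail P[N > n] = 1/(n + 1), so the log-log slope is close to -1.
N = retransmission_attempts(mu=1.0, lam=1.0)
ns = np.array([10, 20, 40, 80, 160])
tail = np.array([(N > n).mean() for n in ns])
slope = np.polyfit(np.log(ns), np.log(tail), 1)[0]
```

Varying mu/lam moves the tail index, reproducing the functional criticality the paper studies in far greater generality.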