Latency Analysis of Systems with Multiple Interfaces for Ultra-Reliable M2M Communication
One of the ways to satisfy the requirements of ultra-reliable low-latency
communication for mission-critical Machine-Type Communications (MTC)
applications is to integrate multiple communication interfaces. In order to
estimate the performance in terms of latency and reliability of such an
integrated communication system, we propose an analysis framework that combines
traditional reliability models with technology-specific latency probability
distributions. In our proposed model we demonstrate how failure correlation
between technologies can be taken into account. We show for the considered
scenario with fiber and different cellular technologies how up to 5-nines
reliability can be achieved and how packet splitting can be used to reduce
latency substantially while keeping 4-nines reliability. The model has been
validated through simulation.
Comment: Accepted for IEEE SPAWC'1
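The combination of a traditional reliability model with technology-specific latency distributions can be sketched numerically. The snippet below is an illustrative assumption, not the paper's model: it uses an exponential latency distribution as a stand-in for each technology's latency model, treats interface failures as independent (the paper also handles correlation), and the availability and latency figures are hypothetical.

```python
import math

def interface_success_prob(availability, mean_latency, deadline):
    # P(interface is up AND delivers within the deadline), assuming an
    # exponential latency distribution as a placeholder for the
    # technology-specific latency model.
    return availability * (1.0 - math.exp(-deadline / mean_latency))

def parallel_reliability(interfaces, deadline):
    # 1-out-of-N parallel reliability model: the packet is delivered if
    # at least one interface succeeds before the deadline.
    # Independence across interfaces is assumed in this sketch.
    p_all_fail = 1.0
    for availability, mean_latency in interfaces:
        p_all_fail *= 1.0 - interface_success_prob(availability, mean_latency, deadline)
    return 1.0 - p_all_fail

# Hypothetical numbers: a fiber link and a cellular link
links = [(0.999, 0.005), (0.99, 0.050)]  # (availability, mean latency in s)
print(parallel_reliability(links, deadline=0.1))
```

Adding a second, dissimilar interface pushes the delivered-by-deadline probability well past what either link achieves alone, which is the effect the analysis framework quantifies.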
Ultra-Reliable Low Latency Communication (URLLC) using Interface Diversity
An important ingredient of future 5G systems will be Ultra-Reliable
Low-Latency Communication (URLLC). A way to offer URLLC without intervention in
the baseband/PHY layer design is to use interface diversity and integrate
multiple communication interfaces, each interface based on a different
technology. In this work, we propose to use coding to seamlessly distribute
coded payload and redundancy data across multiple available communication
interfaces. We formulate an optimization problem to find the payload allocation
weights that maximize the reliability at specific target latency values. In
order to estimate the performance in terms of latency and reliability of such
an integrated communication system, we propose an analysis framework that
combines traditional reliability models with technology-specific latency
probability distributions. Our model can account for failure
correlation among interfaces/technologies. By considering different scenarios,
we find that optimized strategies can in some cases significantly outperform
strategies based on k-out-of-n erasure codes, where the latter do not
account for the characteristics of the different interfaces. The model has been
validated through simulation and is supported by experimental results.
Comment: Accepted for IEEE Transactions on Communication
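The k-out-of-n erasure-coding baseline can be made concrete: if the payload is coded into n fragments, one per interface, delivery succeeds when any k fragments arrive before the target latency. The sketch below computes that tail probability with a Poisson-binomial recursion; it assumes independence across interfaces, and the per-interface delivery probabilities are hypothetical, not values from the paper.

```python
def k_out_of_n_reliability(success_probs, k):
    # Poisson-binomial tail: probability that at least k of the n
    # fragments (one per interface) arrive before the deadline.
    # success_probs[i] = P(fragment on interface i arrives in time);
    # independence across interfaces is assumed in this sketch.
    n = len(success_probs)
    # dp[j] = probability that exactly j fragments have arrived so far
    dp = [1.0] + [0.0] * n
    for p in success_probs:
        for j in range(n, 0, -1):
            dp[j] = dp[j] * (1 - p) + dp[j - 1] * p
        dp[0] *= 1 - p
    return sum(dp[k:])

# Hypothetical per-interface delivery probabilities at some deadline
probs = [0.999, 0.95, 0.90]
print(k_out_of_n_reliability(probs, 1))  # full replication, 1-out-of-3
print(k_out_of_n_reliability(probs, 2))  # split payload, 2-out-of-3
```

The trade-off the paper optimizes is visible here: splitting (k = 2) sends less data per interface, reducing latency, but its reliability is below full replication (k = 1), which is why the payload-allocation weights matter.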
Performance analysis of TCP and TCP-friendly rate control flows in wired and wireless networks
DDoS detection based on traffic self-similarity
Distributed denial of service (DDoS) attacks are a common occurrence on the internet and are becoming more intense as
the botnets used to launch them grow bigger. Preventing or stopping DDoS is not possible without radically changing the
internet infrastructure; various DDoS mitigation techniques have been devised with different degrees of success. All mitigation
techniques share the need for a DDoS detection mechanism.
DDoS detection based on traffic self-similarity estimation is a relatively new approach, built on the notion that undisturbed
network traffic displays fractal-like properties. These fractal-like properties are known to degrade in the presence of abnormal
traffic conditions such as DDoS. Detection is possible by observing changes in the level of self-similarity in the traffic flow at the
target of the attack.
Existing literature assumes that DDoS traffic lacks the self-similar properties of undisturbed traffic. We show how existing
botnets could be used to generate a self-similar traffic flow and thus break this assumption. We then study the implications of
self-similar attack traffic on DDoS detection.
We find that, even when DDoS traffic is self-similar, detection is still possible. We also find that the traffic flow resulting from the
superimposition of the DDoS flow and the legitimate traffic flow possesses a level of self-similarity that depends non-linearly on both
the relative traffic intensity and the difference in self-similarity between the two incoming flows.
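Self-similarity is commonly quantified by the Hurst parameter H, and a detector of the kind described above needs an estimator for it. The sketch below implements one classic choice, the aggregated-variance method, as a generic illustration; it is not claimed to be the estimator used in the thesis.

```python
import math
import random

def hurst_aggregated_variance(series, block_sizes):
    # Aggregated-variance estimator of the Hurst parameter H.
    # For a self-similar process, Var(block mean at size m) ~ m^(2H - 2),
    # so a log-log regression of block-mean variance against m has
    # slope 2H - 2, giving H = 1 + slope / 2.
    logs_m, logs_v = [], []
    for m in block_sizes:
        nblocks = len(series) // m
        means = [sum(series[i * m:(i + 1) * m]) / m for i in range(nblocks)]
        mu = sum(means) / len(means)
        var = sum((x - mu) ** 2 for x in means) / len(means)
        if var > 0:
            logs_m.append(math.log(m))
            logs_v.append(math.log(var))
    # least-squares slope of log(variance) vs. log(block size)
    n = len(logs_m)
    mx, my = sum(logs_m) / n, sum(logs_v) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(logs_m, logs_v)) / \
            sum((x - mx) ** 2 for x in logs_m)
    return 1.0 + slope / 2.0

# Independent (short-range dependent) traffic should give H near 0.5;
# long-range dependent traffic would yield H in (0.5, 1).
random.seed(1)
iid = [random.random() for _ in range(100_000)]
print(hurst_aggregated_variance(iid, [1, 4, 16, 64, 256]))
```

A detector along these lines would track H over sliding windows at the attack target and flag a sustained shift; the thesis's finding is that the shift remains observable even when the attack flow itself is self-similar.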
Control of real-time multimedia applications in best-effort networks
The increasing demand for real-time multimedia applications and the lack
of quality-of-service (QoS) support in public best-effort or Internet Protocol (IP)
networks have prompted many researchers to propose improvements to the QoS of such
networks. This research aims to improve the QoS of real-time multimedia applications
in public best-effort networks, without modifying the core network infrastructure or
the existing codecs of the original media applications.
A source buffering control is studied based on a fluid model developed for a single
flow transported over a best-effort network while allowing for flow reversal. It is shown
that this control is effective for QoS improvement only when there is sufficient flow
reversal or packet reordering in the network.
An alternative control strategy based on predictive multi-path switching is studied,
in which only two paths are considered. Initially, an emulation study
is performed, exploring the impact of path loss rate and traffic delay signal frequency
content on the proposed control. The study reveals that this control strategy provides
the best QoS improvement when the average comprehensive loss rates of the two paths
involved are between 5% and 15%, and when the delay signal frequency content is
around 0.5 Hz. Linear and nonlinear predictors are developed using actual network
data for use in predictive multi-path switching control. The control results show
that predictive path switching is better than no path switching, yet no single predictor is best for all cases studied. A voting-based control strategy is proposed
to overcome this problem. The results show that the voting-based control strategy
yields better performance in all cases studied. An actual voice quality test is
performed, confirming that predictive path switching is better than no path switching.
Despite the improvements obtained, predictive path switching control has some
scalability problems and other shortcomings that require further investigation. If
there are more paths available to choose from, the increasing overhead in probing
traffic might become unacceptable. Further, if most of the VoIP flows on the Internet
use this control strategy, then the conclusions of this research might be different,
requiring modifications to the proposed approach. Further studies on these problems
are needed.
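The voting-based strategy described above can be sketched as a majority vote over the path recommendations of the individual predictors. The aggregation rule below is a simplified assumption for illustration; the linear and nonlinear predictors that would feed it are developed in the thesis and are not reproduced here.

```python
def vote_switch(predictions, current_path):
    # Majority vote over per-predictor path recommendations: switch
    # away from the current path only when a strict majority of the
    # predictors agree on the same alternative. Staying put on a tie
    # avoids needless switching when the predictors disagree.
    tally = {}
    for path in predictions:
        tally[path] = tally.get(path, 0) + 1
    best, votes = max(tally.items(), key=lambda kv: kv[1])
    if best != current_path and votes > len(predictions) / 2:
        return best
    return current_path

# Hypothetical recommendations from three predictors for paths "A"/"B"
print(vote_switch(["A", "B", "B"], current_path="A"))  # majority favors "B"
print(vote_switch(["A", "A", "B"], current_path="A"))  # stay on "A"
```

Because a switch requires a strict majority, one mispredicting model cannot trigger it on its own, which matches the thesis's observation that the voting strategy outperforms any single predictor across all cases studied.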