Machine learning-based available bandwidth estimation
Today’s Internet Protocol (IP), the Internet’s network-layer protocol, provides
a best-effort service to all users without any guaranteed bandwidth. However,
for certain applications that have stringent network performance requirements
in terms of bandwidth, it is critically important to provide Quality of Service
(QoS) guarantees in IP networks. The end-to-end available bandwidth of a
network path, i.e., the residual capacity left over by other traffic, is
determined by its tight link, i.e., the link with the minimal available bandwidth.
The tight link may differ from the bottleneck link, i.e., the link with the minimal
capacity.
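As a toy illustration of these definitions (all link capacities and cross-traffic rates below are made-up values), the tight link and the bottleneck link of a path need not coincide:

```python
# Each link: (capacity, cross_traffic_rate) in Mbit/s (illustrative values).
links = [(100, 80), (50, 10), (1000, 400)]

# Bottleneck link: the link with the minimal capacity.
bottleneck = min(links, key=lambda link: link[0])

# Tight link: the link with the minimal available (residual) bandwidth;
# it determines the end-to-end available bandwidth of the path.
tight = min(links, key=lambda link: link[0] - link[1])
path_avail_bw = tight[0] - tight[1]

print(bottleneck)     # (50, 10) -> the 50 Mbit/s link
print(tight)          # (100, 80) -> tight link != bottleneck link
print(path_avail_bw)  # 20 Mbit/s
```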
Passive and active measurements are the two fundamental approaches used
to estimate the available bandwidth in IP networks. Unlike passive measurement tools that are based on the non-intrusive monitoring of traffic, active tools
are based on the concept of self-induced congestion. The dispersion, which
arises when packets traverse a network, carries information that can reveal relevant network characteristics. Using a fluid-flow probe gap model of a tight link
with First-In, First-Out (FIFO) multiplexing, established probing tools measure the
packet dispersion to estimate the available bandwidth. Difficulties arise, how-
ever, if the dispersion is distorted compared to the model, e.g., by non-fluid
traffic, multiple tight links, clustering of packets due to interrupt coalescing
and inaccurate time-stamping in general. It is recognized that modeling these
effects is cumbersome if not intractable.
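A minimal sketch of the fluid-flow probe gap model for a single FIFO tight link (illustrative units: packet size in bits, gaps in seconds, rates in bit/s; this is the textbook model, not any tool's actual implementation):

```python
def output_gap(g_in, size, capacity, cross_rate):
    """Fluid FIFO model: the packet gap expands once the probe rate
    plus the cross-traffic rate exceeds the tight link's capacity."""
    return max(g_in, (size + cross_rate * g_in) / capacity)

def estimate_avail_bw(g_in, g_out, size, capacity):
    """Invert the model: infer the cross-traffic rate from the
    expanded gap, then subtract it from the capacity."""
    if g_out <= g_in:
        # No self-induced congestion: only a lower bound (the probe rate).
        return size / g_in
    cross_rate = (capacity * g_out - size) / g_in
    return capacity - cross_rate

# Example: 1500-byte probes sent at 80 Mbit/s over a 100 Mbit/s tight
# link carrying 60 Mbit/s of fluid cross traffic -> estimate ~40 Mbit/s.
size, capacity, cross = 12_000, 100e6, 60e6
g_in = size / 80e6
g_out = output_gap(g_in, size, capacity, cross)
print(estimate_avail_bw(g_in, g_out, size, capacity) / 1e6)
```

The distortions listed above (non-fluid traffic, multiple tight links, interrupt coalescing) are precisely the effects this idealized inversion cannot capture.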
To alleviate the variability of noise-afflicted packet gaps, the state-of-the-art
bandwidth estimation techniques use post-processing of the measurement results, e.g., averaging over several packet pairs or packet trains, linear regression,
or a Kalman filter. These techniques, however, do not overcome the basic as-
sumptions of the deterministic fluid model. While packet trains and statistical
post-processing help to reduce the variability of available bandwidth estimates,
these cannot resolve systematic deviations such as the underestimation bias
in case of random cross traffic and multiple tight links. The limitations of the
state-of-the-art methods motivate us to explore the use of machine learning in
end-to-end active and passive available bandwidth estimation.
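For concreteness, one post-processing step mentioned above, a scalar Kalman filter over a sequence of noisy estimates, can be sketched as follows (the noise variances are made-up values):

```python
def kalman_smooth(measurements, q=1.0, r=25.0):
    """Scalar Kalman filter: the state is the available bandwidth,
    modeled as a random walk with process noise variance q;
    r is the measurement noise variance (both illustrative)."""
    x, p = measurements[0], r      # initialize from the first sample
    smoothed = [x]
    for z in measurements[1:]:
        p += q                     # predict: uncertainty grows
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct toward the new measurement
        p *= 1 - k
        smoothed.append(x)
    return smoothed

print(kalman_smooth([50, 60, 40, 55]))
```

Such a filter reduces variance but, as noted above, cannot remove a systematic bias: it converges toward the mean of the biased measurements.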
We investigate how to benefit from machine learning while using standard packet train probes for active available bandwidth estimation. To reduce
the amount of required training data, we propose a regression-based scale-
invariant method that is applicable without prior calibration to networks of arbitrary capacity. To reduce the amount of probe traffic further, we implement
a neural network that acts as a recommender and can effectively select the
probe rates that reduce the estimation error most quickly. We also compare our
method with other regression-based supervised machine learning techniques.
Furthermore, we propose two different multi-class classification-based meth-
ods for available bandwidth estimation. The first method employs reinforcement learning that learns through the network path’s observations without
having a training phase. We formulate the available bandwidth estimation as a
single-state Markov Decision Process (MDP) multi-armed bandit problem and
implement the ε-greedy algorithm to find the available bandwidth, where ε is
a parameter that controls the exploration vs. exploitation trade-off.
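The ε-greedy loop can be sketched as follows; the candidate probe rates and the reward function are hypothetical stand-ins, since the actual reward design is not specified in this abstract:

```python
import random

def epsilon_greedy(rates, reward, rounds=1000, eps=0.1):
    """Single-state MDP / multi-armed bandit: each arm is a candidate
    probe rate; eps controls exploration vs. exploitation."""
    q = [0.0] * len(rates)   # running mean reward per arm
    n = [1] * len(rates)
    for a in range(len(rates)):          # initial sweep: try each arm once
        q[a] = reward(rates[a])
    for _ in range(rounds):
        if random.random() < eps:                    # explore
            a = random.randrange(len(rates))
        else:                                        # exploit best arm
            a = max(range(len(rates)), key=lambda i: q[i])
        n[a] += 1
        q[a] += (reward(rates[a]) - q[a]) / n[a]     # incremental mean
    return rates[max(range(len(rates)), key=lambda i: q[i])]

# Hypothetical reward: the closer the probed rate is to the (unknown)
# available bandwidth of 40 Mbit/s, the higher the noisy payoff.
random.seed(1)
best = epsilon_greedy(list(range(10, 101, 10)),
                      lambda rate: -abs(rate - 40) + random.gauss(0, 1))
print(best)  # converges to the 40 Mbit/s arm
```

With no training phase, the agent improves purely from its own probing observations, which is the online behavior described above.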
We propose another supervised learning-based classification method to obtain
reliable available bandwidth estimates with reduced network overhead in
networks where the available bandwidth changes frequently. In such networks,
the reinforcement learning-based method may take longer to converge, as it has
no training phase and learns in an online manner. We also compare our method
with different classification-based supervised machine learning techniques.
Furthermore, considering the correlated changes in a network's traffic over
time, we apply filtering techniques to the estimation results in order to
track the available bandwidth changes.
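The abstract does not name the filters used; as one minimal possibility, an exponentially weighted moving average gives recent estimates more weight and thus tracks changes:

```python
def ewma_track(estimates, alpha=0.5):
    """Exponentially weighted moving average: alpha in (0, 1] weights
    the newest estimate; larger alpha reacts faster to changes."""
    tracked = float(estimates[0])
    out = [tracked]
    for e in estimates[1:]:
        tracked = alpha * e + (1 - alpha) * tracked
        out.append(tracked)
    return out

# The tracked value follows a step change in the available bandwidth.
print(ewma_track([10, 10, 50, 50]))  # [10.0, 10.0, 30.0, 40.0]
```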
Active probing techniques provide flexibility in designing the input struc-
ture. In contrast, the vast majority of Internet traffic is Transmission Control
Protocol (TCP) flows that exhibit a rather chaotic traffic pattern. We investigate
how the theory of active probing can be used to extract relevant information
from passive TCP measurements. We extend our method to perform the estima-
tion using only sender-side measurements of TCP data and acknowledgment
packets. However, non-fluid cross traffic, multiple tight links, and packet loss
in the reverse path may alter the spacing of acknowledgments and hence increase
the measurement noise. To obtain reliable available bandwidth estimates from
noise-afflicted acknowledgment gaps, we propose a neural network-based method.
We conduct a comprehensive measurement study in a controlled network
testbed at Leibniz University Hannover. We evaluate our proposed methods
under a variety of notoriously difficult network conditions that have not been
included in the training such as randomly generated networks with multiple
tight links, heavy cross traffic burstiness, delays, and packet loss. Our testing
results reveal that our proposed machine learning-based techniques are able to
identify the available bandwidth with high precision from active and passive
measurements. Furthermore, our reinforcement learning-based method without any
training phase shows accurate and fast convergence to available bandwidth
estimates.
Optical Frequency Comb Noise Characterization Using Machine Learning
A novel tool, based on a Bayesian filtering framework and the
expectation-maximization algorithm, is numerically and experimentally
demonstrated for accurate frequency comb noise characterization. The tool is
statistically optimal in a mean-square-error sense, works over a wide range of
SNRs, and offers more accurate noise estimation than conventional methods.
Partially Blind Handovers for mmWave New Radio Aided by Sub-6 GHz LTE Signaling
For a base station that supports cellular communications in sub-6 GHz LTE and
millimeter (mmWave) bands, we propose a supervised machine learning algorithm
to improve the success rate in the handover between the two radio frequencies
using sub-6 GHz and mmWave prior channel measurements within a temporal window.
The main contributions of our paper are to 1) introduce partially blind
handovers, 2) employ machine learning to perform handover success predictions
from sub-6 GHz to mmWave frequencies, and 3) show that this machine learning
based algorithm combined with partially blind handovers can improve the
handover success rate in a realistic network setup of colocated cells.
Simulation results show improvement in handover success rates for our proposed
algorithm compared to standard handover algorithms.
Comment: (c) 2018 IEEE.
One-Class Support Measure Machines for Group Anomaly Detection
We propose one-class support measure machines (OCSMMs) for group anomaly
detection which aims at recognizing anomalous aggregate behaviors of data
points. The OCSMMs generalize well-known one-class support vector machines
(OCSVMs) to a space of probability measures. By formulating the problem as
quantile estimation on distributions, we can establish an interesting
connection to the OCSVMs and variable kernel density estimators (VKDEs) over
the input space on which the distributions are defined, bridging the gap
between large-margin methods and kernel density estimators. In particular, we
show that various types of VKDEs can be considered as solutions to a class of
regularization problems studied in this paper. Experiments on Sloan Digital Sky
Survey dataset and High Energy Particle Physics dataset demonstrate the
benefits of the proposed framework in real-world applications.
Comment: Conference on Uncertainty in Artificial Intelligence (UAI 2013).