An adaptive admission control and load balancing algorithm for a QoS-aware Web system
The main objective of this thesis is the design of an adaptive algorithm for admission control and content-aware load balancing for Web traffic. To set the context of this work, several reviews are included to introduce the reader to the background concepts of Web load balancing, admission control and the Internet traffic characteristics that may affect the performance of a Web site. The admission control and load balancing algorithm described in this thesis manages the distribution of traffic to a Web cluster based on QoS requirements. The goal of the proposed scheduling algorithm is to avoid situations in which the system provides lower performance than desired due to server congestion. This is achieved through the implementation of forecasting calculations. The increased computational cost of the algorithm, however, introduces some overhead. This is the reason for designing an adaptive time-slot scheduling that sets the execution times of the algorithm depending on the burstiness of the traffic arriving at the system. The predictive scheduling algorithm proposed therefore includes an adaptive overhead control. Once the scheduling of the algorithm is defined, we design the admission control module based on throughput predictions. The results obtained by several throughput predictors are compared, and one of them is selected for inclusion in our algorithm. The utilisation level that the Web servers will have in the near future is also forecast and reserved for each service depending on the Service Level Agreement (SLA). Our load balancing strategy is based on a classical policy; hence, a comparison of several classical load balancing policies is also included to determine which of them best fits our algorithm. A simulation model has been designed to obtain the results presented in this thesis.
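The SLA-driven admission decision described above can be sketched in a few lines. Everything here is illustrative, not the thesis's actual predictor or policy: the exponential smoothing, the per-class reservations and the capacity value are assumptions standing in for the forecasting and SLA machinery the abstract names.

```python
# Sketch of SLA-driven admission control with a predicted-utilisation check.
# Predictor, reservations and capacity are illustrative placeholders, not
# the thesis's actual implementation.

def predict_utilisation(history, alpha=0.5):
    """Exponentially weighted forecast of near-future server utilisation."""
    forecast = history[0]
    for sample in history[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast

def admit(request_class, history, sla_reserved, capacity=1.0):
    """Admit a request only if the forecast utilisation plus the SLA
    reservation for its class still fits within server capacity."""
    forecast = predict_utilisation(history)
    return forecast + sla_reserved[request_class] <= capacity

sla = {"premium": 0.3, "basic": 0.1}
print(admit("premium", [0.4, 0.5, 0.6], sla))  # utilisation trending up, still fits
print(admit("premium", [0.9, 0.9, 0.9], sla))  # near saturation, rejected
```

The point of forecasting rather than reacting is visible in the second call: the request is rejected before the reserved headroom is consumed, instead of after congestion appears.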
iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking
Video continues to dominate network traffic, yet operators today have poor
visibility into the number, duration, and resolutions of the video streams
traversing their domain. Current approaches are inaccurate, expensive, or
unscalable, as they rely on statistical sampling, middle-box hardware, or
packet inspection software. We present iTelescope, the first intelligent,
inexpensive, and scalable SDN-based solution for identifying and classifying
video flows in real-time. Our solution is novel in combining dynamic flow rules
with telemetry and machine learning, and is built on commodity OpenFlow
switches and open-source software. We develop a fully functional system, train
it in the lab using multiple machine learning algorithms, and validate its
performance to show over 95% accuracy in identifying and classifying video
streams from many providers, including YouTube and Netflix. Lastly, we conduct
tests to demonstrate its scalability to tens of thousands of concurrent
streams, and deploy it live on a campus network serving several hundred real
users. Our system gives unprecedented fine-grained real-time visibility of
video streaming performance to operators of enterprise and carrier networks at
very low cost. Comment: 12 pages, 16 figures.
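The telemetry-plus-ML idea can be made concrete with a toy classifier. The sketch below is not iTelescope's pipeline: it stands in for the OpenFlow byte counters with a list of per-interval byte counts, and for the paper's multiple ML algorithms with a hand-rolled nearest-centroid classifier over two assumed features (mean rate and burstiness).

```python
# Toy illustration of classifying a flow as video or not from periodic
# byte-count samples. Features, training data and the nearest-centroid
# classifier are illustrative assumptions, not iTelescope's actual design.
import math

def features(byte_counts):
    """Mean and burstiness (std/mean) of per-interval byte counts."""
    mean = sum(byte_counts) / len(byte_counts)
    var = sum((b - mean) ** 2 for b in byte_counts) / len(byte_counts)
    return (mean, math.sqrt(var) / mean if mean else 0.0)

def train(labelled_flows):
    """Average the feature vectors of each class into a centroid."""
    centroids = {}
    for label, flows in labelled_flows.items():
        feats = [features(f) for f in flows]
        centroids[label] = tuple(sum(c) / len(c) for c in zip(*feats))
    return centroids

def classify(byte_counts, centroids):
    """Assign the flow to the class with the nearest centroid."""
    f = features(byte_counts)
    return min(centroids, key=lambda lbl: math.dist(f, centroids[lbl]))

training = {
    "video": [[5000, 5200, 4900, 5100], [8000, 7900, 8100, 8000]],  # steady, high rate
    "web":   [[300, 10, 0, 20], [500, 0, 40, 0]],                   # bursty, low rate
}
model = train(training)
print(classify([6000, 6100, 5900, 6050], model))  # steady high-rate flow
```

The design point the paper exploits is that video streams have a distinctive telemetry signature (sustained, relatively smooth rates), so even coarse flow counters separate them from short bursty web transfers.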
Datacenter Traffic Control: Understanding Techniques and Trade-offs
Datacenters provide cost-effective and flexible access to scalable compute
and storage resources necessary for today's cloud computing needs. A typical
datacenter is made up of thousands of servers connected with a large network
and usually managed by one operator. To provide quality access to the variety
of applications and services hosted on datacenters and maximize performance, it
is necessary to use datacenter networks effectively and efficiently.
Datacenter traffic is often a mix of several classes with different priorities
and requirements. This includes user-generated interactive traffic, traffic
with deadlines, and long-running traffic. To this end, custom transport
protocols and traffic management techniques have been developed to improve
datacenter network performance.
In this tutorial paper, we review the general architecture of datacenter
networks, various topologies proposed for them, their traffic properties,
general traffic control challenges in datacenters and general traffic control
objectives. The purpose of this paper is to bring out the important
characteristics of traffic control in datacenters and not to survey all
existing solutions (as this is virtually impossible given the massive body of
existing research). We hope to provide readers with a wide range of options and
factors while considering a variety of traffic control mechanisms. We discuss
various characteristics of datacenter traffic control including management
schemes, transmission control, traffic shaping, prioritization, load balancing,
multipathing, and traffic scheduling. Next, we point to several open challenges
as well as new and interesting networking paradigms. At the end of this paper,
we briefly review inter-datacenter networks that connect geographically
dispersed datacenters, which have been receiving increasing attention recently
and pose interesting and novel research problems. Comment: Accepted for Publication in IEEE Communications Surveys and Tutorials.
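Of the mechanisms the tutorial lists, traffic shaping is the easiest to illustrate compactly. Below is a textbook token-bucket shaper; the rate and burst parameters are illustrative and not taken from any system the paper surveys.

```python
# Textbook token-bucket shaper: packets are released only when enough tokens
# have accumulated, bounding the average rate while permitting short bursts.
# Parameters are illustrative.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (bytes) added per second
        self.capacity = burst   # maximum bucket depth = burst allowance
        self.tokens = burst
        self.last = 0.0

    def allow(self, now, packet_size):
        """Refill based on elapsed time, then spend tokens if available."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False

bucket = TokenBucket(rate=1000, burst=1500)  # 1000 B/s average, 1500 B burst
print(bucket.allow(0.0, 1500))  # burst allowance covers the first packet
print(bucket.allow(0.1, 1500))  # only ~100 tokens refilled: shaped (rejected)
```

The same two knobs (rate and burst depth) are what lets an operator give deadline traffic a guaranteed rate while capping how far long-running flows can burst into it.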
Embracing corruption burstiness: Fast error recovery for ZigBee under wi-Fi interference
The ZigBee communication can be easily and severely interfered with by Wi-Fi traffic. Error recovery, as an important means for
ZigBee to survive Wi-Fi interference, has been extensively studied in recent years. The existing works add upfront redundancy to
in-packet blocks for recovering a certain number of random corruptions. The bursty nature of ZigBee in-packet corruptions
under Wi-Fi interference is therefore often considered harmful, since some blocks are full of errors that cannot be recovered and some blocks
have no errors but still require redundancy. As a result, existing schemes often use interleaving to reshape the bursty errors before applying
complex FEC codes to recover the re-shaped random distributed errors. In this paper, we take a different view that burstiness may be
helpful. With burstiness, the in-packet corruptions are often consecutive, and the requirement for error recovery relaxes to
"recovering any k consecutive errors" instead of "recovering any k random errors". This relaxed requirement allows us to design far
more efficient codes than the existing FEC codes. Motivated by this implication, we exploit the corruption burstiness to design a simple
yet effective error recovery code using XOR operations, called ZiXOR. Because ZiXOR relies only on XOR operations, its decoding
delay is significantly reduced. Moreover, ZiXOR uses an RSSI-hinted approach to detect in-packet corruptions without CRC, incurring
almost no extra transmission overhead. Testbed evaluation results show that ZiXOR outperforms state-of-the-art works in terms of
throughput (by 47%) and latency (by 22%). This work was supported by the National Natural Science
Foundation of China (No. 61602095 and No. 61472360), the
Fundamental Research Funds for the Central Universities (No.
ZYGX2016KYQD098 and No. 2016FZA5010), National Key
Technology R&D Program (Grant No. 2014BAK15B02), CCF-Intel
Young Faculty Researcher Program, CCF-Tencent Open
Research Fund, China Ministry of Education—China Mobile
Joint Project under Grant No. MCM20150401 and the EU FP7
CLIMBER project under Grant Agreement No. PIRSES-GA-
2012-318939. Wei Dong is the corresponding author.
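The burst-recovery insight lends itself to a compact illustration. The sketch below is not ZiXOR itself: it appends k XOR parity blocks, each covering every k-th data block, so a run of up to k consecutive erasures hits each parity group at most once and is therefore recoverable. Corruption positions are assumed known, which is the role the abstract assigns to RSSI hints.

```python
# Minimal sketch of why burstiness helps: k interleaved XOR parities recover
# any run of up to k consecutive erased blocks. Illustrative only; this is
# NOT the actual ZiXOR code.

def encode(blocks, k):
    """Append k XOR parities; parity i covers blocks j with j % k == i."""
    parity = [0] * k
    for j, b in enumerate(blocks):
        parity[j % k] ^= b
    return blocks + parity

def recover(received, n, k, erased):
    """Rebuild erased data blocks; positions known (e.g. via RSSI hints).
    Assumes the erasures are a burst of at most k consecutive blocks."""
    out = list(received)
    for pos in erased:
        group = [j for j in range(n) if j % k == pos % k and j != pos]
        val = out[n + pos % k]      # start from the group's parity block
        for j in group:
            val ^= out[j]
        out[pos] = val
    return out[:n]

data = [0x1A, 0x2B, 0x3C, 0x4D, 0x5E, 0x6F]
coded = encode(data, k=2)
coded[2] = coded[3] = None          # burst of 2 consecutive corrupt blocks
print(recover(coded, n=6, k=2, erased=[2, 3]) == data)
```

A random-error code would need far more redundancy to cover any two corruptions anywhere in the packet; restricting the target to consecutive corruptions is what makes plain XOR sufficient.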
Detection of network anomalies and novel attacks in the internet via statistical network traffic separation and normality prediction
With the advent and the explosive growth of the global Internet and the electronic commerce environment, adaptive/automatic network and service anomaly detection is fast gaining critical research and practical importance. If the next generation of network technology is to operate beyond the levels of current networks, it will require a set of well-designed tools for its management that will provide the capability of dynamically and reliably identifying network anomalies. Early detection of network anomalies and performance degradations is a key to rapid fault recovery and robust networking, and has been receiving increasing attention lately.
In this dissertation we present a network anomaly detection methodology, which relies on the analysis of network traffic and the characterization of the dynamic statistical properties of traffic normality, in order to accurately and timely detect network anomalies. Anomaly detection is based on the concept that perturbations of normal behavior suggest the presence of anomalies, faults, attacks etc. This methodology can be uniformly applied in order to detect network attacks, especially in cases where novel attacks are present and the nature of the intrusion is unknown.
Specifically, in order to provide an accurate identification of the normal network traffic behavior, we first develop an anomaly-tolerant non-stationary traffic prediction technique, which is capable of removing both pulse and continuous anomalies. Furthermore, we introduce and design dynamic thresholds, and based on them we define adaptive anomaly violation conditions, as a combined function of both the magnitude and duration of the traffic deviations. Numerical results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach under different anomaly traffic scenarios and attacks, such as mail-bombing and UDP flooding attacks.
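A violation condition that combines magnitude and duration can be sketched as a cumulative-excess rule: a large spike trips the alarm quickly, while a small sustained deviation trips it slowly. The margin and budget values below are illustrative placeholders, not the dissertation's actual dynamic thresholds.

```python
# Sketch of an anomaly violation condition combining magnitude and duration:
# the alarm fires once the accumulated excess over (prediction + margin)
# passes a budget. Margin and budget are illustrative.

def violation(traffic, predicted, margin=10.0, budget=30.0):
    """Return the first index at which the cumulative excess of observed
    traffic over (prediction + margin) exceeds the budget, else None."""
    excess = 0.0
    for t, (obs, pred) in enumerate(zip(traffic, predicted)):
        deviation = obs - (pred + margin)
        excess = max(0.0, excess + deviation)  # reset when back to normal
        if excess > budget:
            return t
    return None

pred = [100.0] * 6
print(violation([105, 108, 150, 104, 103, 102], pred))  # one 40-unit spike -> 2
print(violation([115, 116, 117, 115, 116, 117], pred))  # small sustained excess -> 5
```

This is the behaviour the abstract describes: the same alarm logic catches a short mail-bombing burst and a slow UDP-flood ramp, just on different time scales.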
In order to improve the prediction accuracy of the statistical network traffic normality, especially in cases where high burstiness is present, we propose, study and analyze a new network traffic prediction methodology, based on frequency-domain traffic analysis and filtering, with the objective of enhancing the network anomaly detection capabilities. Our approach is based on the observation that the various network traffic components are better identified, represented and isolated in the frequency domain. As a result, the traffic can be effectively separated into a baseline component, which includes most of the low-frequency traffic and presents low burstiness, and the short-term traffic, which includes the most dynamic part. The baseline traffic is a mean non-stationary periodic time series, and the Extended Resource-Allocating Network (BRAN) methodology is used for its accurate prediction. The short-term traffic is shown to be a time-dependent series, and the Autoregressive Moving Average (ARMA) model is proposed for the accurate prediction of this component. Furthermore, it is demonstrated that the proposed enhanced traffic prediction strategy can be combined with the use of dynamic thresholds and adaptive anomaly violation conditions in order to improve the network anomaly detection effectiveness. The performance evaluation of the proposed overall strategy, in terms of the achievable network traffic prediction accuracy and anomaly detection capability, is presented, and the corresponding numerical results demonstrate and quantify the significant improvements that can be achieved.
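The frequency-domain separation itself can be illustrated with a simple FFT low-pass split; the cutoff below is an arbitrary illustration, and the BRAN and ARMA predictors applied to each component in the dissertation are not reproduced here.

```python
# Illustration of frequency-domain traffic separation: a low-pass FFT filter
# extracts a smooth baseline; the residual is the bursty short-term traffic.
# The cutoff (keep_bins) is illustrative.
import numpy as np

def separate(traffic, keep_bins=3):
    """Split a traffic series into baseline (lowest frequencies) + residual."""
    spectrum = np.fft.rfft(traffic)
    spectrum[keep_bins:] = 0            # keep only the lowest-frequency bins
    baseline = np.fft.irfft(spectrum, n=len(traffic))
    return baseline, traffic - baseline

t = np.arange(200)
rng = np.random.default_rng(0)
traffic = 100 + 20 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 5, 200)
baseline, short_term = separate(traffic)
print(np.std(baseline) < np.std(traffic))           # baseline is smoother
print(np.allclose(baseline + short_term, traffic))  # lossless decomposition
```

Because the decomposition is lossless, each component can be forecast with the model that suits it (a periodic predictor for the baseline, ARMA for the residual) and the forecasts recombined.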
Burst-aware predictive autoscaling for containerized microservices
Autoscaling methods are used for cloud-hosted applications to dynamically scale the allocated resources for guaranteeing Quality-of-Service (QoS). A public-facing application serves dynamic workloads, which contain bursts and pose challenges for autoscaling methods to ensure application performance. Existing state-of-the-art autoscaling methods are burst-oblivious when determining and provisioning the appropriate resources. For dynamic workloads, it is hard to detect and handle bursts online while maintaining application performance. In this article, we propose a novel burst-aware autoscaling method which detects bursts in dynamic workloads using workload forecasting, resource prediction, and scaling decision making, while minimizing response-time service-level objective (SLO) violations. We evaluated our approach through a trace-driven simulation, using multiple synthetic and realistic bursty workloads for containerized microservices, improving performance when compared against existing state-of-the-art autoscaling methods. The experiments show an increase of ×1.09 in total processed requests, a reduction of ×5.17 in SLO violations, and an increase of ×0.767 in cost compared to the baseline method. This work was partially supported by the European Research Council (ERC) under the EU Horizon 2020 programme (GA 639595), the Spanish Ministry of Economy, Industry and Competitiveness (TIN2015-65316-P and IJCI2016-27485) and the Generalitat de Catalunya (2014-SGR-1051).
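A burst-aware scaling decision can be sketched as: forecast the next interval, flag a burst when the latest observation jumps well above its predecessor, and over-provision during bursts. The naive trend forecast, the 1.5× jump test and the burst headroom factor are all illustrative assumptions, not the paper's actual method.

```python
# Sketch of a burst-aware replica-count decision for a microservice.
# Forecaster, burst test and headroom factor are illustrative.
import math

def forecast(history):
    """Naive linear-trend forecast from the last two observations."""
    return max(0.0, 2 * history[-1] - history[-2])

def replicas_needed(history, per_replica_capacity, burst_factor=1.5):
    """Scale to the forecast load, with extra headroom when a burst is seen."""
    predicted = forecast(history)
    burst = history[-1] > 1.5 * history[-2]   # sudden jump => burst
    target = predicted * (burst_factor if burst else 1.0)
    return max(1, math.ceil(target / per_replica_capacity))

print(replicas_needed([100, 110], per_replica_capacity=50))  # steady growth
print(replicas_needed([100, 200], per_replica_capacity=50))  # burst detected
```

The trade-off the paper quantifies is visible here: the burst path deliberately over-provisions (higher cost) to keep response times within the SLO while the burst lasts.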
Methods of Congestion Control for Adaptive Continuous Media
Since the first exchange of data between machines in different locations in the early 1960s,
computer networks have grown exponentially with millions of people now using the
Internet. With this, there has also been a rapid increase in different kinds of services offered
over the World Wide Web, from simple e-mails to streaming video. It is generally accepted
that the commonly used protocol suite TCP/IP alone is not adequate for a number of
modern applications with high bandwidth and minimal delay requirements. Many
technologies are emerging, such as IPv6, DiffServ and IntServ, which aim to replace the one-size-fits-all approach of the current IPv4. There is a consensus that networks will have
to be capable of multi-service and will have to isolate different classes of traffic through
bandwidth partitioning such that, for example, low priority best-effort traffic does not cause
delay for high priority video traffic. However, this research identifies that even within a
class there may be delays or losses due to congestion and the problem will require different
solutions in different classes.
The focus of this research is on the requirements of the adaptive continuous media
class. These are traffic flows that require a good Quality of Service but are also able to
adapt to the network conditions by accepting some degradation in quality. It is potentially
the most flexible traffic class and therefore, one of the most useful types for an increasing
number of applications.
This thesis discusses the QoS requirements of adaptive continuous media and
identifies an ideal feedback-based control system that would be suitable for this class. A
number of current methods of congestion control have been investigated and two methods
that have been shown to be successful with data traffic have been evaluated to ascertain if
they could be adapted for adaptive continuous media. A novel method of control based on
percentile monitoring of the queue occupancy is then proposed and developed. Simulation
results demonstrate that the percentile monitoring based method is more appropriate to this
type of flow. The problem of congestion control at aggregating nodes of the network
hierarchy, where thousands of adaptive flows may be aggregated to a single flow, is then
considered. A unique method of pricing mean and variance is developed such that each
individual flow is charged fairly for its contribution to the congestion.
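The percentile-monitoring idea can be sketched directly: sample queue occupancy over a window and signal congestion only when a high percentile of the samples exceeds a mark, so transient spikes are ignored. The 90th percentile and the 0.8 mark are illustrative values, not the thesis's tuned parameters.

```python
# Sketch of percentile-based queue monitoring for adaptive continuous media:
# signal congestion when the p-th percentile of occupancy samples exceeds a
# mark. Percentile and mark values are illustrative.

def percentile(samples, p):
    """Nearest-rank percentile of a list of queue-occupancy samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

def congestion_signal(queue_samples, p=90, mark=0.8):
    """True => ask adaptive flows to step down their sending quality."""
    return percentile(queue_samples, p) > mark

calm = [0.2, 0.3, 0.1, 0.4, 0.9, 0.2, 0.3, 0.2, 0.1, 0.3]  # one transient spike
busy = [0.7, 0.9, 0.85, 0.95, 0.9, 0.8, 0.92, 0.88, 0.9, 0.91]
print(congestion_signal(calm))  # spike ignored by the percentile
print(congestion_signal(busy))  # sustained occupancy triggers adaptation
```

This matches the class's needs: adaptive media can tolerate a brief queue spike but should degrade quality when occupancy stays persistently high.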
Service quality monitoring in confined spaces through mining Twitter data
Promoting public transport depends on adopting effective tools for concurrent monitoring of perceived service quality. Social media feeds, in general, provide an opportunity to ubiquitously look for service quality events, but when applied to a confined geographic area such as a transport node, the sparsity of concurrent social media data leads to two major challenges: the limited number of social media messages (leading to biased machine learning) and the capturing of bursty events in the study period both considerably reduce the effectiveness of general event detection methods. In contrast to previous work, and to face these challenges, this paper presents a hybrid solution based on a novel fine-tuned BERT language model and aspect-based sentiment analysis. BERT enables extracting aspects from a limited context, where traditional methods such as topic modeling and word embedding fail. Moreover, leveraging aspect-based sentiment analysis improves the sensitivity of event detection. Finally, the efficacy of event detection is further improved by proposing a statistical approach to combine frequency-based and sentiment-based solutions. Experiments on a real-world case study demonstrate that the proposed solution improves the effectiveness of event detection compared to state-of-the-art approaches.
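The final combination step admits a simple sketch: normalise the message-frequency signal and the negative-sentiment signal separately, then combine them Stouffer-style so that an event needs support from both. The combination rule and the toy data are illustrative assumptions; the paper's BERT-based aspect extraction is not reproduced here.

```python
# Sketch of combining a frequency-based and a sentiment-based event signal
# via z-scores (Stouffer-style). Illustrative only.
import math

def zscore(series):
    """Standardise a series to zero mean and unit variance."""
    mean = sum(series) / len(series)
    std = math.sqrt(sum((x - mean) ** 2 for x in series) / len(series)) or 1.0
    return [(x - mean) / std for x in series]

def combined_event_scores(freq, neg_sentiment):
    """Stouffer combination: sum of the two z-scores, rescaled by sqrt(2)."""
    zf, zs = zscore(freq), zscore(neg_sentiment)
    return [(a + b) / math.sqrt(2) for a, b in zip(zf, zs)]

freq = [10, 12, 11, 40, 12, 11]        # message counts per interval
neg = [0.1, 0.2, 0.1, 0.9, 0.2, 0.1]   # share of negative aspects per interval
scores = combined_event_scores(freq, neg)
print(scores.index(max(scores)))       # interval with the strongest joint evidence
```

Requiring both signals to rise suppresses false alarms from a mere message burst (e.g. a scheduled event) or from scattered negative messages with no volume behind them.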