Simulation model of ACO, FLC and PID controller for TCP/AQM wireless networks by using MATLAB/Simulink
This work develops a control-system design for queue management under the transmission control protocol / active queue management (TCP/AQM) framework, in order to handle expected congestion in the network. It also compares several control methods: the traditional proportional-integral-derivative (PID) controller, an expert fuzzy logic controller (FLC), and ant colony optimization (ACO), the last of which tunes the PID gains (Kp, Ki, Kd) against a performance criterion based on the integral of time-weighted absolute error (ITAE). ACO was used without any other optimization algorithm for adjusting the PID parameters, in order to verify how much the performance of this particular system can be improved. The results showed that the ACO-tuned controller outperforms both the expert FLC and the conventional PID, and that the expert FLC in turn outperforms the traditional PID.
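The tuning loop described above can be sketched as follows; the first-order plant model, the gains, and the ITAE discretization are illustrative stand-ins, not the model or parameters of the study.

```python
# Hypothetical sketch: scoring a PID controller for a simplified
# TCP/AQM queue model with the ITAE criterion. An optimizer such as
# ACO would search (kp, ki, kd) to minimize the returned score.

def simulate_pid(kp, ki, kd, setpoint=200.0, steps=500, dt=0.01):
    """Drive a first-order queue model toward a target length and
    return the ITAE score (lower is better)."""
    queue, integral, prev_err, itae = 0.0, 0.0, setpoint, 0.0
    for n in range(steps):
        err = setpoint - queue
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        control = kp * err + ki * integral + kd * deriv
        # crude plant: the queue relaxes toward the control signal
        queue += dt * (control - queue)
        itae += (n * dt) * abs(err) * dt   # integral of t * |e(t)|
    return itae

print(simulate_pid(2.0, 1.0, 0.05))
```

Well-chosen gains drive the error to zero quickly and therefore score a much smaller ITAE than weak gains, which is exactly the signal an optimizer exploits.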
Prediction-based techniques for the optimization of mobile networks
Mención Internacional en el título de doctor.
Mobile cellular networks are complex systems whose behavior is characterized by the superposition
of several random phenomena, most of which, related to human activities, such as mobility,
communications and network usage. However, when observed in their totality, the many individual
components merge into more deterministic patterns and trends start to be identifiable and
predictable.
In this thesis we analyze a recent branch of network optimization that is commonly referred to
as anticipatory networking and that entails the combination of prediction solutions and network
optimization schemes. The main intuition behind anticipatory networking is that knowing in
advance what is going on in the network can help in understanding potentially severe problems and
mitigating their impact by applying solutions while they are still in their initial stages. Conversely,
network forecasts might also indicate a future improvement in the overall network condition (e.g.
load reduction or better signal quality reported by users). In such a case, resources can be
assigned more sparingly, requiring users to rely on buffered information while waiting for the
better conditions under which it will be more convenient to grant more resources.
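The allocation idea above can be sketched as a toy scheduler; the forecast values, the prefetch factor, and the buffer model are all illustrative assumptions, not the thesis's optimization scheme.

```python
# Hypothetical sketch of anticipatory allocation: given a per-slot
# capacity forecast, grant resources generously (prefetching) in good
# slots and sparingly in poor slots, letting the client drain its buffer.
def anticipatory_schedule(capacity_forecast, demand_per_slot):
    """Plan per-slot transmissions (same units as capacity)."""
    plan, buffered = [], 0.0
    for cap in capacity_forecast:
        if cap >= demand_per_slot:
            # good slot: serve current demand and prefetch ahead
            tx = min(cap, 2 * demand_per_slot)
        else:
            # poor slot: assign sparingly, rely on buffered data
            tx = min(cap, max(0.0, demand_per_slot - buffered))
        buffered += tx - demand_per_slot
        plan.append(tx)
    return plan

# Capacity dips in the middle; the plan prefetches before the dip.
print(anticipatory_schedule([3, 3, 0.5, 0.5, 3], demand_per_slot=1))
```

Because the dip is known in advance, the scheduler transmits nothing during the poor slots without stalling the client, which is the resource saving anticipatory networking targets.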
In the beginning of this thesis we will survey the current anticipatory networking panorama
and the many prediction and optimization solutions proposed so far. In the main body of the work,
we will present our novel solutions to the problem, the tools and methodologies we designed to
evaluate them, and a real-world evaluation of our schemes.
By the end of this work it will be clear not only that anticipatory networking is a very promising
theoretical framework, but also that it is feasible and can deliver substantial benefits to current
and next generation mobile networks. In fact, with both our theoretical and practical results we
show evidence that more than one third of the resources can be saved, and that even larger gains can
be achieved for data rate enhancements.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: Albert Banchs Roca.- Presidente: Pablo Serrano Yañez-Mingot.- Secretario: Jorge Ortín Gracia.- Vocal: Guevara Noubi
Internet performance modeling: the state of the art at the turn of the century
Seemingly overnight, the Internet has gone from an academic experiment to a worldwide information matrix. Along the way, computer scientists have come to realize that understanding the performance of the Internet is a remarkably challenging and subtle problem. This challenge is all the more important because of the increasingly significant role the Internet has come to play in society. To take stock of the field of Internet performance modeling, the authors organized a workshop at Schloß Dagstuhl. This paper summarizes the results of discussions, both plenary and in small groups, that took place during the four-day workshop. It identifies successes, points to areas where more work is needed, and poses “Grand Challenges” for the performance evaluation community with respect to the Internet
Optimizing the delivery of multimedia over mobile networks
Mención Internacional en el título de doctor.
The consumption of multimedia content is moving from a residential environment to mobile
phones. Mobile data traffic, driven mostly by video demand, is increasing rapidly and wireless
spectrum is becoming a more and more scarce resource. This makes it highly important to operate
mobile networks efficiently. To tackle this, recent developments in anticipatory networking
schemes make it possible to predict the future capacity of mobile devices and optimize the
allocation of the limited wireless resources. Further, optimizing Quality of Experience—smooth,
quick, and high quality playback—is more difficult in the mobile setting, due to the highly dynamic
nature of wireless links. A key requirement for achieving both anticipatory networking
schemes and QoE optimization is estimating the available bandwidth of mobile devices. Ideally,
this should be done quickly and with low overhead.
In summary, we propose a series of improvements to the delivery of multimedia over mobile
networks. We do so by identifying inefficiencies in the interconnection of mobile operators with
the servers hosting content, proposing an algorithm that opportunistically produces frequent capacity
estimations suitable for use in resource optimization solutions, and finally proposing another algorithm
that estimates the bandwidth class of a device from minimal traffic, in order to identify the
ideal streaming quality its connection may support before commencing playback.
The main body of this thesis proposes two lightweight algorithms designed to provide bandwidth
estimations under the tight constraints of the mobile environment, most notably
the usually very limited traffic quota. To do so, we begin by providing a thorough overview
of the communication path between a content server and a mobile device. We continue by
analysing how accurate smartphone measurements can be, and identify in depth the
various artifacts that add noise to the fidelity of on-device measurements. Then, we first propose
a novel lightweight measurement technique that can be used as a basis for advanced resource
optimization algorithms to be run on mobile phones. Our main idea leverages an original packet
dispersion based technique to estimate per user capacity. This allows passive measurements by
just sampling the existing mobile traffic. Our technique is able to efficiently filter outliers introduced
by mobile network schedulers and phone hardware. In order to assess and verify our
measurement technique, we apply it to a diverse dataset generated by both extensive simulations
and a week-long measurement campaign spanning two cities in two countries, different radio
technologies, and covering all times of the day. The results demonstrate that our technique is effective even if it is provided only with a small fraction of the exchanged packets of a flow. The
only requirement for the input data is that it should consist of a few consecutive packets that are
gathered periodically. This makes the measurement algorithm a good candidate for inclusion in
OS libraries to allow for advanced resource optimization and application-level traffic scheduling,
based on current and predicted future user capacity.
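A minimal sketch of the packet-dispersion idea described above, with hypothetical timestamps and a simple median filter standing in for the thesis's outlier filtering:

```python
# Hypothetical sketch of packet-dispersion capacity estimation: the
# per-user capacity is inferred from the spacing of consecutive
# packets, with a median filter to suppress scheduler/hardware noise.
from statistics import median

def dispersion_capacity(arrivals, packet_bytes):
    """Estimate capacity (bit/s) from consecutive arrival times (s)."""
    gaps = [t2 - t1 for t1, t2 in zip(arrivals, arrivals[1:]) if t2 > t1]
    if not gaps:
        return None
    # the median gap filters outliers from schedulers and hardware
    return packet_bytes * 8 / median(gaps)

# 1500-byte packets arriving every 1 ms -> roughly 12 Mbit/s
times = [i * 0.001 for i in range(10)]
print(dispersion_capacity(times, 1500))
```

Note that the estimator only needs a handful of consecutive timestamps, matching the requirement above that a few periodically sampled packets suffice.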
We proceed with another algorithm that takes advantage of the traffic generated by short-lived
TCP connections, which form the majority of the mobile connections, to passively estimate the
currently available bandwidth class. Our algorithm is able to extract useful information even if the
TCP connection never exits the slow start phase. To the best of our knowledge, no other solution
can operate with such constrained input. Our estimation method is able to achieve good precision
despite artifacts introduced by the slow start behavior of TCP, mobile schedulers, and phone hardware.
We evaluate our solution against traces collected in 4 European countries. Furthermore, the
small footprint of our algorithm allows its deployment on resource limited devices.
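Bandwidth-class estimation from minimal traffic can be sketched as follows; the class boundaries and the direct throughput-to-class mapping are assumptions for illustration, not the thesis's algorithm:

```python
# Hypothetical sketch: mapping a throughput sample taken from a
# short-lived TCP flow (possibly still in slow start) onto a coarse
# bandwidth class. The thresholds below are illustrative.
BANDWIDTH_CLASSES = [            # (upper bound in Mbit/s, label)
    (1.0, "low"),
    (5.0, "medium"),
    (float("inf"), "high"),
]

def classify(bytes_seen, duration_s):
    """Return a coarse bandwidth class from minimal traffic."""
    mbps = bytes_seen * 8 / duration_s / 1e6
    for upper, label in BANDWIDTH_CLASSES:
        if mbps <= upper:
            return label

print(classify(250_000, 0.5))   # 4 Mbit/s worth of traffic
```

A coarse class rather than an exact rate is enough to pick an initial streaming quality, which is why such constrained input can still be useful.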
Finally, to cope with the rapid traffic increase, mobile application developers outsource
their cloud infrastructure deployment and content delivery to cloud computing services
and content delivery networks. Studying how these services, which we collectively denote Cloud
Service Providers (CSPs), perform over Mobile Network Operators (MNOs) is crucial to understanding
some of the performance limitations of today’s mobile apps. To that end, we perform
the first empirical study of the complex dynamics between applications, MNOs and CSPs. First,
we use real mobile app traffic traces that we gathered through a global crowdsourcing campaign
to identify the most prevalent CSPs supporting today’s mobile Internet. Then, we investigate how
well these services interconnect with major European MNOs at a topological level, and measure
their performance over European MNO networks through a month-long measurement campaign
on the MONROE mobile broadband testbed. We discover that the top 6 most prevalent CSPs
are used by 85% of apps, and observe significant differences in their performance across different
MNOs due to the nature of their services, peering relationships with MNOs, and deployment
strategies. We also find that CSP performance in MNOs is affected by inflated path length, roaming,
and presence of middleboxes, but not influenced by the choice of DNS resolver. We also
observe that the choice of operator's Point of Presence (PoP) may inflate the
delay towards popular websites by at least 20%.
This work has been supported by IMDEA Networks Institute.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: Ahmed Elmokashfi.- Secretario: Rubén Cuevas Rumín.- Vocal: Paolo Din
Parallel network protocol stacks using replication
Computing applications demand good performance from networking systems. This includes high-bandwidth communication using protocols with sophisticated features such as ordering, reliability, and congestion control. Much of this protocol processing occurs in software, both on desktop systems and servers. Multi-processing is a requirement on today's computer architectures because their design does not allow for increased processor frequencies. At the same time, network bandwidths continue to increase. In order to meet application demand for throughput, protocol processing must be parallel to leverage the full capabilities of multi-processor or multi-core systems. Existing parallelization strategies have performance difficulties that limit their scalability and their application to single, high-speed data streams. This dissertation introduces a new approach to parallelizing network protocol processing without the need for locks or for global state. Rather than maintain global state, each processor maintains its own copy of protocol state. Therefore, updates are local and don't require fine-grained locks or explicit synchronization. State management work is replicated, but logically independent work is parallelized. Along with the approach, this dissertation describes Dominoes, a new framework for implementing replicated processing systems. Dominoes organizes the state information into Domains and the communication into Channels. These two abstractions provide a powerful but flexible model for testing the replication approach. This dissertation uses Dominoes to build a replicated network protocol system. The performance of common protocols, such as TCP/IP, is increased by multiprocessing single connections. On commodity hardware, throughput increases between 15-300% depending on the type of communication. Most gains are possible when communicating with unmodified peer implementations, such as Linux.
In addition to quantitative results, protocol behavior is studied as it relates to the replication approach.
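The core replication idea can be illustrated in miniature; the sequence-number state and packet stream below are hypothetical, and this sketch says nothing about Dominoes itself:

```python
# Hypothetical sketch of replicated protocol state: each processor
# holds a private copy of the state, so per-packet updates are local
# and need no locks or explicit synchronization.
def make_replica():
    state = {"next_seq": 0}               # private replica, never shared
    def process(packet_len):
        state["next_seq"] += packet_len   # local update, lock-free
        return state["next_seq"]
    return process

replicas = [make_replica() for _ in range(4)]
# every replica processes the same packet stream independently and
# deterministically, so all copies of the state stay consistent
streams = [[replica(p) for p in (100, 200, 300)] for replica in replicas]
print(streams[0])
```

Because each replica observes the same inputs, state management work is duplicated, but nothing is contended, which is the trade-off the dissertation exploits.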
Final report on the evaluation of RRM/CRRM algorithms
Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, evolved and refined in an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14. Preprint
An Information-Theoretic Framework for Consistency Maintenance in Distributed Interactive Applications
Distributed Interactive Applications (DIAs) enable geographically dispersed users
to interact with each other in a virtual environment. A key factor to the success
of a DIA is the maintenance of a consistent view of the shared virtual world for
all the participants. However, maintaining consistent states in DIAs is difficult
under real networks. State changes communicated by messages over such networks
suffer latency leading to inconsistency across the application. Predictive Contract
Mechanisms (PCMs) combat this problem through reducing the number of messages
transmitted in return for perceptually tolerable inconsistency. This thesis examines
the operation of PCMs using concepts and methods derived from information theory.
This information theory perspective results in a novel information model of PCMs
that quantifies and analyzes the efficiency of such methods in communicating the
reduced state information, and a new adaptive multiple-model-based framework for
improving consistency in DIAs.
The first part of this thesis introduces information measurements of user behavior
in DIAs and formalizes the information model for PCM operation. In presenting the
information model, the statistical dependence in the entity state, which makes using
extrapolation models to predict future user behavior possible, is evaluated. The
efficiency of a PCM to exploit such predictability to reduce the amount of network
resources required to maintain consistency is also investigated. It is demonstrated
that from the information theory perspective, PCMs can be interpreted as a form
of information reduction and compression.
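Dead reckoning is a common PCM, and its message-reducing behavior can be sketched briefly; the constant-velocity extrapolation model and the error threshold below are illustrative assumptions:

```python
# Hypothetical sketch of a Predictive Contract Mechanism: the sender
# transmits a state update only when a shared extrapolation model
# (here, constant velocity) drifts beyond an error threshold.
def pcm_updates(positions, dt=1.0, threshold=2.0):
    """Return the time indices at which updates must be sent."""
    sent = [0]                            # the initial state is always sent
    base_pos, base_vel, base_t = positions[0], 0.0, 0
    for t, pos in enumerate(positions[1:], start=1):
        predicted = base_pos + base_vel * (t - base_t) * dt
        if abs(pos - predicted) > threshold:
            sent.append(t)                # contract broken: send an update
            base_vel = (pos - base_pos) / ((t - base_t) * dt)
            base_pos, base_t = pos, t
    return sent

# A smoothly accelerating entity needs far fewer updates than time steps.
path = [0.1 * t * t for t in range(20)]
print(pcm_updates(path))
```

The more predictable the motion, the fewer updates are sent, which is exactly the statistical dependence the information model above quantifies.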
The second part of this thesis proposes an Information-Based Dynamic Extrapolation
Model for dynamically selecting between extrapolation algorithms based on
information evaluation and inferred network conditions. This model adapts PCM
configurations to both user behavior and network conditions, and makes the most
information-efficient use of the available network resources. In doing so, it improves
PCM performance and consistency in DIAs.
Router-based algorithms for improving internet quality of service.
We begin this thesis by generalizing some results related to a recently proposed positive system model of TCP congestion control algorithms. Then, motivated by a mean field analysis of the positive system model, a novel, stateless queue management scheme is designed: Multi-Level Comparisons with index l (MLC(l)). In the limit, MLC(l) enforces max-min fairness in a network of TCP flows.
We go further, showing that counting past drops at a congested link provides sufficient information to enforce max-min fairness among long-lived flows and to reduce the flow completion times of short-lived flows. Analytical models are presented, and the accuracy of their predictions is validated by packet-level ns2 simulations.
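A toy illustration of why past-drop counts carry useful fairness information; the victim-selection rule below is an illustrative stand-in, not MLC(l) itself:

```python
# Hypothetical sketch: at a congested link, per-flow counts of past
# drops identify the most aggressive sender, so biasing new drops
# toward that flow pushes long-lived flows toward max-min fair shares.
from collections import Counter

def choose_victim(arriving_flows, past_drops):
    """Pick the flow to penalize next: the one dropped most often,
    i.e. the one that has been sending most aggressively."""
    counts = Counter(past_drops)
    return max(arriving_flows, key=lambda f: counts[f])

past = ["A"] * 8 + ["B"] * 2      # flow A has been dropped far more often
print(choose_victim(["A", "B", "C"], past))
```

The appeal of such schemes is that the drop history is the only state needed, so the router remains stateless with respect to individual flows.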
We then move our attention to efficient measurement and monitoring techniques. A small active counter architecture is presented that addresses the problem of accurately approximating statistics counter values at very high speeds, where counters must be both updated and estimated on a per-packet basis. Such algorithms are necessary in the design of router-based flow control algorithms, since on-chip Static RAM (SRAM) is currently a scarce resource and being economical with its usage is an important task. A highly scalable method for heavy-hitter identification that uses our small active counter architecture is developed based on a heuristic argument. Its performance is compared to several state-of-the-art algorithms and shown to outperform them.
In the last part of the thesis we discuss the delay-utilization tradeoff at congested Internet links.
While several groups of authors have recently analyzed this tradeoff, the lack of realistic assumptions in their models and the extreme complexity of estimating model parameters reduce their applicability at real Internet links. We propose an adaptive scheme that regulates the available queue space to keep utilization at a desired, high level. As a consequence, in large-number-of-users regimes, sacrificing 1-2% of bandwidth can result in queueing delays that are an order of magnitude smaller than in the standard BDP-buffering case. We go further and introduce an optimization framework for describing the problem of interest, and propose an online algorithm for solving it.