66 research outputs found
On the intertwining between capacity scaling and TCP congestion control
Recent works advocate the possibility of improving the energy efficiency of network devices by modulating switching and transmission capacity according to the traffic load. However, addressing the trade-off between energy saving and Quality of Service (QoS) under these approaches is not trivial, especially because most of today's Internet traffic is carried by TCP and is hence adaptive to the available resources. In this paper we present a preliminary investigation of the possible intertwining between capacity scaling approaches and TCP congestion control, and we show how this interaction can affect performance in terms of both energy saving and QoS.
Network interface power management and TCP congestion control: a troubled marriage
Optimizing the trade-off between power saving and Quality of Service (QoS) in the current Internet is a challenging research objective, whose difficulty also stems from the dominant presence of Transmission Control Protocol (TCP) traffic and its elastic nature. Recent works support the possibility of improving the energy efficiency of network devices by modulating switching and transmission capacity according to traffic load, whereas TCP traffic is in turn adaptive to the available resources. In a previous work, we showed that an intertwining exists between capacity scaling approaches and TCP congestion control. In this paper, we investigate the reasons for this intertwining, and we evaluate how, and how much, the dynamics of the two algorithms affect each other's performance. We show that the interaction is essentially due to the relative speed of the two algorithms, which determines the conditions for the successful or unsuccessful coexistence of the two mechanisms.
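The timescale interaction described above can be illustrated with a toy discrete-time sketch (not the papers' model): an AIMD, TCP-like flow shares a link with a capacity scaler whose gain sets its speed relative to TCP. All constants here are illustrative assumptions.

```python
# Toy sketch of the intertwining between TCP-like AIMD rate control
# and load-proportional capacity scaling. All constants are
# illustrative assumptions, not taken from the papers above.

def simulate(scaling_gain, steps=500):
    """Return the fraction of steps in which the TCP-like flow saw a loss."""
    capacity = 100.0   # link capacity, arbitrary units
    rate = 10.0        # sending rate of an AIMD (TCP-like) flow
    losses = 0
    for _ in range(steps):
        if rate > capacity:      # congestion: multiplicative decrease
            rate /= 2.0
            losses += 1
        else:                    # additive increase
            rate += 1.0
        # Energy-saving scaler: capacity tracks the measured load;
        # scaling_gain sets how fast it reacts relative to TCP.
        capacity += scaling_gain * (rate - capacity)
        capacity = max(capacity, 1.0)
    return losses / steps

slow = simulate(scaling_gain=0.01)  # scaler much slower than TCP
fast = simulate(scaling_gain=0.9)   # scaler chases every TCP move
print(f"loss fraction: slow scaler {slow:.2f}, fast scaler {fast:.2f}")
```

With a slow scaler the flow sees the usual sparse AIMD sawtooth losses; a scaler that tracks the load as fast as TCP moves keeps pulling capacity down onto the sending rate, so the two controllers lock into near-permanent congestion.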
From Measurements to Modeling: The Trade-off between Energy Efficiency and System Performance
In this thesis, the work is based on experimental and modeling
methodologies. I applied them starting from measurements of
Asymmetric Digital Subscriber Line (ADSL) technology, a widely used
form of high-speed access to the Internet. The work focuses on the
problem of determining the maximum stable bandwidth that an Internet
Service Provider (ISP) can offer. This task was accomplished by
monitoring and analyzing a large set of ADSL end-users. The large
amount of collected data allowed us to carry out a detailed
statistical analysis of the behavior of ADSL lines, covering the
distribution of the effective bitrate, its dependencies on other
variables, and its correlation with the physical measurements exposed
by network devices. The analysis suggests that there is no clear
trend for predicting the downstream bandwidth of an ADSL line from
line conditions alone, so an intelligent way of analyzing the lines
is required. Thus, a Neural Network (NN), a machine learning tool,
was employed to (i) learn the behavior of ADSL lines, (ii) extract
useful information from the large set of measurements, and (iii)
automatically suggest the maximum stable bandwidth. The results
indicate that the NN performs well in predicting end-users' available
bandwidth. However, the NN must be properly trained, and its design
parameters must be chosen carefully.
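As a rough illustration of the NN approach (not the thesis' actual dataset or architecture), the sketch below trains a small neural-network regressor on synthetic line measurements; the feature set (SNR margin, attenuation) and all numbers are assumptions.

```python
# Minimal sketch of learning downstream bandwidth from physical line
# measurements. The data is synthetic and the features are assumed.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "line measurements": SNR margin (dB) and attenuation (dB).
X = rng.uniform(low=[5.0, 10.0], high=[30.0, 60.0], size=(400, 2))
# Hypothetical ground truth: bitrate (Mb/s) grows with SNR margin,
# shrinks with attenuation, plus measurement noise.
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 12.0 + rng.normal(0.0, 0.5, size=400)

# Standardize inputs and targets so plain gradient descent is stable.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
y_mean, y_std = y.mean(), y.std()
ys = (y - y_mean) / y_std

# One hidden layer of 8 tanh units, full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    h = np.tanh(Xs @ W1 + b1)                          # forward pass
    pred = (h @ W2 + b2).ravel()
    g_pred = (2.0 / len(ys)) * (pred - ys)[:, None]    # dMSE/dpred
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)             # backprop through tanh
    W2 -= lr * (h.T @ g_pred); b2 -= lr * g_pred.sum(axis=0)
    W1 -= lr * (Xs.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

# Report the fit in the original units.
rmse_mbps = float(np.sqrt(np.mean((pred * y_std + y_mean - y) ** 2)))
print(f"training RMSE: {rmse_mbps:.2f} Mb/s")
```

The learning rate and hidden-layer size here are exactly the "design parameters" the abstract warns about: changing them noticeably affects how well the regressor converges.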
Later, the focus of the work moved to the energy efficiency of
telecommunication systems, using a mathematical modeling approach.
The motivation of the first study was to understand how energy
efficient a Voice over Internet Protocol (VoIP) architecture is
compared with the traditional Public Switched Telephone Network
(PSTN) architecture used for voice communication. To answer this,
the two architectures already deployed on our campus were extensively
examined by measuring their real power consumption. To generalize
these estimates to any given number of users, a mathematical model of
power consumption was built for both architectures. The results
indicate that the VoIP architecture consumes a lot of power, yet a
VoIP system has the flexibility to be made energy efficient by adding
energy-aware schemes.
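The modeling step can be illustrated with a simple affine power model, P(n) = P_base + n * p_user, for each architecture; every parameter value below is a hypothetical placeholder, not one of the campus measurements.

```python
# Illustrative affine power model for the two voice architectures.
# All numbers are made-up placeholders, not the thesis' measurements.

def power_w(n_users, p_base_w, p_per_user_w):
    """Estimated power draw (W) for n_users under an affine model."""
    return p_base_w + n_users * p_per_user_w

# Hypothetical parameters: VoIP has a high fixed cost (servers, PoE
# switches, always-on IP phones) but a small per-user increment;
# PSTN has a lower fixed cost but a larger per-line cost.
voip = lambda n: power_w(n, p_base_w=3000.0, p_per_user_w=6.0)
pstn = lambda n: power_w(n, p_base_w=800.0, p_per_user_w=12.0)

# Break-even population where the two estimates cross:
# 3000 + 6n = 800 + 12n  ->  n = 2200 / 6
break_even = (3000.0 - 800.0) / (12.0 - 6.0)
print(f"break-even at ~{break_even:.0f} users")
```

A model of this shape makes the abstract's conclusion concrete: below the break-even population the always-on VoIP baseline dominates, while energy-aware schemes act on the fixed term P_base.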
The last part of the work investigates the interaction between the
Transmission Control Protocol (TCP) and rate adaptation mechanisms,
where rate adaptation is an approach that saves energy by adapting
transmission rates, or capacity, to traffic dynamics. To gain deeper
insight into this interaction, a mathematical model based on a fluid
framework was built to describe the behavior of TCP and of the rate
adaptation scheme. The model was then tested for accuracy and
stability through simulation experiments and steady-state analysis,
and later used to study the impact of parameter tuning on system
performance. The results suggest that a rate adaptation scheme can
be effective, but careful parameter setting is needed to avoid
undesired disruptive interactions among controllers at different
levels, which would impair QoS.
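A fluid model of this kind typically couples the classic TCP fluid equation with a first-order capacity-adaptation law. The sketch below pairs the standard Misra-Gong-Towsley TCP equation with a generic rate-adaptation ODE to illustrate the modeling approach; it is not the thesis' exact formulation, and the headroom factor rho and adaptation gain k are assumed symbols.

```latex
% Average congestion window W(t) of a TCP flow with round-trip time
% R(t) and loss probability p(t) (Misra-Gong-Towsley fluid model):
\frac{dW(t)}{dt} = \frac{1}{R(t)}
  - \frac{W(t)\,W(t-R(t))}{2\,R(t-R(t))}\, p\bigl(t-R(t)\bigr)

% Generic first-order rate-adaptation law: the link capacity C(t)
% tracks the offered load \lambda(t) scaled by a headroom factor
% \rho > 1, with adaptation gain k (the "relative speed" knob):
\frac{dC(t)}{dt} = k \bigl( \rho\, \lambda(t) - C(t) \bigr)
```

In this coupled system the gain k is exactly the kind of parameter the abstract warns about: it sets the speed of the capacity controller relative to the TCP window dynamics.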
Topics in Power Usage in Network Services
The rapid advance of computing technology has created a world powered
by millions of computers. In spite of all the efforts of hardware
manufacturers, these computers often sit idle, consuming energy
unnecessarily. This thesis examines proposals for determining when to
power down computers without negatively impacting the service they
deliver, compares and contrasts the efficiency of virtualisation with
containerisation, and investigates the energy efficiency of the
popular cryptocurrency Bitcoin.
We begin by examining the current corpus of literature and defining the key
terms we need to proceed.
Then we propose a technique for reducing the energy consumption of
servers by moving them into a sleep state and employing a low-powered
device to act as a proxy in their place.
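One concrete piece of such a proxy is waking the sleeping server on demand. The sketch below builds a standard Wake-on-LAN magic packet (6 bytes of 0xFF followed by the target MAC repeated 16 times) and broadcasts it; the MAC address and port are placeholders, and this illustrates the mechanism rather than the thesis' implementation.

```python
# Sketch of the wake-up half of the proxy idea: when the low-powered
# device sees traffic for a sleeping server, it broadcasts a standard
# Wake-on-LAN "magic packet" for that server's MAC address.
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet for a MAC like 'aa:bb:cc:dd:ee:ff'."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    # 6 x 0xFF synchronization header, then the MAC repeated 16 times.
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network segment."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

pkt = magic_packet("aa:bb:cc:dd:ee:ff")  # placeholder MAC
print(len(pkt))  # 102 bytes: 6-byte header + 16 x 6-byte MAC
```

The proxy would answer lightweight traffic (e.g. ARP, keep-alives) itself and call `wake()` only when a real service request arrives.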
After this we move on to investigate the energy efficiency of
virtualisation, comparing two of the most common means used to
achieve it.
Moving on from this we look at the cryptocurrency Bitcoin. We
consider the energy consumption of bitcoin mining and whether, set
against the value of bitcoin, mining is profitable.
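The profitability question reduces to a simple expected-revenue-minus-electricity calculation. Every input below (network hashrate, block reward, price, tariff) is a hypothetical placeholder, not a measured or current figure.

```python
# Back-of-the-envelope daily mining profitability. All inputs are
# hypothetical placeholders, not current or measured values.

def daily_profit_usd(hashrate_ths, network_ths, block_reward_btc,
                     btc_price_usd, power_w, electricity_usd_kwh):
    """Expected revenue minus electricity cost over one day."""
    blocks_per_day = 24 * 6             # one block per ~10 minutes
    share = hashrate_ths / network_ths  # expected fraction of blocks won
    revenue = share * blocks_per_day * block_reward_btc * btc_price_usd
    energy_kwh = power_w / 1000.0 * 24
    cost = energy_kwh * electricity_usd_kwh
    return revenue - cost

# Illustrative numbers only: a single 100 TH/s rig drawing 3 kW.
profit = daily_profit_usd(hashrate_ths=100, network_ths=600_000_000,
                          block_reward_btc=3.125, btc_price_usd=60_000,
                          power_w=3000, electricity_usd_kwh=0.15)
print(f"estimated daily profit: ${profit:.2f}")
```

The sign of the result flips with the price and tariff inputs, which is exactly why the thesis frames profitability as a comparison between energy cost and bitcoin value.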
Finally we conclude by summarising the results and findings of this thesis.
This work increases our understanding of some of the challenges of
energy-efficient computation, as well as proposing novel mechanisms
to save energy.
Control Theory: A Mathematical Perspective on Cyber-Physical Systems
Control theory is an interdisciplinary field located at the crossroads of pure and applied mathematics with systems engineering and the sciences. Recently, the control field has been facing new challenges motivated by application domains that involve networks of systems. Examples are interacting robots, networks of autonomous cars, and the smart grid. In order to address the new challenges posed by these application disciplines, the special focus of this workshop was on the currently very active field of Cyber-Physical Systems, which forms the underlying basis for many network control applications. A series of lectures in this workshop was devoted to giving an overview of current theoretical developments in Cyber-Physical Systems, emphasizing in particular the mathematical aspects of the field. Special focus was on the dynamics and control of networks of systems, distributed optimization and formation control, fundamentals of nonlinear interconnected systems, as well as open problems in control.
Peer-to-peer multimedia communication
Peer-to-Peer (P2P) systems have been invented, deployed and researched for more
than ten years, going far beyond simple file-sharing applications. In
P2P networks, participants organize themselves into an overlay
network that abstracts from the topological characteristics of the
underlying physical network. The aim of these systems is the
distribution of some kind of resource, such as content, storage, or
CPU cycles. Users therefore play an active role and can be considered
both clients and servers at the same time for the particular service
provided through the P2P paradigm.
The goal of this dissertation is to study these systems and to
contribute to their performance evaluation. The analysis aims to
evaluate the performance achieved by a system and/or the performance
bounds that could be achieved.
In fact, even though there are several proposals for different
systems, peer-to-peer streaming performance analysis can still be
considered in its infancy, and there is still a lot of work to do. To
this aim, the main contributions of this dissertation are: i) the
derivation of a theoretical delay bound for P2P streaming systems;
ii) the creation of an algorithm that exploits the insights arising
from the theoretical study; iii) the performance evaluation of this
algorithm using an ad-hoc simulator, expressly tailored to reproduce
the characteristics of real-world P2P streaming systems composed of
hundreds of thousands of intermittently connected users.
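The flavor of such a delay bound can be sketched as follows: if the source and every peer that already holds a chunk can forward one copy per time slot, the population holding the chunk at most doubles per slot, giving the classical lower bound of ceil(log2(N + 1)) slots to reach N peers. This illustrates the style of result, not the thesis' exact bound.

```python
# Classical lower bound on chunk-dissemination delay in a P2P swarm:
# with at most one forwarded copy per holder per slot, the number of
# holders at most doubles each slot, so 2^k - 1 >= N peers requires
# k >= log2(N + 1) slots. Shown as an illustration only.
import math

def min_slots(n_peers: int) -> int:
    """Lower bound on slots needed to deliver one chunk to n_peers."""
    return math.ceil(math.log2(n_peers + 1))

for n in (1_000, 100_000):
    print(n, min_slots(n))
```

The logarithmic growth is what makes large swarms feasible: going from a thousand to a hundred thousand peers adds only a handful of slots to the bound.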
Experimentation and Characterization of Mobile Broadband Networks
The Internet has brought substantial changes to our life as the main tool to access a large variety of services and applications. The Internet's distributed nature and continuous technological improvements lead to new challenges for researchers, service providers, and network administrators. Internet traffic measurement and analysis is one of the most fundamental and powerful tools to study such a complex environment from different aspects. Mobile BroadBand (MBB) networks have become one of the main means to access the Internet. MBB networks are evolving at a rapid pace, with technology enhancements that promise drastic improvements in capacity, connectivity, and coverage, i.e., better performance in general. Open experimentation with operational MBB networks in the wild is currently a fundamental requirement of the research community in its endeavor to address the need for innovative solutions for mobile communications. There is a strong need for objective data on the stability and performance of MBB (e.g., 2G, 3G, 4G, and soon-to-come 5G) networks, and for tools that rigorously and scientifically assess their performance. Measuring end-user performance in such an environment is therefore a challenge that calls for large-scale measurements and profound analysis of the collected data. The intertwining of technologies, protocols, and setups makes it complicated to design scientifically sound and robust measurement campaigns, and the randomness of the wireless access channel, coupled with often unknown operator configurations, makes this scenario even more challenging. In this thesis, we introduce the MONROE measurement platform: an open-access and flexible hardware-based platform for measurements on operational MBB networks. The MONROE platform enables accurate, realistic, and meaningful assessment of the performance and reliability of MBB networks.
We detail the challenges we overcame while building and testing the MONROE testbed and justify our design and implementation choices accordingly. The measurements are designed to stress the performance of MBB networks at different network layers through scalable experiments and methodologies. We study: (i) network-layer performance, characterizing and possibly estimating the download speed offered by commercial MBB networks; (ii) end users' Quality of Experience (QoE), specifically targeting the web performance of HTTP1.1/TLS and HTTP2 on various popular web sites; (iii) the implications of roaming in Europe, understanding the roaming ecosystem in Europe after the "Roam like Home" initiative; and (iv) a novel family of adaptive schedulers with deadlines for multihomed devices that requires only very coarse knowledge of the wireless bandwidth. Our results comprise different contributions in the scope of each research topic. In a nutshell, we pinpoint the impact of different network configurations that further complicate the picture and thereby contribute to the debate about performance assessment in MBB networks. On web performance, HTTP1.1/TLS behaves very similarly to HTTP2 in our large-scale measurements. Furthermore, we observe that roaming is well supported by the monitored operators, which use the same approach for routing roaming traffic. The proposed adaptive schedulers for content upload in multihomed devices are evaluated both in numerical simulations and on real mobile nodes. Simulation results show that the adaptive solutions can effectively leverage the fundamental trade-off between upload cost and completion time, despite unpredictable variations in the available bandwidth of the wireless interfaces. Experiments on the real mobile nodes provided by the MONROE platform confirm these findings.
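The deadline-aware scheduling idea can be sketched as a simple policy: stay on the cheap interface while a coarse rate estimate says the deadline is still reachable, and add the costly interface otherwise. The function, its safety margin, and the example numbers are all hypothetical, not the MONROE schedulers themselves.

```python
# Toy sketch of a deadline-aware scheduler for a multihomed upload:
# prefer the cheap interface (e.g. WiFi) and add the costly one
# (e.g. cellular) only when a coarse bandwidth estimate says the
# deadline is otherwise at risk. All parameters are hypothetical.

def choose_interface(bytes_left, seconds_left, cheap_rate_est, safety=1.2):
    """Return 'cheap' or 'both' for the next scheduling epoch.

    cheap_rate_est is a coarse estimate (bytes/s) of the cheap link;
    safety is a margin against estimation error.
    """
    if seconds_left <= 0:
        return "both"                            # deadline missed: use everything
    required = safety * bytes_left / seconds_left  # average rate still needed
    return "cheap" if cheap_rate_est >= required else "both"

# 10 MB left, 100 s to deadline, cheap link estimated at 200 kB/s.
print(choose_interface(10e6, 100, cheap_rate_est=2e5))
```

The appeal of such a policy is that it needs only the coarse rate estimate mentioned in the abstract, not accurate per-interface bandwidth prediction.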
Quantum Internet Protocol Stack: a Comprehensive Survey
The classical Internet has evolved exceptionally during the last five
decades, from a network comprising a few static nodes in the early
days to a leviathan interconnecting billions of devices. This has
been possible thanks to the separation-of-concerns principle, by
which the network functionalities are organized as a stack of layers,
each providing some communication functionality through specific
network protocols. In this survey, we aim at highlighting the
impossibility of adapting the classical Internet protocol stack to
the Quantum Internet, due to the marvels of quantum mechanics.
Indeed, the design of the Quantum Internet requires a major paradigm
shift of the whole protocol stack to harness the peculiarities of
quantum entanglement and quantum information. In this context, we
first overview the relevant literature on the Quantum Internet
protocol stack. Then, stemming from this, we shed light on the open
problems and the efforts required to design an effective and complete
Quantum Internet protocol stack. To the best of the authors'
knowledge, a survey of this type is the first of its kind. What
emerges from this analysis is that the Quantum Internet, though still
in its infancy, is a disruptive technology whose design requires an
interdisciplinary effort at the border between quantum physics,
computer engineering, and telecommunications engineering.