Reducing Internet Latency : A Survey of Techniques and their Merit
Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl (peer-reviewed preprint)
An Overview on Application of Machine Learning Techniques in Optical Networks
Today's telecommunication networks have become sources of enormous amounts of
widely heterogeneous data. This information can be retrieved from network
traffic traces, network alarms, signal quality indicators, users' behavioral
data, etc. Advanced mathematical tools are required to extract meaningful
information from these network-generated data and to make decisions about the
proper functioning of the networks. Among these
mathematical tools, Machine Learning (ML) is regarded as one of the most
promising methodological approaches to perform network-data analysis and enable
automated network self-configuration and fault management. The adoption of ML
techniques in the field of optical communication networks is motivated by the
unprecedented growth of network complexity faced by optical networks in the
last few years. Such complexity increase is due to the introduction of a huge
number of adjustable and interdependent system parameters (e.g., routing
configurations, modulation format, symbol rate, coding schemes, etc.) that are
enabled by the usage of coherent transmission/reception technologies, advanced
digital signal processing and compensation of nonlinear effects in optical
fiber propagation. In this paper we provide an overview of the application of
ML to optical communications and networking. We classify and survey relevant
literature dealing with the topic, and we also provide an introductory tutorial
on ML for researchers and practitioners interested in this field. Although a
good number of research papers have recently appeared, the application of ML to
optical networks is still in its infancy: to stimulate further work in this
area, we conclude the paper by proposing possible new research directions.
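As a minimal illustration of the supervised-learning setting the survey covers (not an example from the survey itself), the sketch below classifies a link's state from signal quality indicators with a nearest-centroid rule; the feature choice (OSNR in dB, negated pre-FEC BER exponent) and all sample values are hypothetical.

```python
# Toy nearest-centroid classifier over hypothetical optical telemetry.
import math

def centroid(samples):
    """Component-wise mean of a list of feature vectors."""
    n = len(samples)
    return tuple(sum(x[i] for x in samples) / n for i in range(len(samples[0])))

def classify(x, centroids):
    """Return the label whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda lbl: math.dist(x, centroids[lbl]))

# Hypothetical training telemetry: (OSNR in dB, -log10(pre-FEC BER)).
healthy  = [(22.0, 6.1), (21.5, 5.8), (23.0, 6.4)]
degraded = [(14.0, 3.2), (13.5, 2.9), (15.0, 3.5)]
centroids = {"healthy": centroid(healthy), "degraded": centroid(degraded)}

print(classify((14.2, 3.0), centroids))  # a low-OSNR, high-BER observation
```

Real deployments would use richer models (the survey covers neural networks, SVMs, and more), but the pipeline shape is the same: telemetry in, decision out.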
Mobile Ad Hoc Networks
Guiding readers through the basics of these rapidly emerging networks to more advanced concepts and future expectations, Mobile Ad hoc Networks: Current Status and Future Trends identifies and examines the most pressing research issues in Mobile Ad hoc Networks (MANETs). Containing the contributions of leading researchers, industry professionals, and academics, this forward-looking reference provides an authoritative perspective on the state of the art in MANETs. The book includes surveys of recent publications that investigate key areas of interest such as limited resources and the mobility of mobile nodes. It considers routing, multicast, energy, security, channel assignment, and ensuring quality of service. Also suitable as a text for graduate students, the book is organized into three sections: Fundamentals of MANET Modeling and Simulation, which describes how MANETs operate and perform through simulations and models; Communication Protocols of MANETs, which presents cutting-edge research on key issues, including MAC layer issues and routing in high mobility; and Future Networks Inspired by MANETs, which tackles open research issues and emerging trends. Illustrating the role MANETs are likely to play in future networks, this book supplies the foundation and insight you will need to make your own contributions to the field. It includes coverage of routing protocols, modeling and simulation tools, intelligent optimization techniques for multicriteria routing, security issues in FHAMIPv6, connecting moving smart objects to the Internet, underwater sensor networks, wireless mesh network architecture and protocols, adaptive routing provision using Bayesian inference, and adaptive flow control in the transport layer using genetic algorithms.
A Survey on Data Plane Programming with P4: Fundamentals, Advances, and Applied Research
With traditional networking, users can configure control plane protocols to
match the specific network configuration, but without the ability to
fundamentally change the underlying algorithms. With SDN, the users may provide
their own control plane, that can control network devices through their data
plane APIs. Programmable data planes allow users to define their own data plane
algorithms for network devices including appropriate data plane APIs which may
be leveraged by user-defined SDN control. Thus, programmable data planes and
SDN offer great flexibility for network customization, be it for specialized,
commercial appliances, e.g., in 5G or data center networks, or for rapid
prototyping in industrial and academic research. Programming
protocol-independent packet processors (P4) has emerged as the currently most
widespread abstraction, programming language, and concept for data plane
programming. It is developed and standardized by an open community and it is
supported by various software and hardware platforms. In this paper, we survey
the literature from 2015 to 2020 on data plane programming with P4. Our survey
covers 497 references of which 367 are scientific publications. We organize our
work into two parts. In the first part, we give an overview of data plane
programming models, the programming language, architectures, compilers,
targets, and data plane APIs. We also consider research efforts to advance P4
technology. In the second part, we analyze a large body of literature
considering P4-based applied research. We categorize 241 research papers into
different application domains, summarize their contributions, and extract
prototypes, target platforms, and source code availability.
Comment: Submitted to IEEE Communications Surveys and Tutorials (COMS) on
2021-01-2
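The match-action abstraction that P4 programs express can be sketched in plain Python (this is an illustrative model, not P4 itself): the control plane populates table entries, and the data plane applies the matching action to each packet. All names here are hypothetical.

```python
# Toy model of a match-action table, the core data plane abstraction.

def drop(pkt):
    return None  # default action: packet discarded

def forward(port):
    """Action factory: set the egress port on the packet."""
    def action(pkt):
        pkt["egress_port"] = port
        return pkt
    return action

class MatchActionTable:
    def __init__(self, key_field, default_action=drop):
        self.key_field = key_field
        self.entries = {}            # populated by the "control plane"
        self.default = default_action

    def insert(self, match_value, action):
        self.entries[match_value] = action

    def apply(self, pkt):
        action = self.entries.get(pkt[self.key_field], self.default)
        return action(pkt)

ipv4_table = MatchActionTable(key_field="dst_addr")
ipv4_table.insert("10.0.0.1", forward(port=1))   # exact match for brevity;
ipv4_table.insert("10.0.0.2", forward(port=2))   # real P4 tables also support LPM and ternary matches

out = ipv4_table.apply({"dst_addr": "10.0.0.1"})
print(out["egress_port"])  # → 1
```

In actual P4, the table structure, match kinds, and actions are declared in the program and compiled to the target, while entries are installed at runtime through the data plane API.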
Delay estimation in computer networks
Computer networks are becoming increasingly large and complex; more so with the recent
penetration of the internet into all walks of life. It is essential to be able to monitor and
to analyse networks in a timely and efficient manner, to extract important metrics and
measurements, and to do so in a way which does not unduly disturb or affect the performance
of the network under test. Network tomography is one possible method to accomplish these
aims. Drawing upon the principles of statistical inference, it is often possible to determine
the statistical properties of either the links or the paths of the network, whichever is desired,
by measuring at the most convenient points thus reducing the effort required. In particular,
bottleneck-link detection methods in which estimates of the delay distributions on network
links are inferred from measurements made at end-points on network paths, are examined as a
means to determine which links of the network are experiencing the highest delay.
Initially two published methods, one based upon a single Gaussian distribution and the other
based upon the method-of-moments, are examined by comparing their performance using three
metrics: robustness to scaling, bottleneck detection accuracy and computational complexity.
Whilst there are many published algorithms, there is little literature in which said algorithms
are objectively compared. In this thesis, two network topologies are considered, each with
three configurations in order to determine performance in six scenarios. Two new estimation
methods are then introduced, both based on Gaussian mixture models which are believed to
offer an advantage over existing methods in certain scenarios. Computationally, a mixture
model algorithm is much more complex than a simple parametric algorithm but the flexibility
in modelling an arbitrary distribution is vastly increased. Better model accuracy potentially
leads to more accurate estimation and detection of the bottleneck.
The concept of increasing flexibility is again considered by using a Pearson type-1 distribution
as an alternative to the single Gaussian distribution. This increases the flexibility but with
a reduced complexity when compared with mixture model approaches which necessitate the
use of iterative approximation methods. A hybrid approach is also considered where the
method-of-moments is combined with the Pearson type-1 method in order to circumvent
problems with the output stage of the former. This algorithm has a higher variance than
the method-of-moments but the output stage is more convenient for manipulation. Also
considered is a new approach to detection algorithms which is not dependent on any a priori
parameter selection and makes use of the Kullback-Leibler divergence. The results show that it
accomplishes its aim but is not robust enough to replace the current algorithms.
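The core motivation for mixture models above can be shown with a minimal sketch (synthetic data, not from the thesis): link delays are often bimodal (a fast path plus occasional queueing), which a single Gaussian cannot represent but a two-component mixture, fit here by a bare-bones EM loop, can.

```python
# Fit a two-component 1-D Gaussian mixture to synthetic link delays by EM.
import math, random

random.seed(0)
# Synthetic delays (ms): mostly fast, occasionally queued.
delays = [random.gauss(10, 1) for _ in range(300)] + \
         [random.gauss(50, 5) for _ in range(100)]

def pdf(x, mu, sigma):
    """Gaussian density."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def fit_gmm2(xs, iters=50):
    """Bare-bones EM for a two-component 1-D Gaussian mixture."""
    m = sum(xs) / len(xs)
    s = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    mu, sigma, w = [m - s, m + s], [s, s], [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each sample.
        r0 = []
        for x in xs:
            p0 = w[0] * pdf(x, mu[0], sigma[0])
            p1 = w[1] * pdf(x, mu[1], sigma[1])
            r0.append(p0 / (p0 + p1 + 1e-300))
        # M-step: re-estimate weights, means, and standard deviations.
        n0 = sum(r0) + 1e-12
        n1 = len(xs) - sum(r0) + 1e-12
        w = [n0 / len(xs), n1 / len(xs)]
        mu = [sum(r * x for r, x in zip(r0, xs)) / n0,
              sum((1 - r) * x for r, x in zip(r0, xs)) / n1]
        sigma = [max(math.sqrt(sum(r * (x - mu[0]) ** 2 for r, x in zip(r0, xs)) / n0), 1e-3),
                 max(math.sqrt(sum((1 - r) * (x - mu[1]) ** 2 for r, x in zip(r0, xs)) / n1), 1e-3)]
    return w, mu, sigma

w, mu, sigma = fit_gmm2(delays)
print(f"mixture weights {w[0]:.2f}/{w[1]:.2f}, means {mu[0]:.1f}/{mu[1]:.1f} ms")
```

The recovered component means identify the fast-path and queued-delay modes; a single-Gaussian fit would instead report one mean near 20 ms that matches neither. This is the flexibility-versus-complexity trade the thesis weighs: EM is iterative and costlier than a moment-based fit, but models the bimodal shape.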
Delay estimation is then cast in a different role, as an integral part of an algorithm to correlate
input and output streams in an anonymising network such as The Onion Router (Tor). Tor
is used in an attempt to conceal network traffic from observation. Breaking the
encryption protocols used is not possible without significant effort but by correlating the
unencrypted input and output streams from the Tor network, it is possible to provide a degree
of certainty about the ownership of traffic streams. The delay model is essential as the network
is treated as providing a pseudo-random delay to each packet; having an accurate model allows
the algorithm to better correlate the streams.
Transport Architectures for an Evolving Internet
In the Internet architecture, transport protocols are the glue between an application's needs and the network's abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks (cellular networks, datacenters) and makes it challenging to roll out networking technologies that break markedly with the past.
Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve.
Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2-to-4 times the throughput and 7-to-9 times less delay than Skype, Apple FaceTime, and Google Hangouts.
This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy's computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network.
The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to "learn" a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it ultimately will be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
Improving Large-Scale Network Traffic Simulation with Multi-Resolution Models
Simulating a large-scale network like the Internet is a challenging undertaking because of the sheer volume of its traffic. Packet-oriented representation provides high-fidelity details but is computationally expensive; fluid-oriented representation offers high simulation efficiency at the price of losing packet-level details. Multi-resolution modeling techniques exploit the advantages of both representations by integrating them in the same simulation framework. This dissertation presents solutions to the problems regarding the efficiency, accuracy, and scalability of the traffic simulation models in this framework. The "ripple effect" is a well-known problem inherent in event-driven fluid-oriented traffic simulation, causing an explosion of fluid rate changes. Integrating multi-resolution traffic representations requires estimating arrival rates of packet-oriented traffic, calculating the queueing delay upon a packet arrival, and computing the packet loss rate under buffer overflow. Real-time simulation of a large or ultra-large network demands efficient background traffic simulation. The dissertation includes a rate smoothing technique that provably mitigates the "ripple effect", an accurate and efficient approach that integrates traffic models at multiple abstraction levels, a sequential algorithm that achieves real-time simulation of the coarse-grained traffic in a network with 3 tier-1 ISP (Internet Service Provider) backbones using an ordinary PC, and a highly scalable parallel algorithm that simulates network traffic at coarse time scales.
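The rate-smoothing idea can be sketched crudely (this is an illustrative reduction, not the dissertation's provable technique): in event-driven fluid simulation, every source rate change can trigger changes on every downstream link, so suppressing changes smaller than a threshold bounds how many events propagate. Threshold and rate values below are hypothetical.

```python
# Suppress fluid rate-change events smaller than a threshold to limit
# how many events ripple through downstream links.

def smooth(rate_changes, threshold):
    """Forward a (time, rate) change only if it differs from the last
    forwarded rate by more than `threshold` (in Mb/s)."""
    forwarded = []
    last = None
    for t, rate in rate_changes:
        if last is None or abs(rate - last) > threshold:
            forwarded.append((t, rate))
            last = rate
    return forwarded

# A fluid source whose rate jitters slightly around 10 Mb/s, then jumps.
events = [(0, 10.0), (1, 10.2), (2, 9.9), (3, 10.1), (4, 25.0), (5, 24.8)]
print(smooth(events, threshold=1.0))  # only the initial value and the big jump survive
```

The threshold trades accuracy for event count: small jitter is absorbed at the cost of a bounded rate error, while significant shifts still propagate.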