20 research outputs found
A modular traffic sampling architecture for flexible network measurements
Master's Dissertation (Programa Doutoral em Informática). The massive traffic volumes and the heterogeneity of services in today's networks call
for flexible yet simple measurement solutions to assist network management tasks without
impairing network performance. To make tasks requiring traffic analysis tractable,
sampling the traffic has become mandatory, triggering substantial research in the area.
In fact, multiple sampling techniques have been proposed to assist network engineering
tasks, each one targeting specific measurement goals and traffic scenarios. Despite that,
there is still a lack of an encompassing solution able to support the flexible deployment
of these techniques in production networks.
In this context, this research work proposes a modular traffic sampling architecture
able to foster the flexible design and deployment of efficient measurement strategies.
The architecture is composed of three layers, namely the management plane, the control plane, and the
data plane, covering key components to achieve versatile and lightweight measurements
in diverse traffic scenarios and measurement activities. The flexibility and modularity
in deploying different sampling strategies relies upon a novel taxonomy of sampling
techniques, in which current and emerging techniques are classified according to their
inner characteristics: granularity, selection trigger, and selection scheme.
Following the proposed taxonomy, a sampling framework prototype has been developed
and used as an experimental implementation of the proposed architecture,
providing a fair environment to assess and compare sampling techniques under distinct
measurement scenarios. Supported by the sampling framework, distinct techniques have
been evaluated regarding their performance in balancing the computational burden and
the accuracy in supporting traffic workload estimation and flow analysis. The results
have demonstrated the relevance and applicability of the proposed architecture, revealing
that a modular and configurable approach to sampling is a step forward for
improving sampling scope and efficiency.
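The taxonomy's axes (granularity, selection trigger, selection scheme) can be illustrated with a minimal sketch, contrasting a count-based systematic trigger with a probabilistic random scheme. The function names and the 1% target rate are illustrative assumptions, not the thesis prototype:

```python
import random

def systematic_count_sampler(packets, period=100):
    """Count-based systematic selection: keep 1 packet out of every `period`."""
    return [p for i, p in enumerate(packets) if i % period == 0]

def random_probabilistic_sampler(packets, prob=0.01, seed=42):
    """Random probabilistic selection: keep each packet independently with probability `prob`."""
    rng = random.Random(seed)
    return [p for p in packets if rng.random() < prob]

packets = list(range(10_000))  # stand-in for a packet stream
sys_sample = systematic_count_sampler(packets)
rnd_sample = random_probabilistic_sampler(packets)
# Both target a ~1% sampling rate but differ in selection trigger and scheme.
```

Both samplers reduce the stream by roughly two orders of magnitude, but the systematic one is deterministic and cheap, while the random one avoids aliasing with periodic traffic patterns.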
Characterizing, managing and monitoring the networks for the ATLAS data acquisition system
Particle physics studies the constituents of matter and the interactions between them. Many of the elementary particles do not exist under normal circumstances in nature. However, they can be created and detected during energetic collisions of other particles, as is done in particle accelerators. The Large Hadron Collider (LHC) being built at CERN will be the world's largest circular particle accelerator, colliding protons at energies of 14 TeV. Only a very small fraction of the interactions will give rise to interesting phenomena. The collisions produced inside the accelerator are studied using particle detectors. ATLAS is one of the detectors built around the LHC accelerator ring. During its operation, it will generate a data stream of 64 Terabytes/s. A Trigger and Data Acquisition System (TDAQ) is connected to ATLAS -- its function is to acquire digitized data from the detector and apply trigger algorithms to identify the interesting events. Achieving this requires the power of over 2000 computers plus an interconnecting network capable of sustaining a throughput of over 150 Gbit/s with minimal loss and delay. The implementation of this network required a detailed study of the available switching technologies to a high degree of precision in order to choose the appropriate components. We developed an FPGA-based platform (the GETB) for testing network devices. The GETB system proved to be flexible enough to be used as the basis of three different network-related projects. An analysis of the traffic pattern that is generated by the ATLAS data-taking applications was also possible thanks to the GETB. Then, while the network was being assembled, parts of the ATLAS detector started commissioning -- this task relied on a functional network. Thus it was imperative to be able to continuously identify existing and usable infrastructure and manage its operations.
In addition, monitoring was required to detect any overload conditions with an indication of where the excess demand was being generated. We developed tools to ease the maintenance of the network and to automatically produce inventory reports. We created a system that discovers the network topology, which permitted us to verify the installation and to track its progress. A real-time traffic visualization system has been built, allowing us to see at a glance which network segments are heavily utilized. Later, as the network achieves production status, it will be necessary to extend the monitoring to identify individual applications' use of the available bandwidth. We studied a traffic monitoring technology that will allow us to have a better understanding of how the network is used. This technology, based on packet sampling, gives the possibility of having a complete view of the network: not only its total capacity utilization, but also how this capacity is divided among users and software applications. This thesis describes the establishment of a set of tools designed to characterize, monitor and manage complex, large-scale, high-performance networks. We describe in detail how these tools were designed, calibrated, deployed and exploited. The work that led to the development of this thesis spans more than four years and closely follows the development phases of the ATLAS network: its design, its installation and, finally, its current and future operation.
QoE-driven rate adaptation heuristic for fair adaptive video streaming
HTTP Adaptive Streaming (HAS) is quickly becoming the de facto standard for video streaming services. In HAS, each video is temporally segmented and stored in different quality levels. Rate adaptation heuristics, deployed at the video player, allow the most appropriate level to be dynamically requested, based on the current network conditions. It has been shown that today's heuristics underperform when multiple clients consume video at the same time, due to fairness issues among clients. Concretely, this means that different clients negatively influence each other as they compete for shared network resources. In this article, we propose a novel rate adaptation algorithm called FINEAS (Fair In-Network Enhanced Adaptive Streaming), capable of increasing clients' Quality of Experience (QoE) and achieving fairness in a multiclient setting. A key element of this approach is an in-network system of coordination proxies in charge of facilitating fair resource sharing among clients. The strength of this approach is threefold. First, fairness is achieved without explicit communication among clients and thus no significant overhead is introduced into the network. Second, the system of coordination proxies is transparent to the clients, that is, the clients do not need to be aware of its presence. Third, the HAS principle is maintained, as the in-network components only provide the clients with new information and suggestions, while the rate adaptation decision remains the sole responsibility of the clients themselves. We evaluate this novel approach through simulations, under highly variable bandwidth conditions and in several multiclient scenarios. We show how the proposed approach can improve fairness by up to 80% compared to state-of-the-art HAS heuristics in a scenario with three networks, each containing 30 clients streaming video at the same time.
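The kind of throughput-based rate adaptation loop that HAS clients run locally can be sketched as follows. This is an illustrative simplification, not the FINEAS algorithm; the bitrate ladder and safety margin are assumptions:

```python
def select_quality(measured_throughput_kbps, bitrates_kbps, margin=0.9):
    """Pick the highest bitrate the measured throughput can sustain,
    keeping a safety margin to absorb bandwidth fluctuations."""
    affordable = [b for b in bitrates_kbps if b <= margin * measured_throughput_kbps]
    return max(affordable) if affordable else min(bitrates_kbps)

ladder = [300, 750, 1500, 3000, 6000]  # assumed quality levels (kbps)
print(select_quality(2000, ladder))  # -> 1500 (0.9 * 2000 = 1800 affords at most 1500)
print(select_quality(200, ladder))   # -> 300 (fall back to the lowest level)
```

When many such clients share a bottleneck, their throughput estimates oscillate against each other, which is exactly the fairness problem the in-network coordination proxies are designed to mitigate.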
A study of the applicability of software-defined networking in industrial networks
173 p. Industrial networks interconnect sensors and actuators to carry out monitoring, control and protection functions in different environments, such as transportation systems or industrial automation systems. These cyber-physical systems are generally supported by multiple data networks, whether wired or wireless, from which they demand new capabilities, so that the control and management of such networks must be coupled to the conditions of the industrial system itself. Requirements thus arise concerning flexibility, maintainability and adaptability, while quality-of-service constraints must remain unaffected. However, traditional network control strategies generally do not adapt efficiently to increasingly dynamic and heterogeneous environments. After defining a set of network requirements and analysing the limitations of current solutions, it follows that control provided independently of the network devices themselves would add flexibility to these networks. Consequently, this thesis explores the applicability of Software-Defined Networking (SDN) in industrial automation systems. As a case study, this approach is applied to automation networks based on the IEC 61850 standard, which is widely used in the design of communication networks in power distribution systems such as electrical substations. The IEC 61850 standard defines different services and protocols with demanding requirements in terms of network latency and availability, which must be satisfied through traffic engineering techniques.
As a result, taking advantage of the flexibility and programmability offered by software-defined networks, this thesis proposes a control architecture based on the OpenFlow protocol which, incorporating network management and monitoring technologies, makes it possible to establish traffic policies according to priority and network state. Moreover, electrical substations are a representative example of critical infrastructure, in which a failure can result in severe economic losses and physical and material damage. Such systems must therefore be extremely secure and robust, which makes it advisable to deploy redundant topologies offering minimal reaction time to failures. To this end, the IEC 62439-3 standard defines the Parallel Redundancy Protocol (PRP) and High-availability Seamless Redundancy (HSR) protocols, which guarantee zero recovery time in case of failure through active data redundancy in Ethernet networks. However, the management of PRP- and HSR-based networks is static and inflexible, which, combined with the bandwidth reduction caused by data duplication, makes efficient control of the available resources difficult. In this regard, this thesis proposes redundancy control based on the SDN paradigm for efficient use of meshed topologies, while guaranteeing the availability of control and monitoring applications. In particular, it discusses how the OpenFlow protocol allows an external controller to configure multiple redundant paths between devices with several network interfaces, as well as in wireless environments. In this way, critical services can be protected in situations of interference and mobility. The suitability of the proposed solutions has been evaluated mainly by emulating different topologies and traffic types.
Likewise, it has been studied analytically and experimentally how latency is affected by reducing the number of hops in communications with respect to the use of a spanning tree, as well as by balancing the load in a layer-2 network. In addition, an analysis has been carried out of the improvement in the efficiency of network resource usage and the robustness achieved by combining the PRP and HSR protocols with OpenFlow-based control. These results show that the SDN model could significantly improve the performance of a mission-critical industrial network.
Load shedding in network monitoring applications
Monitoring and mining real-time network data streams are crucial operations for managing and operating data networks. The information that network operators desire to extract from the network traffic is of different size, granularity and accuracy depending on the measurement task (e.g., relevant data for capacity planning and intrusion detection are very different). To satisfy these different demands, a new class of monitoring systems is emerging to handle multiple and arbitrary monitoring applications.
Such systems must inevitably cope with the effects of continuous overload situations due to the large volumes, high data rates and bursty nature of the network traffic. These overload situations can severely compromise the accuracy and effectiveness of monitoring systems, when their results are most valuable to network operators.
In this thesis, we propose a technique called load shedding as an effective and low-cost alternative to over-provisioning in network monitoring systems.
It allows these systems to efficiently handle overload situations in the presence of multiple, arbitrary and competing monitoring applications. We present the design and evaluation of a predictive load shedding scheme that can shed excess load under extreme traffic conditions and maintain the accuracy of the monitoring applications within bounds defined by end users, while ensuring a fair allocation of computing resources to non-cooperative applications.
The main novelty of our scheme is that it considers monitoring applications as black boxes, with arbitrary (and highly variable) input traffic and processing cost. Without any explicit knowledge of the application internals, the proposed scheme extracts a set of features from the traffic streams to build an on-line prediction model of the resource requirements of each monitoring application, which is used to anticipate overload situations and control the overall resource usage by sampling the input packet streams. This way, the monitoring system preserves a high degree of flexibility, increasing the range of applications and network scenarios where it can be used.
Since not all monitoring applications are robust against sampling, we then extend our load shedding scheme to support custom load shedding methods defined by end users, in order to provide a generic solution for arbitrary monitoring applications. Our scheme allows the monitoring system to safely delegate the task of shedding excess load to the applications and still guarantee fairness of service with non-cooperative users.
We implemented our load shedding scheme in an existing network monitoring system and deployed it in a research ISP network. We present experimental evidence of the performance and robustness of our system with several concurrent monitoring applications during long-lived executions and using real-world traffic traces.
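The core control step described above, once the per-application cost has been predicted from traffic features, can be sketched in a few lines. This is a minimal sketch of the sampling-based shedding decision, not the system's actual controller; the cost and budget figures are assumptions:

```python
def load_shedding_rate(predicted_cost, budget):
    """If the predicted processing cost exceeds the budget, sample the
    input packet stream just enough to bring cost back within budget."""
    if predicted_cost <= budget:
        return 1.0  # no shedding needed: process every packet
    return budget / predicted_cost  # fraction of packets to keep

# Assumed figures for illustration: cost and budget in CPU cycles per interval.
print(load_shedding_rate(8e6, 1e7))  # under budget -> 1.0
print(load_shedding_rate(2e7, 1e7))  # overload -> keep 50% of packets
```

In the thesis's scheme the `predicted_cost` term comes from an on-line model fitted to traffic features, so the sampling rate anticipates overload rather than reacting to it after accuracy has already degraded.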
Architectures for virtualization and performance evaluation in software defined networks
[no abstract]
Stochastic methods for measurement-based network control
The main task of network administrators is to ensure that their network functions properly. Whether they manage a telecommunication or a road network, they generally base their decisions on the analysis of measurement data. Inspired by such network control applications, this dissertation investigates several stochastic modelling techniques for data analysis. The focus is on two areas within the field of stochastic processes: change point detection and queueing theory. Part I deals with statistical methods for the automatic detection of change points, that is, changes in the probability distribution underlying a data sequence. This part starts with a review of existing change point detection methods for data sequences consisting of independent observations. The main contribution of this part is the generalisation of the classic CUSUM method to account for dependence within data sequences. We analyse the false alarm probability of the resulting methods using a large deviations approach. The part also discusses numerical tests of the new methods and a cyber attack detection application, in which we investigate how to detect DNS tunnels. The main contribution of Part II is the application of queueing models (probabilistic models for waiting lines) to situations in which the system to be controlled can only be observed partially. We consider two types of partial information. Firstly, we develop a procedure to gain insight into the performance of queueing systems between consecutive system-state measurements and apply it in a numerical study motivated by capacity management in cable access networks. Secondly, inspired by dynamic road control applications, we study routing policies in a queueing system in which only part of the jobs are observable and controllable.
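The classic CUSUM detector that Part I generalises can be sketched in a few lines. This is the textbook one-sided version for independent observations with a known pre-change mean; the drift and threshold values are illustrative assumptions:

```python
def cusum_alarm(data, mu0=0.0, drift=0.5, threshold=5.0):
    """One-sided CUSUM: accumulate evidence that the mean has shifted
    upward from mu0; raise an alarm when the statistic crosses the threshold."""
    s = 0.0
    for t, x in enumerate(data):
        s = max(0.0, s + (x - mu0) - drift)
        if s > threshold:
            return t  # index at which the alarm is raised
    return None  # no change detected

# Mean shifts from 0 to 2 at index 50 (synthetic, noiseless for clarity).
stream = [0.0] * 50 + [2.0] * 50
print(cusum_alarm(stream))  # -> 53: the statistic grows by 1.5 per step after the shift
```

The thesis's contribution lies in adapting this statistic to dependent data sequences and bounding the false alarm probability via large deviations; the sketch above only shows the baseline recursion being generalised.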
Toward Software-Defined Networking-Based IoT Frameworks: A Systematic Literature Review, Taxonomy, Open Challenges and Prospects
The Internet of Things (IoT) is regarded as one of the leading actors in the next evolutionary stage of the computing world. IoT-based applications have already produced a plethora of novel services and are improving living standards by enabling innovative and smart solutions. However, along with its rapid adoption, IoT technology also creates complex challenges regarding the management of IoT networks due to its resource limitations (computational power, energy, and security). Hence, there is an urgent need to refine IoT application architectures to robustly manage the overall IoT infrastructure. Software-Defined Networking (SDN) has emerged as a paradigm that offers software-based controllers to manage hardware infrastructure and traffic flow on a network effectively. The SDN architecture has the potential to provide efficient and reliable IoT network management. This research provides a comprehensive survey investigating the published studies on SDN-based frameworks that address IoT management issues in the dimensions of fault tolerance, energy management, scalability, load balancing, and security service provisioning within IoT networks. We conducted a Systematic Literature Review (SLR) of the research studies (published from 2010 to 2022) focusing on SDN-based IoT management frameworks. We provide an extensive discussion of various aspects of SDN-based IoT solutions and architectures. We elaborate a taxonomy of the existing SDN-based IoT frameworks and solutions by classifying them into categories such as network function virtualization, middleware, OpenFlow adaptation, and blockchain-based management. We present the research gaps by identifying and analyzing the key architectural requirements and management issues in IoT infrastructures.
Finally, we highlight various challenges and a range of promising opportunities for future research, providing a roadmap for addressing the weaknesses and harnessing the potential offered by SDN-based IoT solutions.