6 research outputs found
Multipoint passive monitoring in packet networks
Traffic monitoring is essential to manage large networks and validate Service Level Agreements. Passive monitoring is particularly valuable for promptly identifying transient fault episodes and reacting in a timely manner. This paper proposes a novel, non-invasive and flexible method to passively monitor large backbone networks. By using only packet counters, commonly available on existing hardware, we can accurately measure packet losses affecting only specific flows, in different segments of the network. We can monitor not only end-to-end flows, but any generic flow whose packets follow several different paths through the network (multipoint flows). We also sketch a possible extension of the method to measure average one-way delay for multipoint flows, provided that the measurement points are synchronized. Through various experiments we show that the method is effective and enables easy zooming in on the cause of packet losses. Moreover, the method can scale to very large networks with a very low overhead on the data plane and the management plane.
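The counting idea behind such a method can be illustrated with a minimal sketch (illustrative code only, not the paper's actual algorithm): over a synchronized measurement block, the packets lost inside a network segment are the packets counted at all of its ingress points minus the packets counted at all of its egress points, even when a multipoint flow enters and leaves through several points.

```python
def segment_losses(ingress_counts, egress_counts):
    """Per-block packet loss inside a network segment.

    ingress_counts / egress_counts: one tuple per measurement block,
    each holding the packet counters read at every ingress (resp.
    egress) measurement point for the monitored flow.  The loss in a
    block is simply (sum of ingress counters) - (sum of egress counters).
    """
    losses = []
    for block_in, block_out in zip(ingress_counts, egress_counts):
        losses.append(sum(block_in) - sum(block_out))
    return losses

# Two ingress points counted 1000 and 500 packets; two egress points
# counted 995 and 498: seven packets were lost inside the segment.
print(segment_losses([(1000, 500)], [(995, 498)]))  # [7]
```

Because only per-block counter sums cross the management plane, the overhead stays low regardless of how many packets the flow carries.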
Cloud Radio Access Network in Constrained Fronthaul
The Cloud Radio Access Network (C-RAN) has been proposed for the provision of advanced fourth and fifth generation wireless communication services. The C-RAN architecture has been shown to reduce costs and can provide high spectral efficiency and energy efficiency. The fronthaul in such networks, defined as the transmission links between Remote Radio Units (RRUs) and a central Baseband Unit (BBU), usually carries a high load over links of constrained capacity.
In this thesis, we explore and investigate the basic C-RAN system structure, on the basis of which we propose two enhanced C-RAN systems. For each system we evaluate the Bit Error Ratio (BER) performance and transmission efficiency in multiple scenarios, and propose solutions to reduce the fronthaul load. We also analyse the effect of quantization on BPSK and QPSK modulation schemes, with different detection methods.
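As a rough illustration of how quantization affects detection (a hedged sketch, not the thesis's actual evaluation setup), the following Monte Carlo estimate of BPSK BER over an AWGN channel optionally quantizes the received samples to a small number of uniform levels before hard detection. The SNR, clipping range and level count are arbitrary illustrative choices.

```python
import math
import random

def bpsk_ber(snr_db, n_bits, quant_levels=None, seed=1):
    """Monte Carlo BER for BPSK over AWGN, optionally with uniform
    quantization of the received samples before hard detection."""
    rng = random.Random(seed)
    # Noise std for unit symbol energy: sigma^2 = 1 / (2 * SNR_linear)
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10.0)))
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        r = (1.0 if bit else -1.0) + rng.gauss(0.0, sigma)
        if quant_levels:
            # Clip to [-2, 2] and snap to the nearest uniform level.
            step = 4.0 / (quant_levels - 1)
            r = max(-2.0, min(2.0, r))
            r = -2.0 + round((r + 2.0) / step) * step
        if (r >= 0) != (bit == 1):
            errors += 1
    return errors / n_bits
```

Sweeping `quant_levels` at a fixed SNR shows coarse quantization costing a fraction of a dB, which is the kind of trade-off the thesis examines against fronthaul load.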
Error control in fronthaul transmission is considered, as erroneous frames may be received at the BBU. Error Detection Coding and Error Correction Coding approaches can be applied to the fronthaul network. They may increase the fronthaul latency, but greatly improve the end-to-end BER performance.
Source compression techniques such as Slepian-Wolf (SW) coding can compress two correlated sources separately and decompress them jointly. Each RRU serves many user terminals, and some of these may also be served by a neighbouring RRU, which results in correlation between the data received at the two RRUs. In this thesis, we apply SW coding to the C-RAN system and evaluate the compression rate achieved in fronthaul networks.
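The Slepian-Wolf idea can be sketched with a standard textbook toy example (illustrative only, not the coding scheme used in the thesis): one RRU compresses its 3-bit word to a 2-bit syndrome of the (3,1) repetition code, and the BBU recovers the word exactly by combining the syndrome with the correlated word from the neighbouring RRU, assumed to differ in at most one bit.

```python
from itertools import product

def syndrome(x):
    """2-bit syndrome of a 3-bit word w.r.t. the (3,1) repetition code
    (parity checks x0^x1 and x0^x2)."""
    return (x[0] ^ x[1], x[0] ^ x[2])

def sw_decode(s, y):
    """Recover x from its syndrome s and correlated side information y,
    assumed to differ from x in at most one bit position."""
    candidates = [x for x in product((0, 1), repeat=3) if syndrome(x) == s]
    # Within a coset the two candidates differ in all three bits, so the
    # one closer to y (Hamming distance <= 1) is the transmitted word.
    return min(candidates, key=lambda x: sum(a != b for a, b in zip(x, y)))

x = (1, 0, 1)            # word at RRU 1, sent as a 2-bit syndrome
y = (1, 1, 1)            # correlated word available from RRU 2
print(sw_decode(syndrome(x), y))  # (1, 0, 1): x recovered from 2 bits
```

The fronthaul carries 2 bits instead of 3 for this source, which is the kind of compression rate the thesis evaluates at realistic block lengths.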
Modélisation temporelle de la consommation électrique en analyse du cycle de vie appliquée au contexte des TIC (Temporal modelling of electricity consumption in life cycle assessment applied to the ICT context)
The Earth has limited resources. Since the industrial revolution, humankind has been using non-renewable energy resources that are responsible for major environmental impacts. Energy production is a key challenge for sustainable development as a whole.
Information and communication technology (ICT) systems play an ever larger role in our daily lives (the Internet, telephony, etc.). At the scale of society, the growth of ICT is exponential. Advances in ICT open the door to many smart, optimised and dynamic systems that dematerialise services and help fight climate change. Nevertheless, ICT is also responsible for a non-negligible share of greenhouse gas (GHG) emissions (3%), induced by its large electricity consumption. The ICT sector is therefore actively working on measures to reduce the GHG emissions of its activities. To evaluate and optimise ICT services adequately, environmental assessment methods that account for the particularities of the systems under study are needed. Current methods for computing GHG emissions are not suited to dynamic problems such as those posed by ICT. In particular, the variability of electricity production remains absent from the guidelines of impact-calculation methods. Beyond the question of GHG modelling, the whole temporal dimension of both electricity consumption and production is at stake. Life cycle assessment (LCA) appears to be a comprehensive tool for analysing the full range of environmental impacts but, like GHG calculation methods, it must also be adapted to dynamic problems such as those of electricity and ICT. In the context of ICT analysis, it therefore becomes necessary in LCA to model the variations over time of the electricity-generation technologies that may change the environmental impacts associated with electricity consumption.
This master's thesis proposes a new methodological framework to incorporate the temporal aspects of electricity production and consumption into LCA. The study develops a temporal model giving access to a time series of electricity production, import and export data. The work is carried out around a research project on the deployment of an interprovincial « Cloud Computing » network in Canada. The temporal model makes it possible to establish, historically and hour by hour, the environmental impact induced by electricity consumption in three Canadian provinces: Alberta, Ontario and Quebec. Temporal modelling of the different electricity-generation technologies within LCA makes it possible to optimise the choice of when to use an ICT service, for instance an Internet conversation or server maintenance. This work is promising because it enables a more innovative environmental evaluation of ICT and yields more precise LCA inventory data. Disaggregating the electricity inventory flows in LCA makes the impact calculation for electricity production more precise, both historically and in real time.
This thesis also carried out a first investigation into very short-term prediction of electricity imports and exports, in order to anticipate the temporal optimisation of an ICT service. From historical consumption profiles, a predictive model of Quebec's consumption was established. The environmental profile of a kilowatt-hour consumed in Quebec is closely tied to the electricity exchanges between Quebec and the neighbouring regions. Since these exchanges are correlated with price, temperature and power demand, the environmental profile of a kilowatt-hour consumed in Quebec can be predicted from the evolution of these parameters over time. The results open important perspectives for the predictive assessment of the environmental impacts of services such as « Cloud Computing » or smart services such as « Smart Grids ». Smart management of the trade-off between electricity consumption and environmental impacts supports decision-making in line with sustainable development.
----------
Fossil fuels are a scarce energy resource. Since the industrial revolution, mankind has used and abused non-renewable energies, which are responsible for much environmental damage. The production of energy is one of the main challenges for global sustainable development.
In our society, we can witness an exponential increase in the use of Information and Communication Technology (ICT) systems such as the Internet, phone calls, etc. ICT development allows the creation and optimization of many smart systems and the pooling of services, and it also helps mitigate climate change. However, because of their electricity consumption, ICT systems are also responsible for a share of greenhouse gas (GHG) emissions: 3% in total. This gives the sector a strong incentive to change in order to limit its GHG emissions. To properly evaluate and optimize ICT services, it is necessary to use evaluation methods that comply with the specificity of these systems. Currently, the methods used to evaluate GHG emissions are not adapted to dynamic systems, which include ICT systems. The variations of electricity production within a day or even a month are not yet taken into account. This problem is far from being restricted to the modelling of GHG emissions; it extends to the overall variation in the production and consumption of electricity. The Life Cycle Assessment (LCA) method provides useful and complete tools to analyse environmental impacts but, as with the GHG computation methods, it should be adapted to dynamic systems. In the ICT framework, the first step in solving this LCA problem is to be able to model the variations in time of electricity production.
This master's thesis introduces a new way to include the variation in time of the consumption and production of electricity in LCA methods. First, it generates a historical hourly database of electricity production, imports and exports for three Canadian provinces: Alberta, Ontario and Quebec. Then it develops a time-dependent model to predict their electricity consumption. This study is carried out for a project implementing a « cloud computing » service between these provinces. The consumption model then provides the information needed to choose the best place and time to make use of ICT services such as Internet messaging or server maintenance. This first implementation of a time parameter allows more precision and insight in LCA data. The disaggregation of electricity inventory flows in LCA refines the calculation of the impacts of electricity production, both historically and in real time.
Some short-term predictions of the electricity exports and imports of the province of Quebec were also computed in this thesis. The goal is to foresee and optimize in real time the use of ICT services. The origin of a kilowatt-hour consumed in Quebec depends on the import-export exchanges with the neighbouring regions. These exchanges rely mainly on the price of electricity, the weather and Quebec's power demand. This makes it possible to plot a time-varying estimate of the environmental consequences of consuming a kilowatt-hour in Quebec, which can then be used to limit the GHG emissions of ICT services like « cloud computing » or « smart grids ». A smart trade-off between electricity consumption and environmental impacts will lead to more efficient sustainable development.
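The scheduling decision that the hourly model enables can be sketched as follows (a minimal illustration with made-up emission factors, not the thesis's actual model or data): given an hourly profile of grid emission factors, pick the cleanest hour for a deferrable ICT task and compute the resulting impact.

```python
def greenest_hour(emission_factors):
    """Return the hour whose grid emission factor (g CO2-eq per kWh)
    is lowest -- the best time to run a deferrable ICT task such as
    server maintenance."""
    return min(range(len(emission_factors)), key=lambda h: emission_factors[h])

def task_impact(energy_kwh, emission_factors, hour):
    """GHG impact (g CO2-eq) of a task consuming energy_kwh at `hour`."""
    return energy_kwh * emission_factors[hour]

# Hypothetical hourly factors for a four-hour window: imports from a
# fossil-heavy neighbour raise some hours, hydro dominates others.
factors = [500, 450, 30, 600]
h = greenest_hour(factors)
print(h, task_impact(2.0, factors, h))  # hour 2, 60.0 g CO2-eq for 2 kWh
```

Running the same 2 kWh task at hour 3 instead would emit 1200 g CO2-eq, a twenty-fold difference, which is why disaggregated hourly inventory data matters.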
On the Edge of Secure Connectivity via Software-Defined Networking
Securing communication in computer networks has been an essential feature ever since the Internet as we know it today was started. One of the best known and most common methods for secure communication is to use a Virtual Private Network (VPN) solution, mainly operating with the IP security (IPsec) protocol suite originally published in 1995 (RFC 1825). It is clear that the Internet, and networks in general, have changed dramatically since then. In particular, the onset of the Cloud and the Internet-of-Things (IoT) has placed new demands on secure networking. Even though the IPsec suite has been updated over the years, it is starting to reach the limits of its capabilities in its present form. Recent advances in networking have given rise to Software-Defined Networking (SDN), which decouples the control and data planes, and thus centralizes network control. SDN provides arbitrary network topologies and elastic packet forwarding, which have enabled useful innovations at the network level.
This thesis studies SDN-powered VPN networking and explains the benefits of this combination. Even though the main context is the Cloud, the approaches described here are also valid for non-Cloud operation and are thus suitable for a variety of other use cases for both SMEs and large corporations.
In addition to IPsec, open source TLS-based VPN (e.g. OpenVPN) solutions are often used to establish secure tunnels. Research shows that a full-mesh VPN network between multiple sites can be provided using OpenVPN and it can be utilized by SDN to create a seamless, resilient layer-2 overlay for multiple purposes, including the Cloud. However, such a VPN tunnel suffers from resiliency problems and cannot meet the increasing availability requirements. The network setup proposed here is similar to Software-Defined WAN (SD-WAN) solutions and is extremely useful for applications with strict requirements for resiliency and security, even if best-effort ISP is used.
IPsec is still preferred over OpenVPN for some use cases, especially by smaller enterprises. Therefore, this research also examines the possibilities for high availability, load balancing, and faster operational speeds for IPsec. We present a novel approach involving the separation of the Internet Key Exchange (IKE) and the Encapsulation Security Payload (ESP) in SDN fashion to operate from separate devices. This allows central management for the IKE while several separate ESP devices can concentrate on the heavy processing.
Initially, our research relied on software solutions for ESP processing. Despite the ingenuity of the architectural concept, and although it provided high availability and good load balancing, there was no anti-replay protection. Since anti-replay protection is vital for secure communication, another approach was required. It thus became clear that the ideal solution for such large IPsec tunneling would be to have a pool of fast ESP devices, but to confine the IKE operation to a single centralized device. This would obviate the need for load balancing but still allow high availability via the device pool.
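The anti-replay check whose absence proved decisive works as a sliding window over sequence numbers, in the style specified for ESP in RFC 4303. The sketch below is a minimal single-device illustration of that standard algorithm, not the thesis's distributed implementation; distributing it across an ESP pool is exactly what made the software approach impractical.

```python
class AntiReplayWindow:
    """Sliding-window anti-replay check in the style of IPsec ESP
    (RFC 4303): accept each sequence number at most once and reject
    numbers that fall behind the window."""

    def __init__(self, size=64):
        self.size = size
        self.top = 0        # highest sequence number accepted so far
        self.bitmap = 0     # bit i set => (top - i) already received

    def check_and_update(self, seq):
        if seq == 0:
            return False                      # seq 0 is never valid in ESP
        if seq > self.top:                    # window slides forward
            shift = seq - self.top
            self.bitmap = ((self.bitmap << shift) | 1) & ((1 << self.size) - 1)
            self.top = seq
            return True
        offset = self.top - seq
        if offset >= self.size:
            return False                      # too old: behind the window
        if self.bitmap & (1 << offset):
            return False                      # duplicate: replayed packet
        self.bitmap |= 1 << offset            # mark as received
        return True
```

Because the window state must be updated for every accepted packet, load-balancing ESP across devices would require sharing this state at line rate, which motivates confining it to fast dedicated hardware.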
The focus of this research thus turned to the study of pure hardware solutions on an FPGA, and their feasibility and production readiness for application in the Cloud context. Our research shows that an FPGA works fluently in an SDN network as a standalone IPsec accelerator for ESP packets. The proposed architecture achieves 10 Gbps throughput with a latency of less than 10 µs, meaning that it is especially efficient for data center use and meets increased performance and latency requirements.
The high demands of network packet processing can be met using several different approaches, so this approach is not limited to the topics presented in this thesis. Global network traffic is growing all the time, so the development of more efficient methods and devices is inevitable. The increasing number of IoT devices will generate a lot of network traffic utilising Cloud infrastructures in the near future. Based on the latest research, once SDN and hardware acceleration have become fully integrated into the Cloud, the future for secure networking looks promising. SDN technology will open up a wide range of new possibilities for data forwarding, while hardware acceleration will satisfy the increased performance requirements. Although it still remains to be seen whether SDN can answer all the requirements for performance, high availability and resiliency, this thesis shows that it is a very competent technology, even though we have explored only a minor fraction of its capabilities.
Transferencia tecnológica de networking datacenter a infraestructura virtual cloud computing (IaaS) en laboratorio, limitada a saturación de tráfico (Technology transfer from datacenter networking to virtual cloud computing infrastructure (IaaS) in the laboratory, limited to traffic saturation)
Cloud Computing comprises three categories, IaaS, PaaS and SaaS, using virtual resources that enable the transition from traditional network architectures to virtualization architectures while meeting requirements of flexibility, availability, reliability, scalability and portability. This study focuses on the IaaS category through an availability analysis of several models of network equipment that are part of the traditional datacenter infrastructure of a communications carrier. The availability analysis was performed on the five equipment models with the highest failure rates; a Cloud Computing IaaS laboratory scenario was also implemented, in which a server virtualization component and a networking-hardware virtualization component were configured to virtualize each of the equipment models considered in the study sample. According to the results on the physical equipment, Model 3 showed the lowest CPU load under traffic saturation, Model 1 showed an unavailability of 32% and Model 4 of 56%, while Models 2, 3 and 5 had 100% availability. The results on the virtualized equipment show that the health of the virtualized devices tends to degrade as saturation increases, due to the lack of a hypervisor in the configuration of the virtualized environment; Model 2 shows the best CPU behaviour, whereas in Model 5, as traffic saturation increases, the CPU is completely consumed and the health of the device is affected, with an unavailability of 100%.
----------
Cloud Computing focuses on three categories, IaaS, PaaS and SaaS, by using virtual resources that allow the transition from traditional network architectures to virtualization architectures while fulfilling characteristics of flexibility, availability, reliability, scalability and portability. This study focuses on the IaaS category by analyzing the availability of several models of network equipment that form part of the traditional datacenter infrastructure of a communications carrier. The availability analysis was conducted on the five equipment models with the highest failure record. A Cloud Computing laboratory was also implemented, in which a server virtualization component and a networking-hardware virtualization component were configured to virtualize each of the device models considered in the study sample. According to the results for the physical equipment, Model 3 had the lowest CPU load when traffic saturation occurred, Model 1 had an unavailability of 32%, Model 4 had an unavailability of 56%, and Models 2, 3 and 5 had an availability of 100%. The results for the virtualized equipment models show that the health of the devices tends to decrease as traffic saturation increases, due to the lack of a hypervisor in the configuration of the virtualized environment. Model 2 presents the best CPU behaviour, while for Model 5 increasing traffic saturation completely consumes the CPU, affecting the health of the device, with an unavailability of 100%.
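An availability figure such as Model 1's 32% unavailability can be derived from periodic polling of the device under traffic saturation; the sketch below is a generic illustration of that calculation (hypothetical polling data, not the study's measurement procedure).

```python
def availability(poll_results):
    """Availability as the percentage of polls in which the device
    responded; poll_results is a list of booleans, one per poll."""
    up = sum(1 for ok in poll_results if ok)
    return 100.0 * up / len(poll_results)

# Hypothetical example: the device answered 68 of 100 polls during the
# saturation test, i.e. 68% availability / 32% unavailability.
polls = [True] * 68 + [False] * 32
print(availability(polls))  # 68.0
```

The same calculation applied to the virtualized models would show Model 5 dropping to 0% availability once its CPU is fully consumed.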