Deep generative models for network data synthesis and monitoring
Measurement and monitoring are fundamental tasks in any network, enabling downstream management and optimization. Although networks inherently produce abundant monitoring data, accessing and effectively measuring that data is another matter. The challenges span several aspects. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given that networks keep growing, e.g., in the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges, but the fidelity and efficiency of existing methods cannot yet meet current network requirements.
The contributions made in this thesis significantly advance the state of the art in the domain of network measurement and monitoring techniques. Throughout the thesis, we leverage a cutting-edge machine learning technology: deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data (e.g., land use information and population distribution) during inference. Second, we develop GENDT, an efficient generative-model-based drive testing system, which combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system using latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.
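As a toy illustration of conditional generation from open contextual data, the sketch below samples traffic volumes conditioned on context features. The model form, feature names, and all parameters are illustrative assumptions, not the thesis's actual architecture.

```python
import random

class ToyConditionalGenerator:
    """Toy stand-in for a conditional generative model: traffic volume is
    sampled from a Gaussian whose mean depends linearly on context features
    (e.g., normalized population density and land-use mix)."""
    def __init__(self, weights, bias, noise_scale):
        self.weights = weights          # per-feature contribution to the mean
        self.bias = bias
        self.noise_scale = noise_scale  # models the sampling noise

    def sample(self, context, rng=random):
        mean = self.bias + sum(w * c for w, c in zip(self.weights, context))
        return max(0.0, rng.gauss(mean, self.noise_scale))  # traffic >= 0

gen = ToyConditionalGenerator(weights=[5.0, 2.0], bias=1.0, noise_scale=0.5)
random.seed(0)
# Dense urban context vs. sparse rural context (feature values are made up)
urban = [gen.sample([0.9, 0.7]) for _ in range(1000)]
rural = [gen.sample([0.1, 0.2]) for _ in range(1000)]
print(sum(urban) / len(urban) > sum(rural) / len(rural))  # denser context, more traffic
```

A real system would learn the conditional distribution with a deep model; the point here is only that inference needs nothing beyond the context vector.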
Using Blockchain to Ensure Reputation Credibility in Decentralized Review Management
In recent years, there have been incidents that decreased people's trust in some of the organizations and authorities responsible for ratings and accreditation. Prominent examples include the security breach at Equifax (2017), the misconduct found in Standard & Poor's Ratings Services (2015), and the Accrediting Council for Independent Colleges and Schools (2022) validating some low-performing schools as delivering higher standards than they actually did. A natural solution to these types of issues is to decentralize the relevant trust management processes using blockchain technologies. The research problems tackled in this thesis consider the issue of trust in reputation for assessment and review credibility from different angles, in the context of blockchain applications.
We first explored the following questions. How can we trust courses in one college to provide students with the type and level of knowledge needed in a specific workplace? Our solution was micro-accreditation on a blockchain, including a peer-review system that determines the rigor of a course through consensus. Rigor is the level of difficulty with regard to a student's expected level of knowledge. Currently, we make assumptions about the quality and rigor of what is learned, but this is prone to human bias and misunderstandings. We present a decentralized approach that tracks student records throughout their academic progress at a school and helps match employers' requirements to students' knowledge. We do this by applying micro-accredited topics and Knowledge Units (KUs), defined by the NSA's Center of Academic Excellence, to courses and assignments. Using simulated datasets, we demonstrate that the system successfully increases the accuracy of hires and that it is efficient as well as scalable. Another problem is how we can trust that the peer reviews are honest and reflect an accurate rigor score. Assigning reputation to peers is a natural method to ensure the correctness of these assessments. The reputation of the peers providing rigor scores needs to be taken into account for the overall rigor of a course, its topics, and its tasks. Specifically, those with a higher reputation should have more influence on the total score.
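The matching idea can be made concrete with a small sketch. The scoring rule, KU names, and rigor weighting below are illustrative assumptions, not the thesis's actual matching algorithm.

```python
def match_score(required_kus, student_kus):
    """Fraction of an employer's required Knowledge Units (KUs) that a
    student's micro-accredited transcript covers, weighted by each KU's
    rigor score in [0, 1]. Missing KUs contribute zero."""
    if not required_kus:
        return 1.0
    covered = [student_kus.get(ku, 0.0) for ku in required_kus]
    return sum(covered) / len(required_kus)

# Hypothetical job requirement and two hypothetical student transcripts
job = {"Network Defense", "Cryptography", "Operating Systems"}
alice = {"Network Defense": 0.9, "Cryptography": 0.8}
bob = {"Operating Systems": 0.7}
print(match_score(job, alice) > match_score(job, bob))  # True: better coverage
```

Ranking candidates by such a score is one simple way the on-chain KU records could be matched against employer requirements.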
Hence, we focused on how a peer's reputation is managed. We explored decentralized reputation management for the peers, choosing a decentralized marketplace as a sample application. We presented an approach to ensuring review credibility, which is a particular aspect of trust in reviews and in the reputation of the parties who provide them. We use a Proof-of-Stake-based Algorand system as the base of our implementation, since this system is open source and has rich community support. Specifically, we directly map reputation to stake, which allows us to deploy Algorand at the blockchain layer. Reviews are analyzed by the proposed evaluation component using Natural Language Processing (NLP). In our system, NLP gauges the positivity of the written review, compares that value to the scaled numerical rating given, and determines adjustments to a peer's reputation from that result. We demonstrate that this architecture ensures credible and trustworthy assessments. It also efficiently manages the reputation of the peers while keeping reasonable consensus times.
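The sentiment-versus-rating consistency check lends itself to a short sketch. The thresholds, step size, and scaling below are hypothetical; the thesis does not specify them in this summary.

```python
def reputation_adjustment(sentiment, rating, max_rating=5, tolerance=0.2, step=0.05):
    """Compare the NLP sentiment of the review text (normalized to [0, 1])
    with the numerical rating scaled to the same range. Consistent reviews
    raise the reviewer's reputation; inconsistent ones lower it."""
    scaled_rating = rating / max_rating
    mismatch = abs(sentiment - scaled_rating)
    return step if mismatch <= tolerance else -step

# A glowing review paired with a 5/5 rating is consistent...
print(reputation_adjustment(sentiment=0.95, rating=5))   # +0.05
# ...while the same glowing text paired with a 1/5 rating is suspicious.
print(reputation_adjustment(sentiment=0.95, rating=1))   # -0.05
```

Since reputation maps directly to stake, each adjustment immediately changes that peer's weight in consensus.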
We then turned our focus to ensuring that a peer's reputation is credible. This led us to introduce a new type of consensus called "Proof-of-Review". Our proposed implementation is again based on Algorand, since its modular architecture allows for easy modifications, such as adding extra components; this time, however, we modified the engine. The proposed model then provides trust in evaluations (review and assessment credibility) and in those who provide them (reputation credibility) using a blockchain. We introduce a blacklisting component, which prevents malicious nodes from participating in the protocol, and a minimum-reputation component, which limits the influence of under-performing users. Our results showed that the proposed blockchain system maintains liveness and completeness. Specifically, blacklisting and the minimum-reputation requirement (when properly tuned) do not affect these properties. We note that the Proof-of-Review concept can be deployed in other types of applications with similar needs for trust in assessments and in the players providing them, such as sensor arrays, autonomous car groups (caravans), marketplaces, and more.
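The interplay of reputation-as-stake, blacklisting, and the minimum-reputation requirement can be sketched as an eligibility filter over consensus weights. This is a simplified illustration, not the modified Algorand engine itself.

```python
def committee_weights(peers, blacklist, min_reputation):
    """Map reputation directly to consensus stake: blacklisted nodes are
    excluded entirely, peers below the minimum reputation get no influence,
    and the remaining reputations are normalized into voting weights."""
    eligible = {p: rep for p, rep in peers.items()
                if p not in blacklist and rep >= min_reputation}
    total = sum(eligible.values())
    return {p: rep / total for p, rep in eligible.items()} if total else {}

peers = {"a": 0.9, "b": 0.4, "c": 0.05, "d": 0.8}
w = committee_weights(peers, blacklist={"d"}, min_reputation=0.1)
print(sorted(w))                   # ['a', 'b']: 'c' below threshold, 'd' blacklisted
print(round(sum(w.values()), 6))   # 1.0: remaining weights form a distribution
```

Liveness depends on the filter leaving enough honest stake eligible, which is why the thesis notes the threshold must be properly tuned.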
Adaptive network coding for IEEE 802.11s wireless systems in infrastructure mode
IEEE 802.11s infrastructure wireless mesh networks (commonly known as iWMNs) are composed of static wireless nodes capable of working in coordination to route data packets. In this way, the nodes collaborate to exchange information with each other. In addition, iWMNs can be interconnected with other network technologies and, in this way, help to wirelessly extend the coverage of these networks; for example, iWMNs are used today to extend the coverage of cellular or wired networks.
Thanks to these features, and also to their low infrastructure cost, iWMNs are considered today an excellent option to offer wireless Internet connectivity in geographical areas where the use of other network technologies is unfeasible. Despite the promising features of iWMNs, there are studies and results that cast doubt on their performance, since it has been documented that the performance of these networks can be affected by numerous factors, such as the use of TCP to transport information in wireless environments, transmission errors in the wireless medium, and access contention between network users. All these factors can degrade the performance of iWMNs and, consequently, affect the quality of experience for the users. In this doctoral thesis, some of these performance problems are addressed through a technique called adaptive network coding. With this technique, the nodes of an iWMN are allowed to combine several data packets and thus build an encoded packet; this packet contains the information from the original packets while requiring only one wireless transmission, reducing the use of the wireless medium and thereby increasing the capacity of the network. The proposed technique also seeks to adapt the coding process to the traffic conditions in the network by dynamically adjusting the time packets wait in a node before they can be combined, which reduces the coding delay. This proposal aims to substantially improve the performance of iWMNs, solving some of the problems that affect them. The evaluation of the proposal is carried out through simulations and numerical evaluations.
After a detailed analysis of the results, we find that iWMNs can improve their performance by using the adaptive network coding technique, since the number of wireless transmissions in the network is considerably reduced and, consequently: i) medium access contention decreases, ii) the probability of errors in the medium is reduced, and iii) the capacity of the network increases.
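The core coding operation, combining two packets into one transmission, is classically done with XOR (as in COPE-style schemes). The sketch below illustrates it, together with a hypothetical adaptive hold-time rule; the linear rule and its parameters are assumptions, not the thesis's actual policy.

```python
def xor_encode(p1, p2):
    """XOR two payloads into one coded packet (zero-padded to equal length).
    A neighbor that already holds one of the originals recovers the other
    by XOR-ing it against the coded packet."""
    n = max(len(p1), len(p2))
    a, b = p1.ljust(n, b"\x00"), p2.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

def coding_hold_time(base_ms, arrival_rate, max_rate):
    """Illustrative adaptive waiting time: under heavy traffic, coding
    partners arrive quickly, so a node shortens how long it holds a packet
    waiting for a coding opportunity, reducing the coding delay."""
    return base_ms * (1.0 - min(arrival_rate, max_rate) / max_rate)

coded = xor_encode(b"hello", b"world")
print(xor_encode(coded, b"world"))          # recovers b'hello' in one transmission
print(coding_hold_time(20.0, 80.0, 100.0))  # busier network, shorter hold
```

One coded transmission thus replaces two native ones, which is exactly the saving that reduces contention and increases capacity.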
On distributed ledger technology for the internet of things: design and applications
Distributed ledger technology (DLT) can be used to store information in such a way that no individual or organisation can compromise its veracity, contrary to a traditional centralised ledger. This nascent technology has received a great deal of attention from both researchers and practitioners in recent years, due to the vast array of open questions related to its design and the assortment of novel applications it unlocks. In this thesis, we are especially interested in the design of DLTs suitable for application in the domain of the internet of things (IoT), where factors such as efficiency, performance, and scalability are of paramount importance. This work confronts the challenges of designing IoT-oriented distributed ledgers through analysis of ledger properties, development of design tools, and the design of a number of core protocol components. We begin by introducing a class of DLTs whose data structures consist of directed acyclic graphs (DAGs) and which possess properties that make them particularly well suited to IoT applications. With a focus on the DAG structure, we then present analysis through mathematical modelling and simulations which provides new insights into the properties of this class of ledgers and allows us to propose novel security enhancements. Next, we shift our focus away from the DAG structure itself to another open problem for DAG-based distributed ledgers: access control. Specifically, we present a networking approach which removes the need for an expensive and inefficient mechanism known as Proof of Work, solving an open problem for IoT-oriented distributed ledgers. We then draw upon our analysis of the DAG structure to integrate and test our new access control mechanism with other core components of the DLT. Finally, we present a mechanism for orchestrating the interaction between users of a DLT and its operators, seeking to improve the usability of DLTs for IoT applications.
In the appendix, we present two projects also carried out during this PhD which showcase applications of this technology in the IoT domain.
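The defining feature of this class of ledgers, new transactions approving earlier ones so that the data structure grows as a DAG rather than a chain, can be sketched in a few lines. The two-parent tip selection and uniform-random choice below are illustrative assumptions, not the thesis's analyzed protocol.

```python
import random

class DagLedger:
    """Toy DAG-based ledger: each new transaction approves up to two current
    'tips' (transactions not yet approved by anyone), so the approval graph
    is a directed acyclic graph anchored at a genesis transaction."""
    def __init__(self):
        self.approvals = {0: []}   # genesis transaction approves nothing
        self.tips = {0}

    def attach(self, tx_id, rng=random):
        parents = rng.sample(sorted(self.tips), k=min(2, len(self.tips)))
        self.approvals[tx_id] = parents
        self.tips -= set(parents)  # approved transactions stop being tips
        self.tips.add(tx_id)

ledger = DagLedger()
random.seed(1)
for tx in range(1, 50):
    ledger.attach(tx)
# Acyclicity in this sketch: every parent was attached before its child
print(all(p < tx for tx, ps in ledger.approvals.items() for p in ps))
```

Real designs differ mainly in the tip-selection rule, which is where the security properties analyzed in the thesis come in.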
WiFi-Based Human Activity Recognition Using Attention-Based BiLSTM
Recently, significant efforts have been made to explore human activity recognition (HAR) techniques that use information gathered by existing indoor wireless infrastructures through WiFi signals, without requiring the monitored subject to carry a dedicated device. The key intuition is that different activities introduce different multipath effects in WiFi signals and generate different patterns in the time series of channel state information (CSI). In this paper, we propose and evaluate a full pipeline for a CSI-based human activity recognition framework covering 12 activities in three different spatial environments, using two deep learning models: ABiLSTM and CNN-ABiLSTM. Evaluation experiments demonstrate that the proposed models outperform state-of-the-art models. The experiments also show that the proposed models can be applied to other environments with different configurations, albeit with some caveats. The proposed ABiLSTM model achieves an overall accuracy of 94.03%, 91.96%, and 92.59% across the three target environments, while the proposed CNN-ABiLSTM model reaches an accuracy of 98.54%, 94.25%, and 95.09% across those same environments.
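Attention-based BiLSTM models typically score each time step's hidden state and pool the sequence into a fixed-size vector before classification. The minimal sketch below shows only that pooling step; the scoring function, dimensions, and toy values are assumptions, and the paper's exact architecture may differ.

```python
import math

def attention_pool(hidden_states, attn_vec):
    """Score each time step's (Bi)LSTM hidden state with a learned vector,
    softmax the scores into attention weights, and return the weighted sum
    as a fixed-size summary of the CSI time series."""
    scores = [sum(w * h for w, h in zip(attn_vec, hs)) for hs in hidden_states]
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]       # numerically stable softmax
    total = sum(exp)
    alphas = [e / total for e in exp]
    dim = len(hidden_states[0])
    context = [sum(a * hs[i] for a, hs in zip(alphas, hidden_states))
               for i in range(dim)]
    return context, alphas

states = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]     # toy hidden states over 3 steps
context, alphas = attention_pool(states, [1.0, 1.0])
print(round(sum(alphas), 6))     # attention weights form a distribution
print(alphas[1] == max(alphas))  # the highest-scoring step dominates
```

The pooled context vector then feeds a classifier head over the 12 activity classes.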
End-to-end active queue management with Named-Data Networking
The innovative information-centric Named-Data Networking (NDN) architecture provides a good opportunity to rethink many of the design decisions that are taken for granted in the Internet today. For example, active queue management (AQM) tasks have traditionally been implemented in the routers to alleviate network congestion before their buffers fill up. However, AQM operations could be performed on an end-to-end basis by taking advantage of NDN features. In this paper, we provide an implementation of an AQM algorithm for the NDN architecture that we use to drive a classical AIMD-based congestion control protocol at the receivers. To accomplish this, we take advantage of the 64-bit Congestion Mark field present in the link layer of NDN packets to encode both rate and delay information about each transmission queue along a network path. To make the solution scalable, this information is delivered stochastically, guaranteeing that receivers get accurate and updated information about every pertinent queue. This information is enough to implement the well-known controlled delay (CoDel) AQM algorithm. Simulation results show that our client-located CoDel implementation is able to react to congestion when the bottleneck queuing delay surpasses the 5 ms target set by the usual in-network CoDel implementation and, at the same time, obtain a fair and efficient share of the available transmission capacity.
Agencia estatal de investigación | Ref. PID2020-113240RB-I00 | Universidade de Vigo/CISU
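The receiver-side logic builds on CoDel's well-known control law (RFC 8289): start dropping once queuing delay has exceeded the target for a full interval, then space subsequent drops at interval divided by the square root of the drop count. The sketch below is a simplified rendering of that generic law, not the paper's NDN implementation.

```python
import math

TARGET = 0.005     # 5 ms sojourn-time target
INTERVAL = 0.100   # 100 ms initial interval

class CoDelSketch:
    """Simplified CoDel control law: once sojourn time stays above TARGET
    for INTERVAL, begin dropping, then drop again every
    INTERVAL / sqrt(count) to drive the standing queue down."""
    def __init__(self):
        self.first_above = None   # when sojourn first exceeded TARGET
        self.drop_count = 0
        self.next_drop = None

    def on_dequeue(self, now, sojourn):
        if sojourn < TARGET:      # queue drained: reset the controller
            self.first_above, self.next_drop, self.drop_count = None, None, 0
            return False
        if self.first_above is None:
            self.first_above = now
            self.next_drop = now + INTERVAL
        if now >= self.next_drop:
            self.drop_count += 1
            self.next_drop = now + INTERVAL / math.sqrt(self.drop_count)
            return True
        return False

codel = CoDelSketch()
# Persistent 20 ms queuing delay sampled every 10 ms for 300 ms
drops = [codel.on_dequeue(t * 0.01, sojourn=0.02) for t in range(30)]
print(drops[0], any(drops))  # no instant drop; drops begin after INTERVAL
```

In the paper's end-to-end variant, the receiver applies this logic to queue delay information carried in the Congestion Mark field instead of to its own buffer.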
Quality of service and dependability of cellular vehicular communication networks
Improving the dependability of mobile network applications is a complicated task for many reasons. In Germany especially, the development of cellular infrastructure has not always kept up with the growing demand, resulting in many blind spots that cause communication outages. However, even when infrastructure is available, the mobility of the users still poses a major challenge to the dependability of applications: as the user moves, the capacity of the channel can change substantially. This means that applications such as adjustable-bitrate video streaming cannot infer future performance from past download rates, as those only carry old information about the data rate at a different location.
In this work, we explore the use of 4G LTE for dependable communication in mobile vehicular scenarios. For this, we first look at the performance of LTE, especially in mobile environments, and how it has developed over time. We compare measurements performed several years apart and look at performance differences in urban and rural areas. We find that even though the continued development of the 4G standard has enabled better performance in theory, this has not always been reflected in real-life performance due to the slow development of infrastructure, especially along highways.
We also explore the possibility of performance prediction in LTE networks without the need to perform active measurements. For this, we look at the relationship between the measured signal quality and the achievable data rates and latencies. We find that while there is a strong correlation between some of the signal quality indicators and the achievable data rates, the relationship between them is stochastic, i.e., a higher signal quality makes better performance more probable but does not guarantee it. We then use our empirical measurement results as a basis for a model that uses signal quality measurements to predict a throughput distribution. The resulting estimate of the obtainable throughput can then be used in adjustable bitrate applications like video streaming to improve their dependability.
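The stochastic relationship between signal quality and throughput described above can be captured by a conditional empirical distribution. A minimal sketch follows; the bucketing scheme, indicator granularity, and sample values are assumptions, not the thesis's actual model.

```python
class ThroughputPredictor:
    """Bucket past measurements by a signal-quality indicator and treat each
    bucket's observed throughputs as an empirical conditional distribution:
    higher quality makes high throughput more probable, never certain."""
    def __init__(self):
        self.buckets = {}   # signal-quality bucket -> observed Mbit/s samples

    def observe(self, quality, mbps):
        self.buckets.setdefault(round(quality), []).append(mbps)

    def quantile(self, quality, q):
        """q-quantile of throughput seen at this signal quality, or None."""
        samples = sorted(self.buckets.get(round(quality), []))
        if not samples:
            return None
        return samples[min(len(samples) - 1, int(q * len(samples)))]

pred = ThroughputPredictor()
for mbps in [5, 8, 12, 20, 25]:        # hypothetical past measurements
    pred.observe(quality=-10, mbps=mbps)
print(pred.quantile(-10, 0.1))         # conservative low-tail estimate: 5
```

An application that wants dependability plans against a low quantile of this distribution rather than its mean.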
Mobile networks also confront TCP congestion control algorithms with a new challenge. Usually, senders use TCP congestion control to avoid congesting the network by sending too many packets and to divide the network bandwidth fairly. This is a challenging task, since the number of senders in the network is unknown and the network load can change at any time. In mobile vehicular networks, TCP congestion control faces the additional problem of a constantly changing capacity: as users change their location, the quality of the channel also changes, and the capacity of the channel can drop drastically even when the change in location is very small. Additionally, in our measurements we observed that packet losses occur only rarely (instead, packets are delayed and retransmitted), meaning that loss-based algorithms like Reno or CUBIC can be at a significant disadvantage. In this thesis, we compare several popular congestion control algorithms in both stationary and mobile scenarios. We find that many loss-based algorithms tend to cause bufferbloat and thus overly increase delays, while many delay-based algorithms tend to underestimate the network capacity and thus achieve data rates that are too low. The algorithm that performed best in our measurements was TCP BBR, as it was able to utilize the full capacity of the channel without causing bufferbloat and to react to changes in capacity by adjusting its window. However, since TCP BBR can be unfair towards other algorithms in wired networks, its use could be problematic.
Finally, we also propose how our model for data rate prediction can be used to improve the dependability of mobile video streaming. For this, we develop an algorithm for adaptive bitrate streaming that guarantees that the video freeze probability does not exceed a pre-selected upper threshold. For the algorithm to work, it needs to know the distribution of obtainable throughput. We use a simulation to verify the function of this algorithm, using a distribution obtained through the previously proposed data rate prediction algorithm. In our simulation, the algorithm limited the video freeze probability as intended. However, it did so at the cost of frequent switches of video bitrate, which can diminish the quality of user experience. In future work, we want to explore the possibility of different algorithms that offer a trade-off between the video freeze probability and the frequency of bitrate switches.
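The bitrate selection rule implied by such a guarantee can be sketched as follows: pick the highest bitrate whose starvation probability under the predicted throughput distribution stays within the chosen bound. This is a simplified illustration, not the thesis's actual algorithm.

```python
def pick_bitrate(bitrates, throughput_samples, max_freeze_prob):
    """Choose the highest bitrate whose probability of exceeding the
    predicted throughput (and thus freezing) stays within the pre-selected
    bound; fall back to the lowest bitrate if none qualifies."""
    n = len(throughput_samples)
    best = min(bitrates)
    for rate in sorted(bitrates):
        p_starve = sum(1 for t in throughput_samples if t < rate) / n
        if p_starve <= max_freeze_prob:
            best = rate
    return best

# Hypothetical predicted throughput distribution in Mbit/s
samples = [4, 6, 8, 10, 12, 14, 16, 18, 20, 22]
print(pick_bitrate([2, 5, 8, 12, 20], samples, max_freeze_prob=0.2))  # 8
```

Because the chosen rate tracks a quantile of a shifting distribution, frequent re-selection produces exactly the bitrate switching the thesis identifies as the cost of the guarantee.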
Nanoporous Gold: From Structure Evolution to Functional Properties in Catalysis and Electrochemistry
Nanoporous gold (NPG) is characterized by a bicontinuous network of nanometer-sized metallic struts and interconnected pores, formed spontaneously by oxidative dissolution of the less noble element from gold alloys. The resulting material exhibits decent catalytic activity for low-temperature aerobic total as well as partial oxidation reactions, the oxidative coupling of methanol to methyl formate being the prototypical example. This review not only provides a critical discussion of ways to tune the morphology and composition of this material and the implications for catalysis and electrocatalysis, but also reviews the current mechanistic understanding of the partial oxidation of methanol, drawing on information from quantum chemical studies, model studies on single-crystal surfaces, gas phase catalysis, aerobic liquid phase oxidation, and electrocatalysis. In this respect, a particular focus is on mechanistic aspects that are not yet well understood. Apart from the mechanistic aspects of catalysis, best-practice examples with respect to material preparation and characterization are discussed. These can improve the reproducibility of material properties such as catalytic activity and selectivity, which, together with the scope of accessible reactions, are identified as the main challenges for a broader application of NPG in target-oriented organic synthesis.