Cybersecurity in Motion: A Survey of Challenges and Requirements for Future Test Facilities of CAVs
The way we travel is changing rapidly, and Cooperative Intelligent Transportation Systems (C-ITSs) are at the forefront of this evolution. However, the adoption of C-ITSs introduces new risks and challenges, making cybersecurity a top priority for ensuring safety and reliability. Building on this premise, this paper introduces an envisaged Cybersecurity Centre of Excellence (CSCE) designed to bolster the researching, testing, and evaluation of the cybersecurity of C-ITSs. We explore the design, functionality, and challenges of the CSCE's testing facilities, outlining the technological, security, and societal requirements. Through a thorough survey and analysis, we assess the effectiveness of these systems in detecting and mitigating potential threats, highlighting their flexibility to adapt to future C-ITSs. Finally, we identify currently unresolved challenges in various C-ITS domains, with the aim of motivating further research into the cybersecurity of C-ITSs.
Analysis and Design of Non-Orthogonal Multiple Access (NOMA) Techniques for Next Generation Wireless Communication Systems
The current surge in wireless connectivity, anticipated to amplify significantly in future wireless technologies, brings a new wave of users. Given the impracticality of an endlessly expanding bandwidth, there is a pressing need for communication techniques that efficiently serve this burgeoning user base with limited resources. Multiple Access (MA) techniques, notably Orthogonal Multiple Access (OMA), have long addressed bandwidth constraints. However, with escalating user numbers, OMA's orthogonality becomes limiting for emerging wireless technologies. Non-Orthogonal Multiple Access (NOMA), employing superposition coding, serves more users within the same bandwidth as OMA by allocating different power levels to users, whose signals can then be separated at the receiver by exploiting the power difference between them, thus offering superior spectral efficiency and massive connectivity. This thesis examines the integration of NOMA techniques with cooperative relaying, EXtrinsic Information Transfer (EXIT) chart analysis, and deep learning for enhancing 6G and beyond communication systems. The adopted methodology aims to optimize the systems' performance, spanning from bit-error rate (BER) versus signal-to-noise ratio (SNR) to overall system efficiency and data rates. In the cooperative relaying context, NOMA notably improved diversity gains, thereby proving the superiority of combining NOMA with cooperative relaying over NOMA alone. With EXIT chart analysis, NOMA achieved low BER at mid-range SNR as well as optimal user fairness in the power allocation stage. Additionally, employing a trained neural network enhanced signal detection for NOMA in the deep learning scenario, producing a simpler signal detection scheme that addresses NOMA's complex receiver problem.
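The core NOMA mechanism summarized above, power-domain superposition at the transmitter and successive interference cancellation (SIC) at the receiver, can be sketched for two BPSK users. The power split, noise level, and two-user setup are illustrative assumptions, not the thesis' actual system model:

```python
import numpy as np

def noma_superpose(s_near, s_far, p_near=0.2, p_far=0.8):
    """Power-domain superposition coding: the far (weak-channel) user
    gets the larger power share so its signal dominates the sum."""
    assert abs(p_near + p_far - 1.0) < 1e-9
    return np.sqrt(p_near) * s_near + np.sqrt(p_far) * s_far

def sic_detect_near(y, p_near=0.2, p_far=0.8):
    """SIC at the near user: (1) detect the far user's stronger BPSK
    symbols, (2) subtract their reconstructed contribution, (3) detect
    the near user's own symbols from the residual."""
    s_far_hat = np.sign(y)                     # far signal dominates the sum
    residual = y - np.sqrt(p_far) * s_far_hat  # cancel the far user's part
    s_near_hat = np.sign(residual)
    return s_near_hat, s_far_hat

rng = np.random.default_rng(0)
s_near = rng.choice([-1.0, 1.0], size=1000)    # BPSK symbols, near user
s_far = rng.choice([-1.0, 1.0], size=1000)     # BPSK symbols, far user
y = noma_superpose(s_near, s_far) + 0.05 * rng.standard_normal(1000)
print("near-user BER:", np.mean(s_near_hat_all := sic_detect_near(y)[0] != s_near))
print("far-user  BER:", np.mean(sic_detect_near(y)[1] != s_far))
```

At this noise level both users decode essentially error-free; raising the noise or shrinking the power gap between the users degrades SIC, which is exactly the trade-off the power allocation stage must balance.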
Exploring the inner mechanisms of 5G networks for orchestrating container-based applications in edge data centers
One of the novel features of mobile 5G networks is what is commonly known as "Ultra Reliable Low Latency" communication.
To achieve the "Low Latency" part, it is necessary to introduce processing and storage capabilities closer to the radio access network, thus introducing Edge data centers.
An Edge data center will be capable of hosting third-party applications and a user of these applications can access them using the cellular mobile network.
This makes the network path between the user equipment (UE) and the application short in terms of physical distance and network hops, thus reducing the latency dramatically.
This thesis looks into these new features of the 5th-generation mobile networks to establish if, and how, they can be used to orchestrate container-based applications deployed at edge data centers.
The suggested orchestration mechanism is described in more detail in the thesis body; as an overview, it combines the users' positions, knowledge of which applications the users are accessing, and information about where these applications reside, in order to move applications between edge data centers.
One of the 5G exploration findings was that the location of users in a 5G network can be determined using the Network Exposure Function (NEF) API.
The NEF is one of the new 5G network functions and enables trusted third-party actors to interact with the 5G core through a publisher-subscriber-oriented API.
The proposed orchestration strategy involves calculating the "weighted average location" of 5G users who have accessed the specific application residing in the Edge within a specified time frame.
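A minimal sketch of this strategy might look as follows; the record format, planar coordinates, and edge-site names are hypothetical illustrations, not the thesis' actual data model:

```python
from dataclasses import dataclass
import math

@dataclass
class AccessRecord:
    """Hypothetical record of one user's access to an edge application:
    the user's last known (x, y) position and a weight, e.g. the number
    of requests observed within the time frame."""
    x: float
    y: float
    weight: float

def weighted_average_location(records):
    """Weighted centroid of the users that accessed the application."""
    total = sum(r.weight for r in records)
    cx = sum(r.x * r.weight for r in records) / total
    cy = sum(r.y * r.weight for r in records) / total
    return cx, cy

def best_edge_site(records, edge_sites):
    """Pick the edge data center closest to the weighted user centroid."""
    cx, cy = weighted_average_location(records)
    return min(edge_sites, key=lambda s: math.hypot(s[1][0] - cx, s[1][1] - cy))

records = [AccessRecord(0, 0, 10), AccessRecord(10, 0, 30)]
sites = [("edge-a", (0.0, 0.0)), ("edge-b", (8.0, 0.0))]
print(best_edge_site(records, sites)[0])  # centroid (7.5, 0) is nearest edge-b
```

The orchestrator would periodically recompute this centroid from location events and migrate the application when a different edge site becomes the closest one.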
A live 5G network with a stand-alone (SA) core was not available at the time of writing and part of the thesis work has therefore been to identify if there exist network emulators with the functionality needed to reach the goal of this thesis, i.e. design and implement the orchestrator based on interaction with the network.
More specifically: can we find a NEF emulator that can be configured to give us network data related to user equipment location? Unfortunately, the three alternatives considered, Open5Gs, NEF_emulator, and Nokia's Open5Glab, do not fully meet our requirements for generating user events.
Open5Gs, an open-source 5G network implementation, lacks the entire NEF north-bound implementation; NEF_emulator has a limited implementation and integration complexities; and Nokia's Open5Glab's simulated users are inactive and thus do not generate sufficient data.
Given the absence of suitable emulators to generate the needed data, the thesis pivoted to also include the design and implementation of a mobile network emulator with the following key components: a mobile network abstraction that encompasses crucial elements from 5G, such as users and radio access nodes, allowing users to connect to the mobile network; a network abstraction that hosts emulated edge data centers and the corresponding applications accessible to connected users; and mobile network exposure that exposes mobile network core events through a simplified NEF north-bound API implementation.
Finally, the thesis concludes by implementing the proposed orchestration strategy using the mobile network emulator, demonstrating that orchestration can effectively reduce the end-to-end latency from users to applications, as evidenced by the obtained results.
Design and Real-World Evaluation of Dependable Wireless Cyber-Physical Systems
The ongoing effort for an efficient, sustainable, and automated interaction between humans, machines, and our environment will make cyber-physical systems (CPS) an integral part of the industry and our daily lives. At their core, CPS integrate computing elements, communication networks, and physical processes that are monitored and controlled through sensors and actuators. New and innovative applications become possible by extending or replacing static and expensive cable-based communication infrastructures with wireless technology. The flexibility of wireless CPS is a key enabler for many envisioned scenarios, such as intelligent factories, smart farming, personalized healthcare systems, autonomous search and rescue, and smart cities.
High dependability, efficiency, and adaptivity requirements complement the demand for wireless and low-cost solutions in such applications. For instance, industrial and medical systems should work reliably and predictably with performance guarantees, even if parts of the system fail. Because emerging CPS will feature mobile and battery-driven devices that can execute various tasks, the systems must also quickly adapt to frequently changing conditions. Moreover, as applications become ever more sophisticated, featuring compact embedded devices that are deployed densely and at scale, efficient designs are indispensable to achieve desired operational lifetimes and satisfy high bandwidth demands.
Meeting these partly conflicting requirements, however, is challenging due to imperfections of wireless communication and resource constraints along several dimensions, for example, computing, memory, and power constraints of the devices. More precisely, frequent and correlated message losses paired with very limited bandwidth and varying delays for the message exchange significantly complicate the control design. In addition, since communication ranges are limited, messages must be relayed over multiple hops to cover larger distances, such as an entire factory. Although the resulting mesh networks are more robust against interference, efficient communication is a major challenge as wireless imperfections get amplified, and significant coordination effort is needed, especially if the networks are dynamic.
CPS combine various research disciplines, which are often investigated in isolation, ignoring their complex interaction. However, to address this interaction and build trust in the proposed solutions, evaluating CPS using real physical systems and wireless networks paired with formal guarantees of a system’s end-to-end behavior is necessary. Existing works that take this step can only satisfy a few of the abovementioned requirements. Most notably, multi-hop communication has only been used to control slow physical processes while providing no guarantees. One of the reasons is that the current communication protocols are not suited for dynamic multi-hop networks.
This thesis closes the gap between existing works and the diverse needs of emerging wireless CPS. The contributions address different research directions and are split into two parts. In the first part, we specifically address the shortcomings of existing communication protocols and make the following contributions to provide a solid networking foundation:
• We present Mixer, a communication primitive for the reliable many-to-all message exchange in dynamic wireless multi-hop networks. Mixer runs on resource-constrained low-power embedded devices and combines synchronous transmissions and network coding for a highly scalable and topology-agnostic message exchange. As a result, it supports mobile nodes and can serve any possible traffic patterns, for example, to efficiently realize distributed control, as required by emerging CPS applications.
• We present Butler, a lightweight and distributed synchronization mechanism with formally guaranteed correctness properties to improve the dependability of synchronous transmissions-based protocols. These protocols require precise time synchronization provided by a specific node. Upon failure of this node, the entire network cannot communicate. Butler removes this single point of failure by quickly synchronizing all nodes in the network without affecting the protocols’ performance.
In the second part, we focus on the challenges of integrating communication and various control concepts using classical time-triggered and modern event-based approaches. Based on the design, implementation, and evaluation of the proposed solutions using real systems and networks, we make the following contributions, which in many ways push the boundaries of previous approaches:
• We are the first to demonstrate and evaluate fast feedback control over low-power wireless multi-hop networks. Essential for this achievement is a novel co-design and integration of communication and control. Our wireless embedded platform tames the imperfections impairing control, for example, message loss and varying delays, and considers the resulting key properties in the control design. Furthermore, the careful orchestration of control and communication tasks enables real-time operation and makes our system amenable to an end-to-end analysis. Due to this, we can provably guarantee closed-loop stability for physical processes with linear time-invariant dynamics.
• We propose control-guided communication, a novel co-design for distributed self-triggered control over wireless multi-hop networks. Self-triggered control can save energy by transmitting data only when needed. However, there are no solutions that bring those savings to multi-hop networks and that can reallocate freed-up resources, for example, to other agents. Our control system informs the communication system of its transmission demands ahead of time so that communication resources can be allocated accordingly. Thus, we can transfer the energy savings from the control to the communication side and achieve an end-to-end benefit.
• We present a novel co-design of distributed control and wireless communication that resolves overload situations in which the communication demand exceeds the available bandwidth. As systems scale up, featuring more agents and higher bandwidth demands, the available bandwidth will be quickly exceeded, resulting in overload. While event-triggered control and self-triggered control approaches reduce the communication demand on average, they cannot prevent situations in which potentially all agents want to communicate simultaneously. We address this limitation by dynamically allocating the available bandwidth to the agents with the highest need. Thus, we can formally prove that our co-design guarantees closed-loop stability for physical systems with stochastic linear time-invariant dynamics.
Abstract
Acknowledgements
List of Abbreviations
List of Figures
List of Tables
1 Introduction
1.1 Motivation
1.2 Application Requirements
1.3 Challenges
1.4 State of the Art
1.5 Contributions and Road Map
2 Mixer: Efficient Many-to-All Broadcast in Dynamic Wireless Mesh Networks
2.1 Introduction
2.2 Overview
2.3 Design
2.4 Implementation
2.5 Evaluation
2.6 Discussion
2.7 Related Work
3 Butler: Increasing the Availability of Low-Power Wireless Communication Protocols
3.1 Introduction
3.2 Motivation and Background
3.3 Design
3.4 Analysis
3.5 Implementation
3.6 Evaluation
3.7 Related Work
4 Feedback Control Goes Wireless: Guaranteed Stability over Low-Power Multi-Hop Networks
4.1 Introduction
4.2 Related Work
4.3 Problem Setting and Approach
4.4 Wireless Embedded System Design
4.5 Control Design and Analysis
4.6 Experimental Evaluation
4.A Control Details
5 Control-Guided Communication: Efficient Resource Arbitration and Allocation in Multi-Hop Wireless Control Systems
5.1 Introduction
5.2 Problem Setting
5.3 Co-Design Approach
5.4 Wireless Communication System Design
5.5 Self-Triggered Control Design
5.6 Experimental Evaluation
6 Scaling Beyond Bandwidth Limitations: Wireless Control With Stability Guarantees Under Overload
6.1 Introduction
6.2 Problem and Related Work
6.3 Overview of Co-Design Approach
6.4 Predictive Triggering and Control System
6.5 Adaptive Communication System
6.6 Integration and Stability Analysis
6.7 Testbed Experiments
6.A Proof of Theorem 4
6.B Usage of the Network Bandwidth for Control
7 Conclusion and Outlook
7.1 Contributions
7.2 Future Directions
Bibliography
List of Publications
ACiS: smart switches with application-level acceleration
Network performance has contributed fundamentally to the growth of supercomputing over the past decades. In parallel, High Performance Computing (HPC) peak performance has depended, first, on ever faster and denser CPUs, and then on increasing density alone. As operating frequency, and now feature size, have levelled off, two new approaches are becoming central to achieving higher net performance: configurability and integration. Configurability enables hardware to map to the application, as well as vice versa. Integration enables system components that have generally been single-function (e.g., a network that transports data) to take on additional functionality, e.g., also operating on that data. More generally, integration enables compute-everywhere: not just in the CPU and accelerator, but also in the network and, more specifically, in the communication switches.
In this thesis, we propose four novel methods of enhancing HPC performance through Advanced Computing in the Switch (ACiS). More specifically, we propose various flexible and application-aware accelerators that can be embedded into or attached to existing communication switches to improve the performance and scalability of HPC and Machine Learning (ML) applications. We follow a modular design discipline through introducing composable plugins to successively add ACiS capabilities.
In the first work, we propose an inline accelerator to communication switches for user-definable collective operations. MPI collective operations can often be performance killers in HPC applications; we seek to solve this bottleneck by offloading them to reconfigurable hardware within the switch itself. We also introduce a novel mechanism that enables the hardware to support MPI communicators of arbitrary shape and that is scalable to very large systems.
In the second work, we propose a look-aside accelerator for communication switches that is capable of processing packets at line rate. Functions requiring loops and state are addressed in this method. The proposed in-switch accelerator is based on RISC-V-compatible Coarse-Grained Reconfigurable Arrays (CGRAs).
To facilitate usability, we have developed a framework to compile user-provided C/C++ codes to appropriate back-end instructions for configuring the accelerator.
In the third work, we extend ACiS to support fused collectives and the combining of collectives with map operations. We observe that there is an opportunity to fuse communication (collectives) with computation. Since the computation can vary across applications, in this method the ACiS support should be programmable.
In the fourth work, we propose that switches with ACiS support can control and manage the execution of applications, i.e., that the switch be an active device with decision-making capabilities. Switches have a central view of the network; they can collect telemetry information and monitor application behavior and then use this information for control, decision-making, and coordination of nodes.
We evaluate the feasibility of ACiS through extensive RTL-based simulation as well as deployment in an open-access cloud infrastructure. Using this simulation framework, when considering a Graph Convolutional Network (GCN) application as a case study, an average speedup of 3.4x across five real-world datasets is achieved on 24 nodes compared to a CPU cluster without ACiS capabilities.
Flashpoint: A Low-latency Serverless Platform for Deep Learning Inference Serving
Recent breakthroughs in Deep Learning (DL) have led to high demand for executing inferences in interactive services such as ChatGPT and GitHub Copilot. However, these interactive services require low-latency inferences, which can only be met with GPUs and result in exorbitant operating costs. For instance, ChatGPT reportedly requires millions of U.S. dollars in cloud GPUs to serve its 1+ million users. A potential solution to meet low-latency requirements with acceptable costs is to use serverless platforms. These platforms automatically scale resources to meet user demands. However, current serverless systems have long cold starts, which worsen with larger DL models and lead to poor performance during bursts of requests. Meanwhile, the demand for larger and larger DL models makes it more challenging to deliver an acceptable user experience cost-effectively. While current systems over-provision GPUs to address this issue, they incur high costs in idle resources, which greatly reduces the benefit of using a serverless platform.
In this thesis, we introduce Flashpoint, a GPU-based serverless platform that serves DL inferences with low latencies. Flashpoint achieves this by reducing cold start durations, especially for large DL models, making serverless computing feasible for latency-sensitive DL workloads. To reduce cold start durations, Flashpoint reduces download times by sourcing the DL model data from within the compute cluster rather than slow cloud storage. Additionally, Flashpoint minimizes in-cluster network congestion from redundant packet transfers of the same DL model to multiple machines with multicasting. Finally, Flashpoint also reduces cold start durations by automatically partitioning models and deploying them in parallel on multiple machines. The reduced cold start durations achieved by Flashpoint enable the platform to scale resource allocations elastically and complete requests with low latencies without over-provisioning expensive GPU resources.
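As a back-of-the-envelope illustration of why in-cluster sourcing and parallel partitioning shrink the download phase of a cold start, consider a toy linear bandwidth model; all numbers and the model itself are illustrative assumptions, not Flashpoint measurements:

```python
def cold_start_seconds(model_gb, cloud_gbps, cluster_gbps, partitions=1,
                       in_cluster=True):
    """Toy model of the download phase of a cold start: the model is
    fetched either from cloud storage or from a peer inside the cluster,
    optionally split into `partitions` shards loaded in parallel on
    separate machines. Purely illustrative, ignoring protocol overheads."""
    link_gbps = cluster_gbps if in_cluster else cloud_gbps
    shard_gb = model_gb / partitions      # shards download in parallel
    return shard_gb * 8 / link_gbps       # GB -> Gb, divide by link rate

# An 11 GB model: slow cloud storage vs. an in-cluster source, 4-way split.
baseline = cold_start_seconds(11, cloud_gbps=1, cluster_gbps=25, in_cluster=False)
improved = cold_start_seconds(11, cloud_gbps=1, cluster_gbps=25, partitions=4)
print(f"{baseline:.1f}s -> {improved:.1f}s")  # 88.0s -> 0.9s
```

Even this crude model shows the two effects compounding: a faster source divides the download time by the bandwidth ratio, and partitioning divides it again by the number of parallel shards.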
We perform large-scale data center simulations that were parameterized with measurements from our prototype implementations. We evaluate the system using six state-of-the-art DL models ranging from 499 MB to 11 GB in size. We also measure the performance of the system on representative real-world traces from Twitter and Microsoft Azure. Our results in the full-scale simulations show that Flashpoint achieves an arithmetic mean of 93.51% shorter average cold start durations, leading to 75.42% and 66.90% respective reductions in average and 99th-percentile end-to-end request latencies across the DL models with the same amount of resources. These results show that Flashpoint boosts the performance of serving DL inferences on a serverless platform without increasing costs.
Intrusion detection system in software-defined networks
Double-degree master's programme with UTFPR - Universidade Tecnológica Federal do Paraná.
Software-Defined Networking (SDN) technologies represent a recent cutting-edge paradigm in network management, offering unprecedented flexibility and scalability. As the adoption of SDN continues to grow, so does the urgency of studying methods to enhance its security. Understanding and fortifying SDN security is critically important, given its pivotal role in the modern digital ecosystem. With the ever-evolving threat landscape, research into innovative security measures is essential to ensure the integrity, confidentiality, and availability of network resources in this dynamic and transformative technology, ultimately safeguarding the reliability and functionality of our interconnected world. This research presents a novel approach to enhancing security in Software-Defined Networking through the development of an initial Intrusion Detection System (IDS). The IDS offers a scalable solution, facilitating the transmission and storage of network traffic with robust support for failure recovery across multiple nodes. Additionally, an innovative analysis module incorporates artificial intelligence (AI) to predict the nature of network traffic, effectively distinguishing between malicious and benign data. The system integrates a diverse range of technologies and tools, enabling the processing and analysis of network traffic data from PCAP files, thus contributing to the reinforcement of SDN security.
Resilient and Scalable Forwarding for Software-Defined Networks with P4-Programmable Switches
Traditional networking devices support only fixed features and limited configurability.
Network softwarization leverages programmable software and hardware platforms to remove those limitations.
In this context, the concept of programmable data planes allows the packet processing pipeline of networking devices to be programmed directly and custom control plane algorithms to be created.
This flexibility enables the design of novel networking mechanisms where the status quo struggles to meet the high demands of next-generation networks like 5G, the Internet of Things, cloud computing, and Industry 4.0.
P4 is the most popular technology to implement programmable data planes.
However, programmable data planes, and in particular, the P4 technology, emerged only recently.
Thus, P4 support for some well-established networking concepts is still lacking and several issues remain unsolved due to the different characteristics of programmable data planes in comparison to traditional networking.
The research of this thesis focuses on two open issues of programmable data planes.
First, it develops resilient and efficient forwarding mechanisms for the P4 data plane, as the state of the art offers no satisfactory best practices yet.
Second, it enables BIER in high-performance P4 data planes.
BIER is a novel, scalable, and efficient transport mechanism for IP multicast traffic which so far has only very limited support on high-performance forwarding platforms.
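The basic BIER forwarding step (after RFC 8279) can be illustrated with a short sketch; the bit assignment and per-neighbor forwarding bit masks below are illustrative:

```python
def bier_forward(bitstring, fbm_by_neighbor):
    """Toy BIER forwarding step: the packet header carries a bitstring
    with one bit per egress router. For each neighbor, forward a copy
    whose bitstring is masked down to the egress routers reached via
    that neighbor, and clear those bits from the remaining bitstring so
    no egress router receives duplicates."""
    copies = {}
    remaining = bitstring
    for neighbor, fbm in fbm_by_neighbor.items():
        masked = remaining & fbm
        if masked:
            copies[neighbor] = masked
            remaining &= ~fbm  # these egresses are now served
    return copies

# Egress routers 1..4 map to bits 0..3; neighbor A reaches routers {1, 2},
# neighbor B reaches routers {3, 4}. A packet destined for routers 2 and 4:
fbms = {"A": 0b0011, "B": 0b1100}
print(bier_forward(0b1010, fbms))  # {'A': 2, 'B': 8}
```

Because all multicast group state lives in the packet's bitstring rather than in per-group tables, the forwarding state per router stays small, which is the scalability property that makes BIER attractive for programmable data planes.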
The main results of this thesis are published as eight peer-reviewed publications and one post-publication peer-reviewed publication. The results cover the development of suitable resilience mechanisms for P4 data planes, the development and implementation of resilient BIER forwarding in P4, and extensive evaluations of all developed and implemented mechanisms. Furthermore, the results contain a comprehensive P4 literature study.
Two more peer-reviewed papers contain additional content that is not directly related to the main results.
They implement congestion avoidance mechanisms in P4 and develop a scheduling concept to find cost-optimized load schedules based on day-ahead forecasts.
Rethinking Film Festivals in the Pandemic Era and After
‘Rethinking Film Festivals’ explores how COVID-19 intervened in the film festival circuit by disrupting regular industry cycles and allowing new ways of reaching audiences to flourish. Films found other distribution channels, often online, while the festival format proved particularly vulnerable in a one-and-a-half-meter society. The search for a full-fledged substitute for live events and shared experiences involved all kinds of experimentation and an acceleration of digital developments that were already underway. The book documents how different festivals in different local contexts were affected differently by the pandemic and reacted differently to it as well. At the same time, the case studies confirm the interconnectedness and transnational relationships inherent in globalised media systems. Finally, the book seizes the momentum of the crisis to make the case for sustainable interventions: festivals must address their ecological footprint, decolonise their organisations, and ensure that their history and heritage are safeguarded for the future.
LIPIcs, Volume 261, ICALP 2023, Complete Volume