7,793 research outputs found
TANDEM: taming failures in next-generation datacenters with emerging memory
The explosive growth of online services, leading to unforeseen scales, has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery, minimizing service interruptions.
To support recovery, applications must take additional measures during failure-free execution to maintain a recoverable state of their data and computation logic. However, these precautionary measures have
severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice.
Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their distinct architectural attributes, which differ significantly from those of traditional memory devices, introduce new semantic challenges for
implementing recovery, complicating correctness and programmability.
Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges.
When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability).
This thesis aims to address these challenges through the following approaches: (a)
defining precise consistency models that formally specify correct end-to-end semantics
in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery.
We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor's load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data hardly persist in any specific order, jeopardizing recovery and correctness. Therefore, recovery needs primitives that explicitly control the order of updates to NVM (known as persistency models). We outline the precise specification of a novel persistency model – Release Persistency (RP) – that provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To efficiently enforce RP, we propose a novel microarchitectural mechanism,
lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal performance overhead.
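The ordering hazard that persistency models address can be illustrated with a toy enumeration (a didactic sketch; RP's actual semantics are richer than this): without ordering control, any subset of pending stores may have reached NVM at the moment of a crash, so the link that publishes a node can persist before the node's payload.

```python
def crash_states(writes, barriers):
    """Enumerate the NVM states reachable after a crash.

    writes: program-order stores as (addr, value) pairs.
    barriers: indices b such that every write before b must persist
    before any write at or after b (a release-style persist barrier).
    Without barriers, caches may write back lines in any order, so any
    subset of the stores may have reached NVM.
    """
    states = set()
    n = len(writes)
    for mask in range(2 ** n):
        persisted = [i for i in range(n) if mask >> i & 1]
        ok = all(i in persisted
                 for b in barriers
                 for j in persisted if j >= b
                 for i in range(b))
        if ok:
            mem = {}
            for i in persisted:
                addr, val = writes[i]
                mem[addr] = val
            states.add(frozenset(mem.items()))
    return states

# Publishing a node in a log-free structure: write the payload,
# then link it into the structure.
writes = [("node.val", 42), ("head", "node")]
dangling = frozenset([("head", "node")])   # link persisted, payload lost

print(dangling in crash_states(writes, barriers=set()))  # True: unsafe
print(dangling in crash_states(writes, barriers={1}))    # False: ordered
```

With the barrier before the link, recovery can never observe a pointer to unpersisted data, which is exactly the kind of guarantee a persistency model lets software state declaratively.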
We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers, offering a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where compute and memory are independent. However, there is a challenge: disaggregated memory (DM) does not support RPC-style protocols, mandating one-sided transaction protocols. Exacerbating the problem, one-sided transactions expose critical low-level ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes).
Pandora is the first one-sided transactional protocol to ensure correct, non-blocking, and fast recovery in DKVSes. Experiments with our implementation demonstrate that Pandora achieves fast recovery and high availability while causing minimal disruption to services.
Finally, we introduce a novel targeted litmus-testing framework – DART – to validate the end-to-end correctness of transactional protocols with recovery. Using DART's targeted testing capabilities, we have found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, thereby eliminating the need for programmer intervention.
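The essence of black-box crash testing can be sketched as follows (a generic illustration with an invented toy transaction, not DART's actual design): replay a protocol's durable updates, inject a crash after every prefix, run the protocol's recovery routine, and check an application invariant.

```python
def run_with_crashes(steps, recover, invariant):
    """Black-box crash litmus test (generic sketch, not DART itself).

    steps: functions that each mutate durable state in place.
    recover: the protocol's recovery routine, state -> state.
    invariant: predicate that must hold after recovery.
    Returns the crash points that violate the invariant.
    """
    failures = []
    for crash_at in range(len(steps) + 1):
        state = {}                      # fresh durable state
        for step in steps[:crash_at]:   # execute a prefix, then "crash"
            step(state)
        if not invariant(recover(state)):
            failures.append(crash_at)
    return failures

# Toy transaction: move 10 units from a to b, logging intent first.
def s1(st): st["log"] = ("a", "b", 10)
def s2(st): st["a"] = 90
def s3(st): st["b"] = 110
def s4(st): st.pop("log")

def recover(st):
    if "log" in st:                    # undo an incomplete transfer
        st.pop("a", None); st.pop("b", None); st.pop("log")
    return st

def conserved(st):                     # both sides updated, or neither
    return ("a" in st) == ("b" in st)

print(run_with_crashes([s1, s2, s3, s4], recover, conserved))  # []
print(run_with_crashes([s2, s3], recover, conserved))          # [1]
```

The unlogged variant fails at exactly one crash point, which is the kind of counterexample a targeted litmus test surfaces without any knowledge of the protocol's internals.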
Deep generative models for network data synthesis and monitoring
Measurement and monitoring are fundamental tasks in all networks, enabling the downstream management and optimization of the network.
Although networks inherently generate abundant monitoring data, accessing and measuring that data effectively is another story. The challenges are manifold. First, network monitoring data is inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive as networks grow, e.g., in the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements.
The contributions made in this thesis significantly advance the state of the art in
the domain of network measurement and monitoring techniques. Overall, we leverage
cutting-edge machine learning technology, deep generative modeling, throughout the
entire thesis. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which only requires open-source contextual data during inference (e.g., land use information and population distribution). Second, we develop an efficient drive-testing system – GENDT – based on a generative model that combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system with latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.
Simultaneous Estimation of Vehicle Sideslip and Roll Angles Using an Event-Triggered-Based IoT Architecture
In recent years, there has been a significant integration of advanced technology into the automotive industry, aimed primarily at enhancing safety and ride comfort. While a notable proportion of these driver-assist systems focuses on skid prevention, insufficient attention has been paid to other crucial scenarios, such as rollovers. Accurate estimation of the sideslip and roll angles plays a vital role in ensuring vehicle control and safety, making these parameters essential, especially with the rise of modern technologies that incorporate networked communication and distributed computing. Furthermore, there is a lag in the transmission of information between the various vehicle systems, including sensors, actuators, and controllers. This paper outlines the design of an IoT architecture that accurately estimates the sideslip and roll angles of a vehicle while addressing network transmission delays with a networked control system and an event-triggered communication scheme. Experimental results are presented to validate the performance of the proposed IoT architecture. The event-triggered scheme of the IoT solution is used to decrease data transmission and prevent network overload.
Funding: Grant PID2022-136468OB-I00 funded by MCIN/AEI/10.13039/501100011033 and by "ERDF A way of making Europe".
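The event-triggered idea can be sketched as a send-on-delta rule (an illustrative toy with invented samples and threshold, not the paper's actual estimator): a node transmits its state only when it deviates from the last transmitted value by more than a threshold, cutting traffic while bounding the receiver's error.

```python
def event_triggered(samples, threshold):
    """Transmit a sample only when it drifts more than `threshold`
    from the last transmitted value (send-on-delta triggering).
    Returns (transmissions, receiver_view)."""
    sent = []
    view = []
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > threshold:
            last = x
            sent.append((i, x))
        view.append(last)   # receiver holds the last received value
    return sent, view

# Slowly varying roll angle: few events despite many samples.
angles = [0.0, 0.1, 0.15, 0.9, 1.0, 1.05, 1.1, 2.5]
sent, view = event_triggered(angles, threshold=0.5)
print(len(sent), "of", len(angles), "samples transmitted")
# The receiver's error stays within the threshold at every step.
print(all(abs(a - v) <= 0.5 for a, v in zip(angles, view)))
```

The threshold is the knob that trades estimation error against network load, which is precisely how such a scheme prevents network overload on a shared IoT link.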
Developing IncidentUI -- A Ride Comfort and Disengagement Evaluation Application for Autonomous Vehicles
This report details the design, development, and implementation of
IncidentUI, an Android tablet application designed to measure user-experienced
ride comfort and record disengagement data for autonomous vehicles (AVs) during
test drives. The goal of our project was to develop an Android application to
run on a peripheral tablet and communicate with the Drive Pegasus AGX, the AI
Computing Platform for Nvidia's AV Level 2 Autonomy Solution Architecture [1],
to detect AV disengagements and report ride comfort. We designed and developed
an intuitive, Android XML-based user interface for IncidentUI. Developing
IncidentUI required redesigning the system architecture: we redeveloped
the system communications protocol in Java and implemented the Protocol
Buffers (Protobufs) in Java using the existing system Protobuf definitions. The
final iteration of IncidentUI yielded the desired functionality during testing
on an AV test drive. We also received positive feedback from Nvidia's AV
Platform Team during our final IncidentUI demonstration.
Comment: Previously embargoed by Nvidia. Nvidia owns the right
Towards a centralized multicore automotive system
Today's automotive systems are inundated with embedded electronics to host chassis, powertrain, infotainment, advanced driver assistance systems, and other modern vehicle functions. As many as 100 embedded microcontrollers execute hundreds of millions of lines of code in a single vehicle. To control the increasing complexity in vehicle electronics and services, automakers are planning to consolidate different on-board automotive functions as software tasks on centralized multicore hardware platforms. However, these vehicle software services have different and contrasting timing, safety, and security requirements. Existing vehicle operating systems are ill-equipped to provide all the required service guarantees on a single machine. A centralized automotive system aims to tackle this by assigning software tasks to multiple criticality domains or levels according to their consequences of failure or international safety standards like ISO 26262. This research investigates several emerging challenges in time-critical systems for a centralized multicore automotive platform and proposes a novel vehicle operating system framework to address them.
This thesis first introduces an integrated vehicle management system (VMS), called DriveOS™, for a PC-class multicore hardware platform. Its separation-kernel design enables temporal and spatial isolation among critical and non-critical vehicle services in different domains on the same machine. Time- and safety-critical vehicle functions are implemented in a sandboxed real-time operating system (OS) domain, and non-critical software is developed in a sandboxed general-purpose OS (e.g., Linux, Android) domain. To leverage the advantages of model-driven vehicle function development, DriveOS provides a multi-domain application framework in Simulink. This thesis also presents a real-time task-pipeline scheduling algorithm for multiprocessors for communication between connected vehicle services with end-to-end guarantees. The benefits and performance of the overall automotive system framework are demonstrated with hardware-in-the-loop testing using real-world applications, car datasets, and simulated benchmarks, and with an early-stage deployment in a production-grade luxury electric vehicle.
On Age-of-Information Aware Resource Allocation for Industrial Control-Communication-Codesign
Under the umbrella term Industry 4.0, industrial manufacturing is undergoing increasing digitalization and interconnection of industrial machines and processes. Ultra-reliable low-latency communication (URLLC), as a component of 5G, guarantees the highest quality of service, comparable to that of industrial wireline technologies, and is therefore regarded as an enabler of Industry 4.0. Contrary to this trend, a number of works in the research area of networked control systems (NCS) have shown that the high quality of service of URLLC is not necessarily required to achieve high quality of control. The co-design of communication and control enables the joint optimization of control performance and network parameters by softening the boundary between the network and application layers. This interlacing, however, necessitates a fundamental (joint) redesign of control systems and communication networks, which is an obstacle to the adoption of this approach. Instead, this dissertation employs a co-design approach that keeps the two domains clearly separated but exploits the age of information (AoI) as a key interface parameter.
This dissertation contributes to quantifying real-time application reliability as a consequence of exceeding a given age-of-information threshold, focusing on packet loss as the cause. Using the example application of an automated guided vehicle, it is shown that negative temporal correlation of packet errors, which plays no role in today's systems, is highly advantageous for real-time applications. Assuming fast fading as the dominant cause of errors on the air interface, discrete-time Markov models, presented for the two network architectures single-hop and dual-hop, map sequences of communication errors to an application failure. This modeling enables the analytical derivation of application-level reliability metrics such as the mean time to failure. For single-hop networks, the novel resource allocation scheme State-Aware Resource Allocation (SARA) is developed, which is based on the age of information and increases application reliability by orders of magnitude compared to static multi-connectivity, while resource consumption remains in the range of conventional single connectivity.
This reliability can also be statistically guaranteed within a system of control applications, in which multiple agents compete for a limited number of resources, if the number of available resources per agent is increased by about 10%. For the dual-hop scenario, an optimization procedure is furthermore presented that minimizes a user-defined cost function penalizing low application reliability, high age of information, and high average resource consumption, thereby deriving the user-defined optimal SARA scheme. This optimization can be performed offline and implemented as a look-up table in the lower medium access layer of future industrial wireless networks.
1. Introduction 1
1.1. The Need for an Industrial Solution . . . . . . . . . . . . . . . . . . . 3
1.2. Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2. Related Work 7
2.1. Communications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.2. Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3. Codesign . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3.1. The Need for Abstraction – Age of Information . . . . . . . . 11
2.4. Dependability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3. Deriving Proper Communications Requirements 17
3.1. Fundamentals of Control Theory . . . . . . . . . . . . . . . . . . . . 18
3.1.1. Sampling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.1.2. Performance Requirements . . . . . . . . . . . . . . . . . . . 21
3.1.3. Packet Losses and Delay . . . . . . . . . . . . . . . . . . . . . 22
3.2. Joint Design of Control Loop with Packet Losses . . . . . . . . . . . . 23
3.2.1. Method 1: Reduced Sampling . . . . . . . . . . . . . . . . . . 23
3.2.2. Method 2: Markov Jump Linear System . . . . . . . . . . . . . 25
3.2.3. Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3. Focus Application: The AGV Use Case . . . . . . . . . . . . . . . . . . 31
3.3.1. Control Loop Model . . . . . . . . . . . . . . . . . . . . . . . 31
3.3.2. Control Performance Requirements . . . . . . . . . . . . . . . 33
3.3.3. Joint Modeling: Applying Reduced Sampling . . . . . . . . . . 34
3.3.4. Joint Modeling: Applying MJLS . . . . . . . . . . . . . . . . . 34
3.4. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4. Modeling Control-Communication Failures 43
4.1. Communication Assumptions . . . . . . . . . . . . . . . . . . . . . . 43
4.1.1. Small-Scale Fading as a Cause of Failure . . . . . . . . . . . . 44
4.1.2. Connectivity Models . . . . . . . . . . . . . . . . . . . . . . . 46
4.2. Failure Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.2.1. Single-hop network . . . . . . . . . . . . . . . . . . . . . . . . 49
4.2.2. Dual-hop network . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3. Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.1. Mean Time to Failure . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.2. Packet Loss Ratio . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.3.3. Average Number of Assigned Channels . . . . . . . . . . . . . 57
4.3.4. Age of Information . . . . . . . . . . . . . . . . . . . . . . . . 57
4.4. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
5. Single Hop – Single Agent 61
5.1. State-Aware Resource Allocation . . . . . . . . . . . . . . . . . . . . 61
5.2. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.3. Erroneous Acknowledgments . . . . . . . . . . . . . . . . . . . . . . 67
5.4. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
6. Single Hop – Multiple Agents 71
6.1. Failure Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6.1.1. Admission Control . . . . . . . . . . . . . . . . . . . . . . . . 72
6.1.2. Transition Probabilities . . . . . . . . . . . . . . . . . . . . . . 73
6.1.3. Computational Complexity . . . . . . . . . . . . . . . . . . . 74
6.1.4. Performance Metrics . . . . . . . . . . . . . . . . . . . . . . . 75
6.2. Illustration Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.3. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
6.3.1. Verification through System-Level Simulation . . . . . . . . . 78
6.3.2. Applicability on the System Level . . . . . . . . . . . . . . . . 79
6.3.3. Comparison of Admission Control Schemes . . . . . . . . . . 80
6.3.4. Impact of the Packet Loss Tolerance . . . . . . . . . . . . . . . 82
6.3.5. Impact of the Number of Agents . . . . . . . . . . . . . . . . . 84
6.3.6. Age of Information . . . . . . . . . . . . . . . . . . . . . . . . 84
6.3.7. Channel Saturation Ratio . . . . . . . . . . . . . . . . . . . . 86
6.3.8. Enforcing Full Channel Saturation . . . . . . . . . . . . . . . 86
6.4. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
7. Dual Hop – Single Agent 91
7.1. State-Aware Resource Allocation . . . . . . . . . . . . . . . . . . . . 91
7.2. Optimization Targets . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
7.3. Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
7.3.1. Extensive Simulation . . . . . . . . . . . . . . . . . . . . . . . 96
7.3.2. Non-Integer-Constrained Optimization . . . . . . . . . . . . . 98
7.4. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
8. Conclusions and Outlook 105
8.1. Key Results and Conclusions . . . . . . . . . . . . . . . . . . . . . . . 105
8.2. Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
A. DC Motor Model 111
Bibliography 113
Publications of the Author 127
List of Figures 129
List of Tables 131
List of Operators and Constants 133
List of Symbols 135
List of Acronyms 137
Curriculum Vitae 139
In industrial manufacturing, Industry 4.0 refers to the ongoing convergence of the real and virtual worlds, enabled through intelligently interconnecting industrial machines and processes through information and communications technology. Ultra-reliable low-latency communication (URLLC) is widely regarded as the enabling technology for Industry 4.0 due to its ability to fulfill the highest quality-of-service (QoS) requirements, comparable to those of industrial wireline connections. In contrast to this trend, a range of works in the research domain of networked control systems have shown that URLLC's supreme QoS is not necessarily required to achieve high quality-of-control; the co-design of control and communication enables jointly optimizing and balancing both quality-of-control parameters and network parameters by blurring the boundary between the application and network layers. However, through this tight interlacing, the approach requires a fundamental (joint) redesign of both control systems and communication networks and may therefore not see short-term widespread adoption. Therefore, this thesis instead embraces a novel co-design approach which keeps both domains distinct but leverages the combination of control and communications by exploiting the age of information (AoI) as a valuable interface metric.
This thesis contributes to quantifying application dependability as a consequence of exceeding a given peak AoI, with a particular focus on packet losses. The beneficial influence of negative temporal packet loss correlation on control performance is demonstrated by means of the automated guided vehicle use case. Assuming small-scale fading as the dominant cause of communication failure, series of communication failures are mapped to an application failure through discrete-time Markov models for single-hop (e.g., only uplink or downlink) and dual-hop (e.g., subsequent uplink and downlink) architectures. This enables the derivation of application-related dependability metrics, such as the mean time to failure, in closed form. For single-hop networks, an AoI-aware resource allocation strategy termed state-aware resource allocation (SARA) is proposed that increases application reliability by orders of magnitude compared to static multi-connectivity while keeping resource consumption in the range of best-effort single-connectivity. This dependability can also be statistically guaranteed on a system level – where multiple agents compete for a limited number of resources – if the provided amount of resources per agent is increased by approximately 10%. For the dual-hop scenario, an AoI-aware resource allocation optimization is developed that minimizes a user-defined penalty function punishing low application reliability, high AoI, and high average resource consumption. This optimization may be carried out offline, and each resulting optimal SARA scheme may be implemented as a look-up table in the lower medium access control layer of future wireless industrial networks.
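The single-hop failure model can be illustrated with a minimal toy version (a simplified sketch under stated assumptions, not the thesis's actual Markov model): with i.i.d. packet loss probability p, the AoI exceeds a threshold m exactly when m consecutive slots are lost, and first-step analysis gives the mean time to failure in closed form.

```python
def mttf(p_loss, aoi_max):
    """Mean number of slots until the AoI exceeds aoi_max, i.e. until
    aoi_max consecutive losses occur, for i.i.d. loss probability
    p_loss.  First-step analysis on the count k of consecutive losses
    gives  E_k = 1 + (1 - p) * E_0 + p * E_{k+1}  with E_{aoi_max} = 0;
    writing E_k = a_k + b_k * E_0 and substituting backwards from
    k = aoi_max yields E_0.
    """
    a, b = 0.0, 0.0                      # E_{aoi_max} = 0
    for _ in range(aoi_max):
        a, b = 1 + p_loss * a, (1 - p_loss) + p_loss * b
    return a / (1 - b)                   # solve E_0 = a + b * E_0

# Agrees with the closed form (1 - p^m) / ((1 - p) * p^m):
p, m = 0.1, 3
print(mttf(p, m), (1 - p**m) / ((1 - p) * p**m))
```

An AoI-aware scheme in the spirit of SARA would make p_loss state-dependent, e.g. by allocating more channels (lowering the per-slot loss probability) only when the consecutive-loss count is already high.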
Future developments in ground-based gamma-ray astronomy
Ground-based gamma-ray astronomy is a powerful tool to study cosmic-ray
physics, providing a diagnostic of the high-energy processes at work in the
most extreme astrophysical accelerators of the universe. Ground-based gamma-ray
detectors apply a number of experimental techniques to measure the products of
air showers induced by the primary gamma-rays over a wide energy range, from
about 30 GeV to a few PeV. These are based either on the measurement of the
atmospheric Cherenkov light induced by the air showers, or the direct detection
of the shower's secondary particles at ground level. Thanks to the recent
development of new and highly sensitive ground-based gamma-ray detectors,
important scientific results are emerging which motivate new experimental
proposals, at various stages of implementation. In this chapter we will present
the current expectations for future experiments in the field.
Comment: To appear in "Handbook of X-ray and Gamma-ray Astrophysics" by Springer (Eds. C. Bambi and A. Santangelo) - 59 p
Laser Technologies for Applications in Quantum Information Science
Scientific progress in experimental physics is inevitably dependent on continuing advances in the underlying technologies. Laser technologies enable controlled coherent and dissipative atom-light interactions and micro-optical technologies allow for the implementation of versatile optical systems not accessible with standard optics.
This thesis reports on important advances in both technologies with targeted applications ranging from Rydberg-state mediated quantum simulation and computation with individual atoms in arrays of optical tweezers to high-resolution spectroscopy of highly-charged ions.
A wide range of advances in laser technologies are reported. The long-term stability and maintainability of external-cavity diode laser systems is improved significantly by introducing a mechanically adjustable lens mount. Tapered-amplifier modules based on a similar lens mount are developed. The diode laser systems are complemented by digital controllers for laser frequency and intensity stabilisation. The controllers offer a bandwidth of up to 1.25 MHz and a noise performance set by the commercial STEMlab platform. In addition, shot-noise-limited photodetectors optimised for intensity stabilisation and Pound-Drever-Hall frequency stabilisation, as well as a fiber-based detector for beat notes in the MHz regime, are developed. The capabilities of the presented techniques are demonstrated by analysing the performance of a laser system used for laser cooling of 85Rb at a wavelength of 780 nm. A reference laser system is stabilised to a spectroscopic reference provided by modulation transfer spectroscopy. This spectroscopy scheme is analysed, finding optimal operation at high modulation indices. A suitable signal is generated with a compact and cost-efficient module. A scheme for laser offset-frequency stabilisation based on an optical phase-locked loop is realised. All frequency locks derived from the reference laser system offer a Lorentzian linewidth of 60 kHz (FWHM) in combination with a long-term stability of 130 kHz peak-to-peak over 10 days. Intensity stabilisation based on acousto-optic modulators in combination with the digital controller allows for real-time intensity control on microsecond time scales, complemented by a sample-and-hold feature with a response time of 150 ns.
High demands on the spectral properties of the laser systems are put forward for the coherent excitation of quantum states. In this thesis, the performance of active frequency stabilisation is enhanced by introducing a novel current modulation technique for diode lasers. A flat response from DC to 100 MHz and a phase lag below 90° up to 25 MHz are achieved, extending the bandwidth available for laser frequency stabilisation. Applying this technique in combination with a fast proportional-derivative controller, two laser fields with a relative phase noise of 42 mrad for driving rubidium ground-state transitions are realised. A laser system for coherent Rydberg excitation via a two-photon scheme provides light at 780 nm and at 480 nm via frequency doubling from 960 nm. An output power of 0.6 W at 480 nm from a single-mode optical fiber is obtained. The frequencies of both laser systems are stabilised to a high-finesse reference cavity, resulting in a linewidth of 1.02 kHz (FWHM) at 960 nm. Numerical simulations quantify the effect of the finite linewidth on the coherence of Rydberg Rabi oscillations. A laser system similar to the 480 nm Rydberg system is developed for spectroscopy on highly charged bismuth.
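Simulations of this kind — quantifying how a finite laser linewidth damps Rabi oscillations — can be sketched as a Monte-Carlo average over laser realisations whose phase diffuses, which produces a Lorentzian line. The Python/NumPy sketch below is illustrative only; the Rabi frequency and linewidth values are arbitrary and are not the parameters used in the thesis:

```python
import numpy as np

def rabi_excited_population(omega, linewidth, t_max, n_steps=2000, n_traj=100, seed=0):
    """Excited-state population vs. time for a resonantly driven two-level atom,
    averaged over laser realisations whose phase performs a random walk such
    that the laser line is Lorentzian with FWHM `linewidth` (Hz).
    `omega` is the angular Rabi frequency (rad/s)."""
    rng = np.random.default_rng(seed)
    dt = t_max / n_steps
    c, s = np.cos(omega * dt / 2), np.sin(omega * dt / 2)
    pops = np.zeros(n_steps)
    for _ in range(n_traj):
        g, e = 1.0 + 0j, 0.0 + 0j  # ground/excited amplitudes
        phi = 0.0
        for i in range(n_steps):
            # Phase diffusion: <dphi^2> = 2*pi*linewidth*dt gives a Lorentzian line.
            phi += rng.normal(0.0, np.sqrt(2 * np.pi * linewidth * dt))
            # Exact (unitary) one-step propagator in the rotating frame.
            g, e = (c * g - 1j * s * np.exp(-1j * phi) * e,
                    c * e - 1j * s * np.exp(1j * phi) * g)
            pops[i] += abs(e) ** 2
    return pops / n_traj
```

With zero linewidth the population oscillates with full contrast; as the linewidth grows relative to the Rabi frequency, the averaged oscillations damp toward a population of 1/2.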
Advanced optical technologies are also at the heart of the micro-optical generation of tweezer arrays, which offer unprecedented scalability of the system size. By using an optimised lens system in combination with an automatic evaluation routine, a tweezer array with several thousand sites and trap waists below 1 µm is demonstrated. A similar performance is achieved with a microlens array produced in an additive manufacturing process, with the microlens design optimised for that process. Furthermore, scattering rates in dipole traps due to suppressed resonant light are analysed, proving the feasibility of dipole trap generation using tapered-amplifier systems.
Secure storage systems for untrusted cloud environments
The cloud has become established for applications that need to be scalable and highly
available. However, moving data to data centers owned and operated by a third party,
i.e., the cloud provider, raises security concerns because a cloud provider could easily
access and manipulate the data or program flow, preventing the cloud from being
used for certain applications, such as medical or financial ones.
Hardware vendors are addressing these concerns by developing Trusted Execution
Environments (TEEs) that make the CPU state and parts of memory inaccessible from
the host software. While TEEs protect the current execution state, they do not
provide security guarantees for data that does not fit or reside in the protected
memory area, such as network traffic and persistent storage.
In this work, we aim to address TEEs' limitations in three different ways: first, we
provide the trust of TEEs to persistent storage; second, we extend the trust to multiple
nodes in a network; and third, we propose a compiler-based solution for accessing
heterogeneous memory regions. More specifically,
• SPEICHER extends the trust provided by TEEs to persistent storage. SPEICHER
implements a key-value interface. Its design is based on LSM data structures, but
extends them to provide confidentiality, integrity, and freshness for the stored
data. Thus, SPEICHER can prove to the client that the data has not been tampered
with by an attacker.
• AVOCADO is a distributed in-memory key-value store (KVS) that extends the
trust that TEEs provide across the network to multiple nodes, allowing KVSs to
scale beyond the boundaries of a single node. On each node, AVOCADO carefully
divides data between trusted memory and untrusted host memory, to maximize
the amount of data that can be stored on each node. AVOCADO leverages the
fact that we can model network attacks as crash-faults to trust other nodes with
a hardened ABD replication protocol.
• TOAST is based on the observation that modern high-performance systems
often use several different heterogeneous memory regions that are not easily
distinguishable by the programmer. The number of regions is increased by the
fact that TEEs divide memory into trusted and untrusted regions. TOAST is a
compiler-based approach to unify access to different heterogeneous memory
regions and provides programmability and portability. TOAST uses a
load/store interface to abstract most library interfaces for different memory
regions.
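For context, the classic (non-hardened) ABD protocol on which AVOCADO builds performs quorum reads and writes with logical timestamps: a write installs a higher-timestamped version at a majority of replicas, and a read returns the newest version seen at a majority, writing it back first. The toy in-memory Python sketch below is illustrative only — all names are invented here, and AVOCADO's TEE attestation and crash-fault hardening are omitted:

```python
import random

class Replica:
    """In-memory replica storing (timestamp, value) per key."""
    def __init__(self):
        self.store = {}

    def read(self, key):
        return self.store.get(key, (0, None))

    def write(self, key, ts, value):
        cur_ts, _ = self.store.get(key, (0, None))
        if ts > cur_ts:              # keep only the newest version
            self.store[key] = (ts, value)

class ABDClient:
    """Quorum-based read/write over 2f+1 replicas (tolerates f crash faults)."""
    def __init__(self, replicas):
        self.replicas = replicas
        self.quorum = len(replicas) // 2 + 1

    def _majority(self):
        # Any majority works; sample to mimic some replicas being unreachable.
        return random.sample(self.replicas, self.quorum)

    def put(self, key, value):
        # Phase 1: learn the highest timestamp from a majority.
        ts = max(r.read(key)[0] for r in self._majority())
        # Phase 2: store with a strictly larger timestamp at a majority.
        for r in self._majority():
            r.write(key, ts + 1, value)

    def get(self, key):
        # Phase 1: collect versions from a majority, take the newest.
        ts, value = max((r.read(key) for r in self._majority()),
                        key=lambda p: p[0])
        # Phase 2 (write-back): install it at a majority before returning.
        for r in self._majority():
            r.write(key, ts, value)
        return value

replicas = [Replica() for _ in range(3)]
client = ABDClient(replicas)
client.put("user:42", "alice")
```

Correctness rests on quorum intersection: any two majorities share at least one replica, so a read is guaranteed to observe the latest completed write regardless of which majority responds.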
IoT Transmission Technologies for Distributed Measurement Systems in Critical Environments
Distributed measurement systems are spread across the most diverse application scenarios, and Internet of Things (IoT) transmission equipment is usually the enabling technology for such measurement systems, which need wireless connectivity to ensure pervasiveness. Because wireless measurement systems have been deployed in recent years even in critical environments, assessing the performance of transmission technologies in such contexts is fundamental. Indeed, these environments are the most challenging ones for wireless data transmission due to their intrinsic attenuation characteristics.
Several scenarios in which measurement systems can be deployed are analysed. Firstly, marine contexts are treated by considering above-the-sea wireless links. Such a setting arises in any application requiring remote monitoring of facilities and assets installed offshore, for instance offshore sea-farming plants or remote video-monitoring systems installed on seamark buoys. Secondly, wireless communications from the underground to the aboveground are covered. This scenario is typical of precision-agriculture applications, where accurate measurements of underground physical parameters must be sent remotely to optimise crops while reducing the waste of fundamental resources (e.g., irrigation water). Thirdly, wireless communications from the underwater to the abovewater are addressed. Such a situation is unavoidable for infrastructures monitoring the conservation status of underwater species such as algae, seaweeds and reefs. Then, wireless links traversing metal surfaces and structures are tackled. This context is commonly encountered in asset tracking and monitoring (e.g., containers) or in smart-metering applications (e.g., utility meters). Lastly, sundry harsh environments typical of industrial monitoring (e.g., vibrating machinery, rooms with harsh temperature and humidity, corrosive atmospheres) are tested to validate pervasive measurement infrastructures even in contexts usually experienced in Industrial Internet of Things (IIoT) applications. The performance of wireless measurement systems in these scenarios is assessed through ad hoc measurement campaigns. Finally, IoT measurement infrastructures deployed in above-the-sea and underground-to-aboveground settings, respectively, are described to provide real applications in which such facilities can be effectively installed.
Nonetheless, the aforementioned application scenarios are only a few among many. Indeed, distributed pervasive measurement systems must nowadays be conceived broadly, resulting in countless instances: predictive maintenance, smart healthcare, smart cities, industrial monitoring, smart agriculture, and so on.
This Thesis aims at showing how distributed measurement systems in critical environments can set up pervasive monitoring infrastructures enabled by IoT transmission technologies. First, the transmission technologies are presented; then the harsh environments are introduced, along with the theoretical analysis modelling path loss in such conditions. It must be underlined that this Thesis aims neither at finding better path-loss models than the existing ones, nor at improving them. Indeed, path-loss models are exploited as they are, in order to derive loss estimates and understand the effectiveness of the deployed infrastructure. Transmission tests in those contexts are then described, along with examples of these types of applications in the field, showing the measurement infrastructures and the critical environments serving as deployment sites. The scientific relevance of this Thesis is evident since, at the moment, the literature lacks a comparative study like this one, showing both transmission performance in critical environments and the deployment of real IoT distributed wireless measurement systems in such contexts.
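As an illustration of the kind of estimate such models provide, the widely used log-distance model adds a context-dependent exponent on top of the free-space reference loss. The Python sketch below uses generic, invented numbers (an 868 MHz LoRa-like link with assumed transmit power and sensitivity), not measurements from the Thesis:

```python
import math

def free_space_path_loss_db(distance_m, freq_hz):
    """Friis free-space path loss in dB (the -147.55 dB constant is 20*log10(4*pi/c))."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

def log_distance_path_loss_db(distance_m, freq_hz, exponent, d0=1.0):
    """Log-distance model: free-space loss at reference distance d0,
    then 10*n*log10(d/d0) with environment-dependent exponent n."""
    return free_space_path_loss_db(d0, freq_hz) + 10 * exponent * math.log10(distance_m / d0)

# Illustrative link budget at 868 MHz over 2 km (assumed radio figures).
tx_dbm, sensitivity_dbm = 14.0, -137.0
for n, label in [(2.0, "free space"), (3.5, "harsh environment")]:
    pl = log_distance_path_loss_db(2000, 868e6, n)
    margin = tx_dbm - pl - sensitivity_dbm
    print(f"{label}: path loss {pl:.1f} dB, link margin {margin:.1f} dB")
```

The comparison shows why the exponent matters: moving from free space (n = 2) to a lossy environment (n = 3.5) erodes tens of dB of link margin at the same distance, which is exactly the kind of feasibility estimate the deployed infrastructures rely on.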