Configuration Management of Distributed Systems over Unreliable and Hostile Networks
Economic incentives of large criminal profits and the threat of legal consequences have pushed criminals to continuously improve their malware, especially command and control channels. This thesis applied concepts from successful malware command and control to explore the survivability and resilience of benign configuration management systems.
This work expands existing stage models of the malware life cycle into a new model for identifying malware concepts applicable to benign configuration management. The Hidden Master architecture is a contribution to master-agent network communication: communication between master and agent is asynchronous and can operate through intermediate nodes. This protects the master secret key, which grants full control of all computers participating in configuration management. Multiple improvements to idempotent configuration were proposed, including the definition of a minimal base resource dependency model, simplified resource revalidation, and the use of an imperative general-purpose language for defining idempotent configuration.
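The idea of expressing idempotent configuration in an imperative general-purpose language can be illustrated with a minimal sketch (plain Python, not the thesis prototype; the resource and path names are invented for the example): a base resource checks the current state before acting, so applying it repeatedly converges to the same result and revalidation is cheap.

```python
import os

def file_resource(path, content):
    """Idempotent base resource: ensure `path` exists with `content`.

    Returns True if a change was made, False if the system was
    already in the desired state (revalidation is just a read).
    """
    if os.path.exists(path):
        with open(path) as f:
            if f.read() == content:
                return False  # already converged, nothing to do
    with open(path, "w") as f:
        f.write(content)
    return True

# Abstraction via ordinary language features: a function composing
# base resources into a higher-level, still idempotent, unit.
# ("/tmp/demo_motd" is an example path, not from the thesis.)
def motd(message):
    return file_resource("/tmp/demo_motd", message + "\n")
```

Because the resource reports whether it changed anything, running the same configuration twice is safe and the second run is a no-op, which is the property the thesis exploits.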
Following the constructive research approach, the improvements to configuration management were designed into two prototypes, which allowed validation in laboratory testing, in two case studies, and in expert interviews. In laboratory testing, the Hidden Master prototype was more resilient than leading configuration management tools under high load and low memory, and against packet loss and corruption. Owing to its asynchronous architecture, only the research prototype could adapt to a network without a stable topology.
The main case study used the research prototype in a complex environment to deploy a multi-room, authenticated audiovisual system for a client of the organization performing the deployment. The case studies indicated that an imperative general-purpose language can be used for idempotent configuration in practice: new configurations can be defined in unexpected situations using the base resources and abstracted with standard language features, and such a system seems easy to learn.
Potential business benefits were identified and evaluated in individual semi-structured expert interviews. Respondents agreed that the models and the Hidden Master architecture could reduce costs and risks, improve developer productivity, and allow faster time-to-market. Protection of master secret keys and a reduced need for incident response were seen as key drivers of improved security. Low-cost geographic scaling and leveraging the file-serving capabilities of commodity servers were seen to improve scaling and resiliency. Respondents identified jurisdictional legal limitations on encryption and requirements for cloud-operator auditing as factors potentially limiting the full use of some concepts.
Design and Real-World Evaluation of Dependable Wireless Cyber-Physical Systems
The ongoing effort for an efficient, sustainable, and automated interaction between humans, machines, and our environment will make cyber-physical systems (CPS) an integral part of the industry and our daily lives. At their core, CPS integrate computing elements, communication networks, and physical processes that are monitored and controlled through sensors and actuators. New and innovative applications become possible by extending or replacing static and expensive cable-based communication infrastructures with wireless technology. The flexibility of wireless CPS is a key enabler for many envisioned scenarios, such as intelligent factories, smart farming, personalized healthcare systems, autonomous search and rescue, and smart cities.
High dependability, efficiency, and adaptivity requirements complement the demand for wireless and low-cost solutions in such applications. For instance, industrial and medical systems should work reliably and predictably with performance guarantees, even if parts of the system fail. Because emerging CPS will feature mobile and battery-driven devices that can execute various tasks, the systems must also quickly adapt to frequently changing conditions. Moreover, as applications become ever more sophisticated, featuring compact embedded devices that are deployed densely and at scale, efficient designs are indispensable to achieve desired operational lifetimes and satisfy high bandwidth demands.
Meeting these partly conflicting requirements, however, is challenging due to imperfections of wireless communication and resource constraints along several dimensions, for example, computing, memory, and power constraints of the devices. More precisely, frequent and correlated message losses paired with very limited bandwidth and varying delays for the message exchange significantly complicate the control design. In addition, since communication ranges are limited, messages must be relayed over multiple hops to cover larger distances, such as an entire factory. Although the resulting mesh networks are more robust against interference, efficient communication is a major challenge as wireless imperfections get amplified, and significant coordination effort is needed, especially if the networks are dynamic.
CPS combine various research disciplines, which are often investigated in isolation, ignoring their complex interaction. However, to address this interaction and build trust in the proposed solutions, evaluating CPS using real physical systems and wireless networks paired with formal guarantees of a system’s end-to-end behavior is necessary. Existing works that take this step can only satisfy a few of the abovementioned requirements. Most notably, multi-hop communication has only been used to control slow physical processes while providing no guarantees. One of the reasons is that the current communication protocols are not suited for dynamic multi-hop networks.
This thesis closes the gap between existing works and the diverse needs of emerging wireless CPS. The contributions address different research directions and are split into two parts. In the first part, we specifically address the shortcomings of existing communication protocols and make the following contributions to provide a solid networking foundation:
• We present Mixer, a communication primitive for the reliable many-to-all message exchange in dynamic wireless multi-hop networks. Mixer runs on resource-constrained low-power embedded devices and combines synchronous transmissions and network coding for a highly scalable and topology-agnostic message exchange. As a result, it supports mobile nodes and can serve any possible traffic pattern, for example, to efficiently realize distributed control, as required by emerging CPS applications.
• We present Butler, a lightweight and distributed synchronization mechanism with formally guaranteed correctness properties to improve the dependability of synchronous transmissions-based protocols. These protocols require precise time synchronization provided by a specific node. Upon failure of this node, the entire network cannot communicate. Butler removes this single point of failure by quickly synchronizing all nodes in the network without affecting the protocols’ performance.
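The network-coding idea that Mixer builds on can be sketched in miniature (illustrative Python, not Mixer's actual radio implementation): relays forward random XOR combinations of the packets they have heard, and a receiver recovers all original packets by Gaussian elimination over GF(2) once it has collected enough linearly independent combinations, regardless of which particular transmissions it overheard.

```python
import random

def encode(packets, rng):
    """Relay side: emit one random XOR combination of known packets.
    Returns (coeff_mask, coded_payload); bit i of the mask says
    whether original packet i is included in the XOR.
    """
    mask = 0
    while mask == 0:  # skip the useless all-zero combination
        mask = rng.getrandbits(len(packets))
    coded = 0
    for i, p in enumerate(packets):
        if mask >> i & 1:
            coded ^= p
    return mask, coded

def decode(n, coded):
    """Receiver side: incremental Gaussian elimination over GF(2).
    `coded` is a list of (mask, payload); returns the n original
    packets once the masks span the full space, else None.
    """
    rows = {}  # pivot bit -> (mask, payload), kept fully reduced
    for mask, payload in coded:
        # forward-reduce the incoming row by existing pivots
        for rmask, rpayload in rows.values():
            if mask & (rmask & -rmask):
                mask ^= rmask
                payload ^= rpayload
        if mask == 0:
            continue  # linearly dependent, carries no new info
        pivot = mask & -mask
        # back-substitute the new pivot into existing rows
        for p2, (rm, rp) in list(rows.items()):
            if rm & pivot:
                rows[p2] = (rm ^ mask, rp ^ payload)
        rows[pivot] = (mask, payload)
    if len(rows) < n:
        return None
    return [rows[1 << i][1] for i in range(n)]
```

The topology-agnostic property falls out of this scheme: any sufficiently large set of independent combinations suffices, no matter which relays produced them.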
In the second part, we focus on the challenges of integrating communication and various control concepts using classical time-triggered and modern event-based approaches. Based on the design, implementation, and evaluation of the proposed solutions using real systems and networks, we make the following contributions, which in many ways push the boundaries of previous approaches:
• We are the first to demonstrate and evaluate fast feedback control over low-power wireless multi-hop networks. Essential for this achievement is a novel co-design and integration of communication and control. Our wireless embedded platform tames the imperfections impairing control, for example, message loss and varying delays, and considers the resulting key properties in the control design. Furthermore, the careful orchestration of control and communication tasks enables real-time operation and makes our system amenable to an end-to-end analysis. Due to this, we can provably guarantee closed-loop stability for physical processes with linear time-invariant dynamics.
• We propose control-guided communication, a novel co-design for distributed self-triggered control over wireless multi-hop networks. Self-triggered control can save energy by transmitting data only when needed. However, there are no solutions that bring those savings to multi-hop networks and that can reallocate freed-up resources, for example, to other agents. Our control system informs the communication system of its transmission demands ahead of time so that communication resources can be allocated accordingly. Thus, we can transfer the energy savings from the control to the communication side and achieve an end-to-end benefit.
• We present a novel co-design of distributed control and wireless communication that resolves overload situations in which the communication demand exceeds the available bandwidth. As systems scale up, featuring more agents and higher bandwidth demands, the available bandwidth will quickly be exceeded, resulting in overload. While event-triggered and self-triggered control approaches reduce the communication demand on average, they cannot prevent situations in which potentially all agents want to communicate simultaneously. We address this limitation by dynamically allocating the available bandwidth to the agents with the highest need. Thus, we can formally prove that our co-design guarantees closed-loop stability for physical systems with stochastic linear time-invariant dynamics.
Abstract
Acknowledgements
List of Abbreviations
List of Figures
List of Tables
1 Introduction
1.1 Motivation
1.2 Application Requirements
1.3 Challenges
1.4 State of the Art
1.5 Contributions and Road Map
2 Mixer: Efficient Many-to-All Broadcast in Dynamic Wireless Mesh Networks
2.1 Introduction
2.2 Overview
2.3 Design
2.4 Implementation
2.5 Evaluation
2.6 Discussion
2.7 Related Work
3 Butler: Increasing the Availability of Low-Power Wireless Communication Protocols
3.1 Introduction
3.2 Motivation and Background
3.3 Design
3.4 Analysis
3.5 Implementation
3.6 Evaluation
3.7 Related Work
4 Feedback Control Goes Wireless: Guaranteed Stability over Low-Power Multi-Hop Networks
4.1 Introduction
4.2 Related Work
4.3 Problem Setting and Approach
4.4 Wireless Embedded System Design
4.5 Control Design and Analysis
4.6 Experimental Evaluation
4.A Control Details
5 Control-Guided Communication: Efficient Resource Arbitration and Allocation in Multi-Hop Wireless Control Systems
5.1 Introduction
5.2 Problem Setting
5.3 Co-Design Approach
5.4 Wireless Communication System Design
5.5 Self-Triggered Control Design
5.6 Experimental Evaluation
6 Scaling Beyond Bandwidth Limitations: Wireless Control With Stability Guarantees Under Overload
6.1 Introduction
6.2 Problem and Related Work
6.3 Overview of Co-Design Approach
6.4 Predictive Triggering and Control System
6.5 Adaptive Communication System
6.6 Integration and Stability Analysis
6.7 Testbed Experiments
6.A Proof of Theorem 4
6.B Usage of the Network Bandwidth for Control
7 Conclusion and Outlook
7.1 Contributions
7.2 Future Directions
Bibliography
List of Publications
Multi-objective resource optimization in space-aerial-ground-sea integrated networks
Space-air-ground-sea integrated (SAGSI) networks are envisioned to connect satellite, aerial, ground, and sea networks to provide connectivity everywhere and all the time in sixth-generation (6G) networks. However, the success of SAGSI networks is constrained by several challenges, including resource optimization when the users have diverse requirements and applications. We present a comprehensive review of SAGSI networks from a resource optimization perspective. We discuss use-case scenarios and possible applications of SAGSI networks, and the resource optimization discussion considers the challenges associated with them. In our review, we categorize resource optimization techniques by objective: throughput and capacity maximization, delay minimization, energy consumption, task offloading, task scheduling, resource allocation or utilization, network operation cost, outage probability, average age of information, joint optimization (data-rate difference, storage or caching, CPU cycle frequency), overall network performance and performance degradation, software-defined networking, and intelligent surveillance and relay communication. We then formulate a mathematical framework for maximizing energy efficiency, resource utilization, and user association. We optimize user association while satisfying constraints on transmit power, data rate, and priority-based user association. A binary decision variable associates users with system resources; since the decision variable is binary and the constraints are linear, the formulated problem is a binary linear programming problem. Based on this framework, we simulate and analyze the performance of three algorithms (branch and bound, the interior-point method, and the barrier simplex algorithm) and compare the results. Simulation results show that branch and bound performs best, so we use it as the benchmark; however, its complexity grows exponentially with the number of users and stations in the SAGSI network. The interior-point method and the barrier simplex algorithm achieve results comparable to the benchmark at low complexity. Finally, we discuss future research directions and challenges of resource optimization in SAGSI networks.
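The user-association formulation described above can be illustrated on a toy instance (a sketch with invented data, not the paper's exact model): binary variables associate each user with exactly one station, station capacity and a per-user minimum-rate constraint must hold, and the total rate is maximized. The instance is small enough to solve by exhaustive enumeration of the binary variables, which stands in for the branch-and-bound benchmark.

```python
from itertools import product

# Assumed toy data: achievable rate of user u at station s (Mbit/s)
rate = [
    [10, 4],   # user 0
    [3, 9],    # user 1
    [8, 7],    # user 2
]
capacity = [2, 2]   # max users each station may serve
min_rate = 5        # per-user minimum data-rate constraint

def solve(rate, capacity, min_rate):
    """Exhaustive search over the binary association variables.
    Equivalent to enumerating all x[u][s] in {0,1} subject to
    sum_s x[u][s] == 1; returns (best_total_rate, assignment),
    where assignment[u] is the station serving user u.
    """
    n_users, n_stations = len(rate), len(capacity)
    best = (None, None)
    for assign in product(range(n_stations), repeat=n_users):
        # capacity constraint per station
        if any(assign.count(s) > capacity[s] for s in range(n_stations)):
            continue
        # minimum-rate constraint per user
        rates = [rate[u][assign[u]] for u in range(n_users)]
        if min(rates) < min_rate:
            continue
        total = sum(rates)
        if best[0] is None or total > best[0]:
            best = (total, assign)
    return best

total, assign = solve(rate, capacity, min_rate)
```

On this instance the optimum assigns user 0 to station 0 and users 1 and 2 to stations 1 and 0; the exponential growth of the enumeration with users and stations mirrors the branch-and-bound complexity noted above.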
SUTMS - Unified Threat Management Framework for Home Networks
Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and internet broadband costs decreased, home internet usage shifted to e-commerce and business-critical applications. Today's home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote-access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services; hackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges underline the importance of home-based Unified Threat Management (UTM) systems, and there is a need for a UTM framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS provides 99.99% accuracy with a 56.83% improvement in memory usage. Although intrusion prevention stands out as the most resource-intensive UTM service, SUTMS reduces its performance overhead by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified in SUTMS through a second artifact called the Secure Centralized Management System (SCMS).
SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
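The smart-logging function mentioned above relies on Apriori-style frequent-itemset mining; a minimal sketch (the generic Apriori algorithm, not the SUTMS artifact) shows the core pruning step that makes it tractable: a candidate itemset can only be frequent if every one of its subsets is frequent.

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all itemsets appearing in at least `min_support`
    transactions, discovered level by level with Apriori pruning."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent = {}
    current = [frozenset([i]) for i in items
               if support(frozenset([i])) >= min_support]
    k = 1
    while current:
        frequent.update((s, support(s)) for s in current)
        prev = set(current)
        # candidate generation: unions of frequent k-itemsets ...
        candidates = {a | b for a in current for b in current
                      if len(a | b) == k + 1}
        # ... pruned: every k-subset must itself be frequent
        current = [c for c in candidates
                   if all(frozenset(s) in prev
                          for s in combinations(c, k))
                   and support(c) >= min_support]
        k += 1
    return frequent
```

Applied to event logs, the transactions would be sets of log attributes per flow, and the frequent itemsets become candidate logging rules; that mapping is our reading of the abstract, not a detail it states.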
NetGAP: A Graph-Grammar approach for concept design of networked platforms with extra-functional requirements
During the concept design of complex networked systems, concept developers have to assure that the choice of hardware modules and the topology of the target platform will provide adequate resources to support the needs of the application. For example, future-generation aerospace systems need to consider multiple requirements, with many trade-offs, foreseeing rapid technological change and a long time span for realization and service. For that purpose, we introduce NetGAP, an automated three-phase approach to synthesize network topologies and support the exploration and concept design of networked systems with multiple requirements, including dependability, security, and performance. NetGAP represents the possible interconnections between hardware modules using a graph grammar and uses a Monte Carlo Tree Search optimization to generate candidate topologies from the grammar while aiming to satisfy the requirements. We apply the proposed approach to a synthetic version of a realistic avionics application use case and show the merits of the solution in supporting the early-stage exploration of alternative candidate topologies. The method is shown to vividly characterize the topology-related trade-offs between requirements stemming from security, fault tolerance, timeliness, and the "cost" of adding new modules or links. Finally, we discuss the flexibility of the approach when changes in the application and its requirements occur.
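The generate-and-score core of such an approach can be sketched in miniature (a toy illustration, not NetGAP itself; the two production rules and the scoring weights are invented for the example): grammar rules rewrite a partial topology, random rollouts produce candidate topologies, and a score trading a redundancy proxy against module and link cost selects the best candidate, standing in for the Monte Carlo Tree Search optimization.

```python
import random

def rollout(rng, steps=6):
    """Grow a topology from a seed node by randomly applying two
    production rules: attach a new module to a random existing one,
    or add a redundant link between two existing modules."""
    nodes = [0]
    edges = set()  # frozensets, so duplicate links are merged
    for _ in range(steps):
        if len(nodes) < 2 or rng.random() < 0.6:
            new = len(nodes)
            edges.add(frozenset({rng.choice(nodes), new}))
            nodes.append(new)
        else:
            a, b = rng.sample(nodes, 2)
            edges.add(frozenset({a, b}))
    return nodes, edges

def score(nodes, edges, link_cost=1.0, module_cost=2.0):
    """Toy requirement trade-off: reward redundancy (links beyond
    a spanning tree, a fault-tolerance proxy) against the cost of
    adding modules and links."""
    redundancy = len(edges) - (len(nodes) - 1)
    return 3.0 * redundancy - link_cost * len(edges) - module_cost * len(nodes)

def search(n_rollouts=200, seed=1):
    """Best-of-N random rollouts; MCTS would bias this sampling
    toward promising rule sequences instead of sampling uniformly."""
    rng = random.Random(seed)
    return max((rollout(rng) for _ in range(n_rollouts)),
               key=lambda t: score(*t))
```

Because every rule application keeps the graph connected by construction, each candidate is a valid topology and only the trade-off score decides between them.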
Adapting Distributed Embedded Applications at Runtime
The availability of third-party apps is among the key success factors for software ecosystems: The users benefit from more features and innovation speed, while third-party solution vendors can leverage the platform to create successful offerings.
However, this requires a certain decoupling of the engineering activities of the different parties, which has not yet been achieved for distributed control systems.
While late and dynamic integration of third-party components would be required, the resulting control systems must provide high reliability with respect to real-time requirements, which leads to integration complexity.
Closing this gap would particularly contribute to the vision of software-defined manufacturing, where an ecosystem of modern IT-based control system components could lead to faster innovations due to their higher abstraction and availability of various frameworks.
Therefore, this thesis addresses the research question:
How can we use modern IT technologies to enable independent evolution and easy third-party integration of software components in distributed control systems where deterministic end-to-end reactivity is required, and, in particular, how can we apply distributed changes to such systems consistently and reactively during operation?
This thesis describes the challenges and related approaches in detail and points out that existing approaches do not fully address our research question.
To tackle this gap, a formal specification of a runtime platform concept is presented in conjunction with a model-based engineering approach.
The engineering approach decouples the engineering steps of component definition, integration, and deployment.
The runtime platform supports this approach by isolating the components, while still offering predictable end-to-end real-time behavior.
Independent evolution of software components is supported through a concept for synchronous reconfiguration during full operation, i.e., dynamic orchestration of components.
Time-critical state transfer is supported, too, and incurs at most bounded quality degradation.
The reconfiguration planning is supported by analysis concepts, including simulation of a formally specified system and reconfiguration, and analyzing potential quality degradation with the evolving dataflow graph (EDFG) method.
A platform-specific realization of the concepts, the real-time container architecture, is described as a reference implementation.
The model and the prototype are evaluated regarding their feasibility and applicability of the concepts by two case studies.
The first case study is a minimalistic distributed control system used in different setups with different component variants and reconfiguration plans to compare the model and the prototype and to gather runtime statistics.
The second case study is a smart factory showcase system with more challenging application components and interface technologies.
The conclusion is that the concepts are feasible and applicable, even though the concepts and the prototype still need further work in the future -- for example, to reach shorter cycle times.
Towards a Peaceful Development of Cyberspace - Challenges and Technical Measures for the De-escalation of State-led Cyberconflicts and Arms Control of Cyberweapons
Cyberspace, already a few decades old, has become a matter of course for most of us, a part of our everyday life. At the same time, this space and the global infrastructure behind it are essential for our civilization, economy, and administration, and thus an essential expression and lifeline of a globalized world. However, these developments also create vulnerabilities, and thus cyberspace is increasingly developing into an intelligence and military operational area – for the defense and security of states, but also as a component of offensive military planning, visible in the creation of military cyber departments and the integration of cyberspace into states' security and defense strategies. In order to contain and regulate the conflict and escalation potential of technology used by military forces, a complex toolset of transparency, de-escalation, and arms control measures has been developed and proven over the last decades. Unfortunately, many of these established measures do not work for cyberspace due to its specific technical characteristics. Moreover, the concept of what constitutes a weapon – an essential prerequisite for regulation – starts to blur in this domain. Against this background, this thesis aims to answer how measures for the de-escalation of state-led conflicts in cyberspace and arms control of cyberweapons can be developed. To answer this question, the dissertation takes a specifically technical perspective on these problems and on the underlying political challenges of state behavior and international humanitarian law in cyberspace, in order to identify starting points for technical measures of transparency, arms control, and verification.
Based on this approach of adapting already existing technical measures from other fields of computer science, the thesis provides proof-of-concept approaches for some of the mentioned challenges, such as a classification system for cyberweapons based on technically measurable features, an approach for the mutual reduction of vulnerability stockpiles, and an approach to plausibly assure non-involvement in a cyberconflict as a de-escalation measure. All these initial approaches, and the questions of how and by which measures arms control and conflict reduction can work for cyberspace, are still quite new and have been the subject of few debates so far. Indeed, the approach of deliberately self-restricting the capabilities of technology in order to serve a bigger goal, such as the reduction of its destructive usage, is not yet very common in the engineering thinking of computer science. Therefore, this dissertation also aims to provide some impulses regarding the responsibility and creative options of computer science with a view to the peaceful development and use of cyberspace.