37 research outputs found

    Cryptographic Protocols from Physical Assumptions

    Modern cryptography not only makes it possible to protect personal data on the Internet or to authenticate oneself to certain services; it also enables the evaluation of a function on the secret inputs of several parties without anything being learned about those inputs (apart from information that can be efficiently derived from the output and one's own inputs). Cryptographic protocols of this kind are called secure multi-party computation and are suitable for a broad range of applications, such as secret ballots and auctions. Proving the security of such protocols requires assumptions, which are often complexity-theoretic in nature, for example that factoring sufficiently large numbers is hard. Security assumptions based on physical principles, in contrast to complexity-theoretic assumptions, offer several advantages: the protocols are usually conceptually simpler, their security is independent of the computational power of the adversary, and their operation and security are often easier for humans to verify. (For example, the German Federal Constitutional Court demanded: "When electronic voting machines are used, the essential steps of the voting process and of the determination of the result must be verifiable by the citizen reliably and without special expertise." (BVerfG, judgment of the Second Senate of 3 March 2009).) Examples of such assumptions are physically separated or incorruptible hardware components (cf. Broadnax et al., 2018), write-only devices for logging, or scratch-off fields as known from PIN letters. The impossibility of duplicating quantum states, which follows from quantum theory, is likewise a physical security assumption; it is used, for example, to realize unclonable "quantum money". Besides protocols that use the security and isolation of certain simple hardware components as a trust anchor, this dissertation is in particular concerned with cryptographic protocols for secure multi-party computation that are carried out with the help of physical playing cards. The security assumption is that the cards have indistinguishable backs and that certain shuffle operations can be performed securely. One application of these protocols therefore lies in illustrating cryptography and in enabling secure multi-party computations that can be carried out entirely without computers. One goal in this area of cryptography is to devise protocols that require as few cards as possible, and to prove them optimal in this sense. Depending on the requirements on the running time (finite versus merely finite in expectation) and on the practicability of the shuffle operations used, different lower bounds on the minimum number of cards arise. In this thesis, for every combination of these requirements, an AND protocol (a logical AND of two bits encoded in cards; together with negation and bit copying this suffices to realize arbitrary circuits) is constructed or identified in the literature that uses the minimum number of cards, and it is also proven to be card-minimal.
In total, AND is possible and optimal with four cards (for expected finite running time (Koch, Walzer and Härtel, 2015; Koch, 2018)), five cards (for practicable shuffle operations or finite running time (Koch, Walzer and Härtel, 2015; Koch, 2018)), or six cards (for finite running time combined with practicable shuffle operations (Kastner et al., 2017)). For the necessary structural insights, so-called "state diagrams" with accompanying calculus rules were developed, which provide a graph-based representation of all possible protocol runs and from which the correctness and security of the protocols can be read off directly (Koch, Walzer and Härtel, 2015; Kastner et al., 2017). This calculus has since found wide use in the relevant literature. (With the calculus, proofs of lower bounds on the number of cards become proofs that certain protocol states are unreachable within a certain combinatorial graph structure.) Using the calculus, notions of card-based cryptography were formalized as a C program, and the length-minimality of a card-minimal AND protocol was proven (under certain restrictions) with a software bounded model checking approach (Koch, Schrempp and Kirsten, 2019). In addition, conceptually simple protocols are given for secure multi-party computation in which even the function to be computed is to remain secret (Koch and Walzer, 2018), namely for each of the following models of computation: (universal) circuits, binary decision diagrams, Turing machines, and RAM machines. It is also investigated how card-based protocols can be executed such that the only interaction consists of the other parties monitoring the correct execution. This enables a (weakly interactive) form of program obfuscation in which one party can run a program encoded in cards on its own inputs without learning anything about its inner workings beyond the input/output behavior. This is in general impossible without such physical assumptions. Additionally, security against adversaries that may deviate from the protocol is formalized, and a method is given to mechanically transform a passively secure protocol into an actively secure one under the weakest possible security assumptions (Koch and Walzer, 2017). A further physical security assumption examined in the dissertation is the assumption of primitive, incorruptible hardware components, such as a TAN generator. This enables, for example, secure authentication of a human user via a corrupted terminal without the user having to perform cryptographic computations themselves (such as multiplying large primes). This is solved for the example of withdrawing cash at a corrupted ATM, with the help of a second device assumed to be secure (Achenbach et al., 2019) and with the weakest possible requirements on the available communication channels. Since the given protocol remains secure when run concurrently with arbitrary other protocols (i.e., it is universally composable), was designed modularly, and rests on a credible security assumption, its operation is transparent and comprehensible to humans.
Overall, through the various card-based protocols, calculi and systematized lower-bound proofs on the number of cards, through the results on the secure use of an untrusted terminal, and through placing these in a systematic overview of the various physical assumptions used in cryptography, this thesis makes a substantial contribution to physically based cryptography.
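
To make the card-based setting concrete, the following Python sketch simulates a classic five-card AND protocol in the style of den Boer's "five-card trick" (a relative of the card-minimal protocols cited above, not one of the thesis's own constructions). The encoding, layout and adjacency rule used here are one common textbook formulation; the script checks correctness for all inputs and that the revealed sequence leaks nothing beyond the output.

```python
import random
from itertools import product

# Commitment encoding common in the card-based literature:
# a bit is two face-down cards, club+heart = 0, heart+club = 1.
CLUB, HEART = "C", "H"

def commit(bit):
    return [HEART, CLUB] if bit else [CLUB, HEART]

def five_card_and(a, b, rng=random):
    """One run of a five-card AND protocol (den Boer style).

    Layout: commit(a), a single club, commit(b) with its two cards swapped.
    A uniformly random cyclic cut is applied before the cards are revealed.
    """
    layout = commit(a) + [CLUB] + list(reversed(commit(b)))
    cut = rng.randrange(5)                      # random cut = cyclic rotation
    layout = layout[cut:] + layout[:cut]
    # Output rule: a AND b = 1 iff the three clubs are cyclically adjacent.
    clubs_adjacent = any(
        all(layout[(i + k) % 5] == CLUB for k in range(3)) for i in range(5)
    )
    return layout, int(clubs_adjacent)

# Correctness: the revealed pattern encodes exactly a AND b.
for a, b in product([0, 1], repeat=2):
    _, result = five_card_and(a, b)
    assert result == (a & b), (a, b, result)

# Security sketch: for every input with output 0, the set of sequences that
# can be revealed (over all cuts) is identical, so the reveal leaks nothing
# beyond the output itself.
def reveal_set(a, b):
    layout = commit(a) + [CLUB] + list(reversed(commit(b)))
    return {tuple(layout[c:] + layout[:c]) for c in range(5)}

assert reveal_set(0, 0) == reveal_set(0, 1) == reveal_set(1, 0)
print("five-card AND: correct, and reveals are indistinguishable for output 0")
```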

    Digital ecosystems

    We view Digital Ecosystems as the digital counterparts of biological ecosystems, which are considered to be robust, self-organising and scalable architectures that can automatically solve complex, dynamic problems. This work is therefore concerned with the creation, investigation, and optimisation of Digital Ecosystems, exploiting the self-organising properties of biological ecosystems. First, we created the Digital Ecosystem, a novel optimisation technique inspired by biological ecosystems, in which the optimisation works at two levels: a first optimisation, the migration of agents distributed in a decentralised peer-to-peer network, operating continuously in time; this process feeds a second optimisation, based on evolutionary computing, that operates locally on single peers and is aimed at finding solutions that satisfy locally relevant constraints. We then investigated its self-organising aspects, starting with an extension of the definition of Physical Complexity to include the evolving agent populations of our Digital Ecosystem. Next, we established the stability of evolving agent populations over time by extending the Chli-DeWilde definition of agent stability to include evolutionary dynamics. Further, we evaluated the diversity of the software agents within evolving agent populations relative to the environment provided by the user base. To conclude, we considered alternative augmentations to optimise and accelerate our Digital Ecosystem: we studied the accelerating effect of a clustering catalyst on the evolutionary dynamics of our Digital Ecosystem, through the direct acceleration of the evolutionary processes, and the optimising effect of targeted migration on its ecological dynamics, through the indirect and emergent optimisation of the agent migration patterns. Overall, we have advanced the understanding of creating Digital Ecosystems, the self-organisation that occurs within them, and the optimisation of their Ecosystem-Oriented Architecture.
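
As a rough illustration of the two-level optimisation described above (agent migration across a peer-to-peer network feeding local evolutionary optimisation), here is a minimal island-model sketch in Python. The peers, targets, genome encoding and all parameters are illustrative assumptions, not the thesis's actual Ecosystem-Oriented Architecture.

```python
import random

# Level 1: agents migrate between peers.  Level 2: each peer evolves its own
# population against its own, locally relevant constraints (a bit-string target).

GENOME_LEN, POP_SIZE, GENERATIONS, MIGRATION_RATE = 16, 20, 200, 0.1
rng = random.Random(42)

def fitness(agent, target):
    """Local constraint satisfaction: how many bits match this peer's target."""
    return sum(a == t for a, t in zip(agent, target))

def evolve_step(population, target):
    """One generation of local evolution: tournament selection + bit-flip mutation."""
    def offspring():
        parent = max(rng.sample(population, 3), key=lambda a: fitness(a, target))
        return [b ^ (rng.random() < 1 / GENOME_LEN) for b in parent]
    return [offspring() for _ in range(POP_SIZE)]

# A small peer-to-peer "ecosystem": each peer has its own target (local needs).
targets = [[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(4)]
peers = [[[rng.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
         for _ in range(4)]

for gen in range(GENERATIONS):
    for i, pop in enumerate(peers):
        peers[i] = evolve_step(pop, targets[i])
    # Level-1 dynamics: occasionally migrate a good agent to another peer.
    if rng.random() < MIGRATION_RATE:
        src, dst = rng.sample(range(len(peers)), 2)
        best = max(peers[src], key=lambda a: fitness(a, targets[src]))
        peers[dst][rng.randrange(POP_SIZE)] = list(best)

for i, pop in enumerate(peers):
    print(f"peer {i}: best local fitness =",
          max(fitness(a, targets[i]) for a in pop), "/", GENOME_LEN)
```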

    An Approach for Building Efficient Composable Simulation Models

    Models are becoming invaluable instruments for comprehending and resolving the problems originating from the interactions between humans, mainly their social and economic systems, and the environment. These interactions between the three systems, i.e. the socio-economic-natural systems, lead to evolving systems that are infamous for being extremely complex, having potentially conflicting goals, and involving a considerable amount of uncertainty over how to characterize and manage them. Because models are inextricably linked to the system they attempt to represent, models geared towards addressing complex systems not only need to be functional in terms of their use and expected results; the modeling process in its entirety also needs to be credible, practically feasible, and transparent. In order to realize the full potential of models, the modeling workflow needs to be seen as an integral part of the model itself. Poor modeling practices at any stage of the model-building process, from conceptualization to implementation, can lead to adverse consequences when the model is in operation. This can undermine the role of models as enablers for tackling complex problems and lead to skepticism about their effectiveness. Models need to possess a number of qualities in order to be effective enablers for dealing with complex systems and addressing the issues associated with them. These qualities include being constructed in a way that supports model reuse and interoperability, the ability to integrate data, scales, and algorithms across multiple disciplines, and the ability to handle high degrees of uncertainty. Building models that fulfill these requirements is not an easy endeavor, as it usually entails performing problem description and requirement analysis tasks, assimilating knowledge from different domains, and choosing and integrating appropriate techniques, among other tasks that require a significant amount of time and resources. This study aims to improve the efficiency and rigor of the model-building process by presenting an artifact that facilitates the development of probabilistic models targeting complex socio-economic-environmental systems. This goal is accomplished in three stages. The first stage deconstructs models that attempt to address complex systems. We use the Sustainable Development Goals (SDGs) as a model problem that includes economic, social, and environmental systems. The SDG models are classified and mapped against the desirable characteristics that need to be present in models addressing such a complex issue. The results of stage one are utilized in the second stage to create an Object-Oriented Bayesian Network (OOBN) model that attempts to represent the complexity of the relationships between the SDGs, long-term sustainability, and the resilience of nations. The OOBN model development process is guided by existing modeling best practices, and the model's utility is demonstrated by applying it to three case studies, each relevant to a different policy analysis context. The final stage of this study proposes a Pattern Language (PL) for developing OOBN models. The proposed PL consolidates cross-domain knowledge into a set of patterns with a hierarchical structure, allowing its prospective user to develop complex models. Stage three, in addition to the OOBN PL, presents a comprehensive PL validation framework that is used to validate the proposed PL.
Finally, the OOBN PL is used to rebuild the OOBN model presented in stage two and to address its limitations. The proposed OOBN PL resulted in a more fit-for-purpose OOBN model, indicating the adequacy and usefulness of such an artifact for enabling modelers to build more effective models.
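
A minimal sketch of the object-oriented idea behind OOBN models: a reusable network fragment is defined once, instantiated for several sub-systems, and composed into one discrete Bayesian network that is queried by brute-force enumeration. All variable names and probabilities below are invented for illustration and are unrelated to the SDG model built in the thesis.

```python
from itertools import product

def sector_fragment(name, funding_var):
    """Reusable OOBN-style fragment: Funding -> Capacity -> Outcome (all binary)."""
    return {
        f"{name}_capacity": ([funding_var],
                             lambda f: 0.8 if f else 0.3),   # P(capacity=1 | funding)
        f"{name}_outcome":  ([f"{name}_capacity"],
                             lambda c: 0.9 if c else 0.2),   # P(outcome=1 | capacity)
    }

# Compose the joint network from two instances of the same fragment.
network = {"funding": ([], lambda: 0.5)}
network.update(sector_fragment("health", "funding"))
network.update(sector_fragment("education", "funding"))
network["sustainability"] = (
    ["health_outcome", "education_outcome"],
    lambda h, e: 0.95 if (h and e) else 0.6 if (h or e) else 0.1,
)

def joint_prob(assignment):
    """P(assignment) as the product of the local conditional probabilities."""
    p = 1.0
    for var, (parents, cpt) in network.items():
        p1 = cpt(*(assignment[q] for q in parents))
        p *= p1 if assignment[var] else 1 - p1
    return p

def query(target, evidence):
    """P(target=1 | evidence) by enumeration over all unobserved variables."""
    hidden = [v for v in network if v not in evidence]
    num = den = 0.0
    for values in product([0, 1], repeat=len(hidden)):
        a = dict(zip(hidden, values), **evidence)
        p = joint_prob(a)
        den += p
        num += p * a[target]
    return num / den

print("P(sustainability | funding=1) =", round(query("sustainability", {"funding": 1}), 3))
print("P(sustainability | funding=0) =", round(query("sustainability", {"funding": 0}), 3))
```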

    Deep reinforcement learning methods for automated workflow construction in large scale open distributed systems

    Large-scale distributed and decentralized systems often require access to multiple services, leading to the construction of complex workflows that can be difficult to design manually. This thesis proposes to use Deep Reinforcement Learning (DRL) techniques to create the optimal workflow without human intervention. The proposed hypothesis is based on using DRL algorithms combined with various styles of encoding, such as Vector Symbolic Architectures and Knowledge Graph Embeddings, to handle larger and more complex systems. The approach utilizes both hierarchical and multi-task reinforcement learning. The benefit of using DRL in workflow construction is its ability to adapt to dynamic systems, where services are continuously added or removed and service quality changes over time. Our proposed approach can learn to adapt to changes in the system and find suitable alternatives.
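
To illustrate the underlying formulation of workflow construction as a sequential decision problem, the toy Python sketch below uses plain tabular Q-learning over a hand-coded service catalogue; the deep networks, Vector Symbolic Architectures and Knowledge Graph Embeddings proposed in the thesis are replaced here by invented, illustrative stand-ins.

```python
import random

# States are the data artifact currently available, actions are services that
# transform one artifact into another; the agent learns which chain of
# services (the workflow) reaches the goal artifact.
SERVICES = {                         # service name: (input artifact, output artifact)
    "clean":     ("raw", "clean_data"),
    "aggregate": ("clean_data", "summary"),
    "visualise": ("summary", "report"),
    "archive":   ("clean_data", "archive"),   # valid but off the useful path
}
START, GOAL = "raw", "report"
rng = random.Random(0)
Q = {}                               # (state, service) -> estimated value

def step(state, service):
    src, dst = SERVICES[service]
    if src != state:                 # service not applicable: stay, small penalty
        return state, -1.0
    return dst, (10.0 if dst == GOAL else -0.1)

for episode in range(2000):
    state, eps, alpha, gamma = START, 0.1, 0.5, 0.9
    for _ in range(10):
        if rng.random() < eps:
            service = rng.choice(list(SERVICES))
        else:
            service = max(SERVICES, key=lambda s: Q.get((state, s), 0.0))
        nxt, reward = step(state, service)
        best_next = max(Q.get((nxt, s), 0.0) for s in SERVICES)
        Q[(state, service)] = Q.get((state, service), 0.0) + alpha * (
            reward + gamma * best_next - Q.get((state, service), 0.0))
        state = nxt
        if state == GOAL:
            break

# Greedy rollout of the learned policy = the constructed workflow.
state, workflow = START, []
while state != GOAL and len(workflow) < 5:
    service = max(SERVICES, key=lambda s: Q.get((state, s), 0.0))
    workflow.append(service)
    state, _ = step(state, service)
print("learned workflow:", " -> ".join(workflow))
```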

    Computing multi-scale organizations built through assembly

    The ability to generate and control assembling structures built over many orders of magnitude is an unsolved challenge of engineering and science. Many of the presumed transformational benefits of nanotechnology and robotics are based directly on this capability. There are still significant theoretical difficulties associated with building such systems, though technology is rapidly ensuring that the tools needed are becoming available in chemical, electronic, and robotic domains. In this thesis, a simulated, general-purpose computational prototype is developed that is capable of unlimited assembly under the control of external input, together with an additional prototype that, when assembled into structures, can emulate any other computing device. These devices are entirely finite-state and distributed in operation. Because of these properties, and the unique ability to form structures of unlimited size and computational power, the prototypes represent a novel and useful blueprint on which to base scalable assembly in other domains. A new assembling model of Computational Organization and Regulation over Assembly Levels (CORAL) is also introduced, providing the necessary framework for this investigation. The strict constraints of the CORAL model allow only an assembling unit of a single type and distributed control, and ensure that units cannot be reprogrammed; all reprogramming is done via assembly. Multiple units are instead structured into aggregate computational devices using a procedural or developmental approach. Well-defined comparison of computational power between levels of organization is ensured by the structure of the model. By eliminating ambiguity, the CORAL model provides a pragmatic answer to open questions regarding a framework for hierarchical organization. Finally, a comparison between the designed prototypes and units evolved using evolutionary algorithms is presented as a platform for further research into novel scalable assembly. Evolved units are capable of recursive pairing under the control of a signal, a primitive form of unlimited assembly, and do so via symmetry-breaking operations at each step. The results provide heuristic evidence for a required minimal threshold of complexity, and challenges and limitations of the approach are identified for future evolutionary studies.
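
The following heavily simplified Python sketch illustrates only the "recursive pairing under a control signal" behaviour mentioned above: identical finite-state units pair on each external signal, so assembled structures double in size at every step. It ignores the spatial, distributed and computational aspects of the actual CORAL model and is purely illustrative.

```python
class Unit:
    """A primitive assembling unit; larger structures are nested pairs of units."""
    def __init__(self, parts=None):
        self.parts = parts or []          # empty => primitive unit

    def size(self):
        return 1 if not self.parts else sum(p.size() for p in self.parts)

def apply_signal(structures):
    """On each external signal, adjacent structures pair into the next level."""
    paired = [Unit([left, right])
              for left, right in zip(structures[::2], structures[1::2])]
    if len(structures) % 2:               # an odd one out waits for the next signal
        paired.append(structures[-1])
    return paired

structures = [Unit() for _ in range(8)]   # start from 8 primitive units
for step in range(3):                     # three external signals
    structures = apply_signal(structures)
    print(f"after signal {step + 1}: sizes =", [s.size() for s in structures])
```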

    A Reinforcement Learning Quality of Service Negotiation Framework For IoT Middleware

    The Internet of Things (IoT) ecosystem is characterised by heterogeneous devices dynamically interacting with each other to perform a specific task, often without human intervention. This interaction typically occurs in a service-oriented manner and is facilitated by an IoT middleware. The service provision paradigm enables the functionalities of IoT devices to be provided as IoT services that perform actuation tasks in safety-critical systems such as autonomous connected vehicle systems and industrial control systems. As IoT systems are increasingly deployed into environments characterised by continuous change and uncertainty, there have been growing concerns about how to resolve the Quality of Service (QoS) contentions between heterogeneous devices with conflicting preferences in order to guarantee the execution of mission-critical actuation tasks. As IoT devices with different QoS constraints, acting as IoT service providers, spontaneously interact with IoT service consumers with varied QoS requirements, it becomes essential to find the best way to establish and manage the QoS agreement in the middleware, as a compromise in the QoS could lead to negative consequences. This thesis presents a QoS negotiation framework, IoTQoSystem, for IoT service-oriented middleware. The QoS framework is underpinned by a negotiation process that is modelled as a Markov Decision Process (MDP). A model-based Reinforcement Learning negotiation strategy is proposed for generating an acceptable QoS solution in dynamic, multilateral and multi-parameter scenarios. A microservice-oriented negotiation architecture is developed that combines negotiation, monitoring and forecasting to provide a self-managing mechanism for ensuring the successful execution of actuation tasks in an IoT environment. Using a case study, the developed QoS negotiation framework was evaluated with real-world data sets under different negotiation scenarios to illustrate its scalability, reliability and performance.
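
As a toy illustration of modelling negotiation as an MDP, the Python sketch below solves a single-issue offer/concede negotiation with a deadline by finite-horizon value iteration (backward induction). The acceptance probabilities and utilities are invented assumptions; the IoTQoSystem framework itself learns a model and handles multilateral, multi-parameter negotiations, which is not reproduced here.

```python
# State: (rounds_left, gap between the provider's offer and the consumer's demand).
# Actions: "hold" the current offer (the consumer may accept) or "concede" one step.

MAX_ROUNDS, MAX_GAP = 5, 4            # deadline and initial distance between offers

def accept_prob(gap):
    """Chance the consumer accepts the standing offer at this gap."""
    return 1.0 - 0.2 * gap            # closer offers are accepted more often

def provider_utility(gap):
    """Provider prefers agreements that stay close to its own preference."""
    return 1.0 + gap                  # keeping a larger gap = better terms if accepted

FAILURE = -2.0                        # penalty if the deadline passes with no deal

# Finite-horizon value iteration over (rounds_left, gap).
V = {(0, g): FAILURE for g in range(MAX_GAP + 1)}
policy = {}
for r in range(1, MAX_ROUNDS + 1):
    for g in range(MAX_GAP + 1):
        p = accept_prob(g)
        hold = p * provider_utility(g) + (1 - p) * V[(r - 1, g)]
        # Conceding is modelled as an immediate move to a smaller gap this round.
        concede = V[(r, g - 1)] if g > 0 else hold
        V[(r, g)], policy[(r, g)] = max((hold, "hold"), (concede, "concede"))

for g in range(MAX_GAP + 1):
    print(f"rounds_left={MAX_ROUNDS}, gap={g}: best action = {policy[(MAX_ROUNDS, g)]},"
          f" expected utility = {V[(MAX_ROUNDS, g)]:.2f}")
```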

    Modelling an End-to-End Supply Chain System Using Simulation

    Supply chains (SCs) are an important part of today’s world. Many businesses operate in the global marketplace, where individual companies are no longer treated as separate entities but as a vital part of an end-to-end supply chain (E2E-SC) system. Key challenges and issues in managing E2E-SCs are duly attributed to their extended, complex and systemic nature. In an era of uncertainty, risk and market volatility, decision makers are searching for modelling techniques that allow them to understand, control, design or evaluate their E2E-SC. This research aims to support academics and decision makers by defining a generic simulation modelling approach that can be used for any E2E-SC. This study considers the challenges and issues associated with modelling complex E2E-SC systems using simulation and underlines the key requirements for modelling an E2E-SC. A systematic literature review approach is applied to provide a twofold theoretical contribution: [a] an insightful review of various contributions to knowledge surrounding simulation methods within the literature on end-to-end supply chains, and [b] a conceptual framework that suggests the generic elements required for modelling such systems using simulation. The research adopts a simulation methodology and develops a generic guide to the E2E-SC simulation model creation process. It is a mindful inquiry into the implications of the generic elements from the proposed conceptual framework for the simulation model development process. The conceptual framework is validated with industry experts, and insightful remarks are drawn. In conclusion, it is acknowledged that modelling an E2E-SC system using simulation is a challenge, and this area is not fully exploited by business. A guide to E2E-SC simulation model development is a theoretical and practical contribution of this research, much sought after by businesses, which continuously tackle day-to-day issues and challenges and hence often lack the resources and time to focus on modelling. The conceptual framework captures the generic elements of the E2E-SC system; however, it also highlights multiple challenges around the simulation model development process, such as technical constraints and the near impracticability of a simulation model that truly reflects an E2E-SC system. The significant contribution of this thesis is the evaluation of the proposed generic guide to E2E-SC simulation model development, which provides the architecture for better strategic supply and demand balancing, as new products, price fluctuations, and options for physical network changes can be dynamically incorporated into the model. The research provides an insightful journey through key challenges and issues when modelling E2E-SC systems and contributes key recommendations for mindful inquiries into E2E-SC simulation models.
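
As a small example of the kind of model such a guide targets, the following Python sketch is a minimal discrete-time simulation of one retail echelon, with an upstream lead time standing in for the rest of the chain. The demand distribution, lead time and order-up-to policy are illustrative assumptions rather than elements prescribed by the proposed conceptual framework.

```python
import random

# One retailer facing random weekly demand, replenished after a fixed lead time
# that represents the upstream factory/supplier part of the end-to-end chain.
rng = random.Random(1)
WEEKS, LEAD_TIME, ORDER_UP_TO = 52, 2, 60

retailer_stock, pipeline = 40, []          # pipeline: (arrival_week, quantity)
served = missed = 0

for week in range(WEEKS):
    # Receive shipments that have completed the upstream lead time.
    arrived = sum(q for t, q in pipeline if t == week)
    pipeline = [(t, q) for t, q in pipeline if t > week]
    retailer_stock += arrived

    # Customer demand at the retailer.
    demand = rng.randint(5, 25)
    sold = min(demand, retailer_stock)
    retailer_stock -= sold
    served += sold
    missed += demand - sold

    # Order-up-to policy: restore on-hand stock plus pipeline to the target level.
    on_order = sum(q for _, q in pipeline)
    order = max(0, ORDER_UP_TO - retailer_stock - on_order)
    if order:
        pipeline.append((week + LEAD_TIME, order))

print(f"service level: {served / (served + missed):.1%}, "
      f"ending stock: {retailer_stock}")
```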