48 research outputs found

    A Prescription for Partial Synchrony

    Get PDF
    Algorithms in message-passing distributed systems often require partial synchrony to tolerate crash failures. Informally, partial synchrony refers to systems where timing bounds on communication and computation may exist, but knowledge of such bounds is limited. Traditionally, the foundation for the theory of partial synchrony has been real time: a time base measured by counting events external to the system, like the vibrations of Cesium atoms or piezoelectric crystals. Unfortunately, algorithms that are correct relative to many real-time-based models of partial synchrony may not behave correctly in empirical distributed systems. For example, a set of popular theoretical models, which we call M_*, assumes (eventual) upper bounds on message delay and relative process speeds, regardless of message size and absolute process speeds. Empirical systems with bounded channel capacity and bandwidth cannot realize such assumptions either natively or through algorithmic constructions. Consequently, empirical deployment of the many M_*-based algorithms risks anomalous behavior. As a result, we argue that real time is the wrong basis for such a theory. Instead, the appropriate foundation for partial synchrony is fairness: a time base measured by counting events internal to the system, like the steps executed by the processes. By way of example, we redefine the M_* models with fairness-based bounds and provide algorithmic techniques to implement fairness-based M_* models on a significant subset of empirical systems. The proposed techniques use failure detectors — system services that provide hints about process crashes — as intermediaries that preserve the fairness constraints native to empirical systems. In effect, algorithms that are correct in M_* models are now proved correct in such empirical systems as well. Demonstrating our results requires solving three open problems. (1) We propose the first unified mathematical framework, based on Timed I/O Automata, to specify empirical systems, partially synchronous systems, and the algorithms that execute within them. (2) We show that the crash tolerance capabilities of popular distributed systems can be denominated exclusively through fairness constraints. (3) We specify exemplar system models that identify the set of weakest system models to implement popular failure detectors.
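
    To illustrate the flavor of a fairness-based time base (a minimal sketch, not the dissertation's construction), the following Python snippet shows a heartbeat failure detector that suspects a peer after the observer has taken too many of its own steps without delivering a heartbeat from that peer, counting internal events instead of consulting a real-time clock.

```python
# Minimal illustrative sketch (not the dissertation's construction): an
# eventually-perfect-style failure detector that counts the observer's own
# steps between heartbeats instead of reading a real-time clock.

class StepCountingFailureDetector:
    def __init__(self, peers, initial_bound=8):
        # Steps the observer tolerates between two heartbeats of a peer.
        self.bound = {p: initial_bound for p in peers}
        self.steps_since_heartbeat = {p: 0 for p in peers}
        self.suspected = set()

    def on_heartbeat(self, peer):
        """Called when a heartbeat message from `peer` is delivered."""
        self.steps_since_heartbeat[peer] = 0
        if peer in self.suspected:
            # False suspicion: forgive the peer and relax its step bound.
            self.suspected.discard(peer)
            self.bound[peer] *= 2

    def on_local_step(self):
        """Called once per step of the observing process."""
        for peer in self.steps_since_heartbeat:
            self.steps_since_heartbeat[peer] += 1
            if self.steps_since_heartbeat[peer] > self.bound[peer]:
                self.suspected.add(peer)
```

    If a fairness constraint bounds how many observer steps can pass between consecutive heartbeat deliveries from a live peer, every crashed peer is eventually suspected forever, while the doubling bound lets false suspicions die out.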

    Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles

    Get PDF
    With the further development of automated driving, functional performance increases, resulting in the need for new and comprehensive testing concepts. This doctoral work aims to enable the transition from quantitative mileage to qualitative test coverage by aggregating the results of both knowledge-based and data-driven test platforms. The validity of the test domain can thus be extended cost-effectively throughout the software development process to achieve meaningful test termination criteria.

    Measurable Safety of Automated Driving Functions in Commercial Motor Vehicles - Technological and Methodical Approaches

    Get PDF
    Driver assistance systems and automated driving contribute substantially to improving the road safety of motor vehicles, in particular commercial vehicles. As automated driving develops further, functional performance increases, which creates requirements for new, holistic testing concepts. Safeguarding higher levels of automated driving functions calls for novel verification and validation methods. The goal of this work is to enable the transition from a quantitative mileage count to a qualitative test coverage by aggregating test results from knowledge-based and data-driven test platforms. The adaptive test coverage thus aims at a trade-off between efficiency and effectiveness criteria for validating automated driving functions during the product development of commercial vehicles. This work comprises the design and implementation of a modular framework for customer-oriented validation of automated driving functions at a justifiable effort. Starting from conflict management for the requirements of the test strategy, highly automated test approaches are developed. Each test approach is integrated with its respective test objectives to form the basis of a context-driven test concept. The main contributions of this work address four focus areas:
    * First, a co-simulation approach is presented with which the sensor inputs of a hardware-in-the-loop test bench can be simulated and/or stimulated using synthetic driving scenarios. The presented setup offers a phenomenological modeling approach to balance model granularity against the computational cost of real-time simulation. This method is used for the modular integration of simulation components, such as traffic simulation and vehicle dynamics, to model the relevant phenomena in critical driving scenarios.
    * Second, a measurement and data analysis concept for the worldwide validation of automated driving functions is presented. It scales to recording vehicle sensor and/or environment sensor data of specific driving events on the one hand, and continuous data for statistical validation and software development on the other. Measurement data from country-specific field trials are recorded and stored centrally in a cloud database.
    * Third, an ontology-based approach is described for integrating a complementary knowledge source from field observations into a knowledge management system. Recordings are grouped by means of an event-based time-series analysis with hierarchical clustering and normalized cross-correlation. From each extracted cluster and its parameter space, the occurrence probability of every logical scenario and the probability distributions of its associated parameters can be derived. Through the correlation analysis of synthetic and naturalistic driving scenarios, the requirements-based test coverage is extended adaptively and systematically with executable scenario specifications.
    * Finally, a prospective risk assessment is carried out as an inverted confidence level of measurable safety using sensitivity and reliability analyses. The failure region can be identified in the parameter space in order to predict the failure probability of each extracted logical scenario with different sampling methods, such as Monte Carlo simulation and adaptive importance sampling. The estimated probability of a safety violation for each clustered logical scenario then yields a measurable safety prediction.
    The presented framework makes it possible to close the gap between knowledge-based and data-driven test platforms and thereby to consistently extend the knowledge base for covering the operational design domains. In summary, the results show the benefits and challenges of the developed framework for measurable safety through a confidence measure of the risk assessment. This enables a cost-effective extension of the validity of the test domain throughout the software development process in order to reach the required test termination criteria.
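
    As a rough illustration of the final sampling step (a sketch only, with hypothetical parameters and a placeholder safety criterion rather than the framework's actual models), the following Python snippet estimates the probability of a safety violation for one logical scenario by plain Monte Carlo sampling over its parameter distributions.

```python
# Illustrative sketch only: plain Monte Carlo estimate of the probability of a
# safety violation for a single logical scenario. Parameter names and the
# safety criterion are hypothetical placeholders.
import random

def sample_scenario_parameters():
    # Hypothetical parameter distributions of a cut-in scenario.
    return {
        "ego_speed_mps": random.gauss(22.0, 3.0),
        "cut_in_gap_m": random.uniform(5.0, 40.0),
        "cut_in_speed_mps": random.gauss(18.0, 4.0),
    }

def is_safety_violation(params):
    # Placeholder criterion: time gap to the cutting-in vehicle below 0.5 s.
    closing_speed = max(params["ego_speed_mps"] - params["cut_in_speed_mps"], 1e-6)
    time_gap_s = params["cut_in_gap_m"] / closing_speed
    return time_gap_s < 0.5

def estimate_failure_probability(num_samples=100_000):
    failures = sum(is_safety_violation(sample_scenario_parameters())
                   for _ in range(num_samples))
    return failures / num_samples

if __name__ == "__main__":
    print(f"Estimated failure probability: {estimate_failure_probability():.4f}")
```

    For rare failures, adaptive importance sampling, as mentioned in the abstract, concentrates samples near the failure region and so reduces the number of simulations required for a given confidence.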

    Grand Pwning Unit:Accelerating Microarchitectural Attacks with the GPU

    Get PDF
    Dark silicon is pushing processor vendors to add more specialized units, such as accelerators, to commodity processor chips. Unfortunately, this is done without enough care for security. In this paper we look at the security implications of integrated Graphics Processing Units (GPUs) found in almost all mobile processors. We demonstrate that GPUs, already widely employed to accelerate a variety of benign applications such as image rendering, can also be used to 'accelerate' microarchitectural attacks (i.e., make them more effective) on commodity platforms. In particular, we show that an attacker can build all the necessary primitives for performing effective GPU-based microarchitectural attacks and that these primitives are all exposed to the web through standardized browser extensions, allowing side-channel and Rowhammer attacks from JavaScript. These attacks bypass state-of-the-art mitigations and advance existing CPU-based attacks: we show the first end-to-end microarchitectural compromise of a browser running on a mobile phone in under two minutes by orchestrating our GPU primitives. While powerful, these GPU primitives are not easy to implement due to undocumented hardware features. We describe novel reverse engineering techniques for peeking into the previously unknown cache architecture and replacement policy of the Adreno 330, an integrated GPU found in many common mobile platforms. This information is necessary when building shader programs implementing our GPU primitives. We conclude by discussing mitigations against GPU-enabled attackers.

    Assessing the Efficacy of Test Selection, Prioritization, and Batching Strategies in the Presence of Flaky Tests and Parallel Execution at Scale

    Get PDF
    Effective software testing is essential for successful software releases, and numerous test optimization techniques have been proposed to enhance this process. However, existing research concentrates primarily on small datasets, resulting in solutions that are impractical for large-scale projects. Flaky tests, which significantly affect test optimization results, are often overlooked, and unrealistic approaches are employed to identify them. Furthermore, there is limited research on the impact of parallelization on test optimization techniques, particularly batching, and a lack of comprehensive comparisons among different techniques, including batching, which is an effective but often neglected approach. To address these research gaps, we analyzed the Chrome release process and collected a dataset of 276 million test results. In addition to evaluating established test optimization algorithms, we introduced two new algorithms. We also examined the impact of parallelism by varying the number of machines used. Our assessment covered various metrics, including feedback time, failing-test detection speed, test execution time, and machine utilization. Our investigation reveals that a significant portion of failures in testing is attributed to flaky tests, resulting in an inflated performance of test prioritization algorithms. Additionally, we observed that test parallelization has a non-linear impact on feedback time, as delays accumulate throughout the entire test queue. When it comes to optimizing feedback time, batching algorithms with adaptive batch sizes prove to be more effective than those with constant batch sizes, achieving execution reductions of up to 91%. Furthermore, our findings indicate that the batching technique is on par with the test selection algorithm in terms of effectiveness, while maintaining the advantage of not missing any failures. Practitioners are encouraged to adopt adaptive batching techniques to minimize the number of machines required for testing and reduce feedback time, while effectively managing flaky tests. Analyzing historical data is crucial for determining the threshold at which adding more machines has minimal impact on feedback time, enabling optimization of testing efficiency and resource utilization.
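
    As a rough illustration of batching (a sketch only, not the exact algorithms evaluated in the paper), the following Python snippet runs tests in batches and bisects a batch only when it fails, so failing tests are isolated without executing every test individually; run_batch is a hypothetical stand-in for a real test runner.

```python
# Illustrative sketch of batching with bisection on failure; not the paper's
# exact algorithm. `run_batch` stands in for a real test runner that returns
# True when every test in the given batch passes.

def find_failing_tests(tests, run_batch):
    """Return the failing tests, executing whole batches and splitting only
    the batches that fail."""
    if not tests:
        return []
    if run_batch(tests):            # one execution for the whole batch
        return []
    if len(tests) == 1:             # a failing batch of one is a failing test
        return list(tests)
    mid = len(tests) // 2
    return (find_failing_tests(tests[:mid], run_batch) +
            find_failing_tests(tests[mid:], run_batch))
```

    When failures are rare, the number of batch executions grows roughly with the number of failing tests and the logarithm of the batch size rather than with the total number of tests; flaky failures, however, cause unnecessary splitting, which is one reason the paper studies batching together with flaky-test handling.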

    Combatting Advanced Persistent Threat via Causality Inference and Program Analysis

    Get PDF
    Cyber attackers are becoming more and more sophisticated. In particular, Advanced Persistent Threat (APT) is a new class of attack that targets a specific organization and compromises its systems over a long time without being detected. Over the years, we have seen notorious examples of APTs, including Stuxnet, which disrupted Iranian nuclear centrifuges, and data breaches affecting millions of users. Investigating an APT is challenging because it unfolds over an extended period of time and the attack process is highly sophisticated and stealthy. Preventing APTs is also difficult due to ever-expanding attack vectors. In this dissertation, we present proposals for dealing with these challenges. First, for attack investigation, we present LDX, which conducts precise counterfactual causality inference to determine dependencies between system calls (e.g., between input and output system calls) and allows investigators to determine the origin of an attack (e.g., receiving a spam email), trace its propagation path, and assess its consequences. LDX is four times more accurate and two orders of magnitude faster than state-of-the-art taint analysis techniques. We then present MCI, a practical model-based causality inference system that achieves precise and accurate causality inference without requiring any modification or instrumentation of end-user systems. Second, we present a general protection system against a wide spectrum of attack vectors and methods. Specifically, we present A2C, which prevents a wide range of attacks by randomizing inputs such that any malicious payloads contained in the inputs are corrupted. The protection provided by A2C is both general (e.g., against various attack vectors) and practical (7% runtime overhead).
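
    To picture the kind of dependency analysis an investigator performs on top of such causality output, here is a small hypothetical sketch in Python (it is not LDX's counterfactual inference mechanism): given dependency edges between system-call events, a backward traversal from a detection point recovers the events the attack originated from.

```python
# Hypothetical sketch of provenance-style backtracking over a dependency graph
# of system-call events; this is not LDX's counterfactual inference mechanism.
from collections import deque

def backward_slice(edges, detection_event):
    """edges: dict mapping an event to the set of events it depends on
    (e.g., a file write depends on an earlier socket read).
    Returns every event the detection point transitively depends on."""
    origin = set()
    queue = deque([detection_event])
    while queue:
        event = queue.popleft()
        for dependency in edges.get(event, ()):
            if dependency not in origin:
                origin.add(dependency)
                queue.append(dependency)
    return origin

# Example: a suspicious execution traces back to a mail-server download.
deps = {
    "write:/tmp/payload": {"read:socket:mail-server"},
    "exec:/tmp/payload":  {"write:/tmp/payload"},
}
print(backward_slice(deps, "exec:/tmp/payload"))
```

    A forward traversal over the reversed edges would, symmetrically, assess the consequences of the attack.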

    A Survey on Data Plane Programming with P4: Fundamentals, Advances, and Applied Research

    Full text link
    With traditional networking, users can configure control plane protocols to match the specific network configuration, but they cannot fundamentally change the underlying algorithms. With SDN, users may provide their own control plane, which can control network devices through their data plane APIs. Programmable data planes allow users to define their own data plane algorithms for network devices, including appropriate data plane APIs that may be leveraged by user-defined SDN control. Thus, programmable data planes and SDN offer great flexibility for network customization, be it for specialized commercial appliances, e.g., in 5G or data center networks, or for rapid prototyping in industrial and academic research. Programming Protocol-independent Packet Processors (P4) has emerged as the currently most widespread abstraction, programming language, and concept for data plane programming. It is developed and standardized by an open community, and it is supported by various software and hardware platforms. In this paper, we survey the literature from 2015 to 2020 on data plane programming with P4. Our survey covers 497 references, of which 367 are scientific publications. We organize our work into two parts. In the first part, we give an overview of data plane programming models, the programming language, architectures, compilers, targets, and data plane APIs. We also consider research efforts to advance P4 technology. In the second part, we analyze a large body of literature on P4-based applied research. We categorize 241 research papers into different application domains, summarize their contributions, and extract prototypes, target platforms, and source code availability.
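
    To make the match-action abstraction behind P4 concrete, the following Python sketch models a single table that maps a header field to an action; this is a conceptual illustration only, not P4 code, and the field and action names are invented for the example.

```python
# Conceptual model of a match-action table in Python; real P4 programs declare
# parsers, tables, actions, and control flow in the P4 language and are
# compiled for a specific target. Names here are illustrative only.

def drop(packet):
    return None

def forward(port):
    def action(packet):
        packet["egress_port"] = port
        return packet
    return action

class MatchActionTable:
    def __init__(self, key_field, default_action=drop):
        self.key_field = key_field
        self.entries = {}                 # populated by the control plane
        self.default_action = default_action

    def add_entry(self, key, action):
        self.entries[key] = action

    def apply(self, packet):
        action = self.entries.get(packet.get(self.key_field), self.default_action)
        return action(packet)

# A tiny pipeline: exact-match forwarding on the destination address.
ipv4_exact = MatchActionTable(key_field="dst_addr")
ipv4_exact.add_entry("10.0.0.2", forward(port=3))

print(ipv4_exact.apply({"dst_addr": "10.0.0.2"}))   # forwarded out port 3
print(ipv4_exact.apply({"dst_addr": "192.0.2.9"}))  # default action: dropped
```

    In a real P4 program, the parser, tables, actions, and control flow are declared in the P4 language and compiled for a specific target, while the control plane installs table entries at run time through the data plane API.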

    E-business industry developments - 2002/03; Audit risk alerts

    Get PDF