
    Code-Reuse Attacks and Defenses

    Exploitation of memory corruption vulnerabilities in widely used software has been a threat for almost three decades and no end seems to be in sight. In particular, code-reuse techniques such as return-oriented programming offer a robust attack technique that is extensively used to exploit memory corruption vulnerabilities in modern software programs (e.g. web browsers or document viewers). Whereas conventional control-flow attacks (runtime exploits) require the injection of malicious code, code-reuse attacks leverage code that is already present in the address space of an application to undermine the security model of data execution prevention (DEP). In addition, code-reuse attacks in conjunction with memory disclosure attack techniques circumvent the widely applied memory protection model of address space layout randomization (ASLR). To counter this ingenious attack strategy, several proposals for enforcement of control-flow integrity (CFI) and fine-grained code randomization have emerged. In this dissertation, we explore the limitations of existing defenses against code-reuse attacks. In particular, we demonstrate that various coarse-grained CFI solutions can be effectively undermined, even under weak adversarial assumptions. Moreover, we explore a new return-oriented programming attack technique that is solely based on indirect jump and call instructions to evade detection from defenses that perform integrity checks for return addresses. To tackle the limitations of existing defenses, this dissertation introduces the design and implementation of several new countermeasures. First, we present a generic and fine-grained CFI framework for mobile devices targeting ARM-based platforms. This framework preserves static code signatures by instrumenting mobile applications on-the-fly in memory. Second, we tackle the performance and security limitations of existing CFI defenses by introducing hardware-assisted CFI for embedded devices. To this end, we present a CFI-based hardware implementation for Intel Siskiyou Peak using dedicated CFI machine instructions. Lastly, we explore fine-grained code randomization techniques
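
    The return-address integrity checks that the jump/call-based ROP variant above is designed to evade can be illustrated with a toy shadow stack. The following is a minimal conceptual sketch, assuming instrumentation hooks on every call and return instruction; it is an illustration of the general defence idea, not code from the dissertation.

```python
# Toy shadow stack: the classic return-address integrity check that
# jump/call-only code-reuse attacks are built to sidestep.
# Illustrative sketch only, not the dissertation's implementation.

shadow_stack = []

def on_call(return_address: int) -> None:
    """Instrumentation hook executed on every call instruction."""
    shadow_stack.append(return_address)

def on_return(return_address: int) -> None:
    """Instrumentation hook executed on every return instruction."""
    expected = shadow_stack.pop()
    if return_address != expected:
        raise RuntimeError(
            f"CFI violation: return to {return_address:#x}, "
            f"expected {expected:#x}"
        )

# A legitimate call/return pair passes the check ...
on_call(0x401234)
on_return(0x401234)

# ... while a corrupted return address is caught.
on_call(0x401234)
try:
    on_return(0xdeadbeef)  # attacker-controlled target
except RuntimeError as err:
    print(err)
```

    An attack built solely from indirect jumps and calls never triggers on_return, which is exactly why it escapes defences of this shape.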

    Mitigating the imposition of malicious behaviour on code

    If vulnerabilities in software allow the behaviour of a program to be altered, traditional virus scanners stand no chance, because no malicious program ever enters the system. Even worse, the same potentially vulnerable software runs on millions of computers, smartphones, and tablets out there. One program flaw allows malicious behaviour to be imposed on millions of instances at once. This thesis presents four novel approaches that all mitigate or prevent this type of imposition of malicious behaviour on otherwise benign code. Since attacks adapted while this thesis was being written, the countermeasures presented are tailored to different stages of the still ongoing cat-and-mouse game, and each technique reflects the state of the art in defences at its time.

    Benign programs that can be turned into malicious ones pose a greater threat than programs that are malicious from the outset. While malicious programs at least have the clear intent of stealing or manipulating data, a benign program as a rule provides some benefit to its user. But if a programming error can suddenly change a program's behaviour, this goes entirely unnoticed by traditional virus scanners, because they only detect programs that are malicious per se. In addition, software is often widely distributed and used in identical form on millions of computers, easy prey for exploiting a security flaw a million times over. As early as 1972, researchers showed that improperly processed inputs to a program can change its behaviour arbitrarily. Programming errors, such as overrunning a buffer, could overwrite adjacent data. The Morris worm of 1988 demonstrated that such buffer overflows can be exploited deliberately to influence a program's behaviour at will. According to the MITRE Common Weakness Enumeration (CWE), this class of attack was still among the most widespread in 2015: these so-called runtime attacks occupy rank 2 ("OS Command Injection") and rank 3 ("classic buffer overflow") in the CWE ranking. They enable attackers to control inputs, alter computations, or forge outputs, for example in order to modify online-banking transactions, install spam e-mail servers in the background, or extort victims by encrypting valuable files. This dissertation presents four novel approaches, each of which prevents malicious behaviour changes of otherwise benign software in a different way. Since the attacks themselves improved while this dissertation was being written, the candidate solutions described here represent an iterative process, refined step by step in a continual cat-and-mouse game over the course of this dissertation
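
    The buffer-overflow mechanics the abstract refers to, an unchecked write clobbering adjacent data such as a saved return address, can be sketched on a toy stack frame. The 8-byte slot layout and the addresses below are invented purely for illustration.

```python
# A simplified stack frame: an 8-byte buffer followed directly by the
# saved return address, modelling the adjacency the abstract describes.
# Illustrative sketch only; layout and addresses are made up.

frame = bytearray(8) + (0x00401000).to_bytes(8, "little")

def unsafe_copy(frame: bytearray, data: bytes) -> None:
    """Copies `data` into the buffer with no bounds check (the bug)."""
    frame[:len(data)] = data  # bytes past offset 8 clobber the return address

# 8 bytes fill the buffer; the next 8 overwrite the saved return address.
unsafe_copy(frame, b"A" * 8 + (0xdeadbeef).to_bytes(8, "little"))

saved_return = int.from_bytes(frame[8:16], "little")
print(hex(saved_return))  # 0xdeadbeef: control flow is now attacker-chosen
```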

    Protecting Systems From Exploits Using Language-Theoretic Security

    Any computer program processing input from the user or network must validate the input. Input-handling vulnerabilities occur in programs when the software component responsible for filtering malicious input, the parser, does not perform validation adequately. Consequently, parsers are among the most targeted components, since they defend the rest of the program from malicious input. This thesis adopts the Language-Theoretic Security (LangSec) principle to understand what tools and research are needed to prevent exploits that target parsers. LangSec proposes specifying the syntactic structure of the input format as a formal grammar. We then build a recognizer for this formal grammar to validate any input before the rest of the program acts on it. To ensure that these recognizers faithfully represent the data format, programmers often rely on parser generator or parser combinator tools to build the parsers. This thesis propels several sub-fields in LangSec by proposing new techniques to find bugs in implementations, novel categorizations of vulnerabilities, and new parsing algorithms and tools to handle practical data formats. To this end, this thesis comprises five parts that tackle various tenets of LangSec. First, I categorize various input-handling vulnerabilities and exploits using two frameworks: the mismorphisms framework, which helps us reason about the root causes leading to various vulnerabilities, and a categorization framework we built around various LangSec anti-patterns, such as parser differentials and insufficient input validation. We then built a catalog of more than 30 popular vulnerabilities to demonstrate the two categorization frameworks. Second, I built parsers for various Internet of Things and power-grid network protocols and for the iccMAX file format using parser combinator libraries. The parsers I built for power-grid protocols were deployed and tested on power-grid substation networks as an intrusion detection tool. The parser I built for the iccMAX file format led to several corrections and modifications to the iccMAX specifications and reference implementations. Third, I present SPARTA, a novel tool I built that generates Rust code to type-check Portable Document Format (PDF) files. The type checker I helped build strictly enforces the constraints in the PDF specification to find deviations. Our checker has contributed to at least four significant clarifications and corrections to the PDF 2.0 specification and to various open-source PDF tools. In addition to the checker, we also built a practical tool, PDFFixer, to dynamically patch type errors in PDF files. Fourth, I present ParseSmith, a tool for building verified parsers for real-world data formats. Most parsing tools available for data formats are insufficient for practical formats or have not been verified for correctness. I built a verified parsing tool in Dafny that draws on ideas from attribute grammars, data-dependent grammars, and parsing expression grammars to tackle constructs commonly seen in network formats. I prove that our parsers run in linear time and always terminate for well-formed grammars. Finally, I provide the earliest systematic comparison of various data description languages (DDLs) and their parser-generation tools. DDLs are used to describe and parse commonly used data formats, such as image formats. To derive the comparison metrics, I conducted an expert-elicitation qualitative study. I also systematically compare these DDLs on the sample data descriptions shipped with them, checking for correctness and resilience
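
    The core LangSec recipe, specify the input format as a formal grammar and run a recognizer over the raw input before the rest of the program touches it, can be sketched with a few hand-rolled parser combinators. The tiny combinators below are hypothetical stand-ins for the combinator libraries the thesis uses, and the toy grammar is invented for illustration.

```python
# Minimal parser combinators: each parser takes (input, offset) and
# returns the new offset on success or None on failure.
from typing import Callable, Optional

Parser = Callable[[bytes, int], Optional[int]]

def byte(expected: int) -> Parser:
    def p(data: bytes, i: int) -> Optional[int]:
        return i + 1 if i < len(data) and data[i] == expected else None
    return p

def in_range(lo: int, hi: int) -> Parser:
    def p(data: bytes, i: int) -> Optional[int]:
        return i + 1 if i < len(data) and lo <= data[i] <= hi else None
    return p

def seq(*parsers: Parser) -> Parser:
    def p(data: bytes, i: int) -> Optional[int]:
        for q in parsers:
            nxt = q(data, i)
            if nxt is None:
                return None
            i = nxt
        return i
    return p

def many(q: Parser) -> Parser:
    def p(data: bytes, i: int) -> Optional[int]:
        while (nxt := q(data, i)) is not None:
            i = nxt
        return i
    return p

# Toy grammar: message ::= 'H' digit* '\n'
message = seq(byte(ord("H")),
              many(in_range(ord("0"), ord("9"))),
              byte(ord("\n")))

def recognize(data: bytes) -> bool:
    """Accept only inputs the grammar derives, consuming all bytes."""
    return message(data, 0) == len(data)

print(recognize(b"H123\n"))     # True: well-formed, safe to hand onward
print(recognize(b"H12\xff\n"))  # False: rejected at the input boundary
```

    The design point is that rejection happens before any application logic runs, so malformed input never reaches code that might mishandle it.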

    Evaluation with uncertainty

    Experimental uncertainty arises as a consequence of (1) bias (systematic error) and (2) variance in measurements. Popular evaluation techniques only account for the variance due to sampling of experimental units, and assume the other sources of uncertainty can be ignored. For example, standard information retrieval (IR) and classifier evaluation consider only the uncertainty due to sampling of topics (queries) and sampling of training/test datasets, respectively. However, incomplete relevance judgements, assessor disagreement, non-deterministic systems, and measurement bias can also cause uncertainty in these experiments. In this thesis, the impact of these other sources of uncertainty on evaluating IR and classification experiments is investigated. The uncertainty due to (1) incomplete relevance judgements in IR test collections, (2) non-determinism in IR systems and classifiers, and (3) high variance of classifiers is analysed using case studies from distributed information retrieval and information security. The thesis illustrates the importance of reducing, and accurately accounting for, uncertainty when evaluating complex IR and classifier systems. Novel techniques are introduced to (1) reduce uncertainty due to test-collection bias in IR evaluation and high classifier variance (overfitting) in detecting drive-by download attacks, (2) account for multidimensional variance due to sampling of IR system instances from non-deterministic IR systems in addition to sampling of topics, and (3) account for repeated measurements due to non-deterministic classification algorithms
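
    The point about non-deterministic systems can be made concrete: a single run hides run-to-run variance, so measurements are repeated over both system instances and topics and then aggregated. The sketch below uses synthetic scores and a deliberately simplified pooled confidence interval (it does not separate the two variance components, as the thesis's techniques do).

```python
# Repeated-measurement sketch for a non-deterministic system.
# Scores are synthetic; the pooled 95% CI is a simplification.
import random
import statistics

random.seed(0)

def measure(system_instance: int, topic: int) -> float:
    """Stand-in for one evaluation run of a non-deterministic IR system."""
    return 0.6 + 0.05 * random.gauss(0, 1)  # synthetic effectiveness score

# Sample both dimensions: system instances (non-determinism) and topics.
scores = [measure(s, t) for s in range(10) for t in range(50)]

mean = statistics.mean(scores)
stderr = statistics.stdev(scores) / len(scores) ** 0.5
print(f"score = {mean:.3f} +/- {1.96 * stderr:.3f} (95% CI)")
```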

    Understanding and protecting closed-source systems through dynamic analysis

    In this dissertation, we focus on dynamic analyses that examine the data handled by programs and operating systems in order to divine the undocumented constraints and implementation details that determine their behavior in the field. First, we introduce a novel technique for uncovering the constraints actually used in OS kernels to decide whether a given instance of a kernel data structure is valid. Next, we tackle the semantic gap problem in virtual machine security: we present a pair of systems that allow, on the one hand, automatic extraction of whole-system algorithms for collecting information about a running system, and, on the other, the rapid identification of “hook points” within a system or program where security tools can interpose to be notified of security-relevant events. Finally, we present and evaluate a new dynamic measure of code similarity that examines the content of the data handled by the code, rather than the syntactic structure of the code itself. This problem has implications both for understanding the capabilities of novel malware and for understanding large binary code bases such as operating system kernels.
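
    The data-centric similarity idea can be sketched as follows: compare two routines by the values they compute rather than by their syntax. The two toy implementations and the Jaccard measure below are illustrative choices, not the dissertation's actual metric.

```python
# Compare code by the multiset of values it produces, not its structure.
# Illustrative sketch; the real measure in the dissertation differs.
from collections import Counter

def trace_values(fn, inputs):
    """Record every intermediate value `fn` yields for each input."""
    values = Counter()
    for x in inputs:
        values.update(fn(x))
    return values

def sum_loop(n):       # iterative implementation, yields partial sums
    acc = 0
    for i in range(n + 1):
        acc += i
        yield acc

def sum_formula(n):    # closed-form implementation of the same function
    yield n * (n + 1) // 2

a = trace_values(sum_loop, range(10))
b = trace_values(sum_formula, range(10))

# Jaccard similarity over observed values: syntactically unrelated code
# computing the same results still scores above zero.
overlap = sum((a & b).values()) / sum((a | b).values())
print(f"data similarity: {overlap:.2f}")
```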

    Measuring the Semantic Integrity of a Process Self

    The focus of the thesis is the definition of a framework to protect a process from attacks against the process self, i.e. attacks that alter the expected behavior of the process, by integrating static analysis and run-time monitoring. The static analysis of the program returns a description of the process self that consists of a context-free grammar, which defines the legal system call traces, and a set of invariants on process variables that hold when a system call is issued. Run-time monitoring assures the semantic integrity of the process by checking that its behavior is coherent with the process self returned by the static analysis. The proposed framework can also cover kernel integrity to protect the process against kernel-level attacks. The implementation of the run-time monitoring is based upon introspection, a technique that analyzes the state of a computer to rebuild and check the consistency of kernel or user-level data structures. The ability to observe the run-time values of variables reduces the complexity of the static analysis and increases the amount of information that can be extracted about the run-time behavior of the process. To achieve transparency of the controls for the process while avoiding the introduction of special-purpose hardware units that access memory, the architecture of the run-time monitoring adopts virtualization technology and introduces two virtual machines, the monitored and the introspection virtual machines. This approach increases the overall robustness because a distinct virtual machine, the introspection virtual machine, applies introspection in a transparent way both to verify kernel integrity and to retrieve the status of the process to check the process self. After presenting the framework and its implementation, the thesis discusses some of its applications to increase the security of a computer network. The first application of the proposed framework is the remote attestation of the semantic integrity of a process. Then, the thesis describes a set of extensions to the framework to protect a process from physical attacks by running an obfuscated version of the process code. Finally, the thesis generalizes the framework to support the efficient sharing of an information infrastructure among users and applications with distinct security and reliability requirements by introducing highly parallel overlays
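
    The run-time check of the process self can be sketched as a recognizer that tests whether an observed system-call trace is derivable from the context-free grammar produced by the static analysis. The grammar below is a made-up example, and the naive recognizer is for illustration only.

```python
# Check a system-call trace against a context-free grammar of legal traces.
# Grammar: S -> open B close ; B -> (read | write) B | epsilon
# Both the grammar and the recognizer are illustrative, not the thesis's.

GRAMMAR = {
    "S": [["open", "B", "close"]],
    "B": [["read", "B"], ["write", "B"], []],
}

def derives(symbols, trace):
    """True if `symbols` can derive exactly `trace` (naive recognizer)."""
    if not symbols:
        return not trace
    head, rest = symbols[0], symbols[1:]
    if head in GRAMMAR:  # non-terminal: try each production
        return any(derives(prod + rest, trace) for prod in GRAMMAR[head])
    # terminal: must match the next observed system call
    return bool(trace) and trace[0] == head and derives(rest, trace[1:])

print(derives(["S"], ["open", "read", "write", "close"]))  # True: legal
print(derives(["S"], ["read", "close"]))  # False: the monitor flags it
```

    In the thesis's framework this check is paired with invariants on process variables at each system call, verified via introspection from the separate introspection virtual machine.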

    Aspects of Code Generation and Data Transfer Techniques for Modern Parallel Architectures

    In the field of processor architectures, the focus of new developments has shifted from ever higher clock frequencies towards ever more cores on a chip. A high core count makes it possible to offer cores with differing performance characteristics, and even dedicated cores with special instruction sets. Developing for such heterogeneous platforms is challenging and requires corresponding support from development tools such as compilers. Besides their heterogeneous core structure, there is a second dimension that makes development for such architectures demanding: their memory structure. Maintaining global cache coherence makes it hard to reach high core counts. Hardware-based cache-coherence protocols either scale poorly, or they are complicated and cause problems in execution time and energy efficiency. A radical solution to this problem is to abolish global cache coherence. However, it is difficult to map existing programming models efficiently onto such a hardware architecture with weak guarantees. The first part of this dissertation deals with data-transfer techniques for non-cache-coherent architectures with shared memory. These architectures provide a shared physical address space but do not implement hardware-based coherence between all caches in the system. Logically partitioning the shared memory makes safe programming of such a platform possible. In general, this creates the need to copy data between memory partitions. We study compilation for invasive architectures, a family of non-cache-coherent many-core architectures. We consider the efficient implementation of data transfers of both simple and complex data structures on invasive architectures. In particular, we propose a novel technique for copying complex pointer-linked data structures that requires no serialization. To this end, we generalize the object-cloning approach with compiler-directed automatic software-based coherence so that it also works in the context of non-coherent caches. We present implementations of several data-transfer techniques within an existing compiler and its runtime system. We carry out an extensive evaluation of these implementations on an FPGA-based prototype of an invasive architecture. Finally, we propose adding hardware support for range-based cache operations, and we describe and evaluate possible implementations and their costs.

    The second part of this dissertation deals with accelerating shuffle code, which arises during register allocation, by means of permutation instructions. The task of register allocation during compilation is to map program variables onto machine registers. During register allocation the compiler generates shuffle code, consisting of copy and swap instructions, to transfer values between registers. Depending on the quality of the register allocation and the number of available registers, a large amount of shuffle code may be generated. We propose to accelerate the execution of shuffle code with the help of novel permutation instructions that arbitrarily permute the contents of a small number of registers in a single clock cycle. To demonstrate the feasibility of this idea, we first extend an existing RISC instruction format with permutation instructions. We then describe how the proposed permutation instructions can be implemented in an existing RISC architecture. Next, we develop two code-generation approaches that exploit the permutation instructions to accelerate shuffle code: a fast heuristic and an optimal approach based on dynamic programming. We prove quality and correctness properties of both approaches and show the optimality of the second. We then implement both code-generation approaches in a compiler and thoroughly analyse and compare their code quality on standardized benchmarks. We first measure the exact number of dynamically executed instructions, which we then validate by measuring program runtimes on an FPGA-based prototype implementation of the RISC architecture extended with permutation instructions. Finally, we argue that permutation instructions can be implemented with little effort on modern out-of-order processor architectures that already support register renaming
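
    The shuffle-code trade-off is easy to make concrete: classic shuffle code needs one swap per element of each permutation cycle beyond the first, while a permutation instruction can realize a whole cycle at once. In the sketch below, the "perm5" instruction and its five-register limit are invented for illustration; cycle decomposition itself is the standard technique.

```python
# Compare instruction counts for realizing a register permutation:
# swaps (classic shuffle code) vs. a hypothetical perm5 instruction
# that permutes up to five registers in one clock cycle.

def cycles(perm):
    """Decompose a {dst: src} permutation into its nontrivial cycles."""
    seen, out = set(), []
    for start in perm:
        if start in seen:
            continue
        cyc, r = [], start
        while r not in seen:
            seen.add(r)
            cyc.append(r)
            r = perm[r]
        if len(cyc) > 1:
            out.append(cyc)
    return out

perm = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}  # r0<-r1<-r2<-r0 and r3<->r4

swap_count = sum(len(c) - 1 for c in cycles(perm))        # classic shuffle code
perm_count = sum(1 for c in cycles(perm) if len(c) <= 5)  # one perm5 per cycle
print(f"swap instructions: {swap_count}, perm5 instructions: {perm_count}")
# swap instructions: 3, perm5 instructions: 2
```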

    IFPOC Symposium:Discovering antecedents and consequences of complex change recipients' reactions to organizational change.

    IFPOC symposium: Discovering antecedents and consequences of complex change recipients' reactions to organizational change. Chairs: Maria Vakola (Athens University of Economics and Business) & Karen Van Dam (Open University). Discussant: Mel Fugate (American University, Washington, D.C.).

    State of the art: Organisations are required to continuously change and develop, but there is a high failure rate associated with change implementation. In the past two decades, change researchers have started to investigate change recipients' reactions to change, recognizing the crucial role of these reactions for successful change. This symposium aims at identifying and discussing the complex processes that underlie the relationships among antecedents, reactions, and outcomes associated with organizational change.

    New perspective / contributions: This symposium consists of five studies that extend our knowledge in the field by (i) providing an analysis of change recipients' reactions that goes beyond dichotomous approaches (acceptance or resistance), (ii) revealing understudied antecedent-reaction and reaction-consequence patterns and relationships, and (iii) shedding light on the role of contextual factors (i.e. team climate) and individual factors (i.e. emotion regulation) in adaptation to change. The symposium is based on a combination of quantitative (i.e. diary, survey) and qualitative (i.e. interview) research methodologies.

    Research / practical implications: This symposium aims to increase our understanding of the complex processes associated with change recipients' reactions to change. Discovering how these reactions arise and what their results are may reveal important contingencies that explain how positive organizational outcomes can be stimulated during times of change, which is beneficial for both researchers and practitioners

    Computer Aided Verification

    This open access two-volume set LNCS 13371 and 13372 constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book