
    Assured Android Execution Environments

    Current cybersecurity best practices, techniques, tactics, and procedures are insufficient to ensure the protection of Android systems. Software tools leveraging formal methods use mathematical reasoning to assure both the design and the implementation of a system, and these methods can be used to provide security assurances. The goal of this research is to determine methods of assuring isolation when executing Android software in a contained environment. Specifically, this research demonstrates that security properties relevant to Android software containers can be formally captured and validated, and that an implementation can be formally verified to satisfy a corresponding specification. A three-stage methodology called the Formal Verification Cycle is presented. This cycle iterates over a set of security properties, validating each within a specification and verifying each within a software implementation. A security property is validated when its functional-language prototype (e.g., a Haskell-coded version of the property) is converted and processed by a formal method (e.g., a proof assistant). This validation enables the definition of the property in a software specification, which can be implemented separately in an imperative programming language (e.g., the Go programming language). Once the implementation is complete, another formal method (e.g., symbolic execution) can be used to verify that the imperative implementation satisfies the validated specification. Successful completion of this cycle shows that a given implementation is equivalent to its functional-language prototype, and it assures that a specification of the original desired security properties was properly implemented. This research applies the cycle to develop Assured Android Execution Environments.
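    To make the first stage concrete, an isolation property might be prototyped in Haskell along the following lines. This is a minimal sketch with invented types and names, not the dissertation's actual definitions:

        -- Minimal sketch of a functional-language prototype for an
        -- isolation property. All types and names are invented for
        -- illustration; they are not the dissertation's definitions.
        module IsolationProperty where

        import qualified Data.Set as Set

        type ContainerId = Int
        type Resource    = String

        -- Each container is granted a set of resources it may access.
        data Container = Container
          { containerId :: ContainerId
          , granted     :: Set.Set Resource
          }

        -- An access event observed during execution.
        data Access = Access
          { accessor :: ContainerId
          , target   :: Resource
          }

        -- The property: every observed access targets a resource that
        -- was granted to the accessing container.
        isolated :: [Container] -> [Access] -> Bool
        isolated cs = all ok
          where
            grants = [ (containerId c, granted c) | c <- cs ]
            ok (Access cid res) = case lookup cid grants of
              Just rs -> res `Set.member` rs
              Nothing -> False

    A prototype of this shape could then be transliterated into a proof assistant for validation, with the same property restated in the specification that the imperative implementation is later verified against.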

    On the Efficient Design and Testing of Dependable Systems Software

    Modern computing systems that enable increasingly smart and complex applications permeate our daily lives. We strive for a fully connected and automated world to simplify our lives and increase comfort by offloading tasks to smart devices and systems. We have become dependent on the complex and ever-growing ecosystem of software that drives the innovations of our smart technologies. With this dependence on complex software systems arises the question of whether these systems are dependable, i.e., whether we can actually trust them to perform their intended functions. As software is developed by human beings, it must be expected to contain faults, and we need strategies and techniques that scale with increasing software complexity to minimize both the number of faults and the severity of their impact. Common approaches to achieving dependable operation include fault-acceptance and fault-avoidance strategies. The former gracefully handle faults when they occur during operation, e.g., by isolating and restarting faulty components, whereas the latter try to remove faults before system deployment, e.g., by applying correctness testing and software fault injection (SFI) techniques. Against this background, this thesis aims at improving the efficiency of fault isolation for operating system kernel components, which are especially critical for dependable operation, as well as the efficiency of dynamic testing activities, to cope with the increasing complexity of software. Using the widely deployed Linux kernel, we demonstrate that partial fault isolation techniques for kernel software components can be enhanced with dynamic runtime profiles to strike a balance between the expected overheads imposed by the isolation mechanism and the achieved degree of isolation, according to user requirements. With the increase in software complexity, comprehensive correctness and robustness assessments using testing and SFI require a substantially increasing number of individual tests whose execution takes a considerable amount of time. We study, at different levels of the software stack, whether modern parallel hardware can be employed to mitigate this increase. In particular, we demonstrate that SFI tests can benefit from parallel execution if such tests are carefully designed and conducted, and we introduce a novel SFI framework to conduct such experiments efficiently. Moreover, we investigate whether existing test suites for correctness testing can already benefit from parallel execution and provide an approach that offers a migration path for test suites that were not originally designed for parallel execution.
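    As a flavor of the parallel-execution idea, the sketch below (using the Haskell async library) runs independent test cases concurrently and collects their outcomes. The names are hypothetical; this is not the thesis's SFI framework:

        -- Minimal sketch of running independent test cases in parallel.
        -- Names are hypothetical; a real SFI campaign would spawn an
        -- isolated target (e.g., a VM) per experiment so that injected
        -- faults cannot interfere across tests.
        import Control.Concurrent.Async (mapConcurrently)

        data Outcome = Pass | Fail String deriving Show

        -- A test case is modeled as an IO action yielding an outcome.
        type TestCase = IO Outcome

        runSuite :: [TestCase] -> IO [Outcome]
        runSuite = mapConcurrently id

        main :: IO ()
        main = do
          let suite = [ pure Pass
                      , pure (Fail "injected fault not handled") ]
          results <- runSuite suite
          mapM_ print results

    The thesis's caveat applies to any such setup: parallel speedup is only sound if concurrently running experiments cannot influence each other's results.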

    OS-level Attacks and Defenses: from Software to Hardware-based Exploits

    Run-time attacks have plagued computer systems for more than three decades, with control-flow hijacking attacks such as return-oriented programming representing the long-standing state of the art in memory-corruption-based exploits. These attacks exploit memory-corruption vulnerabilities in widely deployed software, e.g., through malicious inputs, to gain full control over the platform remotely at run time, and many defenses have been proposed and thoroughly studied in the past. Among those defenses, control-flow integrity emerged as a powerful and effective protection against code-reuse attacks in practice. As a result, we now see attackers shifting their focus towards novel techniques in a number of increasingly sophisticated attacks that combine software and hardware vulnerabilities to construct successful exploits. These emerging attacks have a high impact on computer security, since they completely bypass existing defenses that assume either hardware or software adversaries. For instance, they leverage physical effects to provoke hardware faults or force the system into transient micro-architectural states, enabling adversaries to exploit hardware vulnerabilities from software without requiring physical presence or software bugs. In this dissertation, we explore the real-world threat of hardware- and software-based run-time attacks against operating systems. While memory-corruption-based exploits have been studied for more than three decades, we show that data-only attacks can completely bypass state-of-the-art defenses such as control-flow integrity, which are also deployed in practice. Additionally, hardware vulnerabilities such as Rowhammer, CLKScrew, and Meltdown enable sophisticated adversaries to exploit the system remotely at run time without requiring any memory-corruption vulnerabilities in the system's software. We develop novel design strategies to defend the OS against hardware-based attacks such as Rowhammer and Meltdown and to tackle the limitations of existing defenses. First, we present two novel data-only attacks that completely break current code-reuse defenses deployed in real-world software, and we propose a randomization-based defense against such data-only attacks in the kernel. Second, we introduce a compiler-based framework to automatically uncover memory-corruption vulnerabilities in real-world kernel code. Third, we demonstrate the threat of Rowhammer-based attacks against security-sensitive applications and show how a partitioning policy in the system's physical memory allocator can effectively and efficiently defend against such attacks. We demonstrate feasibility and real-world performance through our prototype for the popular and widely used Linux kernel. Finally, we develop a side-channel defense that eliminates Meltdown-style cache attacks by strictly isolating the address spaces of kernel and user memory.
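    The partitioning idea can be sketched abstractly: if frames belonging to different security domains never occupy adjacent DRAM rows, bit flips induced by hammering one domain's rows cannot land in another domain's memory. Below is a minimal model of such a policy; the row geometry and names are invented for illustration and this is not the dissertation's actual allocator:

        -- Abstract sketch of a Rowhammer-aware partitioning policy:
        -- a frame may be handed to a domain only if no adjacent DRAM
        -- row holds frames of a different domain. The geometry and
        -- names are invented; this is not the dissertation's allocator.
        type Frame  = Int
        type Row    = Int
        type Domain = String

        framesPerRow :: Int
        framesPerRow = 16  -- hypothetical DRAM geometry

        rowOf :: Frame -> Row
        rowOf f = f `div` framesPerRow

        -- Admissibility check consulted on every allocation.
        admissible :: [(Frame, Domain)] -> Domain -> Frame -> Bool
        admissible allocated dom f =
          and [ d == dom
              | (g, d) <- allocated
              , abs (rowOf g - rowOf f) <= 1 ]

        -- Example: if frames 0..15 (row 0) belong to "kernel", then
        -- frame 20 (row 1, adjacent) is inadmissible for "user",
        -- while frame 40 (row 2) is admissible.

    A practical allocator could enforce the same property statically, for example by partitioning physical memory into contiguous per-domain regions separated by unallocated guard rows, rather than checking each allocation individually.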

    Defining interfaces between hardware and software: Quality and performance

    One of the most important interfaces in a computer system is the interface between hardware and software. This interface is the contract between the hardware designer and the programmer that defines the functional behaviour of the hardware. This thesis examines two critical aspects of defining the hardware-software interface: quality and performance. The first aspect is creating a high-quality specification of the interface as conventionally defined in an instruction set architecture. The majority of this thesis is concerned with creating a specification that covers the full scope of the interface, that is applicable to all current implementations of the architecture, and that can be trusted to accurately describe the behaviour of implementations of the architecture. We describe the development of a formal specification of the two major types of Arm processors: A-class (for mobile devices such as phones and tablets) and M-class (for microcontrollers). These specifications are unparalleled in their scope, applicability, and trustworthiness. This thesis identifies and illustrates what we consider the key ingredient in achieving this goal: creating a specification that is used by many different user groups. Supporting many different groups improves quality, as each group finds different problems in the specification; and, by providing value to each group, it helps justify the considerable effort required to create a high-quality specification of a major processor architecture. The work described in this thesis led to a step change in Arm's ability to use formal verification techniques to detect errors in their processors; enabled extensive testing of the specification against Arm's official architecture conformance suite; improved the quality of Arm's architecture conformance suite by measuring the architectural coverage of the tests; supported earlier, faster development of architecture extensions by enabling animation of changes as they are being made; and enabled early detection of problems introduced by architecture extensions through formal validation of the specification against semi-structured natural-language specifications. As far as we are aware, no other mainstream processor architecture has this capability. The formal specifications are included in Arm's publicly released architecture reference manuals, and the A-class specification is also released in machine-readable form. The second aspect is creating a high-performance interface by defining the hardware-software interface of a software-defined radio subsystem using a programming language; that is, an interface that allows software to exploit the potential performance of the underlying hardware. While the hardware-software interface is normally defined in terms of machine code, peripheral control registers, and memory maps, we define it using a programming language instead. This higher-level interface provides the opportunity for compilers to hide some of the low-level differences between systems from the programmer: a potentially very efficient way of providing a stable, portable interface without having to add hardware to provide portability between different hardware platforms. We describe the design and implementation of a set of extensions to the C programming language to support programming high-performance, energy-efficient software-defined radio systems.
    The language extensions enable the programmer to exploit the pipeline parallelism typically present in digital signal processing applications and to make efficient use of the asymmetric multiprocessor systems designed to support such applications. The extensions consist primarily of annotations that can be checked for consistency and that support annotation inference, reducing the number of annotations required. Reducing the number of annotations does not just save programmer effort; it also improves portability by reducing the number of annotations that must change when porting an application from one platform to another. This work formed part of a project that developed a high-performance, energy-efficient software-defined radio capable of implementing the physical layers of the 4G cellphone standard (LTE), 802.11a WiFi, and Digital Video Broadcast (DVB) within a power and silicon-area budget competitive with a conventional custom ASIC solution. The Arm architecture is the largest computer architecture by volume in the world; it behooves us to ensure that the interface it describes is appropriately defined.
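    To make the idea of a machine-readable, executable specification concrete, instruction semantics can be written as functions from machine state to machine state, as in the toy sketch below. The instruction set and state are invented for illustration and are far simpler than Arm's actual specification:

        -- Toy executable instruction semantics, illustrating the style
        -- of a machine-readable ISA specification. The instruction set
        -- and state are invented; this is not Arm's specification
        -- language.
        import Data.Word (Word32)
        import qualified Data.Map as Map

        type Reg   = Int
        type State = Map.Map Reg Word32

        data Instr
          = Mov Reg Word32   -- rd := imm
          | Add Reg Reg Reg  -- rd := rn + rm

        readReg :: State -> Reg -> Word32
        readReg s r = Map.findWithDefault 0 r s

        -- Each instruction's semantics is a total function on machine
        -- state, so the specification is directly executable.
        execute :: Instr -> State -> State
        execute (Mov rd imm)   s = Map.insert rd imm s
        execute (Add rd rn rm) s =
          Map.insert rd (readReg s rn + readReg s rm) s

    Because such a definition executes, it can be run against conformance tests and animated while an extension is being designed, and the same definition can be translated into the input languages of formal verification tools.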

    Formally designing and implementing cyber security mechanisms in industrial control networks

    This dissertation describes progress in the state of the art for developing and deploying formally verified cyber security devices in industrial control networks. It begins by detailing the unique challenges faced in industrial control networks and why concepts and technologies developed for securing traditional networks might not be appropriate. It uses these challenges, together with examples of contemporary cyber-attacks targeting control systems, to argue that progress in securing control systems is best achieved through formal verification of systems, their specifications, and their security properties. The dissertation then presents a development process and identifies two technologies, TLA+ and seL4, that can be leveraged to produce a high-assurance embedded security device. The method presented takes an informal design of an embedded device that might be found in a control system and 1) formalizes the design within TLA+, 2) creates and mechanically checks a model built from the formal design, and 3) translates the TLA+ design into a component-based architecture of a native seL4 application. The later chapters describe an application of this process to a security preprocessor, an embedded device designed to add security mechanisms to the network communication of an existing control system. The device and its security properties are formally specified in TLA+ in chapter 4, the model is mechanically checked in chapter 5, and the native seL4 architecture is implemented in chapter 6. Finally, the conclusions drawn from the research are laid out, along with possibilities for extending the presented method in the future.
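    To give a flavor of the formalize-then-check step, the sketch below enumerates the reachable states of a small transition system and checks a safety invariant over all of them, which is essentially what a model checker such as TLC does for a TLA+ specification. The protocol modeled here is a generic mutual-exclusion example invented for illustration, not the dissertation's security preprocessor:

        -- Explicit-state model checking in miniature: enumerate all
        -- reachable states and check a safety invariant, in the spirit
        -- of running TLC over a TLA+ specification. The protocol is a
        -- generic lock example invented for this sketch.
        import qualified Data.Set as Set

        data PC = Idle | Trying | Critical deriving (Eq, Ord, Show)

        -- State: program counters of two processes plus a lock flag.
        type State = (PC, PC, Bool)

        initial :: State
        initial = (Idle, Idle, False)

        -- Transitions of a simple lock protocol: a process may enter
        -- its critical section only when the lock is free.
        next :: State -> [State]
        next (p1, p2, lock) = step1 ++ step2
          where
            step1 = case p1 of
              Idle     -> [(Trying, p2, lock)]
              Trying   -> [ (Critical, p2, True) | not lock ]
              Critical -> [(Idle, p2, False)]
            step2 = case p2 of
              Idle     -> [(p1, Trying, lock)]
              Trying   -> [ (p1, Critical, True) | not lock ]
              Critical -> [(p1, Idle, False)]

        -- Safety invariant: mutual exclusion.
        invariant :: State -> Bool
        invariant (p1, p2, _) = not (p1 == Critical && p2 == Critical)

        -- The state space is finite, so iterate until no new states
        -- appear, then check the invariant on every reachable state.
        reachable :: Set.Set State
        reachable = go (Set.singleton initial)
          where
            go seen =
              let seen' = Set.union seen
                            (Set.fromList (concatMap next (Set.toList seen)))
              in if seen' == seen then seen else go seen'

        main :: IO ()
        main = putStrLn ("Invariant holds: "
                         ++ show (all invariant (Set.toList reachable)))

    In the dissertation's workflow the analogous artifact is the mechanically checked TLA+ model, which is then refined into a component-based seL4 application.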