14 research outputs found

    Defense and Attack Techniques against File-based TOCTOU Vulnerabilities: a Systematic Review

    File-based Time-of-Check to Time-of-Use (TOCTOU) race conditions are a well-known type of security vulnerability. A wide variety of techniques have been proposed to detect, mitigate, avoid, and exploit these vulnerabilities over the past 35 years. Despite these research efforts, TOCTOU vulnerabilities remain unsolved due to their non-deterministic nature and the particularities of the different filesystems involved in running vulnerable programs, especially in Unix-like operating system environments. In this paper, we present a systematic literature review on defense and attack techniques related to the file-based TOCTOU vulnerability. We apply a reproducible methodology to search, filter, and analyze the most relevant research proposals and build a global, understandable view of existing solutions. The results of this analysis are then used to discuss future research directions that could move the field towards a universal solution to this type of vulnerability.
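
    To make the vulnerability class concrete, below is a minimal sketch of the canonical check/use race this literature studies: the access(2) check and the open(2) use are separate system calls, so an attacker who swaps the path for a symbolic link between them can redirect the write to a privileged file. The path name is illustrative.

```c
/* Canonical file-based TOCTOU pattern: check and use are distinct
 * system calls, leaving a window in which the file can be replaced. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/report.txt";   /* illustrative path */

    if (access(path, W_OK) == 0) {          /* time of check */
        /* <-- attacker's window: replace path with a symlink here */
        int fd = open(path, O_WRONLY);      /* time of use */
        if (fd >= 0) {
            write(fd, "data\n", 5);
            close(fd);
        }
    }
    return 0;
}
```

    Mitigations such as O_NOFOLLOW or the openat(2) family narrow the window, but, as the review's framing suggests, no universal fix exists yet.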

    File-based race conditions in UNIX: TOCTOU

    This work presents an introduction to the Time Of Check to Time Of Use (TOCTOU) vulnerability, as well as the development of a user-space library that hooks vulnerable system calls and modifies their behavior.
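
    A minimal sketch of what such a hooking library can look like, assuming the common LD_PRELOAD interposition technique on Linux (the symlink-rejection policy, file name, and coverage below are our illustration, not necessarily the library described in this work):

```c
/* toctou_guard.c (hypothetical name): interpose on open(2) and refuse
 * paths that currently resolve to a symbolic link. Build with:
 *   gcc -shared -fPIC toctou_guard.c -o toctou_guard.so -ldl
 * Run with:
 *   LD_PRELOAD=./toctou_guard.so ./victim
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <stdarg.h>
#include <sys/stat.h>

int open(const char *path, int flags, ...) {
    /* Resolve the real open() on first use. */
    static int (*real_open)(const char *, int, ...);
    if (!real_open)
        real_open = (int (*)(const char *, int, ...))dlsym(RTLD_NEXT, "open");

    /* Policy: reject symlinks outright. Note that this lstat/open pair
     * is itself a check/use window; robust libraries rely on O_NOFOLLOW
     * or fd-relative calls instead. */
    struct stat sb;
    if (lstat(path, &sb) == 0 && S_ISLNK(sb.st_mode)) {
        errno = ELOOP;
        return -1;
    }

    if (flags & O_CREAT) {                  /* forward the optional mode */
        va_list ap;
        va_start(ap, flags);
        mode_t mode = (mode_t)va_arg(ap, int);
        va_end(ap);
        return real_open(path, flags, mode);
    }
    return real_open(path, flags);
}
```

    A production library would also need to cover openat(2), fopen(3), and the stat family, and to handle open/open64 aliasing on 32-bit systems.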

    Automatic detection of safety and security vulnerabilities in open source software

    Growing software quality requirements have raised the stakes on software safety and security. Building secure software focuses on design and implementation techniques and methodologies that avoid exploitable vulnerabilities. Unfortunately, coding errors have become common as software grows inexorably in size and complexity. According to the US National Institute of Standards and Technology (NIST), these coding errors lead to vulnerabilities that cost the US economy $60 billion each year. Tracking security and safety errors is therefore a cornerstone of delivering software that is free from severe vulnerabilities. The main objective of this thesis is the elaboration of efficient, rigorous, and practical techniques for the safety and security evaluation of source code. To tackle safety errors related to the misuse of type and memory operations, we present a novel type and effect discipline that extends the standard C type system with safety annotations and static safety checks. We define an inter-procedural, flow-sensitive, and alias-sensitive inference algorithm that automatically propagates type annotations and applies safety checks to programs without programmer interaction. Moreover, we present a dynamic semantics of our C core language that is compliant with the ANSI C standard. We prove the consistency of the static semantics with respect to the dynamic semantics, and we show the soundness of our static analysis in detecting the targeted set of safety errors. To tackle system-specific security properties, we present a security verification framework that combines static analysis and model checking. Our approach builds on the GCC compiler and its GIMPLE representation of source code to extract model-checkable abstractions of programs. For the verification process, we use an off-the-shelf pushdown-system model checker and turn it into a fully-fledged security verification framework. We also allow programmers to define a wide range of security properties using an automata-based specification approach. To demonstrate the efficiency and scalability of our approach, we conduct extensive experiments and case studies on large-scale open-source software to verify compliance with a representative set of the CERT secure coding rules.
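
    As a small illustration (ours, not an example from the thesis) of the memory-operation misuse such an analysis targets, consider a double free that spans a function boundary; catching it requires exactly the inter-procedural, flow- and alias-sensitive tracking described above:

```c
/* Intentionally faulty example: the buffer is freed inside a helper
 * and then freed again by the caller. A flow- and alias-sensitive
 * inter-procedural analysis can track `p` across release() and flag
 * the second free() statically. */
#include <stdlib.h>
#include <string.h>

static void release(char *buf) {
    free(buf);            /* first free: buf becomes dangling */
}

int main(void) {
    char *p = malloc(16);
    if (!p)
        return 1;
    strcpy(p, "example");
    release(p);           /* callee frees p */
    free(p);              /* double free: the targeted safety error */
    return 0;
}
```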

    From experiment to design – fault characterization and detection in parallel computer systems using computational accelerators

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads of hybrid computer systems (e.g., those combining CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems that use computational accelerators. The experimental validation studies help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long-latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies build on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs and to detect abnormal behaviors of those parallel processes (Ch. 6). The third is a co-processor with a set of new instructions that support software-implemented fault detection techniques (Ch. 7). This work gains importance as heterogeneous processors become an essential component of state-of-the-art supercomputers: GPUs were used in three of the five fastest supercomputers operating in 2011. Our fault characterization studies in CPU-GPU hybrid computers were comprehensive in two senses. In CPUs, we monitored the target systems for a long period after injecting faults (temporally comprehensive) and injected faults into various types of program state, including dynamically allocated memory (spatially comprehensive). In GPUs, we used fault injection studies to demonstrate the importance of detecting silent data corruption (SDC) errors, which arise mainly from the lack of fine-grained protection and the massive use of fault-insensitive data. The dissertation also presents transparent fault tolerance frameworks and techniques that are directly applicable to hybrid computers built from only commercial off-the-shelf hardware components. It shows that, by developing an understanding of the failure characteristics and error propagation paths of target programs, we were able to create fault tolerance frameworks and techniques that quickly detect and recover from hardware faults with low performance and hardware overheads.
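
    One common software-implemented detection idiom behind such source-level detector placement is redundant execution with comparison. The sketch below (our illustration, with hypothetical names, in plain C rather than GPU code) shows the idea: a transient fault that corrupts one copy of a critical computation surfaces as a detected mismatch instead of silent data corruption.

```c
/* Minimal duplicate-and-compare detector sketch. */
#include <stdio.h>
#include <stdlib.h>

/* A stand-in for a critical computation worth protecting. */
static double critical_kernel(const double *x, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += x[i] * x[i];
    return acc;
}

int main(void) {
    double data[4] = {1.0, 2.0, 3.0, 4.0};

    double r1 = critical_kernel(data, 4);   /* primary execution */
    double r2 = critical_kernel(data, 4);   /* redundant execution */

    if (r1 != r2) {                         /* mismatch => fault detected */
        fprintf(stderr, "fault detected: %f != %f\n", r1, r2);
        abort();                            /* hand off to recovery rather
                                               than emit corrupt output */
    }
    printf("result = %f\n", r1);
    return 0;
}
```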

    Data-driven resiliency assessment of medical cyber-physical systems

    Advances in computing, networking, and sensing technologies have resulted in the ubiquitous deployment of medical cyber-physical systems in various clinical and personalized settings. The increasing complexity and connectivity of such systems, the tight coupling between their cyber and physical components, and the inevitable involvement of human operators in supervision and control have introduced major challenges in ensuring system reliability, safety, and security. This dissertation takes a data-driven approach to resiliency assessment of medical cyber-physical systems. Driven by large-scale studies of real safety incidents involving medical devices, we develop techniques and tools for (i) deeper understanding of incident causes and measurement of their impacts, (ii) validation of system safety mechanisms in the presence of realistic hazard scenarios, and (iii) preemptive real-time detection of safety hazards to mitigate adverse impacts on patients. We present a framework for automated analysis of structured and unstructured data from public FDA databases on medical device recalls and adverse events. This framework allows characterization of the safety issues originating from computer failures in terms of fault classes, failure modes, and recovery actions. We develop an approach for constructing ontology models that enable automated extraction of safety-related features from unstructured text. The proposed ontology model is defined based on device-specific human-in-the-loop control structures in order to facilitate systems-theoretic causality analysis of adverse events. Our large-scale analysis of FDA data shows that medical devices are often recalled because of failure to identify all potential safety hazards, use of safety mechanisms that have not been rigorously validated, and limited capability in real-time detection and automated mitigation of hazards. To address those problems, we develop a safety hazard injection framework for experimental validation of safety mechanisms in the presence of accidental failures and malicious attacks. To reduce the test space for safety validation, this framework uses systems-theoretic accident causality models to identify the critical locations within the system at which to target software fault injection. For mitigation of safety hazards at run time, we present a model-based analysis framework that estimates the consequences of control commands sent from the software to the physical system through real-time computation of the system’s dynamics, and preemptively detects whether a command is unsafe before its adverse consequences manifest in the physical system. The proposed techniques are evaluated on a real-world cyber-physical system for robot-assisted minimally invasive surgery and are shown to be more effective than existing methods in identifying system vulnerabilities and deficiencies in safety mechanisms, as well as in preemptive detection of safety hazards caused by malicious attacks.
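
    The preemptive detection idea can be illustrated with a toy sketch (ours; the constants, model, and names are assumptions, not the dissertation's framework): before a software command reaches the physical system, integrate a simple model of the dynamics one step ahead and block the command if the predicted state leaves the safe envelope.

```c
/* Toy preemptive command check: simulate one control step ahead and
 * reject commands whose predicted state violates the safety envelope. */
#include <stdbool.h>
#include <stdio.h>

#define DT        0.01   /* control period, seconds (assumed) */
#define POS_LIMIT 0.05   /* safe workspace bound, meters (assumed) */

/* Predict the next tool position under a commanded velocity,
 * using a trivial kinematic model as the "system dynamics". */
static double predict(double pos, double vel_cmd) {
    return pos + vel_cmd * DT;
}

/* True only if the predicted state stays inside the envelope. */
static bool command_is_safe(double pos, double vel_cmd) {
    double next = predict(pos, vel_cmd);
    return next >= -POS_LIMIT && next <= POS_LIMIT;
}

int main(void) {
    double pos = 0.049;      /* current tool position */
    double vel_cmd = 0.5;    /* incoming, possibly malicious, command */

    if (!command_is_safe(pos, vel_cmd)) {
        fprintf(stderr, "unsafe command rejected before actuation\n");
        return 1;
    }
    printf("command forwarded to actuator\n");
    return 0;
}
```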

    Towards Redesigning Web Browsers with Security Principles

    Ph.D. thesis (Doctor of Philosophy)

    Operating system auditing and monitoring

    Ph.D. thesis (Doctor of Philosophy)