
    The Taint Rabbit: Optimizing Generic Taint Analysis with Dynamic Fast Path Generation

    Generic taint analysis is a pivotal technique in software security. However, it suffers from staggeringly high overhead. In this paper, we explore whether just-in-time (JIT) generation of fast paths for taint tracking can improve performance. To this end, we present the Taint Rabbit, which supports highly customizable user-defined taint policies and combines a JIT with fast context switching. Our experimental results suggest that this combination outperforms notable existing implementations of generic taint analysis and bridges the performance gap to specialized trackers. For instance, Dytan incurs an average overhead of 237x, while the Taint Rabbit achieves 1.7x on the same set of benchmarks. This compares favorably to the 1.5x overhead delivered by the bitwise, non-generic taint engine LibDFT.
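
    The core idea, running fully instrumented code only when tainted data is actually in play, can be illustrated with a minimal sketch. The block format, shadow map, and union policy below are illustrative assumptions, not the Taint Rabbit's actual JIT-generated fast paths.

```python
# Minimal sketch of fast-path/slow-path dispatch for generic taint tracking.
# The block representation, shadow map, and label policy are assumptions for
# illustration only, not the Taint Rabbit's JIT machinery.

shadow = {}  # location (register/address name) -> set of taint labels

def is_tainted(loc):
    return bool(shadow.get(loc))

def propagate(dst, srcs):
    # Generic union policy: the destination inherits all source labels.
    labels = set()
    for s in srcs:
        labels |= shadow.get(s, set())
    if labels:
        shadow[dst] = labels
    else:
        shadow.pop(dst, None)

def run_block(block, state):
    """block: list of (dst, srcs, fn) tuples; fn computes the concrete value."""
    # Cheap entry check: if no input of the block is tainted, take the fast
    # path and skip per-instruction taint propagation entirely.
    inputs = {s for _, srcs, _ in block for s in srcs}
    if not any(is_tainted(s) for s in inputs):
        for dst, srcs, fn in block:              # fast path
            state[dst] = fn(*(state[s] for s in srcs))
    else:
        for dst, srcs, fn in block:              # slow path: also track taint
            state[dst] = fn(*(state[s] for s in srcs))
            propagate(dst, srcs)

# Example: mark 'a' as tainted by the source "input", then run a tiny block.
shadow["a"] = {"input"}
state = {"a": 3, "b": 4}
run_block([("c", ("a", "b"), lambda x, y: x + y)], state)
print(state["c"], shadow.get("c"))               # 7 {'input'}
```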

    Models and approaches to attack surface analysis for fuzz testing of the Linux kernel

    The purpose of the study was to analyze possible methods for determining the attack surface with respect to fuzz testing the kernel of Linux-family operating systems and to select the most suitable one. To evaluate and compare various models and practical approaches to attack surface analysis, and to assess the possibility of combining them, theoretical research methods such as analysis, comparison, and deduction were used. Existing models and approaches to analyzing the attack surface of the Linux kernel are assessed and compared. A solution is proposed for practically determining the attack surface for effective fuzz testing of the kernel, combining the studied approaches. The results of the study can be used to construct an attack surface in practice, which allows the goals of fuzz testing of the Linux kernel to be determined more accurately.
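
    As a rough illustration of one practical ingredient such an inventory might draw on (an assumption here, not the authors' method), the sketch below enumerates syscall handlers exposed by a running kernel from /proc/kallsyms, a common starting point when mapping the kernel's externally reachable surface.

```python
# Rough sketch: list syscall handlers visible in /proc/kallsyms as one input
# to an attack-surface inventory. This is an illustrative assumption, not the
# approach proposed in the paper. Requires a Linux host; addresses may read
# as zeros unless run as root or with kptr_restrict relaxed.

import re

SYSCALL_RE = re.compile(r"\b(?:__x64_|__ia32_|__arm64_)?sys_(\w+)$")

def kernel_syscall_surface(path="/proc/kallsyms"):
    names = set()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 3:
                continue
            _addr, kind, symbol = parts[:3]
            if kind.lower() == "t":          # text (code) symbols only
                m = SYSCALL_RE.search(symbol)
                if m:
                    names.add(m.group(1))
    return sorted(names)

if __name__ == "__main__":
    surface = kernel_syscall_surface()
    print(f"{len(surface)} syscall handlers found, e.g. {surface[:10]}")
```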

    Blind Spots: Automatically detecting ignored program inputs

    A blind spot is any input to a program that can be arbitrarily mutated without affecting the program's output. Blind spots can be used for steganography or to embed malware payloads. If blind spots overlap file format keywords, they indicate parsing bugs that can lead to differentials. This paper formalizes the operational semantics of blind spots, leading to a technique that automatically detects blind spots based on dynamic information flow tracking. An efficient implementation is introduced and evaluated against a corpus of over a thousand diverse PDFs. There are zero false-positive blind spot classifications, and the missed detection rate is bounded above by 11%. On average, at least 5% of each PDF file is completely ignored by the parser. Our results suggest that this technique is an efficient, automated means to detect parser bugs and differentials. Nothing in the technique is tied to PDF in particular, so it can be immediately applied to other notoriously difficult-to-parse formats such as ELF, X.509, and XML.
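
    The definition lends itself to a brute-force approximation that needs no taint tracking at all: mutate individual input bytes and check whether the parser's output changes. The sketch below implements that approximation; the paper's actual detector uses dynamic information flow tracking, which avoids re-running the parser per byte, and the pdftotext command line and sample file name are placeholders.

```python
# Naive approximation of blind-spot detection: a byte offset is a candidate
# blind spot if flipping it leaves the parser's output unchanged. The paper's
# detector instead uses dynamic information flow tracking; this re-execution
# sketch only illustrates the definition. "pdftotext" and "sample.pdf" are
# placeholders; any deterministic parser writing to stdout would do.

import os
import subprocess
import tempfile

PARSER = ["pdftotext", "{input}", "-"]       # placeholder command line

def run_parser(data: bytes) -> bytes:
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(data)
        path = tmp.name
    try:
        cmd = [arg.replace("{input}", path) for arg in PARSER]
        return subprocess.run(cmd, capture_output=True, timeout=30).stdout
    finally:
        os.unlink(path)

def blind_spot_candidates(data: bytes, stride: int = 1):
    """Yield offsets whose single-byte mutation leaves the output unchanged."""
    baseline = run_parser(data)
    for off in range(0, len(data), stride):
        mutated = bytearray(data)
        mutated[off] ^= 0xFF                 # arbitrary single-byte mutation
        if run_parser(bytes(mutated)) == baseline:
            yield off

if __name__ == "__main__":
    with open("sample.pdf", "rb") as f:      # any test input
        spots = list(blind_spot_candidates(f.read(), stride=256))
    print(f"{len(spots)} sampled offsets appear to be ignored by the parser")
```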

    Approaches to determining the attack surface for fuzzing the Linux kernel

    The purpose of the study was to analyze possible methods for determining the attack surface with respect to fuzz testing the kernel of Linux-family operating systems and to select the most suitable one. To evaluate and compare various models and practical approaches to attack surface analysis, and to assess the possibility of combining them, theoretical research methods such as analysis, comparison, and deduction were used. An assessment and comparison of existing models and approaches to analyzing the attack surface of the Linux operating system kernel was carried out. A solution is proposed for practically determining the attack surface for effective fuzz testing of the kernel, combining the studied approaches. The results of the study can be used to construct an attack surface in practice, which allows the goals of fuzz testing of the Linux kernel to be determined more accurately.

    CipherTrace: automatic detection of ciphers from execution traces to neutralize ransomware

    In 2021, the largest US pipeline system for refined oil products suffered a six-day shutdown due to a ransomware attack [1]. In 2023, sensitive systems of the US Marshals Service were hit by ransomware [2]. One of the most effective ways to fight ransomware is to extract the secret keys. The challenge of detecting and identifying cryptographic primitives has been around for over a decade. Many tools have been proposed, but the vast majority rely on templates or signatures, their support for different operating systems and processor architectures is rather limited, and few are capable of extracting the secret keys. In this paper, we present CipherTrace, a generic and automated system to detect and identify the class of cipher algorithms in binary programs and, additionally, to locate and extract the secret keys and cryptographic states accessed by the cipher. We focus on product ciphers and evaluate CipherTrace using four standard cipher algorithms, four different hashing algorithms, and five of the most recent and popular ransomware specimens. Our results show that CipherTrace is capable of fully dissecting fixed S-box block ciphers (e.g. AES and Serpent) and can extract the secret keys and other cryptographic artefacts, regardless of the operating system, implementation, or input or key size, and without using signatures or templates. We show a significant improvement in performance and functionality compared to closely related work. CipherTrace helps in fighting ransomware and aids analysts in their malware analysis and reverse engineering efforts.
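
    One observable that a trace-based detector can exploit for fixed S-box ciphers is the access pattern itself: rounds of AES-like ciphers hammer a small 256-entry lookup table. The sketch below flags such dense table accesses in a recorded memory-read trace; the trace format and the thresholds are assumptions for illustration, not CipherTrace's actual detection logic.

```python
# Sketch: flag 256-byte-aligned regions that receive a dense burst of reads in
# a memory-access trace, a telltale of table-based fixed S-box cipher rounds
# (e.g. AES or Serpent lookup-table implementations). The trace format (a list
# of read addresses) and the thresholds are illustrative assumptions.

from collections import Counter

TABLE_SIZE = 256      # candidate S-box size in bytes
MIN_HITS = 160        # reads landing in one table to call it "hot"
MIN_COVERAGE = 100    # distinct offsets touched within the table

def sbox_candidates(read_addresses):
    per_table = {}
    for addr in read_addresses:
        base = addr - (addr % TABLE_SIZE)
        per_table.setdefault(base, Counter())[addr - base] += 1
    hits = []
    for base, offsets in per_table.items():
        total = sum(offsets.values())
        if total >= MIN_HITS and len(offsets) >= MIN_COVERAGE:
            hits.append((base, total, len(offsets)))
    return sorted(hits, key=lambda t: -t[1])

if __name__ == "__main__":
    # Synthetic trace: many lookups into a 256-byte table at 0x1000,
    # plus unrelated scattered reads elsewhere.
    import random
    trace = [0x1000 + random.randrange(256) for _ in range(1600)]
    trace += [random.randrange(1 << 20) for _ in range(500)]
    for base, total, cov in sbox_candidates(trace):
        print(f"possible S-box at {base:#x}: {total} reads, {cov} distinct offsets")
```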

    HardTaint: Production-Run Dynamic Taint Analysis via Selective Hardware Tracing

    Dynamic taint analysis (DTA), as a fundamental analysis technique, is widely used in security, privacy, and diagnosis. Because DTA must collect and analyze massive amounts of taint data online, it suffers from extremely high runtime overhead. Over the past decades, numerous attempts have been made to lower the overhead of DTA. Unfortunately, the reductions achieved are marginal, leaving DTA applicable only to debugging and testing scenarios. In this paper, we propose and implement HardTaint, a system that realizes production-run dynamic taint tracking. HardTaint adopts a hybrid and systematic design that combines static analysis, selective hardware tracing, and parallel graph processing. Comprehensive evaluations demonstrate that HardTaint introduces only around 9% runtime overhead, an order of magnitude lower than the state of the art, without sacrificing any taint detection capability.
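
    Once control flow and selected data have been captured by hardware tracing, taint checking can be recast offline as reachability over a dependence graph. The sketch below shows that reduction on a toy def-use graph; how the graph would be reconstructed from hardware trace packets and static analysis is elided, and the node names are made up for illustration.

```python
# Sketch: offline taint checking as reachability on a def-use graph.
# Nodes are value definitions recovered from a trace; an edge u -> v means v
# was computed from u. The toy graph below is an assumption for illustration;
# in a HardTaint-style pipeline the graph would come from hardware trace
# packets plus static analysis and be processed in parallel.

from collections import deque

def tainted_sinks(edges, sources, sinks):
    """Return the sinks reachable from any taint source."""
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    seen, queue = set(sources), deque(sources)
    while queue:
        u = queue.popleft()
        for v in succ.get(u, ()):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return sorted(seen & set(sinks))

if __name__ == "__main__":
    edges = [("recv_buf", "len_field"), ("len_field", "alloc_size"),
             ("config", "log_msg"), ("alloc_size", "memcpy_len")]
    print(tainted_sinks(edges, sources={"recv_buf"},
                        sinks={"memcpy_len", "log_msg"}))
    # -> ['memcpy_len']
```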

    Designing Robust API Monitoring Solutions

    Tracing the sequence of library calls and system calls that a program makes is very helpful for characterizing its interactions with the surrounding environment and, ultimately, its semantics. However, due to the entanglements of real-world software stacks, accomplishing this task can be surprisingly challenging once accuracy, reliability, and transparency are taken into the equation. In this article, we identify six challenges that API monitoring solutions should overcome in order to manage these dimensions effectively, and we outline actionable design points for building robust API tracers that can be used even for security research. We then detail and evaluate SNIPER, an open-source API tracing system available in two variants based on dynamic binary instrumentation (for simplified in-guest deployment) and hardware-assisted virtualization (realizing the first general user-space tracer of this kind), respectively.
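
    As a baseline to contrast with the kind of tracer described above, the sketch below simply drives a target under strace and aggregates the system calls it reports. This is not how SNIPER works; it relies on dynamic binary instrumentation or hardware-assisted virtualization precisely because in-guest tools like strace are easy for an adversarial target to detect or disturb. The sketch assumes a Linux host with strace installed.

```python
# Baseline system-call tracer: run the target under strace and count the
# syscalls it issues. Assumes a Linux host with strace installed; shown only
# as a contrast to robust DBI- or virtualization-based tracers such as SNIPER,
# which avoid observable in-guest components.

import re
import subprocess
import sys
from collections import Counter

def syscall_histogram(argv):
    # -f follows child processes; -qq suppresses attach/exit noise.
    # strace writes its trace to stderr.
    proc = subprocess.run(["strace", "-f", "-qq"] + argv,
                          capture_output=True, text=True)
    counts = Counter()
    for line in proc.stderr.splitlines():
        m = re.match(r"(?:\[pid\s+\d+\]\s+)?(\w+)\(", line)
        if m:
            counts[m.group(1)] += 1
    return counts

if __name__ == "__main__":
    target = sys.argv[1:] or ["ls", "/"]
    for name, n in syscall_histogram(target).most_common(10):
        print(f"{name:20s} {n}")
```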

    Cyber Security

    This open access book constitutes the refereed proceedings of the 17th International Annual Conference on Cyber Security, CNCERT 2021, held in Beijing, China, in July 2021. The 14 papers presented were carefully reviewed and selected from 51 submissions. The papers are organized according to the following topical sections: data security; privacy protection; anomaly detection; traffic analysis; social network security; vulnerability detection; text classification.