
    HotFuzz: Discovering Algorithmic Denial-of-Service Vulnerabilities Through Guided Micro-Fuzzing

    Contemporary fuzz testing techniques focus on identifying memory corruption vulnerabilities that allow adversaries to achieve either remote code execution or information disclosure. Meanwhile, Algorithmic Complexity (AC) vulnerabilities, which are a common attack vector for denial-of-service attacks, remain an understudied threat. In this paper, we present HotFuzz, a framework for automatically discovering AC vulnerabilities in Java libraries. HotFuzz uses micro-fuzzing, a genetic algorithm that evolves arbitrary Java objects in order to trigger the worst-case performance for a method under test. We define Small Recursive Instantiation (SRI), a technique for deriving seed inputs, represented as Java objects, for micro-fuzzing. After micro-fuzzing, HotFuzz synthesizes the test cases that triggered AC vulnerabilities into Java programs and monitors their execution in order to reproduce the vulnerabilities outside the fuzzing framework. HotFuzz outputs those programs that exhibit high CPU utilization as witnesses for AC vulnerabilities in a Java library. We evaluate HotFuzz over the Java Runtime Environment (JRE), the 100 most popular Java libraries on Maven, and challenges contained in the DARPA Space and Time Analysis for Cybersecurity (STAC) program. We evaluate SRI's effectiveness by comparing micro-fuzzing with SRI-derived seed inputs against micro-fuzzing with empty values as seed inputs, measured by the number of AC vulnerabilities detected. In this evaluation, we verified known AC vulnerabilities, discovered previously unknown AC vulnerabilities that we responsibly reported to vendors, and received confirmation from both IBM and Oracle. Our results demonstrate that micro-fuzzing finds AC vulnerabilities in real-world software, and that micro-fuzzing with SRI-derived seed inputs outperforms using empty values.
    Comment: Network and Distributed Systems Security (NDSS) Symposium, San Diego, CA, USA, February 2020

    Human-centric verification for software safety and security

    Software forms a critical part of our lives today. Verifying software to avoid violations of safety and security properties is a necessary task. It is also imperative to have an assurance that the verification process was correct. We propose a human-centric approach to software verification, enabling human-machine collaboration to detect vulnerabilities and to prove the correctness of the verification. We discuss two classes of vulnerabilities. The first class is Algorithmic Complexity Vulnerabilities (ACV), a class of software security vulnerabilities that cause denial-of-service attacks. The description of an ACV is not known a priori; the problem is equivalent to searching for a needle in a haystack without knowing what the needle looks like. We present a novel approach to detecting ACVs in web applications and illustrate it with a case study audit from DARPA's Space/Time Analysis for Cybersecurity (STAC) program. The second class of vulnerabilities is Memory Leaks (ML). Although the description of the ML problem is known, a proof of the correctness of the verification is needed to establish trust in the results. We present an approach, inspired by the works of Alan Perlis, that computes evidence of the verification which a human can scrutinize to prove the verification's correctness. We present a novel abstraction, the Evidence Graph, that succinctly captures the verification evidence, and we show how to compute it. We evaluate our approach against ML instances in the Linux kernel and report an improvement over state-of-the-art results. We also present two case studies illustrating how the Evidence Graph can be used to prove the correctness of the verification.