
    Shadow symbolic execution for testing software patches

    While developers are aware of the importance of comprehensively testing patches, the large effort involved in coming up with relevant test cases means that such testing rarely happens in practice. Furthermore, even when test cases are written to cover the patch, they often exercise the same behaviour in the old and the new version of the code. In this article, we present a symbolic execution-based technique that is designed to generate test inputs that cover the new program behaviours introduced by a patch. The technique works by executing both the old and the new version in the same symbolic execution instance, with the old version shadowing the new one. During this combined shadow execution, whenever a branch point is reached where the old and the new version diverge, we generate a test case exercising the divergence and then comprehensively test the new behaviours of the new version. We evaluate our technique on the Coreutils patches from the CoREBench suite of regression bugs, and show that it is able to generate test inputs that exercise newly added behaviours and expose some of the regression bugs.
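
    To make the core idea concrete, here is a minimal sketch of the divergence check, assuming the Z3 Python bindings and a toy patched branch; path_cond, old_cond, and new_cond are illustrative placeholders, not the paper's implementation, which performs this check inside a full symbolic executor.

        # A toy illustration of the divergence check at the heart of shadow
        # symbolic execution: along a shared path condition, look for an
        # input that makes the old and new versions of a branch disagree.
        # Requires: pip install z3-solver
        from z3 import Int, And, Xor, Solver, sat

        x = Int("x")           # symbolic program input

        path_cond = x > 0      # path condition reaching the patched branch
        old_cond = x > 10      # branch condition in the old version
        new_cond = x > 5       # branch condition in the new (patched) version

        # The divergence query: is there an input on this path where
        # exactly one of the two versions takes the branch?
        s = Solver()
        s.add(And(path_cond, Xor(old_cond, new_cond)))

        if s.check() == sat:
            m = s.model()
            # This input exercises a divergence and becomes a test case.
            print("divergence witness: x =", m[x])
        else:
            print("old and new versions agree on every input along this path")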

    Software testing or the bugs’ nightmare

    Software development is not error-free. For decades, bugs (including physical ones) have been a significant development problem requiring major maintenance efforts, and in some cases fixing bugs has even introduced new ones. One of the main reasons for the prominence of bugs is their ability to hide: finding them is difficult and costly in terms of time and resources. However, software testing has made significant progress in identifying them by using different strategies that combine knowledge from every single part of the program. This paper humbly reviews different approaches from software testing that discover bugs automatically and presents state-of-the-art methods and tools currently used in this area. It covers three testing strategies: search-based methods, symbolic execution, and fuzzers. It also provides some insight into the application of diversity in these areas, and discusses common and future challenges in automatic test generation that still need to be addressed.
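
    As a taste of the fuzzing strategy surveyed here, the following is a minimal mutation-based fuzzer sketch; the target function and its planted bug are invented for illustration, and real fuzzers add coverage feedback, corpus management, and instrumentation on top of this loop.

        # A deliberately tiny mutation-based fuzzer: mutate a seed input,
        # run the target, and report inputs that crash it.
        import random

        def target(data: bytes) -> None:
            # Hypothetical buggy parser, used only for illustration.
            if len(data) > 3 and data[0:3] == b"FUZ" and data[3] == 0x2A:
                raise ValueError("planted bug reached")

        def mutate(data: bytes) -> bytes:
            buf = bytearray(data)
            for _ in range(random.randint(1, 4)):   # flip a few random bytes
                pos = random.randrange(len(buf))
                buf[pos] = random.randrange(256)
            return bytes(buf)

        seed = b"FUZZING!"
        for i in range(100_000):
            candidate = mutate(seed)
            try:
                target(candidate)
            except Exception as exc:
                print(f"crash after {i} runs on input {candidate!r}: {exc}")
                break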

    Sydr-Fuzz: Continuous Hybrid Fuzzing and Dynamic Analysis for Security Development Lifecycle

    Nowadays, automated dynamic analysis frameworks for continuous testing are in high demand to ensure software safety and to satisfy security development lifecycle (SDL) requirements. The security bug hunting efficiency of cutting-edge hybrid fuzzing techniques outperforms widely utilized coverage-guided fuzzing. We propose an enhanced dynamic analysis pipeline that increases the productivity of automated bug detection based on hybrid fuzzing. We implement the proposed pipeline in the continuous fuzzing toolset Sydr-Fuzz, which is powered by a hybrid fuzzing orchestrator integrating our DSE tool Sydr with libFuzzer and AFL++. Sydr-Fuzz also incorporates security predicate checkers, the crash triaging tool Casr, and utilities for corpus minimization and coverage gathering. Benchmarking our hybrid fuzzer against alternative state-of-the-art solutions demonstrates its superiority over coverage-guided fuzzers while remaining on par with advanced hybrid fuzzers. Furthermore, we confirm the relevance of our approach by discovering 85 new real-world software flaws within the OSS-Sydr-Fuzz project. Finally, we open the Casr source code to the community to facilitate examination of the existing crashes.
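
    A schematic sketch of the hybrid-fuzzing idea follows, with hypothetical fuzz_step, dse_flip, and coverage_of callbacks standing in for the mutational fuzzers, the DSE engine, and coverage gathering; Sydr-Fuzz's actual orchestration of Sydr, libFuzzer, and AFL++ is considerably more involved.

        # Schematic hybrid-fuzzing loop: cheap coverage-guided fuzzing drives
        # most of the exploration, and a DSE engine is invoked on inputs that
        # stop yielding new coverage, to flip branches the fuzzer cannot reach.
        from typing import Callable, List, Set

        def hybrid_fuzz(corpus: List[bytes],
                        fuzz_step: Callable[[bytes], List[bytes]],
                        dse_flip: Callable[[bytes], List[bytes]],
                        coverage_of: Callable[[bytes], Set[int]],
                        rounds: int) -> List[bytes]:
            covered: Set[int] = set()
            for _ in range(rounds):
                progress = False
                for inp in list(corpus):
                    for new in fuzz_step(inp):        # mutational fuzzing
                        cov = coverage_of(new)
                        if not cov <= covered:        # new edges reached
                            covered |= cov
                            corpus.append(new)
                            progress = True
                if not progress:                      # fuzzer is stuck:
                    for inp in list(corpus):          # ask DSE to flip branches
                        corpus.extend(dse_flip(inp))
            return corpus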

    A New Paradigm to Address Threats for Virtualized Services

    With the uptake of virtualization technologies and the growing usage of public cloud infrastructures, an ever-larger number of applications run outside of the traditional enterprise perimeter and require new security paradigms that fit the typical agility and elasticity of cloud models in service creation and management. Though some recent proposals have integrated security appliances in the logical application topology, we argue that this approach is sub-optimal. Indeed, we believe that embedding security agents in virtualization containers and delegating the control logic to the software orchestrator provides a much more effective, flexible, and scalable solution to the problem. In this paper, we motivate our approach and outline a novel framework for assessing cyber-threats to virtualized applications and services. We also review existing technologies that form the foundation of our proposal, which we are going to develop in the context of a joint research project.

    P4Testgen: An Extensible Test Oracle For P4

    We present P4Testgen, a test oracle for the P4-16 language that supports automatic generation of packet tests for any P4-programmable device. Given a P4 program and sufficient time, P4Testgen generates tests that cover every reachable statement in the input program. Each generated test consists of an input packet, a control-plane configuration, and the expected output packet(s), and can be executed in software or on hardware. Unlike prior work, P4Testgen is open source and extensible, making it a general resource for the community. P4Testgen not only covers the full P4-16 language specification, it also supports modeling the semantics of an entire packet-processing pipeline, including target-specific behaviors (i.e., whole-program semantics). Handling aspects of packet processing that lie outside of the official specification is critical for supporting real-world targets (e.g., switches, NICs, end-host stacks). In addition, P4Testgen uses taint tracking and concolic execution to model complex externs (e.g., checksums and hash functions) that have been omitted by other tools, and ensures the generated tests are correct and deterministic. We have instantiated P4Testgen to build test oracles for the V1model, eBPF, and Tofino (TNA and T2NA) architectures; each of these extensions only required effort commensurate with the complexity of the target. We validated the tests generated by P4Testgen by running them across the entire P4C program test suite as well as the Tofino programs supplied with Intel's P4 Studio. In just a few months of using the tool, we discovered and confirmed 25 bugs in the mature, production toolchains for BMv2 and Tofino, and are conducting ongoing investigations into further faults uncovered by P4Testgen.
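
    To illustrate the shape of such a test, here is a hedged Python sketch of the triple described above (input packet, control-plane configuration, expected output packets); the class and field names are invented for illustration and do not reflect P4Testgen's actual output format.

        # Illustrative shape of a P4Testgen-style test: inject an input packet
        # into a target configured with given table entries, then compare the
        # packets the target emits against the expected output.
        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class PacketTest:
            input_packet: bytes                                           # packet injected on ingress
            table_entries: Dict[str, str] = field(default_factory=dict)   # control-plane config
            expected_outputs: List[bytes] = field(default_factory=list)   # packet(s) expected on egress

        def run_test(test: PacketTest, send_to_target) -> bool:
            """send_to_target is a stand-in for executing the test in software
            (e.g. a behavioural model) or on hardware; it returns the packets
            the target emitted."""
            actual = send_to_target(test.input_packet, test.table_entries)
            return actual == test.expected_outputs

        # Hypothetical example: one forwarding entry, one expected packet.
        t = PacketTest(
            input_packet=b"\x08\x00" + b"\x00" * 46,
            table_entries={"ipv4_lpm": "10.0.0.1/32 -> port 2"},
            expected_outputs=[b"\x08\x00" + b"\x00" * 46],
        )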

    Parallel bug-finding in concurrent programs via reduced interleaving instances

    Concurrency poses a major challenge for program verification, but it can also offer an opportunity to scale when subproblems can be analysed in parallel. We exploit this opportunity here and use a parametrizable code-to-code translation to generate a set of simpler program instances, each capturing a reduced set of the original program’s interleavings. These instances can then be checked independently in parallel. Our approach does not depend on the tool that is chosen for the final analysis, is compatible with weak memory models, and amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We use Lazy-CSeq as an off-the-shelf final verifier to demonstrate that our approach is able, even with a small number of cores, to find bugs in the hardest known concurrency benchmarks in a matter of minutes, whereas other dynamic and static tools fail to do so in hours.
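
    The parallelisation pattern might be sketched as follows, with a hypothetical check_instance function standing in for an invocation of the chosen verifier (Lazy-CSeq in the paper) on one reduced-interleaving instance; a bug found in any instance is a bug in the original program.

        # Check independently generated program instances in parallel: each
        # instance encodes a reduced set of the original program's
        # interleavings, so any counterexample found is a real bug.
        from concurrent.futures import ProcessPoolExecutor, as_completed

        def check_instance(instance_path: str) -> tuple[str, bool]:
            """Stand-in for running the chosen verifier on one instance;
            returns (instance, bug_found). A real implementation would use
            subprocess.run([...verifier command...]) here."""
            return instance_path, False

        def parallel_bug_finding(instances: list[str], workers: int) -> str | None:
            with ProcessPoolExecutor(max_workers=workers) as pool:
                futures = [pool.submit(check_instance, p) for p in instances]
                for fut in as_completed(futures):
                    path, buggy = fut.result()
                    if buggy:
                        return path   # a bug in any instance suffices
            return None

        # Hypothetical usage, on instances produced by the translation:
        # bug = parallel_bug_finding(["inst_0.c", "inst_1.c", "inst_2.c"], workers=4)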