
    Keeping Secrets: Multi-objective Genetic Improvement for Detecting and Reducing Information Leakage

    Information leaks in software can unintentionally reveal private data, yet they are hard to detect and fix. Although several methods have been proposed to detect leakage, such as static verification-based approaches, they require specialist knowledge and are time-consuming. Recently, HyperGI introduced a dynamic, hypertest-based approach that detects and produces potential fixes for information leakage. Its fitness function tries to balance information leakage and program correctness, but as the authors of that work point out, there may be a trade-off between preserving program semantics and reducing information leakage. In this work we ask whether it is possible to automatically detect and repair information leakage in more realistic programs without requiring specialist knowledge. Our approach, called LeakReducer, explicitly encodes the trade-off between program correctness and information leakage as a multi-objective optimisation problem. We apply LeakReducer to a set of leaky programs, including the well-known Heartbleed bug. It is comparable with HyperGI on their toy applications. In addition, we demonstrate that it can find and reduce leakage in real applications, and we observe diverse solutions on our Pareto front. Upon investigation we find that having a Pareto front helps with some types of information leakage, but not all.
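
    As a rough illustration of the idea (not LeakReducer's implementation), a multi-objective genetic improvement loop evaluates each candidate patch against two separate objectives rather than a single blended fitness score. In the sketch below, run_test and measure_leakage are hypothetical caller-supplied callables.

        # Illustrative sketch only: two-objective fitness evaluation for a
        # candidate patch, of the kind the abstract describes.
        def evaluate(patched_program, test_suite, hypertests, run_test, measure_leakage):
            # Objective 1 (maximise): fraction of regression tests still passing.
            # run_test is assumed to return 1 for pass and 0 for fail.
            correctness = sum(run_test(patched_program, t) for t in test_suite) / len(test_suite)
            # Objective 2 (minimise): leakage observed under hypertest pairs,
            # e.g. an estimate of secret bits revealed through public outputs.
            leakage = measure_leakage(patched_program, hypertests)
            # A multi-objective search such as NSGA-II keeps the non-dominated
            # (correctness, leakage) pairs, yielding a Pareto front of candidate
            # repairs rather than a single compromise solution.
            return correctness, leakage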

    Differentially Testing Soundness and Precision of Program Analyzers

    In the last decades, numerous program analyzers have been developed by both academia and industry. Despite their abundance, however, there is currently no systematic way of comparing the effectiveness of different analyzers on arbitrary code. In this paper, we present the first automated technique for differentially testing the soundness and precision of program analyzers. We used our technique to compare six mature, state-of-the-art analyzers on tens of thousands of automatically generated benchmarks. Our technique detected soundness and precision issues in most analyzers, and we evaluated the implications of these issues for both designers and users of program analyzers.
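
    As a hedged sketch of the general idea (not the paper's tool), differential testing of analyzers runs several analyzers on a program whose ground truth is known by construction and flags disagreements with that ground truth. The command names and output matching below are placeholders.

        import subprocess

        ANALYZERS = ["analyzer_a", "analyzer_b"]          # placeholder command names

        def run_analyzer(cmd, program_path):
            # Placeholder output convention: real analyzers report verdicts in
            # their own formats and would need per-tool parsing.
            out = subprocess.run([cmd, program_path], capture_output=True, text=True)
            return "unsafe" if "assertion may fail" in out.stdout else "safe"

        def differential_test(program_path, assertion_can_fail):
            # assertion_can_fail is the ground truth of the generated program,
            # known by construction.
            for cmd in ANALYZERS:
                verdict = run_analyzer(cmd, program_path)
                if assertion_can_fail and verdict == "safe":
                    print(f"{cmd}: potential soundness issue (missed bug)")
                if not assertion_can_fail and verdict == "unsafe":
                    print(f"{cmd}: potential precision issue (false alarm)")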

    Hyperfuzzing: black-box security hypertesting with a grey-box fuzzer

    Information leakage is a class of error that can lead to severe consequences. However, unlike other errors, it is rarely explicitly considered during the software testing process. LeakFuzzer advances the state of the art by using a noninterference security property together with a security flow policy as an oracle. Because the tool extends the state-of-the-art fuzzer AFL++, LeakFuzzer inherits the advantages of AFL++ such as scalability, automated input generation, high coverage and low developer intervention. The tool can detect the same set of errors that a normal fuzzer can detect, with the addition of being able to detect violations of secure information flow policies. We evaluated LeakFuzzer on a diverse set of 10 C and C++ benchmarks containing known information leaks, ranging in size from just 80 to over 900k lines of code. Seven of these are taken from real-world CVEs, including Heartbleed and a recent error in PostgreSQL. Given twenty 24-hour runs, LeakFuzzer can find 100% of the leaks in the SUTs, whereas existing techniques such as the CBMC model checker and AFL++ augmented with different sanitizers can find only 40% at best.
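
    The core oracle idea can be sketched in a few lines: execute the system under test on two inputs that agree on everything the security flow policy marks as public but differ in the secret, and report a leak if the publicly observable output differs. This is an illustrative reconstruction, not LeakFuzzer's code; run_sut is a hypothetical wrapper around the target binary.

        def noninterference_oracle(run_sut, public_input, secret_a, secret_b):
            # Two executions forming a hypertest: same public (low) input,
            # different secret (high) input.
            out_a = run_sut(public=public_input, secret=secret_a)
            out_b = run_sut(public=public_input, secret=secret_b)
            # Under a secure information flow policy the public output must not
            # depend on the secret; any observable difference is a violation.
            return out_a != out_b   # True => information leak detected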

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2021, which was held during March 27 – April 1, 2021, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2021. The conference was planned to take place in Luxembourg but changed to an online format due to the COVID-19 pandemic. The 41 full papers presented in the proceedings were carefully reviewed and selected from 141 submissions. The volumes also contain 7 tool papers, 6 tool demo papers, and 9 SV-COMP competition papers. The papers are organized in topical sections as follows: Part I: Game Theory; SMT Verification; Probabilities; Timed Systems; Neural Networks; Analysis of Network Communication. Part II: Verification Techniques (not SMT); Case Studies; Proof Generation/Validation; Tool Papers; Tool Demo Papers; SV-COMP Tool Competition Papers.

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers were organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency. This is an open access book.

    Runtime Monitoring for Uncertain Times

    In Runtime Verification (RV), monitors check programs for correct operation at execution time. Also called Runtime Monitoring, RV offers advantages over other approaches to program verification. Efficient monitoring is possible for programs where static checking is cost-prohibitive. Runtime monitors may test for execution faults, like hardware failure, as well as logical faults. Unlike simple log checking, monitors are typically constructed using formal languages and methods that precisely define expectations and guarantees. Despite the advantages of RV, however, adoption remains low. Applying Runtime Monitoring techniques to real systems requires addressing practical concerns that have garnered little attention from researchers. System operators need monitors that provide immediate diagnostic information before and after failures, that are simple to operate over distributed systems, and that remain reliable when communication is not. These challenges are solvable, and solving them is a necessary step towards widespread RV deployment. This thesis provides solutions to these and other barriers to practical Runtime Monitoring. We address the need for reporting diagnostic information from monitored programs with nfer, a language and system for event stream abstraction. Nfer supports the automatic extraction of the structure of real-time software and includes integrations with popular programming languages. We also provide for the operation of nfer and other monitoring tools over distributed systems with Palisade, a framework built for low-latency detection of embedded system anomalies. Finally, we supply a method to ensure that program properties may be monitored despite unreliable communication channels. We classify monitorable properties over general unreliable conditions and define an algorithm for when more specific conditions are known.
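
    As a minimal illustration of runtime monitoring in general (not nfer's interval-based language or Palisade), a monitor consumes an event stream and checks a property online; the toy property below, that every opened resource is eventually closed, is an assumed example.

        def monitor(events):
            # events: iterable of (kind, resource_id) pairs observed at runtime.
            open_resources = set()
            for kind, rid in events:
                if kind == "open":
                    open_resources.add(rid)
                elif kind == "close":
                    if rid not in open_resources:
                        return f"violation: close without open ({rid})"
                    open_resources.discard(rid)
            return "ok" if not open_resources else f"violation: never closed {open_resources}"

        print(monitor([("open", 1), ("close", 1), ("open", 2)]))
        # -> violation: never closed {2}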

    Computer Aided Verification

    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers, presented together with 18 tool papers and 4 case studies, were carefully reviewed and selected from 240 submissions. The papers were organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.