    Formal framework for reasoning about the precision of dynamic analysis

    Dynamic program analysis is extremely successful both in code debugging and in malicious code attacks. Fuzzing, concolic testing, and monkey testing are instances of the more general problem of analysing programs by dynamically executing their code with selected inputs. While static program analysis has a beautiful and well-established theoretical foundation in abstract interpretation, dynamic analysis still lacks such a foundation. In this paper, we introduce a formal model for understanding the notion of precision in dynamic program analysis. It is known that in sound-by-construction static program analysis precision amounts to completeness. In dynamic analysis, which is inherently unsound, precision boils down to a notion of coverage of execution traces with respect to what the observer (attacker or debugger) can effectively observe about the computation. We introduce a topological characterisation of the notion of coverage relative to a given (fixed) observation for dynamic program analysis, and we show how this coverage can be changed by semantics-preserving code transformations. Once again, as in the case of static program analysis and abstract interpretation, we can morph the precision of dynamic analysis by transforming the code. In this context, we validate our model on well-established code obfuscation and watermarking techniques. We confirm the effectiveness of existing methods for preventing control-flow-graph extraction and data exploitation by dynamic analysis, including a validation of the potency of fully homomorphic data encodings in code obfuscation.
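    Purely as an illustration of the coverage idea, and not of the paper's topological characterisation, the sketch below (all class and method names are invented for this example) measures how many distinct behaviours an observer can tell apart in a fixed set of execution traces: a debugger that observes whole branch sequences distinguishes every trace, while an observer restricted to a coarser projection of the trace sees far fewer, which is the lever that semantics-preserving obfuscating transformations exploit.

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import java.util.function.Function;

        // Illustrative only: "coverage" here is taken as the fraction of traces an
        // observer can tell apart, i.e. distinct observations / total traces.
        public class ObserverCoverage {

            static <T, O> double coverage(List<T> traces, Function<T, O> observe) {
                Set<O> observed = new HashSet<>();
                for (T trace : traces) {
                    // Project each trace through the observation function.
                    observed.add(observe.apply(trace));
                }
                return (double) observed.size() / traces.size();
            }

            public static void main(String[] args) {
                // Toy traces: sequences of visited branch labels.
                List<List<String>> traces = List.of(
                        List.of("a", "b", "c"),
                        List.of("a", "b", "d"),
                        List.of("a", "e", "c"));

                // A debugger observing full branch sequences distinguishes all traces.
                System.out.println(coverage(traces, t -> t));        // 1.0

                // An observer limited to the first branch sees a single behaviour,
                // so its coverage of the same computation is much lower.
                System.out.println(coverage(traces, t -> t.get(0))); // 0.333...
            }
        }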

    Automated Program Analysis for Novice Programmers

    This paper describes how to adapt a static code analyzer to provide feedback to novice programmers and their teachers. Current analyzers have been built to give feedback to experienced programmers who work on software projects or systems. The feedback and analysis these tools provide focus on mistakes that are relevant in that context and help with debugging software systems. When teaching novice programmers, this type of advice is often not particularly useful. It would instead be more useful to use these techniques to identify problems in students' understanding of important programming concepts. This paper first explores in what respects static analyzers can support the learning and teaching of programming, and what can be implemented based on existing static analysis technology. It presents an extension of the static analyzer PMD that creates feedback which is more valuable to novice programmers. To answer the question of whether these techniques can find conceptual mistakes characteristic of novice programmers, we ran the extension over a number of student projects and compared the results with those from publicly available mature software projects.
    Blok, T.; Fehnker, A. (2017). Automated Program Analysis for Novice Programmers. In Proceedings of the 3rd International Conference on Higher Education Advances, 1138-1146. Editorial Universitat Politècnica de València. https://doi.org/10.4995/HEAD17.2017.5533
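    As a hypothetical illustration of the kind of PMD extension described above (the paper's actual rules are not reproduced here), the sketch below shows a custom Java rule written in the style of the PMD 6 API that flags redundant boolean comparisons such as "done == true", a pattern that usually signals an incomplete grasp of boolean expressions rather than a defect. The package and class names are invented for this example.

        package example.rules;  // hypothetical package, not from the paper

        import net.sourceforge.pmd.lang.java.ast.ASTBooleanLiteral;
        import net.sourceforge.pmd.lang.java.ast.ASTEqualityExpression;
        import net.sourceforge.pmd.lang.java.rule.AbstractJavaRule;

        // Flags comparisons of an expression against the literals true/false,
        // e.g. "if (done == true)", and reports them as learning-oriented feedback.
        public class RedundantBooleanComparisonRule extends AbstractJavaRule {

            @Override
            public Object visit(ASTEqualityExpression node, Object data) {
                // If either operand of == or != is a boolean literal, report the expression.
                if (node.hasDescendantOfType(ASTBooleanLiteral.class)) {
                    addViolation(data, node);
                }
                return super.visit(node, data);
            }
        }

    Registering such a rule in a custom ruleset and running it over student submissions is, roughly, the workflow the abstract describes, with results compared against mature open-source projects.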