433 research outputs found

    A Survey of Symbolic Execution Techniques

    Get PDF
    Many security and software testing applications require checking whether certain properties of a program hold for any possible usage scenario. For instance, a tool for identifying software vulnerabilities may need to rule out the existence of any backdoor to bypass a program's authentication. One approach would be to test the program using different, possibly random inputs. As the backdoor may only be hit for very specific program workloads, automated exploration of the space of possible inputs is of the essence. Symbolic execution provides an elegant solution to the problem, by systematically exploring many possible execution paths at the same time without necessarily requiring concrete inputs. Rather than taking on fully specified input values, the technique abstractly represents them as symbols, resorting to constraint solvers to construct actual instances that would cause property violations. Symbolic execution has been incubated in dozens of tools developed over the last four decades, leading to major practical breakthroughs in a number of prominent software reliability applications. The goal of this survey is to provide an overview of the main ideas, challenges, and solutions developed in the area, distilling them for a broad audience. The present survey has been accepted for publication at ACM Computing Surveys; this is the authors' pre-print copy. If you are considering citing this survey, we would appreciate if you could use the BibTeX entry at http://goo.gl/Hf5Fvc
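    To make the branch-exploration idea concrete, below is a minimal, self-contained Python sketch in the spirit of the technique the abstract describes (it is not taken from the survey; the function check_pin, the hand-enumerated path conditions, and the brute-force solve routine standing in for a real constraint solver such as Z3 are all illustrative assumptions): each program path is characterized by a path condition over the symbolic inputs, and the "solver" is asked for concrete inputs that drive execution down that path, including the hard-to-hit backdoor path.

        # Minimal symbolic-execution sketch (illustrative only; see the note above).
        from itertools import product

        def check_pin(x, y):
            """Program under test: succeeds only for one narrow input relation."""
            if x > 100:
                if y == x - 42:
                    return True    # the "backdoor" that random testing rarely hits
            return False

        # A real engine derives these path conditions from the code; here they
        # are enumerated by hand for the three paths of check_pin.
        PATHS = [
            [lambda x, y: x > 100, lambda x, y: y == x - 42],   # backdoor path
            [lambda x, y: x > 100, lambda x, y: y != x - 42],
            [lambda x, y: x <= 100],
        ]

        def solve(path_condition, domain=range(0, 200)):
            """Stand-in constraint solver: brute-force a model of the path condition."""
            for x, y in product(domain, domain):
                if all(pred(x, y) for pred in path_condition):
                    return (x, y)
            return None    # path condition unsatisfiable over the search domain

        for pc in PATHS:
            model = solve(pc)
            print("inputs", model, "-> check_pin:", model and check_pin(*model))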

    Concolic Execution and Test Case Generation in Prolog

    Full text link
    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-17822-6_10

    Symbolic execution extends concrete execution by allowing symbolic input data and then exploring all feasible execution paths. It has been defined and used in the context of many different programming languages and paradigms. A symbolic execution engine is at the heart of many program analysis and transformation techniques, like partial evaluation, test case generation, or model checking, to name a few. Despite its relevance, traditional symbolic execution also suffers from several drawbacks. For instance, the search space is usually huge (often infinite) even for the simplest programs. Also, symbolic execution generally computes an overapproximation of the concrete execution space, so that false positives may occur. In this paper, we propose the use of a variant of symbolic execution, called concolic execution, for test case generation in Prolog. Our technique aims at full statement coverage. We argue that this technique computes an underapproximation of the concrete execution space (thus avoiding false positives) and scales up better to medium and large Prolog applications.

    This work has been partially supported by the EU (FEDER) and the Spanish Ministerio de Economía y Competitividad (Secretaría de Estado de Investigación, Desarrollo e Innovación) under grant TIN2013-44742-C4-1-R and by the Generalitat Valenciana under grant PROMETEO/2011/052.

    Vidal Oriola, G.F. (2015). Concolic Execution and Test Case Generation in Prolog. In: Logic-Based Program Synthesis and Transformation. Springer, pp. 167-181. https://doi.org/10.1007/978-3-319-17822-6_10
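    As a rough illustration of the concolic idea for readers coming from outside logic programming, the following Python sketch (the paper itself works on Prolog; the function classify, the tracing scheme, and the brute-force solve routine are hypothetical stand-ins for the authors' machinery) drives exploration from concrete runs: each run records the branch decisions it took, one recorded decision at a time is negated, and a new concrete input satisfying the altered prefix is searched for, until every branch outcome has been exercised.

        # Minimal concolic-testing sketch (illustrative only; see the note above).

        def classify(n, trace):
            """Function under test; `trace` records (branch_id, outcome) pairs."""
            trace.append(("n>0", n > 0))
            if n > 0:
                trace.append(("n%2==0", n % 2 == 0))
                return "even-positive" if n % 2 == 0 else "odd-positive"
            return "non-positive"

        def solve(prefix, flipped, domain=range(-5, 6)):
            """Find a concrete n whose trace starts with `prefix` plus the negated decision."""
            want = prefix + [(flipped[0], not flipped[1])]
            for n in domain:
                t = []
                classify(n, t)
                if t[:len(want)] == want:
                    return n
            return None

        seen, worklist, tests = set(), [0], []      # start from an arbitrary concrete input
        while worklist:
            n = worklist.pop()
            trace = []
            result = classify(n, trace)
            tests.append((n, result))
            for i, decision in enumerate(trace):    # try to flip each decision not yet seen
                key = (decision[0], not decision[1])
                if key not in seen:
                    seen.add(key)
                    new_n = solve(trace[:i], decision)
                    if new_n is not None and all(new_n != t[0] for t in tests):
                        worklist.append(new_n)
            seen.update(trace)

        print(tests)   # concrete inputs collectively covering every statement of classify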

    Test generation for high coverage with abstraction refinement and coarsening (ARC)

    Get PDF
    Testing is the main approach used in the software industry to expose failures. Producing thorough test suites is an expensive and error-prone task that can greatly benefit from automation. Two challenging problems in test automation are generating test input and evaluating the adequacy of test suites: the first amounts to producing a set of test cases that accurately represent the software behavior; the second requires defining appropriate metrics to evaluate the thoroughness of the testing activities. Structural testing addresses these problems by measuring the number of code elements that are executed by a test suite. The code elements that are not covered by any execution are natural candidates for generating further test cases, and the measured coverage rate can be used to estimate the thoroughness of the test suite. Several empirical studies show that test suites achieving high coverage rates exhibit a high failure detection ability. However, producing highly covering test suites automatically is hard, as certain code elements are executed only under complex conditions while others might not be reachable at all. In this thesis we propose Abstraction Refinement and Coarsening (ARC), a goal-oriented technique that combines static and dynamic software analysis to automatically generate test suites with high code coverage. At the core of our approach there is an abstract program model that enables the synergistic application of the different analysis components. In ARC we integrate Dynamic Symbolic Execution (DSE) and abstraction refinement to precisely direct test generation towards the coverage goals and detect infeasible elements. ARC includes a novel coarsening algorithm for improved scalability. We implemented ARC-B, a prototype tool that analyses C programs and produces test suites that achieve high branch coverage. Our experiments show that the approach effectively exploits the synergy between symbolic testing and reachability analysis, outperforming state-of-the-art test generation approaches. We evaluated ARC-B on industry-relevant software and exposed previously unknown failures in a safety-critical software component.
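    The overall goal-directed loop the abstract describes can be pictured with a small Python skeleton (hypothetical names throughout; ARC-B itself analyses C programs, and the random-search budget in generate_for below merely stands in for ARC's combination of dynamic symbolic execution and abstraction refinement): branches not yet covered by the suite are selected as goals, one test is generated per goal, and goals the generator cannot reach within its budget are set aside as candidates for infeasibility.

        # Goal-oriented test-generation skeleton (illustrative only; see the note above).
        import random

        def program(x):
            """Toy program under test; returns the set of branch outcomes it covers."""
            covered = {("b1", x > 10)}
            if x > 10:
                covered.add(("b2", x % 7 == 0))
            return covered

        ALL_BRANCHES = {("b1", True), ("b1", False), ("b2", True), ("b2", False)}

        def generate_for(goal, budget=10_000):
            """Stand-in for DSE + refinement: search for an input covering `goal`."""
            for _ in range(budget):
                x = random.randint(-1000, 1000)
                if goal in program(x):
                    return x
            return None    # goal looks infeasible within the budget

        suite, covered = [], set()
        uncovered = set(ALL_BRANCHES)
        while uncovered:
            goal = uncovered.pop()
            x = generate_for(goal)
            if x is None:
                continue       # ARC would refine the abstraction or flag infeasibility here
            suite.append(x)
            covered |= program(x)
            uncovered -= covered

        print("suite:", suite, "| branch coverage:", len(covered), "of", len(ALL_BRANCHES))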

    Taming the Static Analysis Beast

    Get PDF
    While industrial-strength static analysis over large, real-world codebases has become commonplace, so too have difficult-to-analyze language constructs, large libraries, and popular frameworks. These features make constructing and evaluating a novel, sound analysis painful, error-prone, and tedious. We motivate the need for research to address these issues by highlighting some of the many challenges faced by static analysis developers in today's software ecosystem. We then propose our short- and long-term research agenda to make static analysis over modern software less burdensome.