
    BLISS: Improved Symbolic Execution by Bounded Lazy Initialization with SAT Support

    In this article we present BLISS, a novel technique that builds upon bounded lazy initialization (BLI), extending it with field bound refinement and satisfiability checks. Field bounds are refined while a symbolic structure is concretized, avoiding cases that, given the concrete part of the heap and the field bounds, can be deemed redundant. Satisfiability checks on refined symbolic heaps allow us to prune these heaps as soon as it can be confirmed that they cannot be extended to any valid concrete heap. Compared to lazy initialization (LI) and BLI, BLISS reduces the time required by LI by up to four orders of magnitude for the most complex data structures. Moreover, BLISS reduces the number of partially symbolic structures obtained by exploring program paths by over 50%, and by over 90% in some cases, compared to LI. BLISS also uses less memory than LI and BLI, which enables the exploration of states unreachable with previous techniques.
    Sociedad Argentina de Informática e Investigación Operativa (SADIO)
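    To make the interplay of lazy initialization, field-bound refinement, and satisfiability pruning concrete, here is a minimal self-contained sketch over a toy singly linked structure. The invariant, the refinement rule, and all names are illustrative assumptions, not the tool's API; the actual technique runs inside Symbolic PathFinder and discharges its checks with a SAT solver.

```java
import java.util.*;

/**
 * A minimal, self-contained sketch of the BLISS idea: lazy
 * initialization branches over the targets a field bound admits,
 * the bound is refined as the heap is concretized, and a
 * satisfiability check prunes partial heaps that cannot extend to
 * any valid structure. The toy invariant (an acyclic list whose
 * next pointers go to strictly larger node ids) and all names here
 * are illustrative, not the tool's API.
 */
public class BlissSketch {
    static final int N = 4;            // node ids 0..N-1; -1 means null
    static int explored = 0, pruned = 0;

    public static void main(String[] args) {
        // Initial field bound: next(i) may target any node, or null.
        List<Set<Integer>> bound = new ArrayList<>();
        for (int i = 0; i < N; i++) {
            Set<Integer> b = new HashSet<>();
            for (int j = 0; j < N; j++) b.add(j);
            b.add(-1);
            bound.add(b);
        }
        expand(new int[N], 0, bound);
        System.out.println("explored=" + explored + ", pruned=" + pruned);
    }

    // Concretize next(i), trying each target the refined bound allows.
    static void expand(int[] next, int i, List<Set<Integer>> bound) {
        if (i == N) { explored++; return; }       // fully concrete heap
        for (int tgt : bound.get(i)) {
            next[i] = tgt;
            // Refinement: in a well-formed list each node has at most
            // one incoming next pointer, so a chosen non-null target
            // can be dropped from the bounds of the remaining fields.
            List<Set<Integer>> refined = new ArrayList<>();
            for (int k = 0; k < N; k++) refined.add(new HashSet<>(bound.get(k)));
            if (tgt != -1)
                for (int k = i + 1; k < N; k++) refined.get(k).remove(tgt);
            // Stand-in for the SAT check: prune partial heaps whose
            // concretized prefix already violates the invariant.
            if (satisfiable(next, i)) expand(next, i + 1, refined);
            else pruned++;
        }
    }

    // Toy invariant on the concrete prefix: next ids strictly increase.
    static boolean satisfiable(int[] next, int upto) {
        for (int k = 0; k <= upto; k++)
            if (next[k] != -1 && next[k] <= k) return false;
        return true;
    }
}
```

    Running the sketch prints how many fully concrete heaps survive and how many partial heaps the satisfiability stand-in pruned before they were ever fully expanded.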

    Efficient Bounded Model Checking of Heap-Manipulating Programs using Tight Field Bounds

    Software model checkers are able to exhaustively explore different bounded program executions arising from various sources of nondeterminism. These tools provide statements to produce nondeterministic values for certain variables, forcing the corresponding model checker to consider all possible values for them during verification. While these statements offer an effective way of verifying programs handling basic data types and simple structured types, they are inappropriate as a mechanism for nondeterministic generation of pointers, favoring the use of insertion routines to produce dynamic data structures when verifying, via model checking, programs handling such data types. We present a technique to improve model checking of programs handling heap-allocated data types by taming the explosion of candidate structures that can be built when nondeterministically initializing heap object fields. The technique exploits precomputed relational bounds that disregard values deemed invalid by the structure’s type invariant, thus reducing the state space to be explored by the model checker. Precomputing the relational bounds is itself a challenging and costly task, for which we also present an efficient algorithm based on incremental SAT solving. We implement our approach on top of the CBMC bounded model checker and show that, for a number of data structure implementations, we can handle significantly larger input structures and detect faults that CBMC is unable to detect.
    Sociedad Argentina de Informática e Investigación Operativa (SADIO)
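    A tight relational bound keeps a field tuple only if some valid structure within the scope uses it. The following sketch illustrates the bound-computation loop on a toy scope, with a brute-force feasibility query standing in for the incremental SAT calls the paper relies on; the invariant, the scope, and all names are assumptions for illustration.

```java
import java.util.*;

/**
 * Sketch of a "tight field bound": a field tuple next(i)=j is kept
 * only if some valid structure within the scope uses it. The paper
 * computes such bounds with incremental SAT solving, issuing one
 * solver query per candidate tuple and reusing learned clauses; here
 * a brute-force feasibility check over canonical (symmetry-broken)
 * acyclic lists stands in for the solver.
 */
public class TightBoundsSketch {
    static final int N = 4;        // scope: node ids 0..N-1; -1 is null

    public static void main(String[] args) {
        List<String> tight = new ArrayList<>();
        // One feasibility query per candidate tuple, as in the paper's
        // SAT-based loop; infeasible tuples are dropped from the bound.
        for (int i = 0; i < N; i++)
            for (int j = -1; j < N; j++)
                if (someValidHeapUses(i, j))
                    tight.add("next(" + i + ")=" + (j == -1 ? "null" : j));
        System.out.println("tight bound for next: " + tight);
    }

    // Stand-in for the solver query: does any canonical acyclic list
    // of length 1..N (node k linking to k+1, last node to null)
    // satisfy next(i)=j?
    static boolean someValidHeapUses(int i, int j) {
        for (int len = 1; len <= N; len++)
            if (i < len && j == (i + 1 < len ? i + 1 : -1))
                return true;
        return false;
    }
}
```

    For this toy scope the surviving bound is next(i) ∈ {i+1, null}: exactly the kind of drastic reduction that shrinks the nondeterministic initialization the model checker must explore.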

    Distributed techniques for efficient bounded verification

    Formal analysis of software artifacts is often divided into two kinds of methods: heavyweight and lightweight. The former offer complete certainty in the result, but require interaction with highly trained expert users. The latter are easier to learn and supported by fully automated tools, but the validity of their results is typically partial. For instance, in bounded exhaustive analysis techniques, the validity of the result is always limited by some notion of scope or maximum size provided by the user. To increase the level of confidence in the result, the user can simply increase the scope of the analysis and run the tool again. However, the computational cost of such automated analyses is almost always exponential in said scope. In this thesis we present a series of techniques and tools with the common goal of improving the scalability of bounded exhaustive analysis of software artifacts. In particular, we are interested in leveraging the availability of low-cost hardware (such as PC clusters, which are currently available in many companies and institutions) in order to push the tractability barrier of bounded exhaustive analysis techniques.
We present transcoping, an incremental approach that explores bounded exhaustive verification problems at small sizes, gathers information, and extrapolates it to make better-informed decisions at larger sizes of the same problems. We show its application to the distributed analysis of Alloy models, as well as to the generation of bounded exhaustive test suites using hybrid invariants. We then present Ranger, a different technique for distributing the analysis of Alloy models, which splits the problem into subproblems of lower complexity by linearizing the space of potential counterexamples and dividing it into disjoint intervals. Building on the notion of tight field bounds from the TACO technique, we present MUCHO-TACO, a technique for distributed verification of Java programs annotated with JML contracts, based on the sequential TACO tool. Finally, we present BLISS, a set of techniques built on Symbolic PathFinder that improve the search for valid structures during symbolic execution of programs with non-primitive inputs.
Fil: Rosner, Nicolás Leandro. Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales; Argentina.
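The interval-splitting idea behind Ranger can be sketched in a few lines: linearize the candidate space into a range of integers, partition the range into disjoint intervals, and let each worker search its own interval independently. The worker count, the toy property, and all names below are illustrative assumptions; Ranger linearizes the space of potential counterexamples of an Alloy model, not a plain integer range.

```java
import java.util.*;

/**
 * Sketch of the interval-splitting idea behind Ranger: linearize the
 * candidate space (here, all assignments to K boolean atoms, read as
 * K-bit integers) into [0, 2^K), partition it into disjoint
 * intervals, and let each worker search its own interval.
 */
public class RangerSketch {
    static final int K = 20, WORKERS = 4;

    public static void main(String[] args) throws InterruptedException {
        final long total = 1L << K;
        final long step = total / WORKERS;
        List<Thread> pool = new ArrayList<>();
        for (int w = 0; w < WORKERS; w++) {
            final long lo = w * step;
            final long hi = (w == WORKERS - 1) ? total : lo + step;
            Thread t = new Thread(() -> {
                // Each worker exhausts its own disjoint interval.
                for (long c = lo; c < hi; c++)
                    if (violates(c))
                        System.out.println("counterexample: " + c);
            });
            pool.add(t);
            t.start();
        }
        for (Thread t : pool) t.join();   // all intervals exhausted
    }

    // Toy stand-in for an Alloy assertion: it fails only for the
    // candidate with all K bits set.
    static boolean violates(long c) {
        return c == (1L << K) - 1;
    }
}
```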

    Transcranial Doppler Pulsatility Index: What it is and What it Isn't.

    BACKGROUND: The transcranial Doppler (TCD) pulsatility index (PI) has traditionally been interpreted as a descriptor of distal cerebrovascular resistance (CVR). We sought to evaluate the relationship between PI and CVR in situations where CVR increases (mild hypocapnia) and decreases (plateau waves of intracranial pressure, ICP). METHODS: Recordings from patients with head injury undergoing monitoring of arterial blood pressure (ABP), ICP, cerebral perfusion pressure (CPP), and TCD-assessed cerebral blood flow velocities (FV) were analyzed. The Gosling pulsatility index was compared between baseline and ICP plateau waves (n = 20 patients) or short-term (30-60 min) hypocapnia (n = 31). In addition, a modeling study was conducted with the "spectral" PI (calculated using the fundamental harmonic of FV), resulting in a theoretical formula expressing the dependence of PI on the balance of cerebrovascular impedances. RESULTS: PI increased significantly (p < 0.001) while CVR decreased (p < 0.001) during plateau waves. During hypocapnia, both PI and CVR increased (p < 0.001). The modeling formula explained more than 65% of the variability of the Gosling PI and 90% of the variability of the "spectral" PI (R = 0.81 and R = 0.95, respectively). CONCLUSION: The TCD pulsatility index can be easily and quickly assessed but is usually misinterpreted as a descriptor of CVR. The mathematical model reveals a complex relationship between PI and multiple haemodynamic variables.
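    For reference, Gosling's PI is the standard peak-to-trough ratio of the flow velocity waveform, and the "spectral" PI mentioned above replaces that amplitude with the amplitude of the fundamental harmonic; the notation below is a plausible reading of the abstract, not the paper's exact formulation.

```latex
% Gosling pulsatility index: FV_s, FV_d, FV_m are the systolic,
% diastolic, and time-averaged mean flow velocities.
\[
  \mathrm{PI}_{\mathrm{Gosling}} \;=\; \frac{FV_{s} - FV_{d}}{FV_{m}}
\]
% "Spectral" PI: A_1 is the amplitude of the fundamental (first)
% harmonic of the FV waveform.
\[
  \mathrm{PI}_{\mathrm{spectral}} \;=\; \frac{A_{1}}{FV_{m}}
\]
```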