
    Succinct Representations for Abstract Interpretation

    Abstract interpretation techniques can be made more precise by distinguishing paths inside loops, at the expense of possibly exponential complexity. SMT-solving techniques and sparse representations of paths and sets of paths avoid this pitfall. We improve previously proposed techniques for guided static analysis and the generation of disjunctive invariants by combining them with techniques for succinct representations of paths and symbolic representations for transitions based on static single assignment. Because of the non-monotonicity of the results of abstract interpretation with widening operators, it is difficult to conclude that some abstraction is more precise than another based on theoretical local precision results. We thus conducted extensive comparisons between our new techniques and previous ones, on a variety of open-source packages. (Comment: Static Analysis Symposium (SAS), Deauville, France, 2012)
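    To make the path-distinction idea concrete, here is a minimal sketch (not the paper's implementation) of using an SMT solver to test which paths through a loop body are feasible under the current analysis context. The loop body, variable names, and context are illustrative assumptions; only the z3 Python bindings are real.

```python
# Minimal sketch: use an SMT solver to test which paths through a loop
# body are feasible, so a path-sensitive analysis only has to distinguish
# the feasible ones. Requires the z3 Python bindings.
from z3 import Int, Solver, And, Not, sat

x, x1 = Int("x"), Int("x1")  # x before the iteration, x1 after (SSA-style)

# Hypothetical loop body: if x < 100: x += 2  else: x -= 1
paths = {
    "then": And(x < 100, x1 == x + 2),
    "else": And(Not(x < 100), x1 == x - 1),
}

# Context known at loop entry, e.g. an interval discovered so far.
context = And(x >= 0, x <= 50)

for name, path in paths.items():
    s = Solver()
    s.add(context, path)
    feasible = s.check() == sat
    print(f"path {name}: {'feasible' if feasible else 'infeasible'}")
# Under this context the 'else' path is infeasible, so the analysis can
# ignore it instead of joining both branches.
```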

    Efficient Generation of Correctness Certificates for the Abstract Domain of Polyhedra

    Polyhedra form an established abstract domain for inferring runtime properties of programs using abstract interpretation. Computations on them need to be certified for the static analysis results as a whole to be trusted. In this work, we look at how far we can get down the road of a posteriori verification to lower the overhead of certification of the abstract domain of polyhedra. We demonstrate methods for making the cost of inclusion certificate generation negligible. From a performance point of view, our single-representation, constraints-based implementation compares with state-of-the-art implementations.
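    The a posteriori idea can be illustrated with a toy checker: an untrusted oracle produces Farkas coefficients certifying that a polyhedron is included in a half-space, and the checker only verifies a linear combination. This is a hedged sketch, not the paper's certified checker; the numpy-based function and the example polyhedron are assumptions.

```python
# Sketch of a-posteriori certificate checking for polyhedra. A polyhedron
# is {x : A @ x <= b}; its inclusion in the half-space {x : c @ x <= d}
# is certified by Farkas coefficients lam >= 0 with lam @ A == c and
# lam @ b <= d. Generating lam may be costly; *checking* it is cheap.
import numpy as np

def check_inclusion_certificate(A, b, c, d, lam):
    lam = np.asarray(lam, dtype=float)
    return (
        np.all(lam >= 0)
        and np.allclose(lam @ np.asarray(A, dtype=float), c)
        and lam @ np.asarray(b, dtype=float) <= d + 1e-9
    )

# Example: {x >= 0, y >= 0, x + y <= 1} is included in {x + 2*y <= 2}.
A = [[-1, 0], [0, -1], [1, 1]]   # rows: -x <= 0, -y <= 0, x + y <= 1
b = [0, 0, 1]
lam = [1.0, 0.0, 2.0]            # 1*(-x<=0) + 2*(x+y<=1) gives x+2y <= 2
print(check_inclusion_certificate(A, b, [1, 2], 2, lam))  # True
```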

    AID Overlapping and Polη Hotspots Are Key Features of Evolutionary Variation Within the Human Antibody Heavy Chain (IGHV) Genes

    © 2020 Tang, Bagnara, Chiorazzi, Scharff and MacCarthy. Somatic hypermutation (SHM) of the immunoglobulin variable (IgV) loci is a key process in antibody affinity maturation. The enzyme activation-induced deaminase (AID) initiates SHM by creating C → U mismatches on single-stranded DNA (ssDNA). AID has preferential hotspot motif targets in the context of WRC/GYW (W = A/T, R = A/G, Y = C/T), and particularly at WGCW overlapping hotspots, where hotspots appear opposite each other on both strands. Subsequent recruitment of the low-fidelity DNA repair enzyme Polymerase eta (Polη) during mismatch repair creates additional mutations at WA/TW sites. Although there are more than 50 functional immunoglobulin heavy chain variable (IGHV) segments in humans, the fundamental differences between these genes and their ability to respond to all possible foreign antigens are still poorly understood. To better understand this, we generated profiles of WGCW hotspots in each of the human IGHV genes and found the expected high frequency in complementarity determining regions (CDRs) that encode the antigen binding sites, but also an unexpectedly high frequency of WGCW in certain framework (FW) sub-regions. Principal Components Analysis (PCA) of these overlapping AID hotspot profiles revealed that one major difference between IGHV families is the presence or absence of WGCW in a sub-region of FW3 sometimes referred to as “CDR4.” Further differences between members of each family (e.g., IGHV1) are primarily determined by their WGCW densities in CDR1. We previously suggested that the co-localization of AID overlapping and Polη hotspots was associated with high mutability of certain IGHV sub-regions, such as the CDRs. To evaluate the importance of this feature, we extended the WGCW profiles, combining them with local densities of Polη (WA) hotspots, thus describing the co-localization of both types of hotspots across all IGHV genes. We also verified that co-localization is associated with higher mutability. PCA of the co-localization profiles showed CDR1 and CDR2 as being the main contributors to variance among IGHV genes, consistent with the importance of these sub-regions in antigen binding. Our results suggest that AID overlapping (WGCW) hotspots, alone or in conjunction with Polη (WA/TW) hotspots, are key features of evolutionary variation between IGHV genes.
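    As a flavor of how such hotspot profiles can be computed, here is a minimal sketch of scanning a nucleotide sequence for WGCW and WA motifs using IUPAC degenerate codes. The example sequence is made up, and the per-sub-region windowing and normalization the authors use is omitted.

```python
# Minimal sketch: locate WGCW (overlapping AID) and WA (Pol-eta) hotspot
# motifs along a nucleotide sequence. IUPAC: W = A/T, R = A/G, Y = C/T.
IUPAC = {"W": "AT", "R": "AG", "Y": "CT",
         "A": "A", "C": "C", "G": "G", "T": "T"}

def motif_positions(seq, motif):
    """Start positions where `motif` (with IUPAC codes) matches `seq`."""
    hits = []
    for i in range(len(seq) - len(motif) + 1):
        if all(seq[i + j] in IUPAC[m] for j, m in enumerate(motif)):
            hits.append(i)
    return hits

seq = "TAGCTAAGCTTGCAATGCATA"            # hypothetical example sequence
wgcw = motif_positions(seq, "WGCW")      # overlapping AID hotspots
wa   = motif_positions(seq, "WA")        # Pol-eta hotspots (plus strand)
print("WGCW:", wgcw)
print("WA:  ", wa)
# Per-sub-region densities of such hits over each IGHV gene are the kind
# of profile a PCA as in the paper would take as input.
```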

    A user-friendly forest model with a multiplicative mathematical structure: a Bayesian approach to calibration

    Forest models are being increasingly used to study ecosystem functioning, through the reproduction of carbon fluxes and productivity in very different forests all over the world. Over the last two decades, the need for simple and “easy to use” models for practical applications, characterized by few parameters and equations, has become clear, and some have been developed for this purpose. These models aim to represent the main drivers underlying forest ecosystem processes while being applicable to the widest possible range of forest ecosystems. Recently, it has also become clear that model performance should not be assessed only in terms of accuracy of estimations and predictions, but also in terms of estimates of model uncertainties. Therefore, the Bayesian approach has increasingly been applied to calibrate forest models, with the aim of estimating the uncertainty of their results, and of comparing their performances. Some forest models, considered to be user-friendly, rely on a multiplicative or quasi-multiplicative mathematical structure, which is known to cause problems during the calibration process, mainly due to high correlations between parameters. In a Bayesian framework using Markov Chain Monte Carlo sampling, this is likely to prevent the chains from converging properly and from sampling the correct posterior distribution. Here we show two methods to reach proper convergence when using a forest model with a multiplicative structure: applying different algorithms with different numbers of iterations during the Markov Chain Monte Carlo, or using a two-step calibration. The results showed that recently proposed algorithms for adaptive calibration do not confer a clear advantage over the Metropolis–Hastings Random Walk algorithm for the forest model used here. Moreover, the calibration remains time-consuming and mathematically difficult, so the advantages of using a fast and user-friendly model can be lost due to the calibration process that is needed to obtain reliable results.
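    A minimal sketch of the difficulty: calibrating a toy multiplicative model y = a·b·x with a Metropolis–Hastings random walk. Only the product a·b is identified by the data, so the sampled posterior for a and b is strongly correlated. The synthetic data and tuning constants are assumptions, not the forest model of the paper.

```python
# Metropolis-Hastings random-walk calibration of a toy multiplicative
# model y = a * b * x (synthetic data). Because only a*b is identified,
# the posterior over (a, b) is strongly correlated -- the calibration
# difficulty the abstract describes.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1, 10, 50)
y_obs = 2.0 * 1.5 * x + rng.normal(0, 0.5, x.size)  # true a*b = 3.0

def log_post(theta):
    a, b = theta
    if a <= 0 or b <= 0:
        return -np.inf                       # flat positive priors
    resid = y_obs - a * b * x
    return -0.5 * np.sum(resid**2) / 0.5**2  # Gaussian likelihood, sigma=0.5

theta, lp, chain = np.array([1.0, 1.0]), log_post([1.0, 1.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp: # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

chain = np.array(chain[5000:])               # discard burn-in
print("corr(a, b) =", np.corrcoef(chain.T)[0, 1])  # strongly negative
```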

    Automatic Abstraction for Congruences

    One approach to verifying bit-twiddling algorithms is to derive invariants between the bits that constitute the variables of a program. Such invariants can often be described with systems of congruences where each equation has the form $\vec{c} \cdot \vec{x} = d \mod m$, where $m$ is a power of two, $\vec{c}$ is a vector of integer coefficients, and $\vec{x}$ is a vector of propositional variables (bits). Because of the low-level nature of these invariants and the large number of bits that are involved, it is important that the transfer functions can be derived automatically. We address this problem, showing how an analysis for bit-level congruence relationships can be decoupled into two parts: (1) a SAT-based abstraction (compilation) step which can be automated, and (2) an interpretation step that requires no SAT-solving. We exploit triangular matrix forms to derive transfer functions efficiently, even in the presence of large numbers of bits. Finally we propose program transformations that improve the analysis results.
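    To illustrate the triangular-form idea, here is a small sketch of triangularizing a congruence system modulo 2^k using odd (hence invertible) pivots. A complete treatment of columns with only even coefficients requires the Howell normal form, which this sketch omits; the example system is hypothetical.

```python
# Triangularize a congruence system c . x = d (mod 2**k). Rows are
# [c_1, ..., c_n, d] over Z_{2**k}. Odd pivots are invertible mod 2**k.
M = 2**4  # modulus 2**k, here k = 4

def triangularize(rows, n):
    rows = [[v % M for v in r] for r in rows]
    out, col = [], 0
    while rows and col < n:
        piv = next((r for r in rows if r[col] % 2 == 1), None)
        if piv is None:
            # only even coefficients in this column: a complete treatment
            # needs the Howell normal form; this sketch just moves on
            col += 1
            continue
        rows.remove(piv)
        inv = pow(piv[col], -1, M)             # odd => invertible mod 2**k
        piv = [(inv * v) % M for v in piv]     # scale pivot row to leading 1
        rows = [[(r[i] - r[col] * piv[i]) % M for i in range(n + 1)]
                for r in rows]                 # eliminate column below pivot
        out.append(piv)
        col += 1
    return out

# x1 + 2*x2 = 3 (mod 16)  and  3*x1 + 3*x2 = 5 (mod 16)
for row in triangularize([[1, 2, 3], [3, 3, 5]], n=2):
    print(row)  # -> [1, 2, 3] and [0, 1, 12], i.e. x2 = 12, x1 = 11 (mod 16)
```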

    Pruning Algorithms for Pretropisms of Newton Polytopes

    Pretropisms are candidates for the leading exponents of Puiseux series that represent solutions of polynomial systems. To find pretropisms, we propose an exact gift wrapping algorithm to prune the tree of edges of a tuple of Newton polytopes. We prefer exact arithmetic not only because of the exact input and the degrees of the output, but because of the often unpredictable growth of the coordinates in the face normals, even for polytopes in generic position. We provide experimental results with our preliminary implementation in Sage that compare favorably with the pruning method that relies only on cone intersections. (Comment: exact, gift wrapping, Newton polytope, pretropism, tree pruning; accepted for presentation at Computer Algebra in Scientific Computing, CASC 201)
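    For intuition about where pretropisms come from, here is a toy computation, in plain Python with exact integer arithmetic, of the primitive inner edge normals of a single 2-D Newton polygon; for one polygon in the plane these normals are the pretropism candidates. The paper's actual algorithm prunes a tree over tuples of higher-dimensional polytopes, and its implementation is in Sage; the polynomial below is a made-up example.

```python
# Toy sketch: primitive inner edge normals of a 2-D Newton polygon,
# computed with exact integer arithmetic only.
from math import gcd

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    """Andrew's monotone chain; integer-exact, returns CCW hull."""
    pts = sorted(set(points))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(pts[::-1])
    return lower[:-1] + upper[:-1]

def inner_edge_normals(points):
    hull = convex_hull(points)
    normals = []
    for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1]):
        dx, dy = x1 - x0, y1 - y0
        g = gcd(abs(dx), abs(dy))
        normals.append((-dy // g, dx // g))  # primitive inner normal
    return normals

# Newton polygon of x^3 + x*y + y^2 + 1 (exponent vectors of its terms):
print(inner_edge_normals([(3, 0), (1, 1), (0, 2), (0, 0)]))
# -> [(0, 1), (-2, -3), (1, 0)]
```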

    LNCS

    We present two algorithmic approaches for synthesizing linear hybrid automata from experimental data. Unlike previous approaches, our algorithms work without a template and generate an automaton with nondeterministic guards and invariants, and with an arbitrary number and topology of modes. They thus construct a succinct model from the data and provide formal guarantees. In particular, (1) the generated automaton can reproduce the data up to a specified tolerance and (2) the automaton is tight, given the first guarantee. Our first approach encodes the synthesis problem as a logical formula in the theory of linear arithmetic, which can then be solved by an SMT solver. This approach minimizes the number of modes in the resulting model but is only feasible for limited data sets. To address scalability, we propose a second approach that does not insist on finding a minimal model. This algorithm constructs an initial automaton and then iteratively extends it based on processing new data, and is therefore well suited for online and synthesis-in-the-loop applications. The core of the algorithm is a membership query that checks whether, within the specified tolerance, a given data set can result from the execution of a given automaton. We solve this membership problem for linear hybrid automata by repeated reachability computations. We demonstrate the effectiveness of the algorithm on synthetic data sets and on cardiac-cell measurements.
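    A highly simplified sketch of the membership question for one signal: can a model whose modes merely bound the derivative reproduce sampled data within a tolerance? Real linear hybrid automata add guards, invariants, and resets, and the paper decides membership by repeated reachability computations; everything below, including the data and modes, is an illustrative assumption.

```python
# Simplified membership check: each "mode" is a derivative interval
# (dmin, dmax); a data step is accepted if SOME mode can produce a value
# within eps of the observed endpoint over that step.
def membership(times, values, modes, eps):
    steps = zip(zip(times, values), zip(times[1:], values[1:]))
    for (t0, v0), (t1, v1) in steps:
        dt, dv = t1 - t0, v1 - v0
        if not any(dmin * dt - eps <= dv <= dmax * dt + eps
                   for dmin, dmax in modes):
            return False
    return True

times  = [0.0, 1.0, 2.0, 3.0]
values = [0.0, 1.1, 2.0, 1.2]
modes  = [(0.8, 1.2), (-1.0, -0.7)]          # "rising" and "falling" modes
print(membership(times, values, modes, eps=0.05))  # True
```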

    Polyhedral Analysis using Parametric Objectives

    The abstract domain of polyhedra lies at the heart of many program analysis techniques. However, its operations can be expensive, precluding their application to polyhedra that involve many variables. This paper describes a new approach to computing polyhedral domain operations. The core of this approach is an algorithm to calculate variable elimination (projection) based on parametric linear programming. The algorithm enumerates only non-redundant inequalities of the projection space, and hence permits anytime approximation of the output.
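    For contrast with the parametric-LP approach, here is classical Fourier–Motzkin elimination, which combines every lower bound with every upper bound and therefore typically emits redundant inequalities; that blow-up is exactly what enumerating only non-redundant inequalities avoids. The code is an illustrative sketch, not the paper's algorithm.

```python
# Fourier-Motzkin elimination of variable x_j from a system of
# inequalities coeffs . x <= rhs, for contrast with parametric LP.
def eliminate(rows, j):
    """rows: list of (coeffs, rhs); returns the system with x_j dropped."""
    lower, upper, rest = [], [], []
    for c, r in rows:
        if c[j] > 0:
            upper.append((c, r))      # upper bound on x_j
        elif c[j] < 0:
            lower.append((c, r))      # lower bound on x_j
        else:
            rest.append(([v for i, v in enumerate(c) if i != j], r))
    for cl, rl in lower:
        for cu, ru in upper:
            lo, up = -cl[j], cu[j]    # both positive; x_j coefficients cancel
            c = [up * cl[i] + lo * cu[i] for i in range(len(cl)) if i != j]
            rest.append((c, up * rl + lo * ru))
    return rest

# Project {x - y <= 0, -x <= 0, x + y <= 4} onto y (eliminate x, j = 0):
rows = [([1, -1], 0), ([-1, 0], 0), ([1, 1], 4)]
for c, r in eliminate(rows, 0):
    print(c, "<=", r)   # -> [-1] <= 0 and [1] <= 4, i.e. 0 <= y <= 4
```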

    THE DERMAL CHROMATOPHORE UNIT


    Verifying parameterized timed security protocols

    Quantitative timing is often explicitly used in systems for better security; e.g., the credentials for automatic website logon often have a limited lifetime. Verifying timing-relevant security protocols in these systems is very challenging, as timing adds another dimension of complexity compared with untimed protocol verification. In our previous work, we proposed an approach to check the correctness of timed authentication in security protocols with fixed timing constraints. However, a more difficult question persists: given a particular protocol design, does the protocol have security flaws in its design, or can it be configured to be secure with proper parameter values? In this work, we answer this question by proposing a parameterized verification framework, where the quantitative parameters in the protocols can be intuitively specified as well as automatically analyzed. Given a security protocol, our verification algorithm either produces the secure constraints of the parameters, or constructs an attack that works for any parameter values. The correctness of our algorithm is formally proved. We implement our method into a tool called PTAuth and evaluate it with several security protocols. Using PTAuth, we have successfully found a timing attack in Kerberos V that was previously unreported.
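    A toy sketch of the parameterized question (not PTAuth's algorithm): for which values of a credential lifetime parameter is a given replay trace infeasible? The protocol model, names, and timing constants below are hypothetical; z3 is used to check the trace under a candidate parameter constraint.

```python
# Toy parameterized check: the lifetime L is a symbolic parameter, and we
# ask whether a replay attack trace is satisfiable, then re-check under a
# candidate secure constraint on L.
from z3 import Real, Solver, sat

L = Real("L")                 # parameter: credential lifetime
t_issue = Real("t_issue")
t_replay = Real("t_replay")

s = Solver()
s.add(L > 0, t_issue >= 0)
# Attack trace: the intercepted credential is replayed after a network
# delay of at least 3 time units, and the server accepts it while the
# elapsed time is within the lifetime L.
s.add(t_replay >= t_issue + 3, t_replay - t_issue <= L)

if s.check() == sat:
    print("attack possible for some L, e.g.:", s.model()[L])

# Under the candidate secure constraint L < 3 the trace becomes
# infeasible, so L < 3 is a secure parameter region for this toy model.
s.add(L < 3)
print("attack with L < 3:", s.check())   # unsat
```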