
    Generalized Points-to Graphs: A New Abstraction of Memory in the Presence of Pointers

    Flow- and context-sensitive points-to analysis is difficult to scale: for top-down approaches, the problem centers on repeated analysis of the same procedure; for bottom-up approaches, the abstractions used to represent procedure summaries have not scaled while preserving precision. We propose a novel abstraction called the Generalized Points-to Graph (GPG), which views points-to relations as memory updates and generalizes them using counts of indirection levels, leaving the unknown pointees implicit. This allows us to construct GPGs as compact representations of bottom-up procedure summaries in terms of memory updates and the control flow between them. Their compactness is ensured by the following optimizations: strength reduction reduces the indirection levels, redundancy elimination removes redundant memory updates and minimizes control flow (without over-approximating data dependence between memory updates), and call inlining enhances the opportunities for these optimizations. We devise novel operations and data flow analyses for these optimizations. Our quest for scalability of points-to analysis leads to the following insight: the real killer of scalability in program analysis is not the amount of data but the amount of control flow that it may be subjected to in search of precision. The effectiveness of GPGs lies in the fact that they discard as much control flow as possible without losing precision (i.e., by preserving data dependence without over-approximation). This is why GPGs remain very small even for main procedures that contain the effect of the entire program, and it allows our implementation to scale to 158 kLoC of C code.
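
    To make the abstraction concrete, here is a minimal Haskell sketch (not the paper's implementation) of edges annotated with indirection levels, plus one strength-reduction step; the data type, field names, and the single composition rule shown are illustrative assumptions.

```haskell
-- Generalized points-to edges with indirection levels; the names and the
-- single strength-reduction rule below are illustrative assumptions,
-- not the paper's API.
data GpgEdge = Edge
  { src    :: String  -- source pointer variable
  , srcInd :: Int     -- indirection level applied to the source
  , dst    :: String  -- target variable
  , dstInd :: Int     -- indirection level applied to the target
  } deriving (Show)

-- Statements become edges whose indirection counts keep the (possibly
-- unknown) pointees implicit:
--   x = &y  ~~>  Edge "x" 1 "y" 0
--   x = y   ~~>  Edge "x" 1 "y" 1
--   *x = y  ~~>  Edge "x" 2 "y" 1

-- One strength-reduction case: given x -(1,0)-> y (i.e., x = &y), an
-- edge that dereferences x on its source side can lower its indirection
-- level by rewriting the source to y.
reduce :: GpgEdge -> GpgEdge -> Maybe GpgEdge
reduce (Edge p 1 q 0) (Edge p' i r j)
  | p == p' && i > 1 = Just (Edge q (i - 1) r j)
reduce _ _ = Nothing

main :: IO ()
main = do
  let e1 = Edge "x" 1 "y" 0   -- x = &y
      e2 = Edge "x" 2 "z" 1   -- *x = z
  print (reduce e1 e2)        -- Just (Edge "y" 1 "z" 1 ...), i.e. y = z
```

    Composing edges this way is what lets a summary drop explicit pointee sets: the unknown pointees stay implicit in the counts until a caller supplies them.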

    ParaDox: Eliminating Voltage Margins via Heterogeneous Fault Tolerance

    Providing reliability is becoming a challenge for chip manufacturers, who must simultaneously improve miniaturization, performance, and energy efficiency. This leads to very large voltage and frequency margins, designed to avoid errors even in the worst case, along with significant hardware expenditure on eliminating voltage spikes and other forms of transient error, causing considerable inefficiency in power consumption and performance. We flip the traditional trade-off between reliability and performance around by exploiting error resilience for power and performance gains. ParaMedic is a recent architecture that provides reliability with low overheads via automatic hardware error recovery: it splits checking across many small cores in a heterogeneous multicore system with hardware logging support. However, its design is based on the assumption that errors are exceptional. We transform ParaMedic into ParaDox, which performs well in both error-intensive and error-scarce scenarios, thus allowing correct execution even when undervolted and overclocked. Evaluation within error-intensive simulation environments confirms the error resilience of ParaDox and its low recovery cost. We estimate that, compared to a non-resilient system with margins, ParaDox can reduce energy-delay product by 15% through undervolting while completely recovering from any induced errors.
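
    As a rough software analogy of the split-checking idea (not the hardware design), the sketch below has a "main core" log per-segment results that small "checker cores" independently replay and compare; all names and the unit of work are invented for illustration.

```haskell
import Data.List (foldl')

-- The unit of work: summing one segment of inputs.
runSegment :: [Int] -> Int
runSegment = foldl' (+) 0

-- The "main core" records (segment, claimed result) pairs in a log.
mainCore :: [[Int]] -> [([Int], Int)]
mainCore segs = [ (s, runSegment s) | s <- segs ]

-- Each "checker core" replays one logged segment; many such checkers can
-- verify segments in parallel, and a mismatch would trigger recovery.
checker :: ([Int], Int) -> Bool
checker (seg, claimed) = runSegment seg == claimed

main :: IO ()
main = do
  let logEntries = mainCore [[1 .. 100], [101 .. 200], [201 .. 300]]
  print (map checker logEntries)  -- [True,True,True] in an error-free run
```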

    Type-Inference Based Short Cut Deforestation (nearly) without Inlining

    Deforestation optimises a functional program by transforming it into another one that does not create certain intermediate data structures. In [ICFP'99] we presented a type-inference based deforestation algorithm which performs extensive inlining. However, across module boundaries only limited inlining is practically feasible. Furthermore, inlining is a non-trivial transformation which is therefore best implemented as a separate optimisation pass. To perform short cut deforestation (nearly) without inlining, Gill suggested splitting definitions into workers and wrappers and inlining only the small wrappers, which transfer the information needed for deforestation. We show that Gill's use of the function build limits deforestation, and we note that his reasons for using build do not apply to our approach. Hence we develop a more general worker/wrapper scheme without build. We give a type-inference based algorithm which splits definitions into workers and wrappers. Finally, we show that we can deforest more expressions with the worker/wrapper scheme than with the inlining-based algorithm.
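
    For background, the following runnable Haskell sketch shows the classic build-based short cut fusion that the abstract contrasts with, together with an illustrative worker/wrapper split; it is not the paper's build-free algorithm.

```haskell
{-# LANGUAGE RankNTypes #-}

-- The classic short cut rule: foldr k z (build g) = g k z, which removes
-- the intermediate list entirely.
build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
build g = g (:) []

-- Worker/wrapper split of a list producer: the small wrapper exposes the
-- fusible structure and is cheap to inline across module boundaries,
-- while the worker carries the actual computation.
upTo :: Int -> [Int]                            -- wrapper: inline me
upTo n = build (upToWorker n)

upToWorker :: Int -> (Int -> b -> b) -> b -> b  -- worker: stays put
upToWorker n cons nil = go 1
  where go i | i > n     = nil
             | otherwise = cons i (go (i + 1))

main :: IO ()
main = print (sum (upTo 100))  -- after fusion, no [1..100] is ever built
```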

    From MinX to MinC: Semantics-Driven Decompilation of Recursive Datatypes

    Reconstructing the meaning of a program from its binary executable is known as reverse engineering; it has a wide range of applications in software security, exposing piracy, maintaining legacy systems, etc. Since reversing is ultimately a search for meaning, there is much interest in inferring a type (a meaning) for the elements of a binary in a consistent way. Unfortunately, existing approaches do not guarantee any semantic relevance for their reconstructed types. This paper presents a new and semantically founded approach that provides strong guarantees for the reconstructed types. Key to our approach is the derivation of a witness program in a high-level language alongside the reconstructed types. This witness has the same semantics as the binary, is type correct by construction, and induces a (justifiable) type assignment on the binary. Moreover, the approach effectively yields a type-directed decompiler. We formalise and implement the approach for reversing MinX, an abstraction of x86, to MinC, a type-safe dialect of C with recursive datatypes. Our evaluation compiles a range of textbook C algorithms to MinX and then recovers the original structures.
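
    The phrase "type correct by construction" can be illustrated with a tiny typed AST: any witness expressed as a value of the GADT below is necessarily well-typed. This is a hedged Haskell analogy, not the paper's MinC syntax.

```haskell
{-# LANGUAGE GADTs #-}

-- Any witness built from these constructors is well-typed by
-- construction; an ill-typed term simply cannot be expressed.
data Expr t where
  IntE  :: Int -> Expr Int
  AddE  :: Expr Int -> Expr Int -> Expr Int
  NilE  :: Expr [Int]
  ConsE :: Expr Int -> Expr [Int] -> Expr [Int]  -- a recursive datatype
  SumE  :: Expr [Int] -> Expr Int

-- Evaluation is total over the typed syntax: no tag checks needed.
eval :: Expr t -> t
eval (IntE n)     = n
eval (AddE a b)   = eval a + eval b
eval NilE         = []
eval (ConsE x xs) = eval x : eval xs
eval (SumE xs)    = sum (eval xs)

main :: IO ()
main = print (eval (SumE (ConsE (IntE 1) (ConsE (IntE 2) NilE))))  -- 3
```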

    Changes in the microstructure of a spring Cr-Ni alloy after aging

    It is established that aging the quenched 47ХНМ alloy at 500 °C for 5...10 h does not lead to decomposition of the supersaturated solid solution; when the aging temperature is raised to 600 °C, signs of homogeneous decomposition begin to appear in the form of α-phase particles. It is shown that after aging quenched specimens at 700 °C, discontinuous decomposition develops intensively, with precipitation of an incoherent chromium-based α-phase; its volume fraction grows with aging time, reaching its maximum after 5...10 h of aging.

    A polymorphic type system with subtypes for Prolog


    Exploiting Term Hiding to Reduce Run-time Checking Overhead

    One of the most attractive features of untyped languages is the flexibility in term creation and manipulation. However, with such power comes the responsibility of ensuring the correctness of these operations. A solution is adding run-time checks to the program via assertions, but this can introduce overheads that are in many cases impractical. While static analysis can greatly reduce such overheads, the gains depend strongly on the quality of the information inferred. Reusable libraries, i.e., library modules that are pre-compiled independently of the client, pose special challenges in this context. We propose a technique that takes advantage of module systems which can hide a selected set of functor symbols to significantly enrich the shape information that can be inferred for reusable libraries, as well as an improved run-time checking approach that leverages the proposed mechanisms to achieve large reductions in overhead, closer to those of static languages, even in the reusable-library context. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao assertion language and combined static/dynamic checking framework. Our method maintains the full expressiveness of the assertion language in this context. In contrast to other approaches, it does not introduce the need to switch the language to a (static) type system, which is known to change the semantics in languages like Prolog. We also study the approach experimentally and evaluate the overhead reduction achieved in the run-time checks.
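
    A Haskell analogy of term hiding (the paper itself works with Ciao Prolog modules): by hiding a data constructor and exporting only a checking smart constructor, the invariant is established once at the module boundary, so library code needs no further run-time checks. All names below are illustrative.

```haskell
-- The constructor NE is deliberately NOT exported: fromList is the only
-- way to build a NonEmpty, so the invariant is checked exactly once.
module Main (NonEmpty, fromList, first, main) where

newtype NonEmpty a = NE [a]

fromList :: [a] -> Maybe (NonEmpty a)   -- the checked entry point
fromList [] = Nothing
fromList xs = Just (NE xs)

first :: NonEmpty a -> a   -- no run-time check: clients cannot forge NE
first (NE (x:_)) = x
first (NE [])    = error "unreachable: NE is only built via fromList"

main :: IO ()
main = case fromList [1, 2, 3 :: Int] of
  Just ne -> print (first ne)   -- 1
  Nothing -> putStrLn "empty input"
```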

    A data-driven approach for predicting printability in metal additive manufacturing processes

    Metal powder-bed fusion additive manufacturing technologies offer numerous benefits to the manufacturing industry. However, the current approach to printability analysis, i.e., determining, prior to manufacture, which components are likely to build unsuccessfully, is based on ad-hoc rules and engineering experience. Consequently, to allow full exploitation of the benefits of additive manufacturing, there is a demand for a fully systematic approach to the problem. In this paper we focus on the impact of geometry in printability analysis. For the first time, we detail a machine learning framework for determining the geometric limits of printability in additive manufacturing processes. This framework consists of three main components. First, we detail how to construct strenuous test artefacts capable of pushing an additive manufacturing process to its limits. Second, we explain how to measure the printability of an additively manufactured test artefact. Finally, we construct a predictive model capable of estimating the printability of a given artefact before it is additively manufactured. We test all steps of our framework and show that our predictive model approaches an estimate of the maximum performance obtainable, given the inherent stochasticity of the underlying additive manufacturing process.
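
    As a toy illustration of the third component only, the sketch below scores a part's printability from a few geometric features with a logistic model; the features and weights are invented placeholders, not the paper's learned model.

```haskell
-- Invented geometric features of a candidate part; units in comments.
data Geometry = Geometry
  { minWallThickness :: Double  -- mm
  , maxOverhangAngle :: Double  -- degrees from vertical
  , aspectRatio      :: Double  -- height / smallest footprint dimension
  }

-- Logistic score in [0,1], read as an estimated probability of a
-- successful build. The weights are placeholders; in the framework they
-- would be learned from measurements of strenuous test artefacts.
printability :: Geometry -> Double
printability g = 1 / (1 + exp (negate z))
  where
    z = 2.0  * minWallThickness g
      - 0.05 * maxOverhangAngle g
      - 0.10 * aspectRatio g

main :: IO ()
main = do
  let part = Geometry { minWallThickness = 0.4
                      , maxOverhangAngle  = 50
                      , aspectRatio       = 8 }
  print (printability part)  -- low score: flag the design before printing
```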

    Inference of Well-Typings for Logic Programs with Application to Termination Analysis

    This paper develops a method to infer a polymorphic well-typing for a logic program. One of our main motivations is to contribute to better automation of termination analysis for logic programs by deriving types from which norms can be constructed automatically. Previous work on type-based termination analysis used either types declared by the user or automatically generated monomorphic types describing the success set of predicates. Declared types are typically more precise and result in stronger termination conditions than those obtained with inferred types. Our type inference procedure involves solving set constraints generated from the program and derives a well-typing, in contrast to a success-set approximation. Experiments show that our automatically inferred well-typings are close to the declared types and thus yield termination conditions as good as those obtained with declared types in all our experiments to date. We describe the method, its implementation, and experiments with termination analysis based on the inferred types.
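
    The types-to-norms connection can be sketched as follows: an inferred list well-typing induces a length norm, and the termination condition requires the norm of each recursive call's argument to decrease strictly. The rendering below is an illustrative Haskell analogue of the familiar append/3 predicate, not the paper's procedure.

```haskell
-- The norm a list well-typing induces: the number of recursive
-- constructors, i.e. the length.
listNorm :: [a] -> Int
listNorm = length

-- Prolog's append/3 rendered functionally; the recursive call is on xs.
append :: [a] -> [a] -> [a]
append []     ys = ys
append (x:xs) ys = x : append xs ys

-- The termination condition the norm yields for the recursive clause:
--   listNorm xs < listNorm (x:xs)
-- which always holds, since listNorm (x:xs) = 1 + listNorm xs.
main :: IO ()
main = print (append [1, 2] [3, 4 :: Int])  -- [1,2,3,4]
```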

    From Boolean Equalities to Constraints

    Although functional as well as logic languages use equality to discriminate between logically different cases, the operational meaning of equality differs between these languages. Functional languages reduce equational expressions to their Boolean values, True or False, whereas logic languages use unification to check validity only and fail otherwise. Consequently, the language Curry, which amalgamates functional and logic programming features, offers two kinds of equational expressions, so the programmer has to distinguish between these uses. We show that this distinction can be avoided by providing an analysis and transformation method that automatically selects the appropriate operation. Without this distinction in source programs, the language design can be simplified and the execution of programs can be optimized. As a consequence, we show that one kind of equational expression is sufficient and that unification is nothing other than an optimization of Boolean equality.
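
    The two operational readings of equality can be contrasted in a small sketch: Boolean equality fully evaluates to True or False, while unification merely checks that two terms can be made equal, binding variables and failing (rather than returning False) otherwise. The term representation and unifier below are illustrative, not Curry's implementation.

```haskell
import qualified Data.Map as M

-- First-order terms with logic variables.
data Term = Var String | App String [Term] deriving (Eq, Show)

type Subst = M.Map String Term

-- Boolean equality: evaluates all the way to True or False.
eqBool :: Term -> Term -> Bool
eqBool = (==)

-- Unification: only establishes that the terms CAN be made equal,
-- binding variables along the way; failure is Nothing, never False.
-- (Occurs check omitted for brevity.)
unify :: Subst -> Term -> Term -> Maybe Subst
unify s (Var v) t = bind s v t
unify s t (Var v) = bind s v t
unify s (App f as) (App g bs)
  | f == g && length as == length bs = go s as bs
  | otherwise                        = Nothing
  where go acc []     []     = Just acc
        go acc (x:xs) (y:ys) = unify acc x y >>= \acc' -> go acc' xs ys
        go _   _      _      = Nothing

bind :: Subst -> String -> Term -> Maybe Subst
bind s v t = case M.lookup v s of
  Just t' -> unify s t' t            -- variable already bound: check
  Nothing -> Just (M.insert v t s)   -- bind it

main :: IO ()
main = do
  print (eqBool (App "c" []) (App "c" []))        -- True
  print (unify M.empty (Var "x") (App "c" []))    -- Just: binds x to c
```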