
    Enhanced sharing analysis techniques: a comprehensive evaluation

    Sharing, an abstract domain developed by D. Jacobs and A. Langen for the analysis of logic programs, derives useful aliasing information. It is well known that a commonly used core of techniques, such as the integration of Sharing with freeness and linearity information, can significantly improve the precision of the analysis. However, a number of other proposals for refined domain combinations have been circulating for years. One feature that is common to these proposals is that they do not seem to have undergone a thorough experimental evaluation, even with respect to the expected precision gains. In this paper we experimentally evaluate: helping Sharing with the definitely ground variables found using Pos, the domain of positive Boolean formulas; the incorporation of explicit structural information; a full implementation of the reduced product of Sharing and Pos; the issue of reordering the bindings in the computation of the abstract mgu; an original proposal for the addition of a new mode recording the set of variables that are deemed to be ground or free; a refined way of using linearity to improve the analysis; and the recovery of hidden information in the combination of Sharing with freeness information. Finally, we discuss whether tracking compoundness allows the computation of more sharing information.
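
    As background for the operators being refined, the core operation of the Sharing domain, the abstract unification (amgu) of a binding x = t over sets of sharing groups, can be sketched as follows. This is a minimal Python sketch of the standard Jacobs and Langen operator; function and variable names are illustrative and not taken from the paper.

        from itertools import combinations

        def rel(sh, vs):
            """Sharing groups that intersect the set of variables vs."""
            return {g for g in sh if g & vs}

        def star(groups):
            """Closure of a set of sharing groups under non-empty union
            (exponential in the number of groups)."""
            groups = list(groups)
            closed = set()
            for r in range(1, len(groups) + 1):
                for combo in combinations(groups, r):
                    closed.add(frozenset().union(*combo))
            return closed

        def bin_union(a, b):
            """Pairwise union of two sets of sharing groups."""
            return {x | y for x in a for y in b}

        def amgu(sh, x, t_vars):
            """Abstract unification of the binding x = t, where t_vars are the
            variables of t and sh is a set of frozenset sharing groups."""
            rel_x, rel_t = rel(sh, {x}), rel(sh, t_vars)
            irrel = sh - (rel_x | rel_t)
            return irrel | bin_union(star(rel_x), star(rel_t))

    For example, with the groups written as frozensets, amgu({{'X'}, {'Y'}, {'Z'}}, 'X', {'Y'}) yields {{'X', 'Y'}, {'Z'}}: X and Y may now share, while Z is unaffected. The refinements evaluated in the paper (reordering of bindings, combination with Pos, freeness, linearity and structural information) are aimed at improving the precision of this operator or of the information available to it.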

    Non-Strict Independence-Based Program Parallelization Using Sharing and Freeness Information.

    The current ubiquity of multi-core processors has brought renewed interest in program parallelization. Logic programs allow studying the parallelization of programs with complex, dynamic data structures with (declarative) pointers in a comparatively simple semantic setting. In this context, automatic parallelizers which exploit and-parallelism rely on notions of independence in order to ensure certain efficiency properties. “Non-strict” independence is a more relaxed notion than the traditional notion of “strict” independence which still ensures the relevant efficiency properties and can allow considerably more parallelism. Non-strict independence cannot be determined solely at run-time (“a priori”) and thus global analysis is a requirement. However, extracting non-strict independence information from available analyses and domains is non-trivial. This paper provides, on one hand, an extended presentation of our classic techniques for compile-time detection of non-strict independence based on extracting information from (abstract interpretation-based) analyses using the now well-understood and popular Sharing + Freeness domain. This includes algorithms for combined compile-time/run-time detection which involve special run-time checks for this type of parallelism. In addition, we propose herein novel annotation (parallelization) algorithms, URLP and CRLP, which are specially suited to non-strict independence. We also propose new ways of using the Sharing + Freeness information to optimize how the run-time environments of goals are kept apart during parallel execution. Finally, we describe the implementation of these techniques in our parallelizing compiler and recall some early performance results. We also provide an extended description of our pictorial representation of sharing and freeness information.
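
    To make the role of the Sharing + Freeness information concrete, the two independence notions can be contrasted with the following minimal Python sketch (schematic only; the URLP and CRLP algorithms and the run-time checks of the paper are considerably more involved, and all names are illustrative):

        def goal_sharing(sh, vars1, vars2):
            """Sharing groups through which two goals may reach a common
            run-time variable (sh is a set of frozensets of variables)."""
            return {g for g in sh if (g & vars1) and (g & vars2)}

        def strictly_independent(sh, vars1, vars2):
            """Strict independence: no run-time variable can be shared."""
            return not goal_sharing(sh, vars1, vars2)

        def nonstrict_candidate(sh, free, vars1, vars2):
            """A rough sufficient precondition in the spirit of non-strict
            independence: every group shared by the goals contains a variable
            known to be free, so whether the first goal actually binds it
            (not modelled here) decides if parallel execution is safe."""
            return all(g & free for g in goal_sharing(sh, vars1, vars2))

    Goal pairs that fail the strict test but pass the weaker filter are the kind of candidates for which non-strict independence, and, when compile-time information is insufficient, the special run-time checks mentioned above, become relevant.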

    Optimality in Goal-Dependent Analysis of Sharing

    We face the problems of correctness, optimality and precision for the static analysis of logic programs, using the theory of abstract interpretation. We propose a framework with a denotational, goal-dependent semantics equipped with two unification operators for forward unification (calling a procedure) and backward unification (returning from a procedure). The latter is implemented through a matching operation. Our proposal clarifies and unifies many different frameworks and ideas on static analysis of logic programming in a single, formal setting. On the abstract side, we focus on the domain Sharing by Jacobs and Langen and provide the best correct approximation of all the primitive semantic operators, namely, projection, renaming, forward and backward unification. We show that the abstract unification operators are strictly more precise than those in the literature defined over the same abstract domain. In some cases, our operators are more precise than those developed for more complex domains involving linearity and freeness. To appear in Theory and Practice of Logic Programming (TPLP).
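
    Of the four primitive operators mentioned (projection, renaming, forward and backward unification), the first two have simple standard definitions over Sharing, sketched below in Python for orientation; the optimal forward and backward unification operators, which are the paper's main contribution, are not reproduced here, and the names used are illustrative.

        def project(sh, vs):
            """Projection of a Sharing description (a set of frozenset
            sharing groups) onto the set of variables vs."""
            return {g & vs for g in sh if g & vs}

        def rename(sh, mapping):
            """Apply a variable renaming (a dict old -> new) to every group,
            e.g. to move a description between caller and clause variables."""
            return {frozenset(mapping.get(v, v) for v in g) for g in sh}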

    Testing Low Complexity Affine-Invariant Properties

    Invariance with respect to linear or affine transformations of the domain is arguably the most common symmetry exhibited by natural algebraic properties. In this work, we show that any low complexity affine-invariant property of multivariate functions over finite fields is testable with a constant number of queries. This immediately reproves, for instance, that the Reed-Muller code over F_p of degree d < p is testable, with an argument that uses no detailed algebraic information about polynomials except that low degree is preserved by composition with affine maps. The complexity of an affine-invariant property P refers to the maximum complexity, as defined by Green and Tao (Ann. Math. 2008), of the sets of linear forms used to characterize P. A more precise statement of our main result is that for any fixed prime p >= 2 and fixed integer R >= 2, any affine-invariant property P of functions f: F_p^n -> [R] is testable, assuming the complexity of the property is less than p. Our proof involves developing analogs of graph-theoretic techniques in an algebraic setting, using tools from higher-order Fourier analysis. Comment: 38 pages, appears in SODA '1
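
    As a point of reference for the Reed-Muller case mentioned above (degree d with d + 1 < p, range F_p rather than a general [R]), the classical test queries d + 2 points on a random line and checks that the (d+1)-st finite difference vanishes. A small, exhaustive Python sketch of that characterization follows; the paper's general tester for low-complexity affine-invariant properties is not reproduced here.

        from itertools import product
        from math import comb

        def line_test(f, x, a, d, p):
            """(d+1)-st finite difference of f along direction a, at x.
            Its vanishing for all x and a characterizes deg(f) <= d
            when d + 1 < p."""
            s = sum((-1) ** (d + 1 - i) * comb(d + 1, i)
                    * f(tuple((xj + i * aj) % p for xj, aj in zip(x, a)))
                    for i in range(d + 2))
            return s % p == 0

        def has_degree_at_most(f, n, d, p):
            """Exhaustive check over all x, a in F_p^n (a tester instead
            samples x and a uniformly at random, a constant number of times)."""
            points = list(product(range(p), repeat=n))
            return all(line_test(f, x, a, d, p) for x in points for a in points)

    For instance, with p = 5 and n = 2, the function f = lambda v: (v[0] ** 2 + 3 * v[1]) % 5 satisfies has_degree_at_most(f, 2, 2, 5) but not has_degree_at_most(f, 2, 1, 5).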

    Freeness of hyperplane arrangements and related topics

    These are the expanded notes of the lecture given by the author at "Arrangements in Pyrenees", June 2012. We discuss relations between freeness and splitting problems of vector bundles, several techniques for proving freeness of hyperplane arrangements, K. Saito's theory of primitive derivations for Coxeter arrangements, their application to combinatorial problems, and related conjectures. Comment: 28 pages.
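
    One standard tool in this area, presumably among the techniques for proving freeness that the notes cover, is Saito's criterion; a standard statement (in LaTeX, not quoted from the notes) is:

        % Saito's criterion (standard statement; notation may differ from the notes)
        Let $\mathcal{A}$ be a central arrangement in $V \cong \mathbb{K}^{\ell}$
        with coordinates $x_1,\dots,x_\ell$ and defining polynomial
        $Q(\mathcal{A}) = \prod_{H \in \mathcal{A}} \alpha_H$. Derivations
        $\theta_1,\dots,\theta_\ell \in D(\mathcal{A})$ form a basis of
        $D(\mathcal{A})$, so that $\mathcal{A}$ is free, if and only if
        \[
          \det\bigl(\theta_i(x_j)\bigr)_{1 \le i,j \le \ell} \;=\; c\, Q(\mathcal{A})
          \qquad \text{for some } c \in \mathbb{K} \setminus \{0\}.
        \]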

    Effectiveness of global analysis in strict independence-based automatic program parallelization

    This paper presents a study of the effectiveness of global analysis in the parallelization of logic programs using strict independence. A number of well-known approximation domains are selected and their usefulness for the application at hand is explained. Also, methods for using the information provided by such domains to improve parallelization are proposed. Local and global analyses are built using these domains and such analyses are embedded in a complete parallelizing compiler. Then, the performance of the domains (and the system in general) is assessed for this application through a number of experiments. We argue that the results offer significant insight into the characteristics of these domains, the demands of the application, and the tradeoffs involved.

    Effectiveness of abstract interpretation in automatic parallelization: a case study in logic programming

    We report on a detailed study of the application and effectiveness of program analysis based on abstract interpretation to automatic program parallelization. We study the case of parallelizing logic programs using the notion of strict independence. We first propose and prove correct a methodology for applying the information inferred by abstract interpretation to the parallelization task, using a parametric domain. The methodology is generic in the sense of allowing the use of different analysis domains. A number of well-known approximation domains are then studied and the transformation into the parametric domain is defined. The transformation directly illustrates the relevance and applicability of each abstract domain for the application. Both local and global analyzers are then built using these domains and embedded in a complete parallelizing compiler. Then, the performance of the domains in this context is assessed through a number of experiments. A comparatively wide range of aspects is studied, from the resources needed by the analyzers in terms of time and memory to the actual benefits obtained from the information inferred. Such benefits are evaluated both in terms of the characteristics of the parallelized code and of the actual speedups obtained from it. The results show that data-flow analysis plays an important role in achieving efficient parallelizations, and that the cost of such analysis can be reasonable even for quite sophisticated abstract domains. Furthermore, the results also offer significant insight into the characteristics of the domains, the demands of the application, and the trade-offs involved.
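
    A schematic illustration of the parametric-domain idea described above: whatever the analysis domain, the parallelizer consumes only a small set of facts, such as definite groundness and definite pairwise independence. The Python below sketches one possible translation from a Sharing-style description into such facts; the names and the exact choice of facts are illustrative, not the paper's.

        from dataclasses import dataclass, field

        @dataclass
        class ParallelFacts:
            """Facts the parallelizer consumes (illustrative field names)."""
            ground: set = field(default_factory=set)  # definitely ground variables
            indep: set = field(default_factory=set)   # variable pairs proven independent

        def from_sharing(sh, all_vars):
            """Translate a Sharing description (a set of frozenset groups):
            a variable occurring in no group is ground; two variables never
            occurring together in a group are independent."""
            ground = {v for v in all_vars if not any(v in g for g in sh)}
            indep = {frozenset((x, y))
                     for x in all_vars for y in all_vars
                     if x != y and not any(x in g and y in g for g in sh)}
            return ParallelFacts(ground, indep)

    The more such facts a domain can justify, the fewer run-time groundness and independence checks remain in the parallelized code, which is one of the benefits measured in the experiments.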

    Automatic compile-time parallelization of CLP programs by analysis and transformation to a concurrent constraint language.

    The concept of independence has recently been generalized to the constraint logic programming (CLP) paradigm. Also, several abstract domains specifically designed for CLP languages, and whose information can be used to detect the generalized independence conditions, have recently been defined. As a result we are now in a position where automatic parallelization of CLP programs is feasible. In this paper we study the task of automatically parallelizing CLP programs based on such analyses, by transforming them to explicitly concurrent programs in our parallel CC platform (CIAO) as well as to AKL. We describe the analysis and transformation process, and study its efficiency, accuracy, and effectiveness in program parallelization. The information gathered by the analyzers is evaluated not only in terms of its accuracy, i.e., its ability to determine the actual dependencies among the program variables, but also of its effectiveness, measured in terms of code reduction in the resulting parallelized programs. Given that only a few abstract domains have already been defined for CLP, and that none of them were specifically designed for dependency detection, the aim of the evaluation is not only to assess the effectiveness of the available domains, but also to study what additional information it would be desirable to infer, and what domains would be appropriate for further improving the parallelization process.
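
    As a rough sketch of how such dependency information can drive the transformation, the following Python builds dependency edges among the body goals of a clause from the variable pairs the analysis could not prove independent; goals not connected by an edge are candidates for being mapped to concurrent agents in the target language. The names are illustrative, and the actual transformation to CIAO's concurrent constructs or to AKL involves considerably more.

        from itertools import combinations

        def dependency_edges(goal_vars, ground, may_share):
            """goal_vars: one set of variables per body goal; ground: variables
            proven definitely constrained to a unique value; may_share: frozenset
            pairs of variables that the analysis could not prove independent."""
            edges = set()
            for (i, vi), (j, vj) in combinations(enumerate(goal_vars), 2):
                direct = (vi & vj) - ground        # a variable occurring in both goals
                aliased = any(frozenset((x, y)) in may_share
                              for x in vi - ground for y in vj - ground)
                if direct or aliased:
                    edges.add((i, j))
            return edges

    In this picture, the code-reduction measure mentioned above corresponds roughly to how many of these edges, and the synchronization code they would otherwise require, the analysis is able to remove.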