
    Separating Incremental and Non-Incremental Bottom-Up Compilation

    The aim of a compiler is, given a function represented in some language, to generate an equivalent representation in a target language L. In bottom-up (BU) compilation of functions given as CNF formulas, constructing the new representation requires compiling several subformulas in L. The compiler starts by compiling the clauses in L and iteratively constructs representations for new subformulas using an "Apply" operator that performs conjunction in L, until all clauses are combined into one representation. In principle, BU compilation can generate representations for any subformulas and conjoin them in any order. But an attractive strategy from a practical point of view is to augment one main representation - which we call the core - by conjoining the clauses to it one at a time. We refer to this strategy as incremental BU compilation. We prove that, for known relevant languages L for BU compilation, there is a class of CNF formulas that admit BU compilations to L that generate only polynomial-size intermediate representations, while their incremental BU compilations all generate an exponential-size core.
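As an illustration of the strategy described above, here is a minimal Python sketch using a deliberately toy target language in which a representation is simply the set of satisfying assignments, so that "Apply" is set intersection. A real BU compiler would target a language such as OBDD, where intermediate representation sizes actually matter; the point here is only the shape of the incremental loop:

```python
from itertools import product

def models(clause, variables):
    """All total assignments over `variables` satisfying a clause.
    A clause is a set of literals; a literal is (variable, polarity)."""
    return {
        assignment
        for assignment in product([False, True], repeat=len(variables))
        if any(assignment[variables.index(v)] == pol for v, pol in clause)
    }

def apply_and(repr_a, repr_b):
    """The 'Apply' operator: conjunction of two representations.
    In this toy language, conjunction is just set intersection."""
    return repr_a & repr_b

def incremental_bu_compile(cnf, variables):
    """Incremental BU compilation: take the first clause as the core,
    then conjoin the remaining clauses into it one at a time."""
    core = models(cnf[0], variables)
    for clause in cnf[1:]:
        core = apply_and(core, models(clause, variables))
    return core

# (x or y) and (not x or y): satisfied exactly when y is true.
variables = ["x", "y"]
cnf = [{("x", True), ("y", True)}, {("x", False), ("y", True)}]
result = incremental_bu_compile(cnf, variables)
```

A non-incremental compiler would instead be free to conjoin intermediate representations of arbitrary subformulas in any tree shape, which is precisely the freedom the paper shows can matter exponentially.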

    Tips for making the most of 64-bit architectures in language design, libraries or garbage collection

    The 64-bit architectures that have become standard today offer unprecedented low-level programming possibilities. For the first time in the history of computing, the size of address registers far exceeds the physical capacity of their bus. After a brief reminder of how small actual addresses are compared to the available 64 bits, we develop three concrete examples of how the vacant bits of these registers can be used. Two of these examples concern the implementation of a library for a new statically typed programming language. The first is the implementation of multi-precision integers, with the aim of improving performance in terms of both computation speed and RAM savings. The second example focuses on the library's handling of UTF-8 character strings; here, the idea is to make indexing easier by ignoring the physical size of each UTF-8 character. Finally, the third example is a possible enhancement of garbage collectors, in particular the object-marking phase of mark & sweep.
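On current x86-64 hardware, virtual addresses typically occupy 48 bits, leaving the upper 16 bits of a 64-bit register vacant (subject to canonical-address rules, so any tag must be stripped before dereferencing). A minimal simulation of this tagging idea, treating a Python int as a 64-bit word; the constants are illustrative, not taken from the paper:

```python
TAG_SHIFT = 48               # x86-64 commonly uses 48-bit virtual addresses
ADDR_MASK = (1 << TAG_SHIFT) - 1

def pack(addr, tag):
    """Store a small integer tag in the vacant high bits of a 64-bit word."""
    assert 0 <= addr <= ADDR_MASK and 0 <= tag < (1 << (64 - TAG_SHIFT))
    return (tag << TAG_SHIFT) | addr

def unpack(word):
    """Recover (address, tag); the tag must be stripped before the
    address can be used as a real pointer."""
    return word & ADDR_MASK, word >> TAG_SHIFT

word = pack(0x7f00_dead_beef, 5)
addr, tag = unpack(word)
```

The same trick underlies the examples the abstract mentions: a tag can record, for instance, that a "pointer" is actually a small integer stored inline, or carry per-object GC mark bits without touching the object itself.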

    Dip-coating of suspensions

    Withdrawing a plate from a suspension leads to the entrainment of a coating layer of fluid and particles on the solid surface. In this article, we study the Landau-Levich problem in the case of a suspension of non-Brownian particles at moderate volume fraction, 10% < φ < 41%. We observe different regimes depending on the withdrawal velocity U, the volume fraction of the suspension φ, and the diameter of the particles 2a. Our results exhibit three coating regimes. (i) At small enough capillary number Ca, no particles are entrained, and only a liquid film coats the plate. (ii) At large capillary number, the thickness of the entrained film of suspension is captured by the Landau-Levich law using the effective viscosity of the suspension η(φ). (iii) At intermediate capillary numbers, the situation becomes more complicated, with a heterogeneous coating on the substrate. We rationalize our experimental findings by providing the domain of existence of these three regimes as a function of the fluid and particle properties.
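For reference, regime (ii) invokes the classical Landau-Levich scaling, written here in its standard form (not spelled out in the abstract), with the suspension's effective viscosity η(φ) substituted for the fluid viscosity:

```latex
h \simeq 0.94\,\ell_c\,\mathrm{Ca}^{2/3},
\qquad
\mathrm{Ca} = \frac{\eta(\phi)\,U}{\gamma},
\qquad
\ell_c = \sqrt{\frac{\gamma}{\rho g}}
```

where h is the entrained film thickness, γ the surface tension, ρ the fluid density, and g the gravitational acceleration.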

    Risk ratio, odds ratio, risk difference... Which causal measure is easier to generalize?

    There are many measures to report a so-called treatment or causal effect: absolute difference, ratio, odds ratio, number needed to treat, and so on. The choice of a measure, e.g. absolute versus relative, is often debated because it leads to different appreciations of the same phenomenon; it also implies different heterogeneity of treatment effect. In addition, some measures, but not all, have appealing properties such as collapsibility, matching the intuition of a population summary. We review common measures and the pros and cons typically brought forward for each. Doing so, we clarify the notions of collapsibility and treatment effect heterogeneity, unifying different existing definitions. But our main contribution is to propose reversing the thinking: rather than starting from the measure, we propose to start from a non-parametric generative model of the outcome. Depending on the nature of the outcome, some causal measures disentangle treatment modulations from baseline risk. Our analysis therefore outlines an understanding of what heterogeneity and homogeneity of treatment effect mean, not through the lens of the measure, but through the lens of the covariates. Our goal is the generalization of causal measures. We show that different sets of covariates are needed to generalize an effect to a different target population, depending on (i) the causal measure of interest, (ii) the nature of the outcome, and (iii) whether a conditional outcome model or local effects are used to generalize.
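The collapsibility issue mentioned above can be seen in a few lines with made-up numbers (not from the paper): two equally sized strata with identical conditional effects, where the risk difference carries over to the whole population but the odds ratio does not:

```python
def risk_difference(p1, p0): return p1 - p0
def risk_ratio(p1, p0):      return p1 / p0
def odds_ratio(p1, p0):      return (p1 / (1 - p1)) / (p0 / (1 - p0))

# Two equally sized strata: (risk under treatment, risk under control).
strata = [(0.4, 0.2), (0.8, 0.6)]
cond_rd = [risk_difference(p1, p0) for p1, p0 in strata]  # 0.2 in both
cond_or = [odds_ratio(p1, p0) for p1, p0 in strata]       # 8/3 in both

# Marginal risks: average over the two equally weighted strata.
m1 = sum(p1 for p1, _ in strata) / 2   # 0.6
m0 = sum(p0 for _, p0 in strata) / 2   # 0.4

marg_rd = risk_difference(m1, m0)  # 0.2: matches both strata (collapsible)
marg_or = odds_ratio(m1, m0)       # 2.25: matches neither (non-collapsible)
```

Even with a perfectly homogeneous conditional odds ratio, the marginal odds ratio differs, which is one reason the paper argues the choice of measure and the choice of covariates cannot be discussed separately.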

    Optimized late binding: the SmallEiffel example

    International audience. Poster showing late-binding mechanisms as implemented in the SmallEiffel compiler.

    Reweighting the RCT for generalization: finite sample error and variable selection

    Randomized Controlled Trials (RCTs) may suffer from limited scope. In particular, samples may be unrepresentative: some RCTs over- or under-sample individuals with certain characteristics compared to the target population, for which one wants conclusions on treatment effectiveness. Reweighting trial individuals to match the target population can improve the treatment effect estimation. In this work, we establish the exact expressions of the bias and variance of such reweighting procedures -- also called Inverse Propensity of Sampling Weighting (IPSW) -- in the presence of categorical covariates, for any sample size. Such results allow us to compare the theoretical performance of different versions of IPSW estimates. Besides, our results show how the performance (bias, variance, and quadratic risk) of IPSW estimates depends on the two sample sizes (RCT and target population). A by-product of our work is the proof of consistency of IPSW estimates. Results also reveal that IPSW performance is improved when the trial probability of being treated is estimated (rather than using its oracle counterpart). In addition, we study the choice of variables: how including covariates that are not necessary for identifiability of the causal effect may impact the asymptotic variance. Including covariates that are shifted between the two samples but are not treatment effect modifiers increases the variance, while non-shifted treatment effect modifiers do not. We illustrate all the takeaways in a didactic example and in a semi-synthetic simulation inspired by critical care medicine.
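A minimal IPSW sketch on a single binary covariate, with invented numbers (this is an illustration of the reweighting principle, not the paper's estimator): the trial oversamples X = 1, X modifies the treatment effect, and reweighting by the ratio of target to trial covariate frequencies recovers the target-population effect.

```python
def ipsw_ate(trial, p_target):
    """Difference in means on the trial, reweighted so that the
    distribution of X matches the target population (IPSW)."""
    p_trial = {x: sum(1 for row in trial if row["x"] == x) / len(trial)
               for x in p_target}
    def weighted_mean(arm):
        rows = [r for r in trial if r["t"] == arm]
        weights = [p_target[r["x"]] / p_trial[r["x"]] for r in rows]
        return (sum(w * r["y"] for w, r in zip(weights, rows))
                / sum(weights))
    return weighted_mean(1) - weighted_mean(0)

# Trial oversamples X=1 (80%) relative to the target (50%); the
# treatment effect is 1 when X=1 and 3 when X=0 (X is a modifier).
trial = ([{"x": 1, "t": 1, "y": 1}] * 40 + [{"x": 1, "t": 0, "y": 0}] * 40 +
         [{"x": 0, "t": 1, "y": 3}] * 10 + [{"x": 0, "t": 0, "y": 0}] * 10)

naive = (sum(r["y"] for r in trial if r["t"] == 1) / 50
         - sum(r["y"] for r in trial if r["t"] == 0) / 50)  # biased: 1.4
ate = ipsw_ate(trial, p_target={1: 0.5, 0: 0.5})            # target: 2.0
```

Here the sampling propensities are estimated from the trial itself, echoing the paper's observation that estimating (rather than plugging in oracle) propensities can improve the estimator.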

    Simple type analysis in arrays and garbage collector optimization

    International audience. This article starts with the presentation of a simple technique, using type flow analysis and a fixed filling-up order, to predict the content of arrays. Applied first to low-level arrays indexed from 0, our technique is then extended to deal with higher-level data structures built on arrays, such as variable-index arrays, circular arrays, and hash maps. The main aim of our technique is to allow the propagation of type flow information through array read-write expressions, thus opening the last gate to a truly global type flow analysis. Besides the improvement of type prediction, useful for dynamic binding and type security in object-oriented languages, our technique makes it possible to optimize memory management. Indeed, thanks to the filling-up order, the garbage collector (GC) inspects only the used part of arrays, avoiding the collection of unused objects referenced by the reserve part. Furthermore, the reserve part of arrays requires neither initialization before use nor cleaning after use. The measurements we present show the global improvement in type flow analysis and the real gain during the mark and sweep of arrays.
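The GC optimization described above can be sketched in a few lines of Python (class and field names are illustrative, not taken from the paper): the filling-up invariant guarantees that only slots below `count` can hold live references, so marking skips the reserve part entirely.

```python
class FilledArray:
    """Array filled in order from index 0: slots [0, count) are live,
    slots [count, capacity) are reserve and hold no live references."""
    def __init__(self, capacity):
        self.storage = [None] * capacity
        self.count = 0

    def push(self, obj):
        self.storage[self.count] = obj
        self.count += 1

def mark(array, marked):
    # The GC inspects only the used part; the reserve part may contain
    # stale garbage, but the invariant lets the collector skip it
    # without any initialization or cleaning.
    for obj in array.storage[:array.count]:
        marked.add(id(obj))

a = FilledArray(8)
live, stale = object(), object()
a.push(live)
a.storage[3] = stale   # simulate a stale reference in the reserve part
marked = set()
mark(a, marked)
```

The stale object in the reserve part is never marked, which is exactly how the paper's GC avoids keeping unreachable objects alive through array slack space.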

    Adding external iterators to an existing Eiffel class library

    Peer-reviewed conference proceedings (http://ieeexplore.ieee.org/). This paper discusses common iteration schemes and highlights the interest of using explicit iterators. The advantages of external iterators are compared to those of internalized iterators. The integration of an iterator class hierarchy into an existing library, without modifying the latter, is detailed. This integration brings an extra level of abstraction to the library, which thus becomes more flexible, better adapted to certain design patterns, and hence usable in a higher-level way. Such an integration is not only possible, but can even be done in an optimized way, taking into account the specific structure of the collection traversed. A slight extension of existing class libraries can also be implemented that does not cause any compatibility problems and does not break existing code, but allows even further abstraction and makes it easier for the developer to use high-level, optimized, external iterators.
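The external-iterator idea can be sketched in Python without touching the collection class, mirroring the paper's point that the iterator hierarchy is layered on an unmodified library. The method names (`item`, `is_off`, `next`) loosely echo Eiffel iteration conventions; the `LinkedList` class is a stand-in for an existing library class:

```python
class LinkedList:
    """Pre-existing collection class, left entirely unmodified."""
    def __init__(self, items):
        self.head = None
        for item in reversed(items):
            self.head = (item, self.head)   # (value, next) cons cells

class ListIterator:
    """External iterator layered on top of LinkedList: the client,
    not the collection, drives the traversal and owns the cursor."""
    def __init__(self, collection):
        self._node = collection.head
    def is_off(self):
        return self._node is None
    def item(self):
        return self._node[0]
    def next(self):
        self._node = self._node[1]

collected = []
it = ListIterator(LinkedList([1, 2, 3]))
while not it.is_off():
    collected.append(it.item())
    it.next()
```

Because the cursor lives in the iterator rather than the collection, several traversals of the same collection can be active at once, one of the classic advantages of external over internalized iterators.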

    Towards safer use of aliasing with Eiffel

    National peer-reviewed conference proceedings (http://www.hermes-science.com). National audience. The source code of the SmallEiffel compiler makes intensive use of aliasing in order to achieve the best performance, both in terms of memory and of execution speed. This technique seems very well suited to compilation but can also be applied to a wide range of applications. Thanks to the design-by-contract capabilities of the Eiffel language, aliasing can be managed fairly safely. The singleton design pattern also proves crucial for implementing alias-provider objects. We present here an efficient implementation of this pattern, made possible by certain Eiffel idioms.
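A minimal Python sketch of the pattern the abstract describes: an alias-provider singleton handing out one shared buffer, with a contract-like guarantee enforced at the borrow point. The names are illustrative and not taken from SmallEiffel:

```python
class StringBufferProvider:
    """Singleton providing one shared (aliased) buffer, so repeated
    operations reuse the same storage instead of allocating anew."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.buffer = []
        return cls._instance

    def borrow(self):
        # Contract: the caller always receives the buffer empty.
        self.buffer.clear()
        return self.buffer

a = StringBufferProvider().borrow()
a.append("scratch work")
b = StringBufferProvider().borrow()   # same object, cleared again
```

In Eiffel, the `clear` step would naturally be expressed as a postcondition of `borrow`, which is how contracts make this kind of deliberate aliasing safer to rely on.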