11 research outputs found

    Removing superfluous versions in polyvariant specialization of Prolog programs.

    Get PDF
    Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs. Also, large programs can be slower due to cache miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions, and replace all occurrences of such versions by a single one. In this work we present a unifying view of the problem of superfluous polyvariance. It includes both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins. Herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates. This includes all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. This is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization, which leads to interesting conclusions.
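    The minimization step can be pictured with a small sketch. The Python fragment below is an illustration only, not the paper's Prolog implementation, and all names in it are invented: specialized versions are grouped under a comparable key that plays the role of the extended characteristic tree, and every member of an equivalence class is redirected to a single representative.

        from collections import defaultdict

        def minimize(versions, key_of):
            # versions: {name: code}; key_of(code) -> hashable key such
            # that equal keys imply behaviourally equivalent versions
            # (the key stands in for the extended characteristic tree).
            groups = defaultdict(list)
            for name, code in versions.items():
                groups[key_of(code)].append(name)
            renaming = {}
            for names in groups.values():
                rep = min(names)  # deterministic representative per class
                for n in names:
                    renaming[n] = rep  # redirect collapsed versions
            return renaming

        # Two of the three versions share a key and are merged into one.
        versions = {"p_1": "codeA", "p_2": "codeA", "p_3": "codeB"}
        print(minimize(versions, key_of=lambda code: code))
        # {'p_1': 'p_1', 'p_2': 'p_1', 'p_3': 'p_3'}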

    Poly-controlled partial evaluation and its application to resource-aware program specialization

    Get PDF
    Partial Evaluation is an automatic technique for program optimization. Its main goal is to specialize a program with respect to part of its input data, known as the static data. The quality of the code generated by partial evaluation of logic programs depends, to a large extent, on the control strategy employed. Unfortunately, we are still far from having a control strategy sophisticated enough to behave optimally for every program. The main contribution of this thesis is the development of Poly-Controlled Partial Evaluation, a novel framework for the partial evaluation of logic programs, which is poly-controlled in the sense that it can take into account sets of global and local control rules, instead of using a single predetermined combination (as is the case in traditional partial evaluation). This framework is more flexible than existing approaches, since it allows assigning different local and global control rules to different call patterns. In this way, it is possible to obtain specialized programs that cannot be generated using traditional partial evaluation. As a result, the poly-controlled partial evaluation framework can generate sets of specialized programs, rather than a single one. Through self-tuning techniques, the approach can be made fully automatic. These techniques make it possible to measure the quality of the different specialized programs obtained. The framework is resource-aware, in the sense that each of the solutions obtained through poly-controlled partial evaluation is assessed using fitness functions, which can take into account factors such as the size of the specialized programs, or the amount of memory they consume, in addition to the speed of the specialized program, the factor usually considered in other partial evaluation frameworks. This poly-controlled partial evaluation framework has been implemented in the CiaoPP system and evaluated on numerous benchmark programs. The experimental results show that our proposal obtains in many cases better specializations than those generated using traditional partial evaluation, especially when the specialization is resource-aware. Another main contribution of this thesis is a unified view of the problem of eliminating superfluous polyvariance in partial evaluation and in abstract multiple specialization, through the use of a minimization step which groups together equivalent versions of predicates. This step can be applied in the specialization of any Prolog program, including those containing calls to builtin or external predicates. In addition, we offer the possibility of grouping versions which are not strictly equivalent, in order to obtain smaller programs.
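    As a rough illustration of the resource-aware selection idea, the sketch below scores a set of candidate specializations with a fitness function that weighs speed against code size and memory, and keeps the best. The weights, field names, and candidate data are hypothetical; this is not CiaoPP's actual interface.

        def fitness(candidate, w_speed=1.0, w_size=0.5, w_mem=0.25):
            # Larger is better: reward speedup, penalize size and memory.
            return (w_speed * candidate["speedup"]
                    - w_size * candidate["size_kb"]
                    - w_mem * candidate["mem_kb"])

        def select_best(candidates, **weights):
            # Each candidate comes from one local/global control pairing.
            return max(candidates, key=lambda c: fitness(c, **weights))

        candidates = [
            {"name": "local1+global1", "speedup": 1.8, "size_kb": 40, "mem_kb": 12},
            {"name": "local2+global1", "speedup": 1.5, "size_kb": 22, "mem_kb": 8},
        ]
        # With these (hypothetical) weights the smaller program wins.
        print(select_best(candidates)["name"])  # local2+global1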

    Abstract multiple specialization and its application to program parallelization.

    Get PDF
    Program specialization optimizes programs for known values of the input. It is often the case that the set of possible input values is unknown, or this set is infinite. However, a form of specialization can still be performed in such cases by means of abstract interpretation, specialization then being with respect to abstract values (substitutions), rather than concrete ones. We study the multiple specialization of logic programs based on abstract interpretation. This involves in principle, and based on information from global analysis, generating several versions of a program predicate for different uses of such predicate, optimizing these versions, and, finally, producing a new, "multiply specialized" program. While multiple specialization has received theoretical attention, little previous evidence exists on its practicality. In this paper we report on the incorporation of multiple specialization in a parallelizing compiler and quantify its effects. A novel approach to the design and implementation of the specialization system is proposed. The resulting implementation techniques result in identical specializations to those of the best previously proposed techniques but require little or no modification of some existing abstract interpreters. Our results show that, using the proposed techniques, the resulting "abstract multiple specialization" is indeed a relevant technique in practice. In particular, in the parallelizing compiler application, a good number of run-time tests are eliminated and invariants extracted automatically from loops, resulting generally in lower overheads and in several cases in increased speedups.
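    The version-generation step can be sketched schematically: each distinct abstract call pattern reported by global analysis receives its own version of the predicate, which can then be optimized independently (for instance, dropping run-time tests that the pattern guarantees to succeed). The fragment below uses invented names and is not the paper's implementation.

        from collections import defaultdict

        def specialize(call_sites):
            # call_sites: iterable of (predicate, abstract_pattern) pairs.
            # Returns {predicate: {pattern: version_name}}, one version
            # per distinct abstract pattern seen for that predicate.
            versions = defaultdict(dict)
            for pred, pattern in call_sites:
                if pattern not in versions[pred]:
                    versions[pred][pattern] = f"{pred}_{len(versions[pred]) + 1}"
            return dict(versions)

        sites = [("append", "ground,var"), ("append", "var,ground"),
                 ("append", "ground,var")]
        print(specialize(sites))
        # {'append': {'ground,var': 'append_1', 'var,ground': 'append_2'}}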

    Programmiersprachen und Rechenkonzepte

    Get PDF
    Since 1984, the GI special interest group "Programmiersprachen und Rechenkonzepte" has held a regular spring workshop at the Physikzentrum Bad Honnef. The meeting serves first and foremost to help participants get to know one another, exchange experience, discuss, and deepen mutual contacts. In this forum, talks and demonstrations of both completed and ongoing work are presented, on topics including (but not limited to): languages and language paradigms; correctness of design and implementation; tools; software/hardware architectures; specification and design; validation and verification; implementation and integration; safety and security; embedded systems; and programming close to the hardware. This technical report collects some of the presented contributions.

    Analysis Techniques for Concurrent Programming Languages

    Full text link
    Concurrent languages are increasingly present in our society, both in new technologies and in the systems used on a daily basis. Moreover, given the current distribution of systems and their internal architecture, one can expect this to remain so in the coming years. In this context, the development of tools to support the implementation of concurrent programs becomes essential. Furthermore, the behavior of concurrent systems is particularly difficult to analyse, so any tool that helps in this task, even in a limited way, will be very useful. For example, one can find tools for debugging, analysis, testing, optimisation, or simplification of programs, which are widely used by programmers nowadays. The purpose of this thesis is to introduce, through various concurrent programming languages, analysis techniques that can help to improve the experience of software development and release for concurrent models. The thesis introduces both static analyses (approximating all possible executions) and dynamic ones (considering a specific execution).
    The topics considered here differ enough from each other to be fully independent. Nevertheless, they have a common link: they can be used to analyse properties of a concurrent programming language. All the analyses presented here have been formally defined and their correctness proved, ensuring that the results have the degree of reliability needed by some systems (for instance, critical systems). The thesis also includes a description of the software tools that implement the different ideas proposed. This gives the work a usefulness well beyond the theoretical aspect, allowing us to put it into practice and to test the different analyses on real-world examples. All the ideas presented here are, by themselves, approaches that can be applied to many current contexts and problems. Moreover, individually they serve as a starting point for other derived analyses, as well as for adaptation to other languages of the same family. This gives an added value to this work, a fact confirmed by later works that are already benefiting from the results obtained in this thesis. Tamarit Muñoz, S. (2013). Analysis Techniques for Concurrent Programming Languages [Tesis doctoral no publicada]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31651

    Removing superfluous versions in polyvariant specialization of Prolog programs

    No full text
    Abstract. Polyvariant specialization allows generating multiple versions of a procedure, which can then be separately optimized for different uses. Since allowing a high degree of polyvariance often results in more optimized code, polyvariant specializers, such as most partial evaluators, can generate a large number of versions. This can produce unnecessarily large residual programs. Also, large programs can be slower due to cache miss effects. A possible solution to this problem is to introduce a minimization step which identifies sets of equivalent versions, and replace all occurrences of such versions by a single one. In this work we present a unifying view of the problem of superfluous polyvariance. It includes both partial deduction and abstract multiple specialization. As regards partial deduction, we extend existing approaches in several ways. First, previous work has dealt with pure logic programs and a very limited class of builtins. Herein we propose an extension to traditional characteristic trees which can be used in the presence of calls to external predicates. This includes all builtins, libraries, other user modules, etc. Second, we propose the possibility of collapsing versions which are not strictly equivalent. This allows trading time for space and can be useful in the context of embedded and pervasive systems. This is done by residualizing certain computations for external predicates which would otherwise be performed at specialization time. Third, we provide an experimental evaluation of the potential gains achievable using minimization which leads to interesting conclusions.

    Foundations of Security Analysis and Design III, FOSAD 2004/2005- Tutorial Lectures

    Get PDF
    The increasing relevance of security to real-life applications, such as electronic commerce and Internet banking, is attested by the fast-growing number of research groups, events, conferences, and summer schools that address the study of foundations for the analysis and the design of security aspects. This book presents thoroughly revised versions of eight tutorial lectures given by leading researchers during two International Schools on Foundations of Security Analysis and Design, FOSAD 2004/2005, held in Bertinoro, Italy, in September 2004 and September 2005. The lectures are devoted to: Justifying a Dolev-Yao Model under Active Attacks, Model-based Security Engineering with UML, Physical Security and Side-Channel Attacks, Static Analysis of Authentication, Formal Methods for Smartcard Security, Privacy-Preserving Database Systems, Intrusion Detection, and Security and Trust Requirements Engineering.

    Proceedings of the Workshop on Change of Representation and Problem Reformulation

    Get PDF
    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches known as 'constructive induction'. The organizing committee believes that there is potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains, expanding greatly from the workshop's origins in the machine learning community. Participants come from the full spectrum of AI application domains, including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

    Propagation Networks: A Flexible and Expressive Substrate for Computation

    Get PDF
    PhD thesis. I propose a shift in the foundations of computation. Practically all ideas of general-purpose computation today are founded either on execution of sequences of atomic instructions, i.e., assembly languages, or on evaluation of tree-structured expressions, i.e., most higher-level programming languages. Both have served us well in the past, but it is increasingly clear that we need something more. I suggest that we can build general-purpose computation on propagation of information through networks of stateful cells interconnected with stateless autonomous asynchronous computing elements. Various forms of this general idea have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. These special-purpose systems, however, are all complex and all different, and neither compose well, nor interoperate well, nor generalize well. A foundational layer is missing. The key insight in this work is that a cell should not be seen as storing a value, but as accumulating information about a value. The cells should never forget information -- such monotonicity prevents race conditions in the behavior of the network. Monotonicity of information need not be a severe restriction: for example, carrying reasons for believing each thing makes it possible to explore but then possibly reject tentative hypotheses, thus appearing to undo something, while maintaining monotonicity. Accumulating information is a broad enough design principle to encompass arbitrary computation. The object of this dissertation is therefore to architect a general-purpose computing system based on propagation networks; to subsume expression evaluation under propagation just as instruction execution is subsumed under expression evaluation; to demonstrate that a general-purpose propagation system can recover all the benefits that have been derived from special-purpose propagation systems, allow them to compose and interoperate, and offer further expressive power beyond what we have known in the past; and finally to contemplate the lessons that such a fundamental shift can teach us about the deep nature of computation. My graduate career in general, and this work in particular, have been sponsored in part by a National Science Foundation Graduate Research Fellowship, by the Disruptive Technology Office as part of the AQUAINT Phase 3 research program, by the Massachusetts Institute of Technology, by Google, Inc., and by the National Science Foundation Cybertrust (05-518) program.
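    The cell-and-propagator idea admits a compact sketch. In the fragment below (an illustration under simplifying assumptions, not the dissertation's system), cells accumulate information through a monotonic merge, here interval intersection, and stateless propagators re-fire whenever an input cell gains information.

        class Cell:
            def __init__(self):
                self.content = None          # None = nothing known yet
                self.listeners = []
            def add_content(self, info):
                merged = info if self.content is None else merge(self.content, info)
                if merged != self.content:   # only propagate on real refinement
                    self.content = merged
                    for run in self.listeners:
                        run()

        def merge(a, b):
            # Intersect intervals: information only ever grows (monotonicity).
            lo, hi = max(a[0], b[0]), min(a[1], b[1])
            assert lo <= hi, "contradiction"
            return (lo, hi)

        def propagator(inputs, output, f):
            # Stateless element: recompute whenever any input refines.
            def run():
                if all(c.content is not None for c in inputs):
                    output.add_content(f(*[c.content for c in inputs]))
            for c in inputs:
                c.listeners.append(run)
            run()

        # Example: c = a + b over intervals; c tightens as a and b do.
        a, b, c = Cell(), Cell(), Cell()
        propagator([a, b], c, lambda x, y: (x[0] + y[0], x[1] + y[1]))
        a.add_content((1, 3)); b.add_content((2, 2))
        print(c.content)                     # (3, 5)
        a.add_content((2, 3))                # refine a; c is recomputed
        print(c.content)                     # (4, 5)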