    Structured editing of literate programs

    One-pass transformations of attributed program trees

    The classical attribute grammar framework can be extended by allowing the specification of tree transformation rules. A tree transformation rule consists of an input template, an output template, enabling conditions which are predicates on attribute instances of the input template, and re-evaluation rules which define the values of attribute instances of the output template. A tree transformation may invalidate attribute instances which are needed for additional transformations. In this paper we investigate whether consecutive tree transformations and attribute re-evaluations are safely possible during a single pass over the derivation tree. This check is made at compiler generation time rather than at compilation time. A graph theoretic characterization of attribute dependencies is given, showing in which cases the recomputation of attribute instances can be done in parallel with tree transformations.
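    As a concrete reading of the rule structure described in this abstract, the following is a minimal Python sketch; the names (Node, TransformationRule, apply_rule) and the constant-folding example are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a tree transformation rule: input template, output
# template, enabling conditions on attribute instances, and re-evaluation
# rules for the output's attributes. Illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Node:
    """Derivation-tree node with a label, children, and attribute instances."""
    label: str
    children: List["Node"] = field(default_factory=list)
    attrs: Dict[str, object] = field(default_factory=dict)

@dataclass
class TransformationRule:
    matches: Callable[[Node], bool]           # input template (structural match)
    enabled: Callable[[Node], bool]           # enabling conditions on attribute instances
    build_output: Callable[[Node], Node]      # output template
    reevaluate: Callable[[Node, Node], None]  # re-evaluation rules for output attributes

def apply_rule(rule: TransformationRule, node: Node) -> Optional[Node]:
    """Apply the rule at `node` if its template matches and its conditions hold."""
    if rule.matches(node) and rule.enabled(node):
        out = rule.build_output(node)
        rule.reevaluate(node, out)            # recompute attribute instances of the output
        return out
    return None

# Example rule: fold an "add" node whose children are constants with known values.
fold_add = TransformationRule(
    matches=lambda n: n.label == "add" and all(c.label == "const" for c in n.children),
    enabled=lambda n: all("value" in c.attrs for c in n.children),
    build_output=lambda n: Node("const"),
    reevaluate=lambda n, out: out.attrs.update(value=sum(c.attrs["value"] for c in n.children)),
)
```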

    Design of a Java development environment for attribute grammars and Christiansen grammars

    This final-year project addresses a development need of the research group I work in. The doctoral thesis of Marina de la Cruz Echeandía, supervisor of this project, presented Geema: an application implementing her proposal for evolutionary automatic programming known as Christiansen grammar evolution. Christiansen grammars are an adaptable extension of attribute grammars. That thesis did not include a development environment for specifying the grammars that could serve as Geema's input module; the present project fills that gap. Development environments for modern programming languages usually add functionality beyond mere program editing: syntax highlighting, autocompletion with defined symbols, and even tools that support correct development, such as compilation at edit time. Attribute grammars are a classical model with many more years of history and development than Christiansen grammars. One of their main design problems is the appearance of circular dependencies among attributes; detecting them in general is a well-known NP problem. One of the main attractions of the development environment proposed in this project is the incorporation of grammar-correctness tests. We focus on those applicable to attribute grammars, specifically a simplified and conservative circularity test and the detection of attribute-initialization problems. In this way, the present project meets a development need of the research group's tools (editing of Christiansen grammars) and adds the relevant functionality of detecting design problems in the grammar itself (considered as an attribute grammar).
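    As an illustration of the simplified, conservative circularity test mentioned above, here is a minimal Python sketch that flattens all attribute dependencies into one global graph over (symbol, attribute) pairs and flags any cycle; the function name and the toy dependency set are hypothetical, not taken from the Geema environment. Flattening is conservative: it may report a cycle that no actual derivation tree can exhibit.

```python
# Conservative circularity check: a cycle in the flattened dependency graph
# over (symbol, attribute) pairs flags the grammar as possibly circular.
from collections import defaultdict

def has_cycle(deps) -> bool:
    """Depth-first search for a back edge in the flattened dependency graph."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)

    def visit(node) -> bool:
        colour[node] = GREY
        for succ in deps.get(node, ()):
            if colour[succ] == GREY:
                return True                    # back edge: potential circularity
            if colour[succ] == WHITE and visit(succ):
                return True
        colour[node] = BLACK
        return False

    return any(colour[n] == WHITE and visit(n) for n in list(deps))

# Example: S.val depends on S.env, which depends back on S.val -> flagged.
deps = {("S", "val"): {("S", "env")}, ("S", "env"): {("S", "val")}}
assert has_cycle(deps)
```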

    Incremental Semantic Evaluation for Interactive Systems: Inertia, Pre-emption, and Relations

    Although schemes for incremental semantic evaluation have been explored and refined for more than two decades, the demands of user interaction continue to outstrip the capabilities of these schemes. The feedback produced by a semantic evaluator must support the user's programming activities: it must be structured in a way that provides the user with meaningful insight into the program (directly, or via other tools in the environment) and it must be timely. In this paper we extend an incremental attribute evaluation scheme with three techniques to better meet these demands within the context of a modeless editing system with a flexible tool integration paradigm. Efficient evaluation in the presence of syntax errors (which arise often under modeless editing) is supported by giving semantic attributes inertia: a tendency not to change unless necessary. Pre-emptive evaluation helps to reduce the delays associated with a sequence of edits, allowing an evaluator to "keep pace" with the user. Relations provide a general means to capture semantic structure (for the user, other tools, and as attributes within an evaluation) and are treated efficiently using a form of differential propagation. The combination of these three techniques meets the demands of user interaction; leaving out any one does not.
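    To make the idea of differential propagation concrete, here is a minimal Python sketch in which relation-valued attributes notify their consumers only of added and removed tuples, and an unchanged value propagates nothing (a loose rendering of the paper's inertia); the class and method names are illustrative assumptions, not the paper's API.

```python
# Minimal sketch: relations as sets of tuples, updated by propagating deltas
# (added/removed tuples) rather than recomputing whole relations downstream.
class RelationAttr:
    def __init__(self):
        self.tuples = set()
        self.consumers = []                 # downstream RelationAttr instances

    def update(self, new_tuples):
        added = new_tuples - self.tuples
        removed = self.tuples - new_tuples
        if not added and not removed:
            return                          # unchanged: nothing propagates ("inertia")
        self.tuples = set(new_tuples)
        for c in self.consumers:
            c.apply_delta(added, removed)

    def apply_delta(self, added, removed):
        before = set(self.tuples)
        self.tuples = (self.tuples | added) - removed
        if self.tuples != before:           # propagate further only if something changed
            for c in self.consumers:
                c.apply_delta(self.tuples - before, before - self.tuples)

# Usage: b mirrors a, receiving only the delta when a's relation changes.
a, b = RelationAttr(), RelationAttr()
a.consumers.append(b)
a.update({("x", "int")})
assert ("x", "int") in b.tuples
```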

    Reformulating Space Syntax: The Automatic Definition and Generation of Axial Lines and Axial Maps

    Space syntax is a technique for measuring the relative accessibility of different locations in a spatial system which has been loosely partitioned into convex spaces. These spaces are approximated by straight lines, called axial lines, and the topological graph associated with their intersection is used to generate indices of distance, called integration, which are then used as proxies for accessibility. The most controversial problem in applying the technique involves the definition of these lines. There is no unique method for their generation, hence different users generate different sets of lines for the same application. In this paper, we explore this problem, arguing that to make progress, there need to be unambiguous, agreed procedures for generating such maps. The methods we suggest for generating such lines depend on defining viewsheds, called isovists, which can be approximated by their maximum diameters, these lengths being used to form axial maps similar to those used in space syntax. We propose a generic algorithm for sorting isovists according to various measures, approximating them by their diameters and using the axial map as a summary of the extent to which isovists overlap (intersect) and are accessible to one another. We examine the fields created by these viewsheds and the statistical properties of the maps created. We demonstrate our techniques for the small French town of Gassin used originally by Hillier and Hanson (1984) to illustrate the theory, exploring different criteria for sorting isovists, and different axial maps generated by changing the scale of resolution. This paper throws up as many problems as it solves, but we believe it points the way to firmer foundations for space syntax.
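    A minimal sketch of the greedy construction described in this abstract, assuming the shapely library for geometry: each isovist is approximated by its longest chord (its maximum diameter), the longest lines are kept while they still add coverage of the space, and the axial map is the intersection graph of the kept lines. The function names, the coverage threshold, and the use of shapely are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: reduce isovists to their maximum diameters and build an
# axial map from the intersections of the longest, coverage-adding lines.
from shapely.geometry import LineString, Polygon
from shapely.ops import unary_union

def max_diameter(poly):
    """Approximate an isovist by its longest chord between boundary vertices."""
    pts = list(poly.exterior.coords)
    pairs = ((a, b) for i, a in enumerate(pts) for b in pts[i + 1:])
    return LineString(max(pairs, key=lambda ab: LineString(ab).length))

def axial_map(isovists, space, coverage=0.99):
    candidates = sorted(((max_diameter(p), p) for p in isovists),
                        key=lambda lp: lp[0].length, reverse=True)
    kept, covered = [], Polygon()
    for line, iso in candidates:
        if covered.area >= coverage * space.area:
            break                                   # space is (nearly) covered
        if iso.difference(covered).area > 1e-9:     # keep only lines that add coverage
            kept.append(line)
            covered = unary_union([covered, iso])
    # Axial map: which kept lines intersect (a proxy for overlapping isovists).
    edges = [(i, j) for i in range(len(kept)) for j in range(i + 1, len(kept))
             if kept[i].intersects(kept[j])]
    return kept, edges
```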

    The automatic definition and generation of axial lines and axial maps

    A graph theoretic approach to scene matching

    The ability to match two scenes is a fundamental requirement in a variety of computer vision tasks. A graph theoretic approach to inexact scene matching is presented which is useful in dealing with problems due to imperfect image segmentation. A scene is described by a set of graphs, with nodes representing objects and arcs representing relationships between objects. Each node has a set of values representing the relations between pairs of objects, such as angle, adjacency, or distance. With this method of scene representation, the task in scene matching is to match two sets of graphs. Because of segmentation errors, variations in camera angle, illumination, and other conditions, an exact match between the sets of observed and stored graphs is usually not possible. In the developed approach, the problem is represented as an association graph, in which each node represents a possible mapping of an observed region to a stored object, and each arc represents the compatibility of two mappings. Nodes and arcs have weights indicating the merit of a region-object mapping and the degree of compatibility between two mappings. A match between the two graphs corresponds to a clique, or fully connected subgraph, in the association graph. The task is to find the clique that represents the best match. Fuzzy relaxation is used to update the node weights using the contextual information contained in the arcs and neighboring nodes. This simplifies the evaluation of cliques. A method of handling oversegmentation and undersegmentation problems is also presented. The approach is tested with a set of realistic images which exhibit many types of segmentation errors.
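    As an illustration of the association-graph construction and the fuzzy relaxation step described in this abstract, a short Python sketch follows; the merit and compatibility functions, the multiplicative update, and the fixed iteration count are simplifying assumptions, not the paper's exact formulation.

```python
# Illustrative sketch: nodes are (region, object) mappings weighted by merit;
# arcs carry compatibility weights; relaxation reinforces mappings that are
# supported by compatible neighbours, simplifying the later clique search.
import itertools

def build_association_graph(regions, objects, merit, compatibility):
    nodes = {(r, o): merit(r, o) for r in regions for o in objects if merit(r, o) > 0}
    arcs = {(m, n): compatibility(m, n)
            for m, n in itertools.combinations(nodes, 2) if compatibility(m, n) > 0}
    return nodes, arcs

def fuzzy_relaxation(nodes, arcs, iterations=10):
    """Update node weights from the compatibility-weighted support of neighbours."""
    weights = dict(nodes)
    for _ in range(iterations):
        support = {}
        for n in weights:
            s = [c * weights[b if a == n else a]
                 for (a, b), c in arcs.items() if n in (a, b)]
            support[n] = sum(s) / len(s) if s else 0.0
        weights = {n: w * (1.0 + support[n]) for n, w in weights.items()}
        total = sum(weights.values()) or 1.0        # keep weights normalised
        weights = {n: w / total for n, w in weights.items()}
    return weights
```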

    Reliably composable language extensions

    University of Minnesota Ph.D. dissertation. May 2017. Major: Computer Science. Advisor: Eric Van Wyk. 1 computer file (PDF); x, 300 pages.
    Many programming tasks are dramatically simpler when an appropriate domain-specific language can be used to accomplish them. These languages offer a variety of potential advantages, including programming at a higher level of abstraction, custom analyses specific to the problem domain, and the ability to generate very efficient code. But they also suffer many disadvantages as a result of their implementation techniques. Fully separate languages (such as YACC or SQL) are quite flexible, but these are distinct monolithic entities and thus we are unable to draw on the features of several in combination to accomplish a single task. That is, we cannot compose their domain-specific features. "Embedded" DSLs (such as parsing combinators) accomplish something like a different language, but are actually implemented simply as libraries within a flexible host language. This approach allows different libraries to be imported and used together, enabling composition, but it is limited in analysis and translation capabilities by the host language they are embedded within. A promising combination of these two approaches is to allow a host language to be directly extended with new features (syntactic and semantic). However, while there are plausible ways to attempt to compose language extensions, they can easily fail, making this approach unreliable. Previous methods of assuring reliable composition impose onerous restrictions, such as throwing out entirely the ability to introduce new analyses. This thesis introduces reliably composable language extensions as a technique for the implementation of DSLs. This technique preserves most of the advantages of both separate and "embedded" DSLs. Unlike many prior approaches to language extension, this technique ensures composition of multiple language extensions will succeed, and preserves strong properties about the behavior of the resulting composed compiler. We define an analysis on language extensions that guarantees the composition of several extensions will be well-defined, and we further define a set of testable properties that ensure the resulting compiler will behave as expected, along with a principle that assigns "blame" for bugs that may ultimately appear as a result of composition. Finally, to concretely compare our approach to our original goals for reliably composable language extension, we use these techniques to develop an extensible C compiler front-end, together with several example composable language extensions.
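    As a toy rendering of what it means for a composition of extensions to be well-defined, here is a drastically simplified Python sketch in which each extension contributes productions and attribute equations, and composition fails with blame assigned when two extensions give conflicting equations for the same (production, attribute) pair; the data model, the extension names, and the conflict rule are illustrative assumptions, far simpler than the modular analyses the dissertation defines.

```python
# Toy model of extension composition with a trivial well-definedness check
# and blame assignment on conflict. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Extension:
    name: str
    productions: set = field(default_factory=set)   # new concrete syntax
    equations: dict = field(default_factory=dict)   # (production, attribute) -> defining rule

def compose(host, extensions):
    """Merge extensions into the host; raise with blame if two extensions give
    conflicting equations for the same (production, attribute) pair."""
    result = Extension("composed", set(host.productions), dict(host.equations))
    owner = {key: host.name for key in host.equations}
    for ext in extensions:
        result.productions |= ext.productions
        for key, rule in ext.equations.items():
            if key in result.equations and result.equations[key] != rule:
                raise ValueError(f"conflicting equations for {key}: {owner[key]} vs {ext.name}")
            result.equations[key] = rule
            owner.setdefault(key, ext.name)
    return result

# Two hypothetical extensions that add syntax and equations without overlap.
host = Extension("hostC", {"stmt"}, {("stmt", "errors"): "[]"})
ext1 = Extension("matrixOps", {"matexpr"}, {("matexpr", "errors"): "checkDims"})
ext2 = Extension("regexLit", {"regex"}, {("regex", "errors"): "checkSyntax"})
composed = compose(host, [ext1, ext2])
assert ("matexpr", "errors") in composed.equations
```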