
    Learning Minimal and Maximal Rules from Observations of Graph Transformations

    Graph transformations have been used to model services and systems where rules describe pre- and post-conditions of operations changing a complex state. However, despite their intuitive nature, creating such models is a time-consuming and error-prone process. In this paper we investigate the possibility of extracting rules from observations of transformations, i.e., pairs of input and output graphs resulting from successful transformations and individual input graphs where they have failed. From such positive and negative examples, minimal rules are extracted and then extended by context that is present in all positive examples and missing in at least one negative example. The result is a maximal and a required rule, which jointly with the minimal rule define the range of possible rules that could have created the observed transformations. We report on an implementation of the approach, evaluate its accuracy, scalability and limitations, and discuss applications to reverse engineering visual contracts from observations of the object states of components under test.
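
    As a rough illustration of the idea (a sketch only, not the paper's algorithm), the snippet below treats graphs as sets of directed edges: the minimal rule of a single observation is essentially the difference between input and output graph, and the context shared by all positive examples is a candidate for extending it towards the maximal rule. All names and the edge-set encoding are hypothetical simplifications.

        # Hypothetical sketch: graphs as sets of directed edges, one observed
        # transformation per (g_in, g_out) pair.  The actual approach works on
        # typed attributed graphs; this only illustrates the difference idea.

        def minimal_rule(g_in: set, g_out: set):
            """Return (deleted, added, preserved) edge sets for one observation."""
            deleted = g_in - g_out      # edges the operation must delete (LHS only)
            added = g_out - g_in        # edges the operation must create (RHS only)
            preserved = g_in & g_out    # context occurring on both sides
            return deleted, added, preserved

        def shared_context(positive_examples):
            """Context present in every positive example: a candidate extension
            of the minimal rule towards the maximal rule."""
            contexts = [g_in & g_out for g_in, g_out in positive_examples]
            return set.intersection(*contexts) if contexts else set()

        if __name__ == "__main__":
            g_in = {("a", "b"), ("b", "c")}
            g_out = {("a", "b"), ("a", "c")}
            print(minimal_rule(g_in, g_out))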

    Heuristics for The Whitehead Minimization Problem

    In this paper we discuss several heuristic strategies which allow one to solve Whitehead's minimization problem much faster (on most inputs) than the classical Whitehead algorithm. The mere fact that these strategies work in practice leads to several interesting mathematical conjectures. In particular, we conjecture that the length of most non-minimal elements in a free group can be reduced by a Nielsen automorphism which can be identified by inspecting the structure of the corresponding Whitehead graph.
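
    To make the conjectured heuristic concrete, here is a toy sketch (not the authors' implementation) that applies a few elementary Nielsen automorphisms of the free group F(a, b) to a freely reduced word, with uppercase letters standing for inverse generators, and reports any automorphism that shortens the word.

        # Toy illustration only: elementary Nielsen automorphisms of F(a, b)
        # applied to a word; uppercase letters denote inverse generators.

        def invert(w):
            """Inverse of a word: reverse it and swap the case of every letter."""
            return "".join(c.swapcase() for c in reversed(w))

        def reduce_word(w):
            """Freely reduce a word by cancelling adjacent inverse pairs."""
            out = []
            for c in w:
                if out and out[-1] == c.swapcase():
                    out.pop()
                else:
                    out.append(c)
            return "".join(out)

        def apply_morphism(word, images):
            """Substitute each generator by its image; inverses map to inverted images."""
            pieces = []
            for c in word:
                if c in images:
                    pieces.append(images[c])
                else:
                    pieces.append(invert(images[c.swapcase()]))
            return reduce_word("".join(pieces))

        # A few elementary Nielsen automorphisms: a -> ab, a -> ba, b -> ba, b -> ab, a -> a^-1
        nielsen = [
            {"a": "ab", "b": "b"},
            {"a": "ba", "b": "b"},
            {"a": "a",  "b": "ba"},
            {"a": "a",  "b": "ab"},
            {"a": "A",  "b": "b"},
        ]

        word = reduce_word("abAbbA")
        for phi in nielsen:
            image = apply_morphism(word, phi)
            if len(image) < len(word):
                print(f"{phi} shortens {word} (length {len(word)}) to {image} (length {len(image)})")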

    Inference via low-dimensional couplings

    We investigate the low-dimensional structure of deterministic transformations between random variables, i.e., transport maps between probability measures. In the context of statistics and machine learning, these transformations can be used to couple a tractable "reference" measure (e.g., a standard Gaussian) with a target measure of interest. Direct simulation from the desired measure can then be achieved by pushing forward reference samples through the map. Yet characterizing such a map---e.g., representing and evaluating it---grows challenging in high dimensions. The central contribution of this paper is to establish a link between the Markov properties of the target measure and the existence of low-dimensional couplings, induced by transport maps that are sparse and/or decomposable. Our analysis not only facilitates the construction of transformations in high-dimensional settings, but also suggests new inference methodologies for continuous non-Gaussian graphical models. For instance, in the context of nonlinear state-space models, we describe new variational algorithms for filtering, smoothing, and sequential parameter inference. These algorithms can be understood as the natural generalization---to the non-Gaussian case---of the square-root Rauch-Tung-Striebel Gaussian smoother.
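
    As a minimal numerical illustration of the push-forward idea (a hand-written map, not the paper's construction), the sketch below pushes standard-Gaussian reference samples through a sparse lower-triangular map in which each output coordinate depends on only one or two inputs, the kind of sparsity that Markov structure in the target would induce.

        # Illustrative sketch only: push reference samples z ~ N(0, I) through a
        # hand-picked sparse triangular map T; the sparsity pattern (x_k depends
        # only on z_{k-1} and z_k) mimics what a Markov-chain target would allow.

        import numpy as np

        rng = np.random.default_rng(0)
        z = rng.standard_normal((10_000, 3))               # reference samples

        def T(z):
            x0 = z[:, 0]
            x1 = 0.5 * z[:, 0] + np.sqrt(0.75) * z[:, 1]   # couples x1 to z0
            x2 = 0.5 * z[:, 1] + np.tanh(z[:, 2])          # couples x2 to z1, nonlinearly
            return np.column_stack([x0, x1, x2])

        x = T(z)                                           # samples from the push-forward measure
        # x0 and x2 share no reference variable, so their correlation is near zero.
        print(np.corrcoef(x, rowvar=False).round(2))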

    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization, that is, the transformation of data spaces (referred to as alphabets) usually corresponds to the reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
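
    For a very rough numerical illustration of the entropy-reduction phenomenon (with a placeholder cost term, not the paper's measure), the sketch below bins a fine-grained data alphabet into a coarse display alphabet, as a bar chart or histogram would, and reports the drop in Shannon entropy per unit cost.

        # Illustrative sketch only: a visualization step maps a large data
        # alphabet onto a smaller display alphabet, reducing entropy.  The
        # "cost" term is a placeholder, not the paper's cost-benefit measure.

        import math
        from collections import Counter

        def shannon_entropy(values):
            counts = Counter(values)
            n = len(values)
            return -sum((c / n) * math.log2(c / n) for c in counts.values())

        data = [round(i * 0.1, 1) for i in range(1000)]    # fine-grained data alphabet
        binned = [int(x // 10) for x in data]              # coarse display alphabet (10 bins)

        h_in, h_out = shannon_entropy(data), shannon_entropy(binned)
        alphabet_compression = h_in - h_out                # entropy removed by the mapping
        cost = 1.0                                         # placeholder processing/display cost
        print(f"H(in)={h_in:.2f} bits, H(out)={h_out:.2f} bits, "
              f"benefit/cost~{alphabet_compression / cost:.2f}")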

    Model Checking Paxos in Spin

    We present a formal model of a distributed consensus algorithm in the executable specification language Promela extended with a new type of guards, called counting guards, needed to implement transitions that depend on majority voting. Our formalization exploits abstractions that follow from reduction theorems applied to the specific case-study. We apply the model checker Spin to automatically validate finite instances of the model and to extract preconditions on the size of quorums used in the election phases of the protocol. (In Proceedings GandALF 2014, arXiv:1408.556)
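
    The Promela model itself is not reproduced here; purely as an illustration, the Python sketch below shows the condition a counting guard encodes: a transition that is enabled only once a strict majority of the acceptors have voted for the same ballot. All names and numbers are made up.

        # Hypothetical sketch of what a "counting guard" expresses: a transition
        # enabled only once a strict majority of the N acceptors have voted for
        # the same ballot.  This is plain Python, not the Promela model.

        N = 5                                      # number of acceptors

        def majority_reached(votes: dict, ballot: int) -> bool:
            """Guard: enabled iff more than N/2 acceptors voted for `ballot`."""
            return sum(1 for b in votes.values() if b == ballot) > N // 2

        votes = {0: 7, 1: 7, 2: 7, 3: 3, 4: None}  # acceptor -> ballot it voted for
        if majority_reached(votes, ballot=7):
            print("quorum for ballot 7: the transition may fire")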