
    Ten virtues of structured graphs

    This paper extends the invited talk by the first author about the virtues of structured graphs. The motivation behind the talk and this paper stems from our experience in the development of ADR, a formal approach to the design of style-conformant, reconfigurable software systems. ADR is based on hierarchical graphs with interfaces and was conceived in an attempt to reconcile software architectures and process calculi by means of graphical methods. We have tried to write an ADR-agnostic paper in which we point out some drawbacks of flat, unstructured graphs for the design and analysis of software systems, and we argue that hierarchical, structured graphs can alleviate such drawbacks.

    An Algebra of Hierarchical Graphs and its Application to Structural Encoding

    We define an algebraic theory of hierarchical graphs, whose axioms characterise graph isomorphism: two terms are equated exactly when they represent the same graph. Our algebra can be understood as a high-level language for describing graphs with a node-sharing, embedding structure, and it is therefore well suited to defining graphical representations of software models where nesting and linking are key aspects. In particular, we propose the use of our graph formalism as a convenient way to describe configurations in process calculi equipped with inherently hierarchical features such as sessions, locations, transactions, membranes or ambients. The graph syntax can be seen as an intermediate representation language that facilitates the encoding of algebraic specifications, since it provides primitives for nesting, name restriction and parallel composition. In addition, proving the soundness and correctness of an encoding (i.e. proving that structurally equivalent processes are mapped to isomorphic graphs) becomes easier, as it can be done by induction over the graph syntax.
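    To make the flavour of such a term algebra concrete, here is a minimal Python sketch of hierarchical graph terms built from the primitives mentioned above (parallel composition, name restriction and nesting). The constructor names and the example are illustrative assumptions, not the actual syntax or axioms of the paper.

```python
# Minimal sketch of a term language for hierarchical graphs, assuming
# primitives for parallel composition, name restriction and nesting as
# described in the abstract. Names (Edge, Par, Restrict, Nest) are
# illustrative, not the paper's actual constructors.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class Term:
    """Base class for hierarchical graph terms."""


@dataclass(frozen=True)
class Edge(Term):
    label: str                  # edge label (e.g. a process-calculus operator)
    nodes: Tuple[str, ...]      # names of the nodes the edge is attached to


@dataclass(frozen=True)
class Par(Term):
    left: Term                  # parallel composition: place two graphs side
    right: Term                 # by side, sharing nodes that have equal names


@dataclass(frozen=True)
class Restrict(Term):
    name: str                   # name restriction: hide a node name so it
    body: Term                  # cannot be shared outside this term


@dataclass(frozen=True)
class Nest(Term):
    label: str                  # nesting: wrap a whole graph inside a single
    interface: Tuple[str, ...]  # hierarchical edge exposing only these nodes
    body: Term


# Example: a "session" edge whose body is two parallel edges sharing node "x",
# with "x" restricted so it stays local to the session.
session = Nest("session", ("a",),
               Restrict("x", Par(Edge("send", ("a", "x")),
                                 Edge("recv", ("x", "a")))))
```

    Proving an encoding sound then proceeds, as the abstract suggests, by structural induction over constructors of this kind.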

    What May Visualization Processes Optimize?

    In this paper, we present an abstract model of visualization and inference processes and describe an information-theoretic measure for optimizing such processes. In order to obtain such an abstraction, we first examined six classes of workflows in data analysis and visualization, and identified four levels of typical visualization components, namely disseminative, observational, analytical and model-developmental visualization. We noticed a common phenomenon at different levels of visualization: the transformation of data spaces (referred to as alphabets) usually corresponds to a reduction of maximal entropy along a workflow. Based on this observation, we establish an information-theoretic measure of cost-benefit ratio that may be used as a cost function for optimizing a data visualization process. To demonstrate the validity of this measure, we examined a number of successful visualization processes in the literature, and showed that the information-theoretic measure can mathematically explain the advantages of such processes over possible alternatives.
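    As a rough illustration of the entropy-reduction idea, the sketch below computes the Shannon entropy of an input alphabet and of a transformed, coarser alphabet, and forms a simple benefit-to-cost ratio. The function names and the scalar cost term are assumptions made for illustration; the paper's actual measure is defined over the abstract model described above.

```python
# Rough sketch of the entropy-reduction view of a visualization step,
# assuming a step maps a finer alphabet (data space) to a coarser one.
# The benefit/cost ratio below is a placeholder illustration, not the
# exact information-theoretic measure defined in the paper.
import math
from collections import Counter


def shannon_entropy(samples):
    """Shannon entropy (in bits) of the empirical distribution of samples."""
    counts = Counter(samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def cost_benefit(input_samples, output_samples, cost):
    """Entropy reduction achieved by a transformation, divided by its cost."""
    benefit = shannon_entropy(input_samples) - shannon_entropy(output_samples)
    return benefit / cost


# Example: binning raw readings into "low"/"high" reduces entropy; the ratio
# indicates how much uncertainty the step removes per unit of (assumed) cost.
raw = [3, 7, 8, 2, 9, 1, 7, 4]
binned = ["low" if v < 5 else "high" for v in raw]
print(cost_benefit(raw, binned, cost=1.0))
```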

    An efficient iterative method for looped pipe network hydraulics free of flow-corrections

    The original and improved versions of the Hardy Cross iterative method, with related modifications, are widely used today for calculating fluid flow through conduits in loop-like distribution networks of pipes with known nodal fluid consumptions. The fluid in these networks is usually natural gas distributed in municipalities, water in waterworks, hot water in district heating systems, or air in ventilation systems in buildings and mines. Since the resistances in these networks depend on the flow, the problem is not linear, unlike in electrical circuits, and an iterative procedure must be used. In both versions of the Hardy Cross method, the original and the improved one, the initial result of each iteration is not the flow itself but a correction of the flow. These corrections must then be added to or subtracted from the flows calculated in the previous iteration according to complicated algebraic rules. Unlike the Hardy Cross method, which requires complicated formulas for the flow corrections, the new Node-loop method needs no such corrections, as the flows are computed directly. This is the main advantage of the Node-loop method, since its number of iterations is the same as in the modified Hardy Cross method. Consequently, the complex algebraic scheme for the sign of the flow correction is avoided, while the final results remain accurate.
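    For readers unfamiliar with the flow-correction step the abstract contrasts against, the following sketch shows the classical single-loop Hardy Cross iteration, assuming a head-loss law h = r·Q·|Q| so that the loop correction is ΔQ = -Σ r·Q·|Q| / Σ 2·r·|Q|. The pipe data and tolerance are illustrative assumptions; the paper's Node-loop method avoids this correction step entirely by computing the flows directly.

```python
# Minimal sketch of the classical Hardy Cross flow-correction step for a
# single loop, assuming head loss h = r * Q * |Q| in each pipe. The pipe
# resistances, initial flows and tolerance below are illustrative only.
def hardy_cross_single_loop(resistances, flows, tol=1e-6, max_iter=100):
    """Iteratively correct loop flows until the head-loss imbalance vanishes.

    resistances: list of pipe resistance coefficients r_i
    flows: initial flow guesses Q_i (signed by the assumed loop direction)
    """
    flows = list(flows)
    for _ in range(max_iter):
        head_imbalance = sum(r * q * abs(q) for r, q in zip(resistances, flows))
        derivative = sum(2 * r * abs(q) for r, q in zip(resistances, flows))
        dq = -head_imbalance / derivative       # the flow correction
        flows = [q + dq for q in flows]         # applied to every pipe in the loop
        if abs(dq) < tol:
            break
    return flows


# Example: three pipes in one loop with rough initial flow guesses.
print(hardy_cross_single_loop([100.0, 200.0, 150.0], [0.5, -0.3, -0.2]))
```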

    S+Net: extending functional coordination with extra-functional semantics

    This technical report introduces S+Net, a compositional coordination language for streaming networks with extra-functional semantics. Compositionality simplifies the specification of complex parallel and distributed applications; extra-functional semantics allow the application designer to reason about and control resource usage, performance and fault handling. The key feature of S+Net is that functional and extra-functional semantics are defined orthogonally to each other. S+Net can be seen as a simultaneous simplification and extension of the existing coordination language S-Net that gives control over extra-functional behavior to the S-Net programmer. S+Net can also be seen as a transitional research step between S-Net and AstraKahn, another coordination language currently being designed at the University of Hertfordshire. In contrast with AstraKahn, which constitutes a redesign from the ground up, S+Net preserves the basic operational semantics of S-Net and thus provides an incremental introduction of extra-functional control into an existing language.

    Trustworthy Refactoring via Decomposition and Schemes: A Complex Case Study

    Widely used complex code refactoring tools lack solid reasoning about the correctness of the transformations they implement, whilst interest in proven-correct refactoring is ever increasing, as only formal verification can provide true confidence in applying tool-automated refactoring to industrial-scale code. Using our strategic-rewriting-based refactoring specification language, we present the decomposition of a complex transformation into smaller steps that can be expressed as instances of refactoring schemes, and we then demonstrate the semi-automatic formal verification of the components based on a theoretical understanding of the semantics of the programming language. The extensible and verifiable refactoring definitions can be executed in our interpreter, which is built on top of a static analyser framework.