
    Network algebra for synchronous dataflow

    We develop an algebraic theory of synchronous dataflow networks. First, a basic algebraic theory of networks, called BNA (Basic Network Algebra), is introduced; it captures the basic algebraic properties of networks. For synchronous dataflow networks, BNA is subsequently extended with additional constants for the branching connections that occur between the cells of such networks, together with axioms for these constants. We also give two models of the resulting theory: one based on stream transformers and the other based on processes as considered in process algebra.
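
    The stream-transformer model admits a concrete reading: a cell is a function from input streams to output streams, consuming and producing one token per tick. The sketch below is a minimal illustration of that idea, not the paper's formalism; the cell names (`add_cell`, `delay_cell`, `fork`) and the use of Python generators are assumptions made for the example.

```python
from itertools import tee

# A synchronous dataflow cell modeled as a stream transformer:
# it consumes one token from each input and emits one token per tick.

def add_cell(xs, ys):
    """Two-input, one-output cell: pointwise addition of streams."""
    return (x + y for x, y in zip(xs, ys))

def delay_cell(xs, init=0):
    """Unit-delay cell: emits `init` first, then the input shifted one tick."""
    yield init
    yield from xs

def fork(xs):
    """A branching connection: duplicates a stream for two consumers."""
    return tee(xs, 2)

# Sequential composition of cells is ordinary function composition of
# stream transformers; feedback loops would need a fixed-point
# construction, which this small sketch omits.
a, b = fork(iter([1, 2, 3, 4]))
out = add_cell(a, delay_cell(b))   # y[n] = x[n] + x[n-1], with x[-1] = 0
print(list(out))                   # [1, 3, 5, 7]
```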

    Extending a multi-set relational algebra to a parallel environment

    Parallel database systems are widely expected to be the platform of choice for high-performance data-intensive applications. In the past decade, many parallel database systems have been developed, together with many languages and approaches to specify operations in these systems. A common formal foundation is still missing, however. This paper proposes an extended relational algebra for this purpose, based on the well-known standard relational algebra. The extended algebra provides complete database manipulation language features as well as data distribution and process allocation primitives to describe parallelism. It is defined in terms of multi-sets of tuples, to allow handling of duplicates and to obtain a close connection to the world of high-performance data processing. Due to its algebraic nature, the language is well suited for optimization and parallelization through expression rewriting. The proposed language can be used as a database manipulation language in its own right, as has been done in the PRISMA parallel database project, or as a formal basis for other languages such as SQL.
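
    The multi-set semantics is the key departure from set-based relational algebra: projection and union must preserve duplicates rather than collapse them. The sketch below illustrates this point in Python using `collections.Counter` as a bag of tuples; the operator names and representation are assumptions for illustration, not the paper's notation.

```python
from collections import Counter

# A relation as a multi-set (bag) of tuples: Counter maps each tuple
# to its multiplicity, so duplicates survive algebraic operations.

def select(rel, pred):
    """Bag selection: keep tuples satisfying pred, multiplicities intact."""
    return Counter({t: n for t, n in rel.items() if pred(t)})

def project(rel, indices):
    """Bag projection: distinct tuples may merge, so multiplicities add."""
    out = Counter()
    for t, n in rel.items():
        out[tuple(t[i] for i in indices)] += n
    return out

def bag_union(r, s):
    """Additive union: multiplicities are summed, never deduplicated."""
    return r + s

# Projecting away the key column creates duplicates that a set-based
# algebra would silently collapse.
emp = Counter({(1, "sales"): 1, (2, "sales"): 1, (3, "hr"): 1})
print(project(emp, (1,)))   # Counter({('sales',): 2, ('hr',): 1})
```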

    AlSub: Fully Parallel and Modular Subdivision

    In recent years, mesh subdivision, the process of forging smooth free-form surfaces from coarse polygonal meshes, has become an indispensable production instrument. Although subdivision performance is crucial during simulation, animation and rendering, state-of-the-art approaches still rely on serial implementations for complex parts of the subdivision process. Therefore, they often fail to harness the power of modern parallel devices, like the graphics processing unit (GPU), for large parts of the algorithm and must resort to time-consuming serial preprocessing. In this paper, we show that a complete parallelization of the subdivision process for modern architectures is possible. Building on sparse matrix linear algebra, we show how to structure the complete subdivision process into a sequence of algebra operations. By restructuring and grouping these operations, we adapt the process for different use cases, such as regular subdivision of dynamic meshes, uniform subdivision for immutable topology, and feature-adaptive subdivision for efficient rendering of animated models. As the same machinery is used for all use cases, identical subdivision results are achieved in all parts of the production pipeline. As a second contribution, we show how these linear algebra formulations can effectively be translated into efficient GPU kernels. Applying our strategies to $\sqrt{3}$, Loop and Catmull-Clark subdivision shows significant speedups of our approach compared to state-of-the-art solutions, while we completely avoid serial preprocessing.
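
    The core idea, casting one refinement step as a sparse matrix applied to the vertex positions, can be shown in a few lines. The sketch below is a simplified stand-in: it uses Chaikin's corner-cutting scheme on a closed polygon instead of the surface schemes the paper targets (Loop, Catmull-Clark, $\sqrt{3}$), whose subdivision matrices are larger but play the same role; `chaikin_matrix` is a name invented for this example.

```python
import numpy as np
from scipy import sparse

def chaikin_matrix(n):
    """Sparse (2n x n) subdivision matrix for a closed polygon of n vertices."""
    rows, cols, vals = [], [], []
    for i in range(n):
        j = (i + 1) % n
        # Each edge (i, j) spawns two new points at 3/4:1/4 and 1/4:3/4 blends.
        rows += [2 * i, 2 * i, 2 * i + 1, 2 * i + 1]
        cols += [i, j, i, j]
        vals += [0.75, 0.25, 0.25, 0.75]
    return sparse.csr_matrix((vals, (rows, cols)), shape=(2 * n, n))

# One subdivision step is a single sparse matrix product, V_new = S @ V,
# a pattern that maps directly onto GPU sparse-matrix kernels.
V = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # a square
for _ in range(3):
    S = chaikin_matrix(V.shape[0])
    V = S @ V
print(V.shape)   # (32, 2): 4 -> 8 -> 16 -> 32 vertices
```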

    Turing Automata and Graph Machines

    Indexed monoidal algebras are introduced as an equivalent structure for self-dual compact closed categories, and a coherence theorem is proved for the category of such algebras. Turing automata and Turing graph machines are defined by generalizing the classical Turing machine concept, so that the collection of such machines becomes an indexed monoidal algebra. By analogy with the von Neumann data-flow computer architecture, Turing graph machines are proposed as potentially reversible low-level universal computational devices, and a truly reversible molecular-size hardware model is presented as an example.
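
    For reference, the classical single-tape machine being generalized can be pinned down in a few lines. The sketch below is a standard textbook formulation in Python, not the paper's indexed-monoidal construction; the encoding of the transition function `delta` is an assumption made for the example.

```python
# A classical one-tape Turing machine: the concept the paper generalizes
# to Turing automata and Turing graph machines.
# delta: (state, symbol) -> (state, symbol_to_write, move), move in {-1, +1}.

def run(delta, state, tape, head=0, halt_states=frozenset({"halt"}), max_steps=1000):
    cells = dict(enumerate(tape))              # sparse tape, blank = "_"
    for _ in range(max_steps):
        if state in halt_states:
            break
        sym = cells.get(head, "_")
        state, cells[head], move = delta[(state, sym)]
        head += move
    return state, "".join(cells[i] for i in sorted(cells))

# Example: flip every bit until the first blank, then halt.
delta = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", +1),
}
print(run(delta, "flip", "0110"))   # ('halt', '1001_')
```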