A mathematical theory of synchronous concurrent algorithms

By Benjamin Criveli Thompson


A synchronous concurrent algorithm is an algorithm described as a network of intercommunicating processes or modules whose concurrent actions are synchronised with respect to a global clock. Synchronous algorithms include systolic algorithms, which are well suited to implementation in VLSI technologies.

This thesis provides a mathematical theory for the design and analysis of synchronous algorithms. The theory includes the formal specification of synchronous algorithms; techniques for proving the correctness and performance or time-complexity of synchronous algorithms; and formal accounts of the simulation and top-down design of synchronous algorithms.

The theory is based on the observation that a synchronous algorithm can be specified in a natural way as a simultaneous primitive recursive function over an abstract data type; these functions were first studied by J. V. Tucker and J. I. Zucker. The class of functions is described via a formal syntax and semantics, and this leads to the definition of a functional algorithmic notation called PR. A formal account of synchronous algorithms and their behaviour is achieved by showing that synchronous algorithms can be specified in PR. A formal account of the performance of synchronous algorithms is achieved via a mathematical account of the time taken to evaluate a function defined by simultaneous primitive recursion.

A synchronous algorithm, when specified in PR, can be transformed into a program in a language called FPIT. FPIT is a language based on abstract data types and on the multiple or concurrent assignment statement. The transformation from PR to FPIT is phrased as a compiler that is proved correct; compiling the PR representation of a synchronous algorithm thus yields a provably correct simulation of the algorithm.
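The central observation above — that a clocked network of modules can be specified by simultaneous primitive recursion on the clock — can be illustrated with a toy two-module network. This is only a sketch in ordinary Python, not the thesis's PR notation: the module names and the particular network are invented for illustration.

```python
# A hypothetical two-module synchronous network, specified by
# simultaneous primitive recursion on the clock t:
#   module 1 delays the input stream by one cycle;
#   module 2 accumulates the values emitted by the delay module.
# Both module values at cycle t+1 are defined simultaneously from
# the module values at cycle t, as in simultaneous primitive recursion.

def network(t, stream):
    """Values of (module1, module2) at clock cycle t, where `stream`
    maps clock cycles to integer inputs."""
    if t == 0:
        return (0, 0)                  # initial states of both modules
    m1_prev, m2_prev = network(t - 1, stream)
    m1 = stream(t - 1)                 # delay module: the previous input
    m2 = m2_prev + m1_prev             # accumulator reads the delay's old value
    return (m1, m2)

# With input stream 1, 2, 3, ... the accumulator at cycle 4 holds
# 1 + 2 + 3 = 6, the inputs that have passed through the delay.
```

The defining equations read off directly as a recursion on the clock: the base case gives the initial states, and the step case defines every module's next value from the tuple of all modules' current values, which is exactly the "simultaneous" part of the scheme.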
It is proved that FPIT is just what is needed to implement PR by defining a second compiler, this time from FPIT back into PR, which is again proved correct; thus PR and FPIT are formally computationally equivalent. Furthermore, an autonomous account of the length of computation of FPIT programs is given, and the two compilers are shown to be performance preserving; thus PR and FPIT are computationally equivalent in an especially strong sense.

The theory involves a formal account of the top-down design of synchronous algorithms, phrased in terms of correctness- and performance-preserving transformations between synchronous algorithms specified at different levels of data abstraction. A new definition of what it means for one abstract data type to be 'implemented' over another is given. This definition generalises the idea of a computable algebra due to A. I. Mal'cev and M. O. Rabin. It is proved that if one data type D is implementable over another data type D', then there exists a correctness- and performance-preserving compiler mapping high-level PR programs over D to low-level PR programs over D'.

The compilers from PR to FPIT and from FPIT to PR are defined explicitly, and our compiler-existence proof is constructive, so this work is the basis of theoretically well-founded software tools for the design and analysis of synchronous algorithms.

Publisher: School of Computing (Leeds)
Year: 1987