128 research outputs found
Abstract State Machines 1988-1998: Commented ASM Bibliography
An annotated bibliography of papers which deal with or use Abstract State Machines (ASMs), as of January 1998. Also maintained as a BibTeX file at http://www.eecs.umich.edu/gasm.
On the synthesis and processing of high quality audio signals by parallel computers
This work concerns the application of new computer architectures to the creation and manipulation of high-quality audio bandwidth signals. The configuration of both the hardware and software in such systems falls under consideration in the three major sections, which present increasing levels of algorithmic concurrency.

In the first section, the programs described are distributed in identical copies across an array of processing elements; these programs run autonomously, generating data independently, but with control parameters peculiar to each copy: this type of concurrency is referred to as isonomic. The central section presents a structure which distributes tasks across an arbitrary network of processors; the flow of control in such a program is quasi-indeterminate, and controlled on a demand basis by the rate of completion of the slave tasks and their irregular interaction with the master. Whilst that interaction is, in principle, deterministic, it is also data-dependent; the dynamic nature of task allocation demands that no a priori knowledge of the rate of task completion be required. This type of concurrency is called dianomic. Finally, an architecture is described which will support a very high level of algorithmic concurrency. The programs which make efficient use of such a machine are designed not by considering flow of control, but by considering flow of data. Each atomic algorithmic unit is made as simple as possible, which results in the extensive distribution of a program over very many processing elements. Programs designed by considering only the optimum data exchange routes are said to exhibit systolic concurrency.

Often neglected in the study of system design are those provisions necessary for practical implementations. It was intended to provide users with useful application programs in fulfilment of this study; the target group is electroacoustic composers, who use digital signal processing techniques in the context of musical composition. Some of the algorithms in use in this field are highly complex, often requiring a quantity of processing for each sample which exceeds that currently available even from very powerful computers. Consequently, applications tend to operate not in 'real-time' (where the output of a system responds to its input apparently instantaneously), but by the manipulation of sounds recorded digitally on a mass storage device.

The first two sections adopt existing, public-domain software and seek to increase its speed of execution significantly by parallel techniques, with the minimum compromise of functionality and ease of use. Those chosen are the general-purpose direct synthesis program CSOUND, from M.I.T., and a stand-alone phase vocoder system from the C.D.P. In each case, the desired aim is achieved: to increase speed of execution by two orders of magnitude over the systems currently in use by composers. This requires substantial restructuring of the programs, and careful consideration of the best computer architectures on which they are to run concurrently.

The third section examines the rationale behind the use of computers in music, and begins with the implementation of a sophisticated electronic musical instrument capable of a degree of expression at least equal to its acoustic counterparts. It seems that the flexible control of such an instrument demands a greater computing resource than the sound synthesis part.
A machine has been constructed with the intention of enabling the 'gestural capture' of performance information in real time; the structure of this computer, which has one hundred and sixty high-performance microprocessors running in parallel, is expounded, and the systolic programming techniques required to take advantage of such an array are illustrated in the Occam programming language.
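Of the three concurrency models, the dianomic (demand-driven task farm) structure of the central section translates most directly into modern notation. The sketch below is a minimal illustration in Go rather than the thesis's Occam, and every name in it is invented; it shows only the core idea that slave processes pull work at their own completion rate, so the master needs no a priori schedule.

```go
// Minimal sketch of demand-driven ("dianomic") task farming in Go.
// The thesis implements this in Occam on a transputer array; Go's
// channels model the same master/slave interaction. All names here
// are illustrative, not taken from the original system.
package main

import (
	"fmt"
	"math"
)

// synthesise stands in for an expensive per-block DSP task,
// e.g. computing one block of audio samples.
func synthesise(block int) float64 {
	s := 0.0
	for i := 0; i < 1000; i++ {
		s += math.Sin(float64(block*1000 + i))
	}
	return s
}

func main() {
	const workers = 4
	const blocks = 16
	tasks := make(chan int)
	results := make(chan float64)

	// Slaves pull tasks as fast as they finish them, so allocation
	// adapts to each task's completion rate with no fixed schedule.
	for w := 0; w < workers; w++ {
		go func() {
			for block := range tasks {
				results <- synthesise(block)
			}
		}()
	}

	// Master: offer all blocks, then signal completion.
	go func() {
		for block := 0; block < blocks; block++ {
			tasks <- block
		}
		close(tasks)
	}()

	for i := 0; i < blocks; i++ {
		fmt.Printf("block result: %f\n", <-results)
	}
}
```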
Formal methods and tools for the development of distributed and real time systems : Esprit Project 3096 (SPEC)
The Basic Research Action No. 3096, Formal Methods and Tools for the Development of Distributed and Real Time Systems, is funded in the Area of Computer Science under the ESPRIT Programme of the European Community. The coordinating institution is the Department of Computing Science, Eindhoven University of Technology, and the participating institutions are the Institute of Computer Science of Crete, the Swedish Institute of Computer Science, the Programming Research Group of the University of Oxford, and the Computer Science Departments of the University of Manchester, Imperial College, the Weizmann Institute of Science, Eindhoven University of Technology, IMAG Grenoble, the Catholic University of Nijmegen, and the University of Liege. This document contains the synopsis and part of the sections on objectives and area of advance, on baseline and rationale, on research goals, and on organisation of the action, as contained in the original proposal, submitted in June 1988. The section on the state of the art (18 pages) and the full list of references (21 pages) of the original proposal have been deleted because of limitations of available space.
On computing all maximal cliques distributedly
A distributed algorithm is presented for generating all maximal cliques in a network graph, based on the sequential version of Tsukiyama et al. [TIAS77]. The time complexity of the proposed approach is restricted to the induced neighborhood of a node, and the communication complexity is O(md), where m is the number of connections and d is the maximum degree in the graph. Messages are O(log n) bits long, where n is the number of nodes (processors) in the system. As an application, a distributed algorithm for constructing the clique graph K(G) from a given network graph G is developed within the scope of dynamic transformations of topologies.
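For contrast with the distributed version, the following is a minimal sequential enumerator of all maximal cliques. It uses the classic Bron-Kerbosch recursion rather than the Tsukiyama et al. algorithm the paper builds on; it is included only to make the notion of "all maximal cliques" concrete on a small adjacency-set graph.

```go
// Sequential maximal clique enumeration via Bron-Kerbosch (without
// pivoting). This is a baseline for comparison, not the distributed
// algorithm of the paper.
package main

import "fmt"

type Graph map[int]map[int]bool

func copySet(s map[int]bool) map[int]bool {
	c := map[int]bool{}
	for k := range s {
		c[k] = true
	}
	return c
}

// bronKerbosch extends the current clique r with candidates p,
// excluding already-processed vertices x; every maximal clique is
// reported exactly once.
func bronKerbosch(g Graph, r, p, x map[int]bool, out *[][]int) {
	if len(p) == 0 && len(x) == 0 {
		clique := []int{}
		for v := range r {
			clique = append(clique, v)
		}
		*out = append(*out, clique)
		return
	}
	for v := range copySet(p) {
		r2 := copySet(r)
		r2[v] = true
		p2, x2 := map[int]bool{}, map[int]bool{}
		for u := range p {
			if g[v][u] {
				p2[u] = true
			}
		}
		for u := range x {
			if g[v][u] {
				x2[u] = true
			}
		}
		bronKerbosch(g, r2, p2, x2, out)
		delete(p, v)
		x[v] = true
	}
}

func main() {
	// Triangle 1-2-3 plus pendant edge 3-4.
	g := Graph{}
	add := func(u, v int) {
		if g[u] == nil { g[u] = map[int]bool{} }
		if g[v] == nil { g[v] = map[int]bool{} }
		g[u][v], g[v][u] = true, true
	}
	add(1, 2); add(2, 3); add(1, 3); add(3, 4)

	p := map[int]bool{1: true, 2: true, 3: true, 4: true}
	var cliques [][]int
	bronKerbosch(g, map[int]bool{}, p, map[int]bool{}, &cliques)
	fmt.Println(cliques) // expect {1,2,3} and {3,4}, in some order
}
```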
The symbiosis of concurrency and verification: teaching and case studies
Concurrency is beginning to be accepted as a core knowledge area in the undergraduate CS curriculum—no longer isolated, for example, as a support mechanism in a module on operating systems or reserved as an advanced discipline for later study. Formal verification of system properties is often considered a difficult subject area, requiring significant mathematical knowledge and generally restricted to smaller systems employing sequential logic only. This paper presents materials, methods and experiences of teaching concurrency and verification as a unified subject, as early as possible in the curriculum, so that they become fundamental elements of our software engineering tool kit—to be used together every day as a matter of course. Concurrency and verification should live in symbiosis. Verification is essential for concurrent systems as testing becomes especially inadequate in the face of complex non-deterministic (and, therefore, hard to repeat) behaviours. Concurrency should simplify the expression of most scales and forms of computer system by reflecting the concurrency of the worlds in which they operate (and, therefore, have to model); simplified expression leads to simplified reasoning and, hence, verification. Our approach lets these skills be developed without requiring students to be trained in the underlying formal mathematics. Instead, we build on the work of those who have engineered that necessary mathematics into the concurrency models we use (CSP, π-calculus), the model checker (FDR) that lets us explore and verify those systems, and the programming languages/libraries (occam-π, Go, JCSP, ProcessJ) that let us design and build efficient executable systems within these models. This paper introduces a workflow methodology for the development and verification of concurrent systems; it also presents and reflects on two open-ended case studies, using this workflow, developed at the authors' two universities. Concerns analysed include safety (don't do bad things), liveness (do good things) and low-probability deadlock (that testing fails to discover). The necessary technical background is given to make this paper self-contained and its work simple to reproduce and extend.
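The paper's workflow (model in CSP, check with FDR, implement in a process-oriented language) is easiest to appreciate on a tiny process network. The sketch below is ours, not one of the paper's case studies: two Go processes communicate over unbuffered channels, and reversing one process's communication order introduces exactly the kind of deadlock a model checker finds immediately and testing may never provoke.

```go
// A toy two-process network in Go (one of the languages the paper
// names). The commented-out variant deadlocks: both processes block
// on an unbuffered send, completing a communication cycle. In the
// paper's workflow this design would be modelled in CSP and the cycle
// exposed by FDR's deadlock check, rather than hunted for by testing.
package main

import "fmt"

func main() {
	a, b := make(chan int), make(chan int)
	done := make(chan bool)

	// Process P: sends on a, then receives on b.
	go func() {
		a <- 1
		fmt.Println("P received", <-b)
		done <- true
	}()

	// Process Q (safe): receives on a first, matching P's send,
	// then replies on b.
	fmt.Println("Q received", <-a)
	b <- 2
	<-done // wait for P to finish before exiting

	// Process Q (deadlocking variant): had Q also sent first,
	//     b <- 2
	//     fmt.Println("Q received", <-a)
	// both P and Q would block forever on unbuffered sends.
}
```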
A Slotted Ring Test Bed for the Study of ATM Network Congestion Management
This thesis addresses issues raised by the proposed Broadband Integrated Services Digital Network, which will provide a flexible combination of integrated services traffic through its cell-based Asynchronous Transfer Mode (ATM). The introduction of a cell-based, connection-oriented transport mode brings with it new technical challenges for network management. The routing of cells, their service at switching centres, and problems of cell congestion not encountered in the existing network are some of the key issues.
The thesis describes the development of a hardware slotted ring testbed for the investigation of congestion management in an ATM network. The testbed is designed to incorporate a modified form of the ORWELL protocol to control media access. The media access protocol is analysed to give a model for maximum throughput and reset interval under various traffic distributions. The results from the models are compared with measurements carried out on the testbed, where cell arrival statistics are also varied. It is shown that the maximum throughput of the testbed is dependent on both traffic distribution and cell arrival statistics.
The testbed is used for investigations in a heterogeneous traffic environment where two classes of traffic with different cell arrival statistics and quality of service requirements are defined. The effects of prioritisation, media access protocol, traffic intensity, and traffic source statistics were investigated by determining an Admissible Load Region (ALR) for a network station. Conclusions drawn from this work suggest that there are many problems associated with the reliable definition of an ALR because of the number of variable parameters which could shift the ALR boundary. A suggested direction for further work is to explore bandwidth reservation and the concept of equivalent capacity of a connection, and how this can be linked to source control parameters.
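The conclusion that performance, and hence any admissible load region, depends on cell arrival statistics rather than mean load alone can be illustrated with a toy single-queue simulation. The sketch below is not the Orwell testbed model, and all parameters are invented: at the same mean offered load, bursty arrivals overflow a finite cell buffer far more often than smooth ones.

```go
// Toy slotted-queue simulation: one cell served per slot, finite
// buffer. At equal mean load, bursty (batch) arrivals lose far more
// cells than smooth Bernoulli arrivals, shifting any admissible-load
// boundary. Invented parameters; not the thesis's Orwell model.
package main

import (
	"fmt"
	"math/rand"
)

const (
	slots  = 1_000_000
	buffer = 16  // buffer capacity in cells
	load   = 0.8 // mean arrivals per slot for both sources
)

// simulate returns the fraction of offered cells lost to overflow.
func simulate(arrive func(r *rand.Rand) int) float64 {
	r := rand.New(rand.NewSource(1))
	q, lost, offered := 0, 0, 0
	for t := 0; t < slots; t++ {
		n := arrive(r)
		offered += n
		for i := 0; i < n; i++ {
			if q < buffer {
				q++
			} else {
				lost++
			}
		}
		if q > 0 { // serve one cell per slot
			q--
		}
	}
	return float64(lost) / float64(offered)
}

func main() {
	// Smooth: one cell with probability `load` each slot.
	smooth := func(r *rand.Rand) int {
		if r.Float64() < load {
			return 1
		}
		return 0
	}
	// Bursty: same mean rate, but cells arrive in batches of 8.
	bursty := func(r *rand.Rand) int {
		if r.Float64() < load/8 {
			return 8
		}
		return 0
	}
	fmt.Printf("smooth loss: %.4f\n", simulate(smooth))
	fmt.Printf("bursty loss: %.4f\n", simulate(bursty))
}
```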
Tools and Techniques for Decision Tree Learning
Decision tree learning is an important field of machine learning. In this study we examine both formal and practical aspects of decision tree learning. We aim to answer two important needs: the need for better motivated decision tree learners, and the need for an environment facilitating experimentation with inductive learning algorithms. As results we obtain new practical tools and useful techniques for decision tree learning. First, we derive the practical decision tree learner Rank, based on the Findmin protocol of Ehrenfeucht and Haussler. The motivation for the changes introduced to the method comes from empirical experience, but we prove the correctness of the modifications in the probably approximately correct learning framework. The algorithm is enhanced by extending it to operate in multiclass situations, making it capable of working within the incremental setting, and providing it with noise tolerance. Together these modifications entail practicability through a formal development.
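For readers unfamiliar with the area, the following is a minimal greedy top-down decision tree learner over boolean features. It is the generic induction scheme only, not the Rank learner or the Findmin protocol studied in the thesis; the data and all names are invented.

```go
// Minimal greedy top-down decision tree induction on boolean features:
// at each node, pick the unused feature whose split minimises
// majority-vote error, then recurse. Illustrative only; not the
// thesis's Rank algorithm.
package main

import "fmt"

type Example struct {
	X []bool // feature values
	Y bool   // class label
}

type Node struct {
	Feature     int   // -1 marks a leaf
	Label       bool  // leaf prediction
	True, False *Node // subtrees for feature true/false
}

// majority returns the majority label and whether the sample is pure.
func majority(data []Example) (bool, bool) {
	pos := 0
	for _, e := range data {
		if e.Y {
			pos++
		}
	}
	return pos*2 >= len(data), pos == 0 || pos == len(data)
}

// minErr is the error of majority voting on this sample.
func minErr(data []Example) int {
	pos := 0
	for _, e := range data {
		if e.Y {
			pos++
		}
	}
	if pos < len(data)-pos {
		return pos
	}
	return len(data) - pos
}

func partition(data []Example, f int) (t, fa []Example) {
	for _, e := range data {
		if e.X[f] {
			t = append(t, e)
		} else {
			fa = append(fa, e)
		}
	}
	return
}

func grow(data []Example, used map[int]bool) *Node {
	maj, pure := majority(data)
	if pure {
		return &Node{Feature: -1, Label: maj}
	}
	best, bestErr := -1, len(data)+1
	for f := range data[0].X {
		if used[f] {
			continue
		}
		t, fa := partition(data, f)
		if len(t) == 0 || len(fa) == 0 {
			continue
		}
		if e := minErr(t) + minErr(fa); e < bestErr {
			best, bestErr = f, e
		}
	}
	if best == -1 { // no useful split remains
		return &Node{Feature: -1, Label: maj}
	}
	t, fa := partition(data, best)
	used[best] = true
	n := &Node{Feature: best, True: grow(t, used), False: grow(fa, used)}
	delete(used, best)
	return n
}

func classify(n *Node, x []bool) bool {
	for n.Feature != -1 {
		if x[n.Feature] {
			n = n.True
		} else {
			n = n.False
		}
	}
	return n.Label
}

func main() {
	// Toy target concept: y = x0 AND x1.
	data := []Example{
		{[]bool{true, true}, true},
		{[]bool{true, false}, false},
		{[]bool{false, true}, false},
		{[]bool{false, false}, false},
	}
	tree := grow(data, map[int]bool{})
	fmt.Println(classify(tree, []bool{true, true}))  // true
	fmt.Println(classify(tree, []bool{false, true})) // false
}
```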
UTP, Circus, and Isabelle
We dedicate this paper with great respect and friendship to He Jifeng on the occasion of his 80th birthday. Our research group owes much to him. The authors have over 150 publications on unifying theories of programming (UTP), a research topic Jifeng created with Tony Hoare. Our objective is to recount the history of Circus (a combination of Z, CSP, Dijkstra's guarded command language, and Morgan's refinement calculus) and the development of Isabelle/UTP. Our paper is in two parts. (1) We first discuss the activities needed to model systems: we need to formalise data models and their behaviours. We survey our work on these two aspects in the context of Circus. (2) Secondly, we describe our practical implementation of UTP in Isabelle/HOL. Mechanising UTP theories is the basis of novel verification tools. We also discuss ongoing and future work related to (1) and (2). Many colleagues have contributed to this work, and we acknowledge their support.