Design of testbed and emulation tools
The research summarized here concerned the design of testbed and emulation tools to help project, with reasonable accuracy, the expected performance of highly concurrent computing systems on large, complete applications. Such tools are intended for the eventual use of those exploring new concurrent system architectures and organizations, whether as users or as designers of such systems. While a range of alternatives was considered, a software-based set of hierarchical tools was chosen to provide maximum flexibility, to ease migration to new computers as technology improves, and to take advantage of the inherent reliability and availability of commercially available computing systems.
An occam Style Communications System for UNIX Networks
This document describes the design of a communications system which provides occam-style communication primitives under a Unix environment, using TCP/IP protocols, and any number of other protocols deemed suitable, as underlying transport layers. The system will integrate with a low-overhead scheduler/kernel without incurring significant costs to the execution of processes within the run-time environment. A survey of relevant occam and occam3 features and related research is followed by a look at the Unix and TCP/IP facilities which determine our working constraints, and a description of the T9000 transputer's Virtual Channel Processor, which was instrumental in our formulation. Drawing on the information presented here, a design for the communications system is subsequently proposed. Finally, we make a preliminary investigation of methods for lightweight access control to shared resources in an environment which provides no support for critical sections, semaphores, or busy waiting. This is presented with relevance to the mutual exclusion problems which arise within the proposed design. Future directions for the evolution of this project are discussed in conclusion.
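The occam-style primitives referred to above are synchronous, unbuffered channels: a sender blocks until the receiver has taken the value. The following Python sketch illustrates that rendezvous semantics in-process using threads; it is an illustration of the channel discipline only, not the TCP/IP-based design the abstract describes, and the class name is invented here.

```python
import threading
import queue

class OccamChannel:
    """Rendezvous (unbuffered) channel, mirroring occam's synchronous
    '!' (output) and '?' (input) operators: send blocks until the
    matching recv has consumed the value."""
    def __init__(self):
        self._slot = queue.Queue(maxsize=1)   # holds the in-flight value
        self._ack = queue.Queue(maxsize=1)    # signals that recv took it

    def send(self, value):        # occam: chan ! value
        self._slot.put(value)
        self._ack.get()           # block until the receiver has the value

    def recv(self):               # occam: chan ? x
        value = self._slot.get()
        self._ack.put(None)       # release the blocked sender
        return value

# One producer, one consumer, one rendezvous.
ch = OccamChannel()
received = []
consumer = threading.Thread(target=lambda: received.append(ch.recv()))
consumer.start()
ch.send(42)        # returns only after the consumer has taken the value
consumer.join()
```

A real implementation over TCP/IP would replace the two queues with protocol messages, but the blocking contract visible to processes stays the same.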
OS Support for Portable Bulk Synchronous Parallel Programs
Predictability -- the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements -- is a crucial, highly desirable property of responsive embedded systems. This paper gives an overview of a development methodology for responsive systems which enhances predictability by eliminating potential hazards resulting from physically unsound specifications.
The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems -- possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing -- cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems -- not to mention the elimination of potential hazards that would otherwise have gone unnoticed.
The TRA model is presented to system developers through the Cleopatra programming language. Cleopatra features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. Cleopatra is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of Cleopatra has been in use as a specification and simulation language for embedded time-critical robotic processes. (Funding: ARPA F19628-92-C-0113; NSF CDA-9308833.)
Investigate and classify various types of computer architecture
Issued as Final report, Project no. G-36-60
An open interface for parallelization of traffic simulation
In this paper, we present the implementation of a parallel road traffic simulation using the concept of Lane Cut Points (LCPs) in the Spider programming environment. LCPs are storage buffers inserted into lane data structures at the road network partition edges. Vehicles exit a partition edge into an LCP at the end of every simulation step, and enter the neighbouring partition from that LCP. Spider, a parallel programming environment running on PVM, coordinates the execution of the parallel traffic simulation.
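The LCP mechanism described above amounts to a double-buffered hand-off at each partition edge: vehicles deposited at the end of step n become visible to the neighbouring partition only at step n+1. A minimal Python sketch of that buffer discipline (class and field names are invented for illustration; the paper's actual lane data structures are not shown in the abstract):

```python
class LaneCutPoint:
    """Storage buffer spliced into a lane where it crosses a partition
    edge. One partition deposits outbound vehicles at the end of a
    simulation step; the neighbouring partition drains them at the
    start of the next step."""
    def __init__(self):
        self._buffer = []

    def deposit(self, vehicle):
        # Called by the upstream partition at the end of a step.
        self._buffer.append(vehicle)

    def drain(self):
        # Called by the downstream partition at the start of the
        # next step; empties the buffer in arrival order.
        vehicles, self._buffer = self._buffer, []
        return vehicles

# A vehicle leaving partition A becomes visible to partition B
# only across the step boundary.
lcp = LaneCutPoint()
lcp.deposit({"id": 7, "speed": 13.9})   # end of step n, partition A side
arrivals = lcp.drain()                  # start of step n+1, partition B side
```

Because each partition only ever touches its own ends of the LCPs, the partitions can run concurrently between step boundaries.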
Media Philosophy and Philosophy of Education – A Plea for Exploring Their Interfaces
Much of varying substance has been said and written in recent years, from the most diverse perspectives, about the relation between media and education (Bildung). As with many pedagogical or psychological topics, the speaking and writing actors feel competent regardless of their education, training, preparation, or lack thereof, and regardless of the reach of their speech acts; there is no shortage of full-throated diagnoses, announcements, and proposed measures. What strikes me as remarkable, however, is that the relationship between media philosophy and the philosophy of education has so far largely remained a desideratum. This does not mean that no pertinent hints on the matter can be found in the history of philosophy or among the "classics of pedagogy" (Scheuerl 1979, Tenorth 2003). Nor does it mean that no corresponding current discourses have gotten under way. Quite the contrary: in current conference activity a number of thematically relevant emphases can be identified. Much suggests, however, that the "mediatic turn" has not yet arrived in the mainstream of educational theory and philosophy. In the most recent and in many respects excellent survey of educational theory and philosophy by Meyer-Wolters (2006), for example, media appear merely under the aspect of their accessibility (ibid.: 58). The mode of (co-)mentioning media, not infrequently in the form of noncommittal, generalist references to the relevance of technological developments, is very widespread. But even where media are declared a cross-disciplinary theme or a "priority topic" (Keuffer/Oelkers 2001), this generally implies no more fundamental change of perspective.
Broadly speaking, with regard to the internal perspectives of the educational sciences as a whole, it can be asserted that – apart from a few more or less visible sub-discourses – questions about the relevance of media, mediality, and medialization are very often taken up half-heartedly, hesitantly, or not at all, and even more often their import is misjudged.
Collective computing
The parallel computing model used in this paper, the Collective Computing Model (CCM), is a variant of the well-known Bulk Synchronous Parallel (BSP) model. The synchronicity imposed by the BSP model restricts the set of available algorithms and prevents the overlapping of computation and communication. Other models, like the LogP model, allow asynchronous computing and overlapping but depend on the use of specific libraries. The CCM describes a system exploited through a standard software platform providing facilities for group creation, collective operations and remote memory operations. Based on the BSP model, two kinds of supersteps are considered: division supersteps and normal supersteps. To illustrate these concepts, the Fast Fourier Transform algorithm is used. Computational results confirm the accuracy of the model on four different parallel computers: a Parsytec Power PC, a Cray T3E, a Silicon Graphics Origin 2000 and a Digital AlphaServer. (Track: Distribution and real time. Red de Universidades con Carreras en Informática (RedUNCI).)
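The superstep discipline that both BSP and the CCM's "normal supersteps" share can be simulated sequentially: every processor computes locally and posts messages, but the messages are delivered only at the barrier, becoming visible in the next superstep. The Python sketch below illustrates that discipline only; it is not the CCM platform's API, and division supersteps (group creation) are not modelled.

```python
def run_supersteps(p, steps, state):
    """Toy BSP-style executor over p simulated processors.
    Each element of `steps` is one superstep: a function
    compute(rank, local_state, inbox) -> (new_state, [(dest, msg)]).
    Messages posted in superstep i are delivered at the barrier and
    read from the inbox in superstep i+1."""
    inboxes = [[] for _ in range(p)]
    for compute in steps:
        outboxes = [[] for _ in range(p)]
        for rank in range(p):
            state[rank], sends = compute(rank, state[rank], inboxes[rank])
            for dest, msg in sends:
                outboxes[dest].append(msg)
        inboxes = outboxes   # barrier: all communication completes here
    return state

# Two supersteps: everyone sends its value to processor 0, which sums
# the arrivals in the following superstep.
gather = lambda rank, s, inbox: (s, [(0, s)])
reduce_step = lambda rank, s, inbox: (sum(inbox) if rank == 0 else s, [])
result = run_supersteps(4, [gather, reduce_step], [1, 2, 3, 4])
```

The point the sketch makes concrete is the cost of synchronicity the abstract mentions: no processor can act on a message within the superstep that sent it.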
Shape-based cost analysis of skeletal parallel programs
Institute for Computing Systems Architecture
This work presents an automatic cost-analysis system for an implicitly parallel skeletal programming language. Although deducing interesting dynamic characteristics of parallel programs (and in particular, run time) is well known to be an intractable problem in the general case, it can be alleviated by placing restrictions upon the programs which can be expressed. By combining two research threads which take this route, the "skeletal" and "shapely" paradigms, we produce a completely automated, computation- and communication-sensitive cost analysis system. This builds on earlier work in the area by quantifying communication as well as computation costs, with the former being derived for the Bulk Synchronous Parallel (BSP) model.
We present details of our shapely skeletal language and its BSP implementation strategy, together with an account of the analysis mechanism by which program behaviour information (such as shape and cost) is statically deduced. This information can be used at compile time to optimise a BSP implementation and to analyse computation and communication costs. The analysis has been implemented in Haskell. We consider different algorithms expressed in our language for some example problems and illustrate each BSP implementation, contrasting the analysis of their efficiency by traditional, intuitive methods with that achieved by our cost calculator. The accuracy of cost predictions by our cost calculator against the run time of real parallel programs is tested experimentally.
Previous shape-based cost analysis required all elements of a vector (our nestable bulk data structure) to have the same shape. We partially relax this strict requirement on data structure regularity by introducing new shape expressions in our analysis framework. We demonstrate that this allows us to achieve the first automated analysis of a complete derivation, the well-known maximum segment sum algorithm of Skillicorn and Cai.
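The maximum segment sum problem mentioned above is the classic target of such derivations because it can be recast as a reduction with an associative combining operator over (segment, prefix, suffix, total) tuples, which is what lets a skeletal/BSP implementation evaluate it in parallel and a cost analysis predict it. The Python sketch below shows that standard homomorphic formulation as an illustration; it is not the Skillicorn and Cai derivation itself.

```python
from functools import reduce

def singleton(x):
    """Map one element to (best segment, best prefix, best suffix, total).
    Empty segments are allowed, so sums are floored at 0."""
    m = max(x, 0)
    return (m, m, m, x)

def combine(a, b):
    """Associative combiner: because it is associative, the reduction can
    be evaluated as a balanced parallel tree rather than a left fold."""
    m1, p1, s1, t1 = a
    m2, p2, s2, t2 = b
    return (max(m1, m2, s1 + p2),   # best segment: left, right, or spanning
            max(p1, t1 + p2),       # best prefix of the concatenation
            max(s2, s1 + t2),       # best suffix of the concatenation
            t1 + t2)                # total sum

def mss(xs):
    # First tuple component is the maximum segment sum.
    return reduce(combine, map(singleton, xs))[0]
```

Replacing `reduce` with a parallel reduction skeleton changes the cost, not the answer, which is exactly the kind of trade a shape-based cost analysis is meant to quantify.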