Hardware Barrier Synchronization: Static Barrier MIMD (SBM)
In this paper, we give the design and performance analysis of a new, highly efficient synchronization mechanism called "Static Barrier MIMD" (SBM). Unlike traditional barrier synchronization, the proposed barriers are designed to facilitate the use of static (compile-time) code scheduling to eliminate some synchronizations. For this reason, our barrier hardware is more general than most hardware barrier mechanisms, allowing any subset of the processors to participate in each barrier. Since code scheduling typically operates on fine-grain parallelism, it is also vital that barriers be able to execute in a small number of clock ticks. The SBM is actually only one of two new classes of barrier machines proposed to facilitate static code scheduling; the other architecture is the "Dynamic Barrier MIMD" (DBM), which is described in a companion paper [1]. The DBM differs from the SBM in that the DBM employs more complex hardware to make the system less dependent on the precision of the static analysis and code scheduling; for example, an SBM cannot efficiently manage simultaneous execution of independent parallel programs, whereas a DBM can.
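The key architectural point is the participation mask: each barrier names exactly the processors that must arrive before any may proceed. Below is a minimal software sketch of that semantics only; the `SubsetBarrier` class and worker code are hypothetical illustrations, not the paper's hardware design, which resolves barriers in a few clock ticks.

```python
import threading

class SubsetBarrier:
    """Sketch of a barrier in which only a statically chosen subset of
    processors participates (illustrative, not the SBM hardware)."""

    def __init__(self, participant_ids):
        self.participants = frozenset(participant_ids)
        self._barrier = threading.Barrier(len(self.participants))

    def wait(self, pid):
        # Processors outside the mask fall straight through, as if the
        # barrier hardware simply ignored their sync lines.
        if pid in self.participants:
            self._barrier.wait()

# Example: a static schedule that synchronizes only processors 0 and 2,
# leaving processor 1 free to run independent work.
barrier = SubsetBarrier({0, 2})

def worker(pid):
    # ... fine-grain work placed here by the compile-time scheduler ...
    barrier.wait(pid)

threads = [threading.Thread(target=worker, args=(p,)) for p in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```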
Improving Model-Based Software Synthesis: A Focus on Mathematical Structures
Computer hardware keeps increasing in complexity. Software design needs to keep up with this. The right models and abstractions empower developers to leverage the novelties of modern hardware. This thesis deals primarily with Models of Computation, as a basis for software design, in a family of methods called software synthesis.
We focus on Kahn Process Networks and dataflow applications as abstractions, both for programming and for deriving an efficient execution on heterogeneous multicores. The latter we accomplish by exploring the design space of possible mappings of computation and data to hardware resources. Mapping algorithms are not at the center of this thesis, however. Instead, we examine the mathematical structure of the mapping space, leveraging its inherent symmetries or geometric properties to improve mapping methods in general.
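To make the symmetry idea concrete, consider a small hypothetical platform with two identical "big" cores and one "little" core: mappings that differ only by permuting the identical big cores are equivalent, so the search space can be quotiented before any mapping algorithm runs. The process and core names below are invented for illustration.

```python
from itertools import product

processes = ["src", "fft", "sink"]                       # hypothetical KPN processes
core_types = {"c0": "big", "c1": "big", "c2": "little"}  # hypothetical platform

def canonical(mapping):
    # Cores of the same type are interchangeable, so two mappings that
    # differ only by permuting identical cores describe the same design
    # point. Relabel the cores of each type in order of first use to pick
    # a canonical representative of each equivalence class.
    relabel, counters, out = {}, {}, []
    for proc in processes:
        core = mapping[proc]
        t = core_types[core]
        if core not in relabel:
            counters[t] = counters.get(t, 0) + 1
            relabel[core] = f"{t}{counters[t]}"
        out.append((proc, relabel[core]))
    return tuple(out)

all_mappings = [dict(zip(processes, combo))
                for combo in product(core_types, repeat=len(processes))]
distinct = {canonical(m) for m in all_mappings}
print(f"{len(all_mappings)} raw mappings, {len(distinct)} up to symmetry")
# -> 27 raw mappings, 14 up to symmetry
```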
This thesis thoroughly explores the process of model-based design, aiming to go beyond the more established software synthesis on dataflow applications. We start with the problem of assessing these methods through benchmarking, and go on to formally examine the general goals of benchmarks. In this context, we also consider the role modern machine learning methods play in benchmarking.
We explore different established semantics, stretching the limits of Kahn Process Networks. We also discuss novel models, like Reactors, a deterministic, adaptive model with time as a first-class citizen. By investigating abstractions and transformations in the Ohua language for implicit dataflow programming, we also focus on programmability.
The focus of the thesis is on the models and methods, but we evaluate them in diverse use cases, generally centered around Cyber-Physical Systems. These include the 5G telecommunication standard, as well as the automotive and signal processing domains. We even go beyond embedded systems and discuss use cases in GPU programming and microservice-based architectures.
Models, Composability, and Validity
Composability is the capability to select and assemble simulation components in various combinations into simulation systems to satisfy specific user requirements. The defining characteristic of composability is the ability to combine and recombine components into different simulation systems for different purposes. The ability to compose simulation systems from repositories of reusable components has been a highly sought-after goal among modeling and simulation developers. The expected benefits of robust, general composability include reduced simulation development cost and time, increased validity and reliability of simulation results, and increased involvement of simulation users in the process. Consequently, composability is an active research area, with both software engineering and theoretical approaches being developed. Composability exists in two forms: syntactic and semantic (also known as engineering and modeling, respectively). Syntactic composability is the implementation of components so that they can be connected. Semantic composability answers the question of whether the models implemented in the composition can be meaningfully composed.
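A standard way to see the gap between the two forms is that components can be connectable yet not meaningful together. The following toy illustration is hypothetical; the component names and the unit mismatch are invented.

```python
# Two simulation components that compose *syntactically*: the output type
# of one matches the input type of the other, so they can be wired together.
def radar_model(distance_m: float) -> float:
    """Emits range in metres (hypothetical component)."""
    return distance_m

def fuel_model(range_ft: float) -> float:
    """Expects range in feet; returns fuel needed (hypothetical component)."""
    return range_ft * 0.01

# Syntactic composability: this pipeline type-checks and runs...
fuel = fuel_model(radar_model(1000.0))

# ...but the composition is not *semantically* valid: radar_model speaks
# metres while fuel_model assumes feet, so the composed system silently
# computes a meaningless result. Semantic composability asks whether the
# composed models are jointly meaningful, not merely connectable.
```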
The 1991 3rd NASA Symposium on VLSI Design
Papers from the symposium are presented from the following sessions: (1) featured presentations 1; (2) very large scale integration (VLSI) circuit design; (3) VLSI architecture 1; (4) featured presentations 2; (5) neural networks; (6) VLSI architectures 2; (7) featured presentations 3; (8) verification 1; (9) analog design; (10) verification 2; (11) design innovations 1; (12) asynchronous design; and (13) design innovations 2.
High integrity hardware-software codesign
Programmable logic devices (PLDs) are increasing in complexity and speed, and are being used as important components in safety-critical systems. Methods for developing high-integrity software for these systems are well known, but this is not true for programmable logic. We propose a process for developing a system incorporating software and PLDs, suitable for safety-critical systems of the highest levels of integrity. This process incorporates the use of Synchronous Receptive Process Theory as a semantic basis for specifying and proving properties of programs executing on PLDs, and extends the use of SPARK Ada from a programming language for safety-critical systems software to cover the interface between software and programmable logic. We have validated this approach through the specification and development of a substantial safety-critical system incorporating both software and programmable logic components, and through the development of tools to support this work. This enables us to claim that the methods demonstrated are not only feasible but also scale up to realistic system sizes, allowing development of such safety-critical software-hardware systems to the levels required by current system safety standards.
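SPARK Ada's role here is to state and statically prove contracts on the software side of the software/PLD interface. As a loose analogue of the contract idea only: SPARK discharges such obligations at compile time with proof tools rather than checking them at run time, and `write_pld_register` below is an invented example, not part of the work described.

```python
# Rough run-time analogue of the design-by-contract idea that SPARK Ada
# enforces statically (hypothetical illustration only).
def contract(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result), "postcondition violated"
            return result
        return checked
    return wrap

# Hypothetical interface to a PLD register: the contract pins down the
# range of values the programmable-logic side is allowed to observe.
@contract(pre=lambda v: 0 <= v < 2**16, post=lambda r: r is None)
def write_pld_register(value):
    ...  # memory-mapped I/O to the PLD would go here
```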
Task assignment in parallel processor systems
A generic object-oriented simulation platform is developed in order to conduct experiments on the performance of assignment schemes. The simulation platform, called Genesis, is generic in the sense that it can model the key parameters that describe a parallel system: the architecture, the program, the assignment scheme and the message routing strategy. Genesis uses as its basis a sound architectural representation scheme developed in the thesis.
The thesis reports results from a number of experiments assessing the performance of assignment schemes using Genesis. The comparison results indicate that the new assignment scheme proposed in this thesis is a promising alternative to the work-greedy assignment schemes. The proposed scheme has a time complexity lower than those of the work-greedy schemes and achieves an average performance better than, or comparable to, those of the work-greedy schemes.
To generate an assignment, some parameters describing the program model are required. In many cases, accurate estimation of these parameters is hard. It is thought that inaccuracies in the estimation would lead to poor assignments. The thesis investigates this speculation and presents experimental evidence showing that such inaccuracies do not greatly affect the quality of the assignments.
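To illustrate both notions at once, here is a small hypothetical experiment in the spirit of the thesis: tasks are assigned using noisy cost estimates, and the resulting schedule is evaluated against the true costs. The greedy scheme below is a generic work-greedy stand-in (largest estimated task first, onto the least-loaded processor), not the thesis's proposed algorithm, and the cost model is invented.

```python
import heapq
import random

def assign_greedy(costs, n_procs):
    """Work-greedy stand-in: place the largest remaining task on the
    currently least-loaded processor. Sorting plus a heap gives
    O(n log n) for n tasks. Returns a processor id per task index."""
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = [0] * len(costs)
    for i in sorted(range(len(costs)), key=costs.__getitem__, reverse=True):
        load, proc = heapq.heappop(heap)
        assignment[i] = proc
        heapq.heappush(heap, (load + costs[i], proc))
    return assignment

def makespan(assignment, true_costs, n_procs):
    loads = [0.0] * n_procs
    for i, proc in enumerate(assignment):
        loads[proc] += true_costs[i]
    return max(loads)

random.seed(0)
true = [random.uniform(1, 10) for _ in range(50)]  # invented cost model
base = makespan(assign_greedy(true, 4), true, 4)

# Assign using noisy cost *estimates*, then evaluate against true costs:
# how much does estimation error actually degrade the schedule?
for noise in (0.1, 0.3, 0.5):
    est = [c * random.uniform(1 - noise, 1 + noise) for c in true]
    ratio = makespan(assign_greedy(est, 4), true, 4) / base
    print(f"noise +/-{noise:.0%}: makespan ratio {ratio:.3f}")
```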