Exploiting iteration-level parallelism in dataflow programs
The term "dataflow" generally encompasses three distinct aspects of computation: a data-driven model of computation, a functional/declarative programming language, and a special-purpose multiprocessor architecture. In this paper we decouple the language and architecture issues by demonstrating that declarative programming is a suitable vehicle for programming conventional distributed-memory multiprocessors. This is achieved by applying several transformations to the compiled declarative program to achieve iteration-level (rather than instruction-level) parallelism. The transformations first group individual instructions into sequential lightweight processes, and then insert primitives to (1) distribute array allocation over multiple processors, and (2) cause computation to follow the data distribution by inserting an index-filtering mechanism into a given loop and spawning a copy of it on all PEs; the filter causes each instance of that loop to operate on a different subrange of the index variable. The underlying model of computation is a dataflow/von Neumann hybrid in that execution within a process is control-driven, while the creation, blocking, and activation of processes is data-driven. The performance of this process-oriented dataflow system (PODS) is demonstrated using the hydrodynamics simulation benchmark SIMPLE, on which a 19-fold speedup on a 32-processor architecture has been achieved.
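The index-filtering transformation described in the abstract can be sketched as follows. This is an illustrative reconstruction, not code from the paper: the names (`NUM_PES`, `owned_range`, `filtered_loop`) and the block distribution of consecutive elements are assumptions for the sake of the example. Every PE runs an identical copy of the loop, and the inserted filter restricts each copy to the index subrange whose array elements that PE owns.

```python
# Hypothetical sketch of index filtering: each PE executes the same
# loop, but only over the subrange of the index variable it owns.

NUM_PES = 4
N = 16  # total iterations / array elements

def owned_range(pe_id, n, num_pes):
    """Block distribution: consecutive elements assigned to each PE."""
    chunk = (n + num_pes - 1) // num_pes
    lo = pe_id * chunk
    return range(lo, min(lo + chunk, n))

def filtered_loop(pe_id, a):
    # The index filter makes each spawned loop instance operate on a
    # different subrange; with single assignment there is exactly one
    # producer per element, so no synchronization on writes is needed.
    for i in owned_range(pe_id, len(a), NUM_PES):
        a[i] = i * i

a = [0] * N
for pe in range(NUM_PES):  # simulate all PEs running their loop copies
    filtered_loop(pe, a)
print(a[:5])  # [0, 1, 4, 9, 16]
```

Because the subranges partition the index space, running the PE loops in any order (or concurrently) produces the same result.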
Executing matrix multiply on a process oriented data flow machine
The Process-Oriented Dataflow System (PODS) is an execution model that combines the von Neumann and dataflow models of computation to gain the benefits of each. Central to PODS is the concept of array distribution and its effects on the partitioning and mapping of processes. In PODS, arrays are partitioned by simply assigning consecutive elements to each processing element (PE) equally. Since PODS uses single assignment, there is only one producer of each element; this producing PE owns the element and performs the computations needed to assign it. Using this approach, the filling loop is distributed across the PEs. This simple partitioning and mapping scheme provides excellent results for executing scientific code on MIMD machines. In this way PODS allows MIMD machines to exploit vector and data parallelism easily, while still providing the flexibility of MIMD over SIMD for multi-user systems. In this paper, the classic matrix multiply algorithm, with 1024 data points, is executed on a PODS simulator and the results are presented and discussed. Matrix multiply is a good example because it has several interesting properties: there are multiple code-blocks; a new array must be dynamically allocated and distributed; there is a loop-carried dependency in the innermost loop; the two input arrays have different access patterns; and the sizes of the input arrays are not known at compile time. Matrix multiply also forms the basis of many important scientific algorithms, such as LU decomposition, convolution, and the Fast Fourier Transform. The results show that PODS is comparable to both Iannucci's Hybrid Architecture and MIT's TTDA in terms of overhead and instruction power. They also show that PODS distributes the workload evenly across the PEs. The key result is that PODS can scale matrix multiply in a near-linear fashion until there is little or no work to be performed by each PE; at that point, overhead and message passing become a major component of the execution time. With larger problems (e.g., ≥16K data points) this limit would be reached at around 256 PEs.
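The owner-computes partitioning described above can be illustrated with a minimal sketch. This is not the PODS simulator: the function names and the row-wise block distribution of the result matrix are assumptions made for the example. Each PE owns a block of consecutive rows of the result C and performs all the computation that assigns those elements.

```python
# Illustrative owner-computes matrix multiply: each PE computes only
# the rows of C it owns (single assignment: one producer per element).

def owned_rows(pe_id, n, num_pes):
    """Block of consecutive rows owned by this PE."""
    chunk = (n + num_pes - 1) // num_pes
    return range(pe_id * chunk, min((pe_id + 1) * chunk, n))

def matmul_on_pe(pe_id, A, B, C, num_pes):
    n = len(A)
    for i in owned_rows(pe_id, n, num_pes):
        for j in range(n):
            s = 0
            for k in range(n):         # loop-carried dependency: accumulation
                s += A[i][k] * B[k][j]
            C[i][j] = s                # this PE owns and assigns C[i][j]

n, num_pes = 4, 2
A = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # identity
B = [[i * n + j for j in range(n)] for i in range(n)]
C = [[None] * n for _ in range(n)]
for pe in range(num_pes):              # simulate both PEs
    matmul_on_pe(pe, A, B, C, num_pes)
assert C == B                          # identity @ B == B
```

Note the two access patterns the abstract mentions: A is read by rows (local to the owner of those rows under this distribution), while B is read by columns, which is what generates remote accesses on a distributed-memory machine.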
Automatic data/program partitioning using the single assignment principle
Loosely coupled MIMD architectures do not suffer from memory contention; hence large numbers of processors may be utilized. The main problem, however, is how to partition data and programs in order to exploit the available parallelism. In this paper we show that efficient schemes for automatic data/program partitioning and synchronization may be employed if single assignment is used. Using simulations of program loops common in scientific computations (the Livermore Loops), we demonstrate that only a small fraction of data accesses are remote, and thus the degradation in network performance due to multiprocessing is minimal.
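The single-assignment principle that underpins this partitioning scheme can be sketched with a write-once cell, in the spirit of I-structures. This is an illustrative assumption, not code from the paper: because each element has exactly one producer, ownership of every write is unambiguous, and synchronization reduces to detecting whether a cell has been filled.

```python
# Sketch of a write-once (single-assignment) cell. Exactly one producer
# may assign it; a second assignment is an error, and a consumer reading
# an empty cell would have to block (here: raise) until it is filled.

class WriteOnce:
    _EMPTY = object()  # sentinel distinct from any stored value

    def __init__(self):
        self._v = WriteOnce._EMPTY

    def put(self, v):
        if self._v is not WriteOnce._EMPTY:
            raise RuntimeError("second assignment violates single assignment")
        self._v = v

    def get(self):
        if self._v is WriteOnce._EMPTY:
            raise RuntimeError("read of empty cell: consumer must wait")
        return self._v

cell = WriteOnce()
cell.put(42)
print(cell.get())  # 42
```

In a real distributed runtime the empty/full bit would trigger blocking and reactivation of consumer processes rather than an exception; the point is that single assignment makes the producer of every element statically known, so data placement and synchronization can be derived automatically.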
Investigation Into Laser Shock Processing
Laser shock processing is a good candidate for surface treatment in industry owing to its rapid processing, localized ablation, and precision of operation. In the current study, laser shock processing of steel was considered. Numerical solutions for the temperature rise and for the recoil pressure developed across the interface between the ablating front and the solid are presented. The propagation of elastic-plastic waves in the solid due to recoil pressure loading at the surface is analyzed, and a numerical solution for the wave propagation was obtained. An experiment was conducted to ablate steel surfaces for shock processing. Scanning electron microscopy was carried out to examine the ablated surfaces after shock processing, while transmission electron microscopy was conducted to obtain dislocation densities after shock processing. It was found that the surface hardness of the workpiece increased to about 1.8 times the base material hardness, and that dislocations were the main source of shock hardening in the region affected by laser shock processing.
Reginald Heber Smith and Justice and the Poor in the 21st Century
Reginald Heber Smith's 1919 book, Justice and the Poor, is one of the most important books about the legal profession in history. It found that people without money were denied access to the courts. Smith argued that this failure to provide equal justice undermined the social fabric of the nation. Accordingly, he urged a number of actions, including simplifying court procedures, creating small claims courts, and providing the poor with access to lawyers. These lawyers would deliver a full range of legal services to their clients, including seeking reform of the substantive laws that burdened the poor. Smith's book shamed the elite bar into action and led to the creation of the modern legal aid movement. As we come upon the 100th anniversary of its publication, Justice and the Poor reminds us that we are not much closer to Smith's vision of equal justice than we were in 1919.