Broadcasting in Hyper-cylinder graphs
Broadcasting in computer networking means the dissemination of information, initially known only at some nodes, to all members of the network. The goal is to inform every node in the minimum possible time. Several models of broadcasting exist; the simplest and oldest is called the Classical model. In the Classical model, dissemination happens in synchronous rounds, and in each round an informed node may inform at most one of its neighbors. The broadcast question is: what is the minimum number of rounds needed for broadcasting, and which broadcast scheme achieves it?
For general graphs, these questions are NP-hard, and the broadcast time is NP-hard to approximate within a factor of 3 - ε for any real ε > 0. Even for some very restricted classes of graphs, the problem remains NP-hard. Little is known about broadcasting in restricted graph classes, and only a few such classes admit a polynomial-time solution.
Parallel and distributed computing is one of the important domains that rely on efficient broadcasting. Hypercubes and tori are among the most widely used network topologies in this domain. Their widespread use is due not only to their simplicity but also to their efficiency and high robustness (e.g., fault tolerance) combined with an acceptable number of links. In this thesis, it is observed that the Cartesian product of a number of path and cycle graphs produces a valuable set of topologies, which we call hyper-cylinders, and which contains hypercubes and tori as special cases. Any hyper-cylinder shares many of the beneficial features of the hypercube and torus and might be a suitable substitute in some cases. Some hyper-cylinders also resemble other topologies used in practice, such as cube-connected cycles. This thesis studies the effect of the Cartesian product on broadcasting and the broadcast time of hyper-cylinders under the Classical and Messy models, adding a valuable class of graphs to the limited set of classes with polynomially computable broadcast time. Finally, the relation between worst-case originators and diameters in trees is studied, which may help in the broadcast study of a larger class of graphs in which any tree, rather than just a path, is allowed as a factor in the Cartesian product.
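To make the Classical model concrete, here is a minimal Python sketch (an illustration of the definitions above, not the thesis's construction or algorithm; all names are mine). It builds a small hyper-cylinder as the Cartesian product of a path and a cycle, then runs one greedy broadcast scheme in which, in every round, each informed node calls the uninformed neighbour that has the most uninformed neighbours. The greedy rule is only a heuristic, so the number of rounds it uses is an upper bound on the true broadcast time, which in turn is at least ceil(log2 n).

```python
# Hypothetical sketch: hyper-cylinders as Cartesian products, plus a greedy
# Classical-model broadcast scheme (a heuristic upper bound, not optimal).
from itertools import product

def path(n):   # path graph P_n on vertices 0..n-1
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}

def cycle(n):  # cycle graph C_n
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def cartesian(g, h):
    """Cartesian product: (u,v) ~ (u',v') iff u = u' and v ~ v', or v = v' and u ~ u'."""
    return {(u, v): {(u2, v) for u2 in g[u]} | {(u, v2) for v2 in h[v]}
            for u, v in product(g, h)}

def greedy_broadcast_rounds(g, origin):
    """Rounds used by a greedy scheme: per round, each informed node calls
    at most one uninformed neighbour, and no node is called twice."""
    informed, rounds = {origin}, 0
    while len(informed) < len(g):
        newly = set()
        for u in sorted(informed):
            frontier = [w for w in g[u] if w not in informed and w not in newly]
            if frontier:  # call the neighbour that can spread furthest next round
                newly.add(max(frontier,
                              key=lambda w: sum(x not in informed for x in g[w])))
        informed |= newly
        rounds += 1
    return rounds

hc = cartesian(path(3), cycle(4))            # a 12-node hyper-cylinder P3 x C4
print(greedy_broadcast_rounds(hc, (0, 0)))   # heuristic rounds; b >= ceil(log2 12) = 4
```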
A bibliography on parallel and vector numerical algorithms
This is a bibliography of numerical methods. It also includes a number of other references on machine architecture, programming languages, and other topics of interest to scientific computing. Certain conference proceedings and anthologies that have been published in book form are also listed.
Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1
Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures, using candidate SDI weapons-to-target assignment algorithms as workloads, were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed, and capabilities that will be required for both individual tools and an integrated toolset were identified.
Hypercube-Based Topologies With Incremental Link Redundancy
Hypercube structures have received a great deal of attention due to the attractive properties inherent in their topology. Parallel algorithms targeted at this topology can be partitioned into many tasks, each of which runs on one node processor. A high degree of performance is achievable by running every task individually and concurrently on each node processor available in the hypercube. Nevertheless, performance can be greatly degraded if the node processors spend much of their time just communicating with one another. The goal in designing hypercubes is, therefore, to achieve a high ratio of computation time to communication time. This dissertation primarily addresses ways to enhance system performance by minimizing the communication time among processors. The need for improving the performance of hypercube networks is clearly explained. Three novel topologies related to hypercubes, with improved performance, are proposed and analyzed. Firstly, the Bridged Hypercube (BHC) is introduced. It is shown that this design is remarkably more efficient and cost-effective than the standard hypercube due to its low diameter. Basic routing algorithms, such as one-to-one routing and broadcasting, are developed for the BHC and proven optimal. Shortcomings of the BHC, such as its asymmetry and limited applicability, are discussed. Next, the Folded Hypercube (FHC), a symmetric network with low diameter and low node degree, is introduced. This new topology is shown to support highly efficient communication among the processors. For the FHC, optimal routing algorithms are developed and proven to be remarkably more efficient than those of the conventional hypercube. For both the BHC and the FHC, network parameters such as average distance, message traffic density, and communication delay are derived and comparatively analyzed. Lastly, to enhance the fault tolerance of the hypercube, a new design called the Fault-Tolerant Hypercube (FTH) is proposed. The FTH is shown to exhibit graceful degradation in performance in the presence of faults. Probabilistic models based on Markov chains are employed to characterize the fault tolerance of the FTH, and the results are verified by Monte Carlo simulation. The most attractive feature of all three new topologies is the asymptotically zero overhead associated with them. The designs are simple and implementable, and they lend themselves to many parallel processing applications requiring a high degree of performance.
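As a small side illustration of why complementary links lower the diameter (a hypothetical sketch, not taken from the dissertation), the code below compares point-to-point distances in an ordinary n-cube and in a folded hypercube, where every node gains one extra link to its bitwise complement. A route may use that link at most once and then repair the bits that still differ, giving distance min(H, n - H + 1) for Hamming distance H.

```python
# Hypothetical sketch: distance in a folded hypercube (FHC) vs. a hypercube.
def hamming(u, v):
    return bin(u ^ v).count("1")

def hypercube_distance(u, v):
    return hamming(u, v)            # flip the differing bits one hop at a time

def fhc_distance(u, v, n):
    h = hamming(u, v)
    # either route normally (h hops), or take the complement edge once
    # (1 hop) and then fix the n - h bits that now differ
    return min(h, n - h + 1)

n = 4
diam_q  = max(hypercube_distance(u, v) for u in range(2**n) for v in range(2**n))
diam_fq = max(fhc_distance(u, v, n)    for u in range(2**n) for v in range(2**n))
print(diam_q, diam_fq)   # 4 2: the complementary links roughly halve the diameter
```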
The NON-VON Supercomputer
NON-VON is a highly parallel, non-von Neumann supercomputer, portions of which are now being implemented in the Computer Science Department at Columbia University. The machine is intended to support the extremely rapid execution of large-scale data manipulation tasks, including relational database operations and many other functions relevant to commercial data processing. The NON-VON architecture includes a tree-structured Primary Processing Subsystem (PPS), which we are implementing using custom nMOS VLSI circuits, along with a Secondary Processing Subsystem (SPS) based on a bank of intelligent disk drives. A high-bandwidth parallel interface provides for rapid data transfer between the two subsystems. This paper describes the organization of the NON-VON machine, with particular emphasis on the structure and function of the PPS. Some of the most important NON-VON programming techniques are then outlined, and their application to typical data processing tasks is illustrated with simple examples.
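The sketch below is a toy rendering of the tree-structured PPS idea (the record layout, function name, and query are hypothetical illustrations, not NON-VON's instruction set): a selection predicate is pushed down a binary tree of processing elements, each leaf filters its local records, and the matches are merged on the way back up, which is the shape a relational selection would take on such hardware.

```python
# Hypothetical toy of a relational selection on a tree of processing elements.
# Leaves are lists of local records; internal nodes are (left, right) pairs.
def tree_select(node, predicate):
    if isinstance(node, list):                  # leaf PE: scan local records
        return [r for r in node if predicate(r)]
    left, right = node                          # internal PE: two subtrees
    # conceptually the two recursive calls run in parallel on disjoint PEs
    return tree_select(left, predicate) + tree_select(right, predicate)

pps = ([{"id": 1, "qty": 3}, {"id": 2, "qty": 9}],       # leaf PE 0
       ([{"id": 3, "qty": 7}], [{"id": 4, "qty": 1}]))   # subtree of PEs 1 and 2
print(tree_select(pps, lambda r: r["qty"] > 5))          # records with id 2 and 3
```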
Some Optimally Adaptive Parallel Graph Algorithms on EREW PRAM Model
The study of graph algorithms is an important area of research in computer science, since graphs offer useful tools to model many real-world situations. The commercial availability of parallel computers has led to the development of efficient parallel graph algorithms.
Using an exclusive-read, exclusive-write (EREW) parallel random access machine (PRAM) with a fixed number of processors as the computation model, we design and analyze parallel algorithms for seven undirected graph problems: connected components, spanning forest, fundamental cycle set, bridges, bipartiteness, assignment problems, and approximate vertex coloring. For all but the last two problems, the input data structure is an unordered list of edges, and divide-and-conquer is the paradigm for designing the algorithms. One of the algorithms for the assignment problem makes use of an appropriate variant of the dynamic programming strategy. An elegant data structure, called the adjacency list matrix, used in a vertex-coloring algorithm, avoids the sequential nature of linked adjacency lists.
Each of the proposed algorithms achieves optimal speedup by choosing an optimal granularity (thus exploiting maximum parallelism) that depends on the density or the number of vertices of the given graph. The processor-(time)² product has been identified as a useful measure of the cost-effectiveness of a parallel algorithm, and we derive a lower bound on this measure for each of our algorithms.
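As a compressed sketch of the divide-and-conquer flavour used on the unordered edge list (sequential Python standing in for the EREW PRAM; the chunk loop marks where disjoint processor groups would work independently, and the union-find used to merge partial results is my choice of data structure, not necessarily the thesis's):

```python
# Hypothetical sketch: connected components from an unordered edge list,
# processed in chunks that could be handled by independent processor groups.
def components(n_vertices, edges, parts=2):
    parent = list(range(n_vertices))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    # "conquer" step: union is associative, so partial forests computed on
    # separate edge chunks can be merged by simply continuing the unions
    chunk = max(1, len(edges) // parts)
    for i in range(0, len(edges), chunk):
        for u, v in edges[i:i + chunk]:
            parent[find(u)] = find(v)
    return [find(v) for v in range(n_vertices)]

print(components(6, [(0, 1), (1, 2), (3, 4)]))  # components {0,1,2}, {3,4}, {5}
```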
The instruction systolic array (ISA) and simulation of parallel algorithms
Systolic arrays have proved to be well suited to Very Large Scale Integration (VLSI) technology since they:
-Consist of a regular network of simple processing cells,
-Use local communication between the processing cells only,
-Exploit a maximal degree of parallelism.
However, systolic arrays have one main disadvantage compared with other parallel computer architectures: they are special-purpose architectures capable of executing only one algorithm; a systolic array designed for sorting, for example, cannot be used to perform matrix multiplication (a toy simulation of such a single-purpose array is sketched below).
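As an illustration of such a single-purpose array, the following sketch simulates, tick by tick, a hypothetical linear systolic array hard-wired for matrix-vector multiplication: each cell owns one row of A, the x values pulse through one cell per clock tick, and each cell performs one multiply-accumulate per tick using only its own register and the value passing by.

```python
# Hypothetical sketch of a "hard" systolic array for y = A x.
def systolic_matvec(A, x):
    n = len(A)
    y = [0] * n
    pipe = [None] * n                      # the x value currently at each cell
    for t in range(2 * n - 1):             # enough ticks to drain the pipeline
        pipe = [x[t] if t < n else None] + pipe[:-1]   # shift x one cell right
        for i, xj in enumerate(pipe):
            if xj is not None:
                j = t - i                  # index of the x element now at cell i
                y[i] += A[i][j] * xj       # one local multiply-accumulate per tick
        # every active cell works in the same tick: regular, local, parallel,
        # i.e. exactly the three properties listed above
    return y

A = [[1, 2], [3, 4]]
print(systolic_matvec(A, [5, 6]))          # [17, 39], which equals A x
```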
Several approaches have been made to make systolic arrays more flexible, in order to be able to handle different problems on a single systolic array.
In this thesis an alternative concept to a VLSI architecture, the Soft-Systolic Simulation System (SSSS), is introduced and developed as a working model of a virtual machine with the power to simulate hard systolic arrays and more general forms of concurrency such as the SIMD and MIMD models of computation. The virtual machine includes a processing element consisting of a soft-systolic processor implemented in the virtual machine language. The processing element considered here is a very general element which allows the choice of a wide range of arithmetic and logical operators and allows the simulation of a wide class of algorithms; in principle, extra processing cells can be added to form a library, which can be tailored to individual needs.
The virtual machine chosen for this implementation is the Instruction Systolic Array (ISA). The ISA has a number of interesting features: firstly, it has been used to simulate all SIMD algorithms and many MIMD algorithms by a simple program transformation technique; further, the ISA can also simulate the so-called wavefront processor algorithms, as well as many hard systolic algorithms. The ISA removes the need for the broadcasting of data, which is a feature of SIMD algorithms (limiting the size of the machine and its cycle time), and also presents a fairly simple communication structure for MIMD algorithms.
The model of systolic computation developed from the VLSI approach to systolic arrays is such that the processing surface is fixed, as are the processing elements or cells, by virtue of their being embedded in the processing surface. The VLSI approach therefore freezes instructions and hardware relative to the movement of data. The virtual machine and soft-systolic programming retain the VLSI array design features of regularity, simplicity and local communication, while allowing the movement of instructions with respect to data. Data can be frozen into the structure with instructions moving systolically; alternatively, both the data and the instructions can move systolically around the virtual processors (which are deemed fixed relative to the underlying architecture).
The ISA is implemented in OCCAM programs whose execution and output implicitly confirm the correctness of the design. The soft-systolic preparation comprises the usual operating system facilities for the creation and modification of files during the development of new programs and ISA processor elements. We allow any concurrent high-level language to be used to model the soft-systolic program. Consequently, the Replicating Instruction Systolic Array Language (RISAL) was devised to provide a very primitive programming environment for the ISA, yet one adequate for testing. RISAL accepts instructions in an assembler-like form, but is fairly permissive about the format of statements, subject of course to syntax. The RISAL compiler is adapted to transform the soft-systolic program description (RISAL) into a form suitable for the virtual machine (simulating the algorithm) to run. Finally, we conclude that the principles mentioned here can form the basis for a soft-systolic simulator using an orthogonally connected mesh of processors. The wide range of algorithms which the ISA can simulate makes it suitable as a virtual simulating grid.
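A toy rendering of the ISA principle (illustrative only; real ISA instruction streams and RISAL syntax differ): the data stays frozen in a row of cells while a single instruction, here "add your left neighbour's value", marches systolically across the array one cell per time step, and the passing wave leaves the prefix sums behind.

```python
# Hypothetical sketch: instructions move, data stays (the ISA idea).
def isa_wave(cells, instruction):
    """Run one instruction through a 1-D array, reaching cell t at time t.
    Cell t-1 has already executed, so the wavefront timing is respected."""
    for t in range(1, len(cells)):
        cells[t] = instruction(cells[t], cells[t - 1])
    return cells

data = [3, 1, 4, 1, 5]
print(isa_wave(data, lambda own, left: own + left))   # [3, 4, 8, 9, 14]
```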
Circuit simulation using distributed waveform relaxation techniques
Simulation plays an important role in the design of integrated circuits. Due to the high costs and long delays involved in fabrication, simulation is commonly used to verify functionality and to predict performance before fabrication. This thesis describes the analysis, implementation and performance evaluation of a distributed-memory parallel waveform relaxation technique for the electrical circuit simulation of MOS VLSI circuits. The waveform relaxation technique exhibits inherent parallelism due to the partitioning of a circuit into a number of sub-circuits, which can be simulated concurrently on parallel processors. Different forms of parallelism in the direct method and in the waveform relaxation technique are studied. An analysis of single-queue and distributed-queue approaches to implementing parallel waveform relaxation on distributed-memory machines is performed, and their performance implications are studied. The distributed-queue approach, selected for exploiting the coarse-grain parallelism across sub-circuits, is described. Parallel waveform relaxation programs based on the Gauss-Seidel and Gauss-Jacobi techniques are implemented on a network of eight Transputers. Static and dynamic load balancing strategies are studied, and a dynamic load balancing algorithm is developed and implemented. Results of the parallel implementation are analyzed to identify sources of bottlenecks. This thesis has demonstrated the applicability of a low-cost distributed-memory multi-computer system to the simulation of MOS VLSI circuits. Speed-up measurements show that a five-fold improvement in the speed of calculations can be achieved using a full-window parallel Gauss-Jacobi waveform relaxation algorithm. Analysis of overheads shows that load imbalance is the major source of overhead and that the fraction of the computation which must be performed sequentially is very low. Communication overhead depends on the nature of the parallel architecture and the design of the communication mechanisms. The run-time environment (parallel processing framework) developed in this research exploits features of the Transputer architecture to reduce the effect of communication overhead by effectively overlapping computation with communication and by running communication processes at a higher priority. This research will contribute to the development of low-cost, high-performance workstations for the computer-aided design and analysis of VLSI circuits.
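To show the structure that is being parallelized, here is a minimal Gauss-Jacobi waveform relaxation sketch in Python on two coupled scalar ODEs standing in for two sub-circuits (the equations, forward-Euler integrator, and sweep count are my simplifications, not the thesis's simulator). Each subsystem is integrated over the whole time window against the other's waveform from the previous sweep, so within a sweep the two integrations are independent and could run on separate Transputers.

```python
# Hypothetical sketch: Gauss-Jacobi waveform relaxation on the coupled pair
#   x' = -x + 0.5*y,   y' = -y + 0.5*x,   x(0) = 1, y(0) = 0.
def euler(f, x0, coupling, h, steps):
    """Forward-Euler integration of x' = f(x, c) against a fixed coupling
    waveform c, sampled on the same time grid."""
    xs = [x0]
    for k in range(steps):
        xs.append(xs[-1] + h * f(xs[-1], coupling[k]))
    return xs

h, steps, sweeps = 0.01, 200, 8
x = [1.0] * (steps + 1)   # initial guess: constant waveforms
y = [0.0] * (steps + 1)
for _ in range(sweeps):
    # Gauss-Jacobi: BOTH sub-circuits read the previous sweep's waveforms,
    # so the two euler() calls are independent and parallelizable
    x_new = euler(lambda xv, yv: -xv + 0.5 * yv, 1.0, y, h, steps)
    y_new = euler(lambda yv, xv: -yv + 0.5 * xv, 0.0, x, h, steps)
    x, y = x_new, y_new
print(round(x[-1], 4), round(y[-1], 4))   # end-of-window values after 8 sweeps
```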