Process-Oriented Parallel Programming with an Application to Data-Intensive Computing
We introduce process-oriented programming as a natural extension of
object-oriented programming for parallel computing. It is based on the
observation that every class of an object-oriented language can be instantiated
as a process, accessible via a remote pointer. The introduction of process
pointers requires no syntax extension, identifies processes with programming
objects, and enables processes to exchange information simply by executing
remote methods. Process-oriented programming is a high-level language
alternative to multithreading, MPI, and the many other languages, environments,
and tools currently used for parallel computation. It implements natural
object-based parallelism with only a minimal syntax extension of existing
languages, such as C++ and Python, and therefore has the potential to lead to
widespread adoption of parallel programming. We implemented a prototype system
for running processes using C++ with MPI and used it to compute a large
three-dimensional Fourier transform on a computer cluster built of commodity
hardware components. The three-dimensional Fourier transform is a prototypical
data-intensive application with a complex data-access pattern. The
process-oriented code is only a few hundred lines long, and it attains very high
data throughput by achieving massive parallelism and maximizing hardware
utilization.
Comment: 20 pages, 1 figure
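The central idea, an ordinary class whose instances live in separate processes and are driven through a remote pointer, can be sketched in standard Python using the `multiprocessing.managers` module. This is only an illustrative analogue of the concept, not the authors' C++/MPI system; the `Accumulator` class and all names below are hypothetical examples.

```python
from multiprocessing.managers import BaseManager

class Accumulator:
    """An ordinary class; instances will live in a separate server process."""
    def __init__(self):
        self.total = 0

    def add(self, value):
        self.total += value
        return self.total

class ProcessManager(BaseManager):
    """Manager whose registered classes are instantiated in a server process."""

# Registering the class lets the manager construct instances remotely.
ProcessManager.register("Accumulator", Accumulator)

def demo():
    # The manager starts a separate process. Accumulator() returns a proxy,
    # i.e. a "remote pointer"; calling its methods executes them in that
    # process, and return values travel back to the caller.
    with ProcessManager() as manager:
        acc = manager.Accumulator()
        acc.add(3)
        return acc.add(4)

if __name__ == "__main__":
    print(demo())
```

Note how no new syntax is involved: `acc.add(4)` reads exactly like a local method call, which is the property the abstract emphasizes.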
Computational experiments with a three-dimensional model of the Cochlea
We present results from a series of compute-intensive simulation experiments employing a realistic and detailed three-dimensional model of human cochlear macro-mechanics. The model uses the immersed boundary method to compute the fluid-structure interactions within the cochlea, is based on an accurate cochlear geometry obtained from physical measurements, and includes detailed descriptions of the elastic material components immersed in the fluid; it builds on a previously developed immersed boundary method for elastic shells. The basilar membrane is modeled by a fourth-order partial differential equation of shell theory. The results reproduce the basic, well-known characteristics of cochlear mechanics and constitute a successful initial step in model validation.
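One core substep of any immersed boundary computation is spreading a Lagrangian force carried by the immersed structure onto the Eulerian fluid grid through a regularized delta function. The sketch below does this for a single point force on a one-dimensional grid using Peskin's four-point cosine delta; it is a minimal analogue of that substep under assumed parameters, not the cochlear model itself.

```python
import numpy as np

def peskin_delta(r):
    """Peskin's 4-point cosine delta (argument in grid units, support |r| < 2)."""
    r = np.abs(r)
    return np.where(r < 2.0, 0.25 * (1.0 + np.cos(np.pi * r / 2.0)), 0.0)

def spread_force(X, F, n, h):
    """Spread a Lagrangian point force F at position X onto an n-point grid
    of spacing h, returning the resulting grid force density."""
    grid = np.arange(n) * h
    # delta_h(x - X) = phi((x - X) / h) / h
    weights = peskin_delta((grid - X) / h) / h
    return F * weights

# Illustrative values: 64 grid points on [0, 1), one force of magnitude 2.5.
n, h = 64, 1.0 / 64
f = spread_force(X=0.37, F=2.5, n=n, h=h)
total = f.sum() * h  # discrete integral of the spread force density
```

Because the cosine delta sums to one over the grid, the discrete integral `total` recovers the original point force exactly, a conservation property the method relies on. Velocity interpolation back to the structure uses the same weights in reverse.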
A Comprehensive Three-Dimensional Model of the Cochlea
The human cochlea is a remarkable device, able to discern sound pressure
waves of extremely small amplitude and to discriminate between very close
frequencies. Simulation of the cochlea is computationally challenging due to
its complex geometry, intricate construction and small physical size. We have
developed, and are continuing to refine, a detailed three-dimensional
computational model based on an accurate cochlear geometry obtained from
physical measurements. In the model, the immersed boundary method is used to
calculate the fluid-structure interactions produced in response to incoming
sound waves. The model includes a detailed and realistic description of the
various elastic structures present.
In this paper, we describe the computational model and its performance on the
latest generation of shared-memory servers from Hewlett Packard. Using
compiler-generated threads and OpenMP directives, we have achieved a high degree
of parallelism in the executable, which has made possible several large-scale
numerical simulation experiments that study the interesting features of the
cochlear system. We show several results from these simulations, reproducing
some of the basic known characteristics of cochlear mechanics.
Comment: 22 pages, 5 figures
TOPS (Terascale Optimal PDE Simulations)
Summary. Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed the FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators.

A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating-point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, at quite high resolution, relatively easily. This work is supported by high-quality software systems, such as Argonne's PETSc library, which facilitates code development as well as access to a variety of parallel and distributed computer systems. Our results illustrate the success in finding domain decomposition algorithms that remain scalable and robust for very large numbers of processors and very large finite element problems.

Our work over these five and a half years has, in our opinion, significantly advanced the knowledge of domain decomposition methods. We see these methods as providing valuable alternatives to other iterative methods, in particular those based on multigrid. In our opinion, our accomplishments also match the goals of the TOPS project quite closely.
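The basic substructuring idea behind such solvers can be illustrated in miniature with a one-level additive Schwarz preconditioner for conjugate gradients: the domain is split into overlapping pieces, each local problem is solved independently (and hence in parallel), and the local corrections are summed. This is a deliberately simplified sketch on a 1-D Poisson problem, not FETI-DP or BDDC, and all sizes and subdomain choices below are illustrative.

```python
import numpy as np

def poisson_1d(n):
    """Tridiagonal stiffness matrix for -u'' on n interior points, h = 1/(n+1)."""
    h = 1.0 / (n + 1)
    A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return A

def additive_schwarz(A, subdomains):
    """One-level additive Schwarz: M^{-1} r = sum_i R_i^T A_i^{-1} R_i r."""
    local = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    def apply(r):
        z = np.zeros_like(r)
        for idx, Ainv in local:      # each local solve is independent
            z[idx] += Ainv @ r[idx]
        return z
    return apply

def pcg(A, b, precond, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradient iteration."""
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 60
A = poisson_1d(n)
b = np.ones(n)
# Two overlapping subdomains; the overlap is what makes one-level Schwarz work.
subdomains = [np.arange(0, 35), np.arange(25, 60)]
x = pcg(A, b, additive_schwarz(A, subdomains))
```

The subdomain solves touch only local data, which is the memory-hierarchy friendliness the summary refers to; production methods such as FETI-DP and BDDC add a coarse problem so that convergence stays bounded as the number of subdomains grows.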