apeNEXT: A multi-TFlops Computer for Simulations in Lattice Gauge Theory
We present the APE (Array Processor Experiment) project for the development
of dedicated parallel computers for numerical simulations in lattice gauge
theories. While APEmille is a production machine in today's physics simulations
at various sites in Europe, a new machine, apeNEXT, is currently being
developed to provide multi-Tflops computing performance. Like previous APE
machines, the new supercomputer is largely custom designed and specifically
optimized for simulations of Lattice QCD.
Comment: Poster at the XXIII Physics in Collisions Conference (PIC03),
Zeuthen, Germany, June 2003, 3 pages, Latex. PSN FRAP15. Replaced for adding
forgotten author
nbodykit: an open-source, massively parallel toolkit for large-scale structure
We present nbodykit, an open-source, massively parallel Python toolkit for
analyzing large-scale structure (LSS) data. Using Python bindings of the
Message Passing Interface (MPI), we provide parallel implementations of many
commonly used algorithms in LSS. nbodykit is both an interactive and scalable
piece of scientific software, performing well in a supercomputing environment
while still taking advantage of the interactive tools provided by the Python
ecosystem. Existing functionality includes estimators of the power spectrum, 2
and 3-point correlation functions, a Friends-of-Friends grouping algorithm,
mock catalog creation via the halo occupation distribution technique, and
approximate N-body simulations via the FastPM scheme. The package also provides
a set of distributed data containers, insulated from the algorithms themselves,
that enable nbodykit to provide a unified treatment of both simulation and
observational data sets. nbodykit can be easily deployed in a high performance
computing environment, overcoming some of the traditional difficulties of using
Python on supercomputers. We provide performance benchmarks illustrating the
scalability of the software. The modular, component-based approach of nbodykit
allows researchers to easily build complex applications using its tools. The
package is extensively documented at http://nbodykit.readthedocs.io, which also
includes an interactive set of example recipes for new users to explore. As
open-source software, we hope nbodykit provides a common framework for the
community to use and develop in confronting the analysis challenges of future
LSS surveys.
Comment: 18 pages, 7 figures. Feedback very welcome. Code available at
https://github.com/bccp/nbodykit and for documentation, see
http://nbodykit.readthedocs.io
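As an illustration of the kind of estimator the abstract lists, here is a minimal NumPy sketch of an FFT-based power spectrum measurement (nearest-grid-point painting, spherical k-binning). The function name and all simplifications are ours, not nbodykit's API; nbodykit's own interface is documented at the link above.

```python
import numpy as np

def power_spectrum(positions, boxsize, nmesh):
    """Toy isotropic power spectrum estimator.

    Paints particles onto a mesh with nearest-grid-point assignment,
    Fourier transforms the overdensity field, and averages |delta_k|^2
    in spherical shells of |k|.  A stand-in for a real LSS estimator,
    not nbodykit's implementation.
    """
    # Nearest-grid-point mass assignment onto an nmesh^3 grid.
    idx = np.floor(positions / boxsize * nmesh).astype(int) % nmesh
    mesh = np.zeros((nmesh,) * 3)
    np.add.at(mesh, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)

    # Overdensity delta = rho / rho_bar - 1 and its Fourier transform.
    delta = mesh / mesh.mean() - 1.0
    delta_k = np.fft.rfftn(delta)

    # |k| on the half-complex grid, in units of the fundamental mode.
    kf = 2 * np.pi / boxsize
    kx = np.fft.fftfreq(nmesh, d=1.0 / nmesh) * kf
    kz = np.fft.rfftfreq(nmesh, d=1.0 / nmesh) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)

    # Volume-normalized mode power, spherically averaged in kf-wide shells.
    pk_raw = np.abs(delta_k)**2 * boxsize**3 / nmesh**6
    edges = np.arange(kf, kmag.max(), kf)
    kbin = np.digitize(kmag.ravel(), edges)
    nbins = len(edges) + 1
    counts = np.bincount(kbin, minlength=nbins)
    ksum = np.bincount(kbin, weights=kmag.ravel(), minlength=nbins)
    psum = np.bincount(kbin, weights=pk_raw.ravel(), minlength=nbins)
    good = counts[1:-1] > 0   # drop the DC bin and the open corner bin
    return (ksum[1:-1][good] / counts[1:-1][good],
            psum[1:-1][good] / counts[1:-1][good])
```

A production estimator would additionally deconvolve the mass-assignment window and handle the Hermitian half-plane weighting; the sketch keeps only the FFT-and-bin core.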
High performance Python for direct numerical simulations of turbulent flows
Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an
invaluable research tool in fluid dynamics. Still, there are few publicly
available research codes and, due to the heavy number crunching implied,
available codes are usually written in low-level languages such as C/C++ or
Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS
code that nearly matches the performance of C++ for thousands of processors and
billions of unknowns. We also describe a version optimized through Cython, that
is found to match the speed of C++. The solvers are written from scratch in
Python, including the mesh, the MPI domain decomposition, and the temporal
integrators. The solvers have been verified and benchmarked on the Shaheen
supercomputer at the KAUST supercomputing laboratory, and we are able to show
very good scaling up to several thousand cores.
A very important part of the implementation is the mesh decomposition (we
implement both slab and pencil decompositions) and 3D parallel Fast Fourier
Transforms (FFT). The mesh decomposition and FFT routines have been implemented
in Python using serial FFT routines (either NumPy, pyFFTW or any other serial
FFT module), NumPy array manipulations and with MPI communications handled by
MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT
in Python for a slab mesh decomposition using 4 lines of compact Python code,
for which the parallel performance on Shaheen is found to be slightly better
than similar routines provided through the FFTW library. For a pencil mesh
decomposition, 7 lines of code are required to execute a transform.
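The slab-decomposed transform described above can be sketched without MPI: the serial emulation below replaces mpi4py's Alltoall redistribution with an in-memory transpose, looping over "ranks" to mimic what each process would do locally. It is our own illustration of the algorithm's structure, not the paper's code.

```python
import numpy as np

def slab_fft3(u, nproc):
    """Serial emulation of a slab-decomposed 3D FFT.

    Each of the `nproc` 'ranks' owns a slab u[lo:hi] along axis 0.
    It transforms its two locally complete axes, a global transpose
    (the stand-in for MPI.Alltoall) makes axis 0 locally complete,
    and each rank then finishes the transform along that axis.
    """
    n = u.shape[0]
    assert n % nproc == 0
    m = n // nproc
    # Step 1: per-rank FFT over the two locally complete axes.
    uk = np.empty(u.shape, dtype=complex)
    for r in range(nproc):
        uk[r*m:(r+1)*m] = np.fft.fft2(u[r*m:(r+1)*m], axes=(1, 2))
    # Step 2: global redistribution -- exchange axes 0 and 1 so every
    # rank now holds the full extent of the original axis 0.
    uk = uk.transpose(1, 0, 2).copy()
    # Step 3: per-rank FFT along the remaining (now local) axis.
    for r in range(nproc):
        uk[r*m:(r+1)*m] = np.fft.fft(uk[r*m:(r+1)*m], axis=1)
    # Undo the transpose to return data in the natural layout.
    return uk.transpose(1, 0, 2)
```

In the parallel version, step 2 becomes an `MPI.Alltoall` over the rank's own slab, which is why the authors can express the whole transform in a handful of lines of mpi4py code.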
Solution of large linear systems of equations on the massively parallel processor
The Massively Parallel Processor (MPP) was designed as a special machine for specific applications in image processing. As a parallel machine with a large number of processors that can be reconfigured in different combinations, it is also applicable to other problems that require a large number of processors. The solution of linear systems of equations on the MPP is investigated. The solution times achieved are compared to those obtained with a serial machine, and the performance of the MPP is discussed.
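The abstract does not name the solver used, but a hedged illustration of why linear systems map well onto a machine like the MPP is the Jacobi iteration: every component update is independent, so each processor can own one (or a block of) unknowns and all updates proceed simultaneously. Python/NumPy is our notation of convenience, not the MPP's programming model.

```python
import numpy as np

def jacobi(A, b, iters=200):
    """Jacobi iteration for A x = b (A diagonally dominant).

    Each sweep computes every component of the new x independently of
    the others, so the update is embarrassingly data-parallel -- the
    property that made such solvers attractive on SIMD arrays.
    """
    D = np.diag(A)            # diagonal entries
    R = A - np.diag(D)        # off-diagonal remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        # All n component updates are independent of one another.
        x = (b - R @ x) / D
    return x
```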
High volume colour image processing with massively parallel embedded processors
Currently Océ uses FPGA technology for implementing colour image processing in their high volume colour printers. Although FPGA technology provides sufficient performance, it has a rather tedious development process. This paper describes research conducted on an alternative implementation technology: software-defined massively parallel processing. It is shown that this technology not only reduces development time but also adds flexibility to the design.
Interactive Visualization of the Largest Radioastronomy Cubes
3D visualization is an important data analysis and knowledge discovery tool;
however, interactive visualization of large 3D astronomical datasets poses a
challenge for many existing data visualization packages. We present a solution
to interactively visualize larger-than-memory 3D astronomical data cubes by
utilizing a heterogeneous cluster of CPUs and GPUs. The system partitions the
data volume into smaller sub-volumes that are distributed over the rendering
workstations. GPU-based ray-casting volume rendering is performed to generate
an image for each sub-volume; these images are composited into the whole-volume
output and returned to the user. Datasets including the HI Parkes All Sky
Survey (HIPASS - 12 GB) southern sky and the Galactic All Sky Survey (GASS - 26
GB) data cubes were used to demonstrate our framework's performance. The
framework can render the GASS data cube with a maximum render time of < 0.3
seconds at 1024 x 1024 pixels output resolution using 3 rendering workstations and 8
GPUs. Our framework will scale to visualize larger datasets, even of Terabyte
order, if proper hardware infrastructure is available.
Comment: 15 pages, 12 figures, Accepted New Astronomy July 201
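The compositing step described above can be sketched as follows, assuming each rendering workstation returns a premultiplied-alpha RGBA image of its sub-volume, sorted front to back along the view ray. The function name and conventions are ours, not the paper's.

```python
import numpy as np

def composite(layers):
    """Front-to-back 'over' compositing of per-sub-volume RGBA images.

    `layers` is a list of HxWx4 premultiplied-alpha images, ordered
    from the nearest sub-volume to the farthest.  Folding them with
    the over operator yields the image of the whole volume.
    """
    out = np.zeros_like(layers[0])
    for img in layers:
        # Premultiplied front-to-back over: accumulate what shows
        # through the opacity already gathered in front of this layer.
        out = out + (1.0 - out[..., 3:4]) * img
    return np.clip(out, 0.0, 1.0)
```

Because the over operator is associative, neighbouring workstations can composite their partial results pairwise before the final gather, which is how distributed renderers keep the merge step scalable.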