Improving Data Locality in Distributed Processing of Multi-Channel Remote Sensing Data with Potentially Large Stencils
Distributing multi-channel remote sensing data processing with potentially large stencils
is a difficult challenge. The goal of this master's thesis was to investigate the
performance impact of such processing on a distributed system and to evaluate whether
the total execution time can be improved by exploiting data locality or memory alignment. The
thesis also gives a brief overview of the current state of the art in distributed remote
sensing data processing and points out why distributed computing will become more important for
it in the future. For the experimental part of this thesis, an application to process huge
arrays on a distributed system was implemented with DASH, a C++ Template Library for
Distributed Data Structures with Support for Hierarchical Locality for High Performance
Computing and Data-Driven Science. On the basis of the first results an optimization model
was developed which has the goal to reduce network traffic while initializing a distributed
data structure and executing computations on it with potentially large stencils. Furthermore,
a software to estimate the memory layouts with the least network communication cost for a
given multi-channel remote sensing data processing workflow was implemented. The results
of this optimization were then evaluated in practice. The results show that the
initialization speed of a large image can be improved by 25% by taking brick locality
into account. The optimization model also generates valid decisions for the initialization of the
PGAS memory layouts. However, for a real implementation, the model has to
be modified to reflect implementation-dependent sources of overhead. This thesis presented
approaches to challenges of distributed computing that can be applied to
real-world remote sensing imaging applications, and contributed towards meeting the
challenges of the modern Big Data world for future scientific data exploitation.
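The central trade-off the thesis optimizes — keeping each node's stencil neighborhood local so that only a thin halo must cross the network — can be illustrated with a small single-process sketch. This is a hypothetical simulation of a block distribution with ghost cells (the function names and the 1-D mean stencil are illustrative, not the DASH implementation from the thesis):

```python
import numpy as np

def stencil_global(a, r):
    """Reference: 1-D mean stencil of radius r with periodic wrap-around."""
    n = len(a)
    return np.array([a[np.arange(i - r, i + r + 1) % n].mean() for i in range(n)])

def stencil_blocked(a, r, nblocks):
    """Same stencil computed on nblocks local partitions with halo (ghost) cells.

    Each block copies only 2*r remote cells per neighbor into its halo,
    mimicking the one-shot halo exchange a distributed/PGAS run performs
    instead of reading remote memory element by element."""
    n = len(a)
    bs = n // nblocks                      # assumes n divisible by nblocks
    out = np.empty(n)
    for b in range(nblocks):
        lo, hi = b * bs, (b + 1) * bs
        local = a[np.arange(lo - r, hi + r) % n]   # block interior plus halos
        for j in range(bs):
            # window centered on local index j + r covers local[j : j + 2r + 1]
            out[lo + j] = local[j:j + 2 * r + 1].mean()
    return out

rng = np.random.default_rng(0)
a = rng.standard_normal(24)
blocked = stencil_blocked(a, r=2, nblocks=4)
```

Whatever the block layout, the blocked result must match the global computation; the layout only changes how much data moves, not the answer.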
Programming Abstractions for Data Locality
The goal of the workshop and this report is to identify common themes and standardize concepts for locality-preserving abstractions for exascale programming models. Current software tools are built on the premise that computing is the most expensive component, but we are rapidly moving to an era in which computing is cheap and massively parallel while data movement dominates energy and performance costs. To be ready for exascale systems (the next generation of high-performance computing systems), the scientific computing community needs to refactor its applications to align with the emerging data-centric paradigm: applications must be evolved to express information about data locality. Unfortunately, current programming environments offer few ways to do so. They ignore the incurred cost of communication and simply rely on hardware cache coherency to virtualize data movement. With the increasing importance of task-level parallelism on future systems, task models have to support constructs that express data locality and affinity. At the system level, communication libraries implicitly assume that all processing elements are equidistant from each other. In order to take advantage of emerging technologies, application developers need a set of programming abstractions to describe data locality for the new computing ecosystem. The new programming paradigm should be more data-centric and allow developers to describe how to decompose data and how to lay it out in memory. Fortunately, there are many emerging concepts for managing data locality, such as constructs for tiling, data layout, array views, task and thread affinity, and topology-aware communication libraries. There is an opportunity to identify commonalities in strategy that enable us to combine the best of these concepts into a comprehensive approach to expressing and managing data locality on exascale programming systems.
These programming model abstractions can expose crucial information about data locality to the compiler and runtime system, enabling performance-portable code. The research question is to identify the right level of abstraction, with candidate techniques ranging from template libraries all the way to completely new languages.
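One of the concepts the report names — tiling as a locality abstraction — is simple to sketch. Below is a hypothetical tiled transpose in which each tile is read and written as a contiguous block, so the access pattern stays cache-friendly regardless of the array's overall size (the names `tiles` and `transpose_tiled` are illustrative, not from any library discussed in the report):

```python
import numpy as np

def tiles(shape, tile):
    """Yield pairs of slices covering a 2-D array in tile-sized blocks."""
    for i in range(0, shape[0], tile):
        for j in range(0, shape[1], tile):
            yield (slice(i, min(i + tile, shape[0])),
                   slice(j, min(j + tile, shape[1])))

def transpose_tiled(a, tile=32):
    """Transpose by tiles: each small block fits in cache, so both the
    read of a[rs, cs] and the write of out[cs, rs] touch nearby memory."""
    out = np.empty((a.shape[1], a.shape[0]), dtype=a.dtype)
    for rs, cs in tiles(a.shape, tile):
        out[cs, rs] = a[rs, cs].T
    return out

a = np.arange(50 * 70).reshape(50, 70)
at = transpose_tiled(a)
```

The point of such an abstraction is that the loop structure (the locality decision) is separated from the per-element operation, so a compiler or runtime could retune the tile size per machine without touching user code.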
Trends in Data Locality Abstractions for HPC Systems
The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers. Data locality abstractions are available in the form of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.
ColDICE: a parallel Vlasov-Poisson solver using moving adaptive simplicial tessellation
Numerically resolving the Vlasov-Poisson equations for initially cold systems can
be reduced to following the evolution of a three-dimensional sheet in
six-dimensional phase space. We describe a public parallel numerical algorithm
that represents the phase-space sheet with a conforming,
self-adaptive simplicial tessellation whose vertices follow the
Lagrangian equations of motion. The algorithm is implemented in both six- and
four-dimensional phase-space. Refinement of the tessellation mesh is performed
using the bisection method and a local representation of the phase-space sheet
at second order relying on additional tracers created when needed at runtime.
To best preserve the Hamiltonian nature of the system,
refinement is anisotropic and constrained by measurements of local Poincaré
invariants. The Poisson equation is solved using the fast Fourier
method on a regular rectangular grid, similarly to particle-in-cell codes. To
compute the density projected onto this grid, the intersection of the
tessellation and the grid is calculated using the method of Franklin and
Kankanhalli (1993) generalised to linear order. As preliminary tests of the
code, we study in four dimensional phase-space the evolution of an initially
small patch in a chaotic potential and the cosmological collapse of a
fluctuation composed of two sinusoidal waves. We also perform a "warm" dark
matter simulation in six-dimensional phase-space that we use to check the
parallel scaling of the code. Comment: Code and illustration movies available at
http://www.vlasix.org/index.php?n=Main.ColDICE. Article submitted to the Journal
of Computational Physics.
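The grid-based Poisson step the abstract mentions — the fast Fourier method on a regular rectangular grid, as in particle-in-cell codes — can be demonstrated in a few lines. This is a generic 2-D periodic sketch, not ColDICE's implementation: in Fourier space the Laplacian becomes multiplication by -k², so the solve is a division (the k = 0 mode is fixed to zero, i.e. the potential is returned with zero mean):

```python
import numpy as np

def solve_poisson_fft(rho, box=2.0 * np.pi):
    """Solve laplacian(phi) = rho on a periodic n x n grid via FFT.

    Fourier space turns the PDE into -k^2 * phi_hat = rho_hat, so
    phi_hat = -rho_hat / k^2 for every nonzero wavevector."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)   # wavenumbers per axis
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx ** 2 + ky ** 2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0
    phi_hat[nonzero] = -rho_hat[nonzero] / k2[nonzero]
    return np.real(np.fft.ifft2(phi_hat))

# Analytic check: phi = sin(x) sin(y) satisfies laplacian(phi) = -2 phi.
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
phi_exact = np.sin(X) * np.sin(Y)
phi = solve_poisson_fft(-2.0 * phi_exact)
```

For a density field made of exact grid modes like this one, the spectral solve is accurate to machine precision; the hard part in a real code, as the abstract notes, is depositing the tessellation's density onto the grid in the first place.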
Contractive Schroedinger cat states for a free mass
Contractive states for a free quantum particle were introduced by Yuen [Yuen
H P 1983 Phys. Rev. Lett. 51, 719] in an attempt to evade the standard quantum
limit for repeated position measurements. We show how appropriate families of
two- and three-component ``Schroedinger cat states'' are able to support
non-trivial correlations between the position and momentum observables leading
to contractive behavior. The existence of contractive Schroedinger cat states
is suggestive of potential novel roles of non-classical states for precision
measurement schemes. Comment: 24 pages, 7 encapsulated EPS color figures, REVTeX4
style. Published online in New Journal of Physics 5 (2003) 5.1-5.21.
Higher-resolution figures available in the published version (accessible at http://www.njp.org/).
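The contractive behavior at stake can be seen already in the single-Gaussian case Yuen considered, before any cat-state superposition: a free-particle Gaussian wave packet with negative initial position-momentum correlation has a position variance that first shrinks with time. The sketch below evaluates the standard textbook formula (hbar = m = 1 units; the specific covariance values are arbitrary, chosen only to satisfy the uncertainty bound sx2*sp2 - sxp^2 >= 1/4):

```python
import numpy as np

def var_x(t, sx2=1.0, sp2=1.0, sxp=-0.4, m=1.0):
    # Position variance of a free Gaussian wave packet under free evolution:
    #   Var[x](t) = sx2 + 2*sxp*t/m + sp2*(t/m)**2
    # With sxp < 0 the linear term dominates at early times, so the
    # packet contracts before the usual quadratic spreading takes over.
    return sx2 + 2.0 * sxp * t / m + sp2 * (t / m) ** 2

# Here sx2*sp2 - sxp**2 = 0.84 >= 0.25, so the state is physical.
t = np.linspace(0.0, 1.0, 101)
v = var_x(t)
t_star = 0.4   # analytic minimum of Var[x](t) at t* = -m*sxp/sp2
```

The paper's point is that suitable two- and three-component cat states can support the same kind of x-p correlation, so contraction is not restricted to Gaussian (squeezed) states.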
Have Your Cake and Eat It? Productive Parallel Programming via Chapel's High-level Constructs
Explicit parallel programming is required to utilize the growing parallelism in computer
hardware. However, current mainstream parallel notations, such as OpenMP
and MPI, lack programmability. Chapel tries to tackle this problem by providing
high-level constructs. However, the performance implications of such constructs are not
clear, and need to be evaluated.
The key contributions of this work are: 1. An evaluation of data parallelism and
global-view programming in Chapel through the reduce and transpose benchmarks.
2. Identification of bugs in Chapel runtime code with proposed fixes. 3. A benchmarking
framework that aids in conducting systematic and rigorous performance
evaluation.
Through examples, I show that data parallelism and global-view programming
lead to clean and succinct code in Chapel. In the reduce benchmark, I found that
data parallelism makes Chapel outperform the baseline. However, in the transpose
benchmark, I found that global-view programming causes performance degradation
in Chapel due to frequent implicit communication. I argue that this is not an inherent
problem with Chapel, and can be solved by compiler optimizations.
The results suggest that it is possible to use high-level abstraction in parallel
languages to improve the productivity of programmers, while still delivering competitive
performance. Furthermore, the benchmarking framework I developed can aid
the wider research community in performance evaluations.
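The transpose slowdown attributed to frequent implicit communication can be made concrete with a back-of-envelope message-count model. This is a hypothetical simulation in plain Python, not Chapel code or the thesis's benchmark: it compares element-wise global-view accesses (one message per remote element) against aggregated per-locale block transfers for an n x n matrix block-distributed by rows over p locales:

```python
def transpose_messages(n, p, blocked):
    """Count simulated inter-locale messages for a distributed transpose.

    Rows are block-distributed over p locales (n divisible by p).
    blocked=False models naive global-view indexing: every element whose
    destination row lives on another locale costs one small message.
    blocked=True models the optimization a compiler could apply: each
    pair of locales exchanges one aggregated block."""
    rows = n // p
    if blocked:
        return p * (p - 1)          # one bulk transfer per ordered locale pair
    remote = 0
    for i in range(n):
        for j in range(n):
            if i // rows != j // rows:   # source and destination locales differ
                remote += 1
    return remote

naive = transpose_messages(8, 4, blocked=False)      # fine-grained messages
aggregated = transpose_messages(8, 4, blocked=True)  # bulk transfers
```

Even at this toy scale the fine-grained count grows with n² while the aggregated count depends only on p, which is consistent with the thesis's argument that the degradation is an implementation issue a communication-aggregating compiler optimization could remove, not an inherent cost of global-view programming.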