The End of Slow Networks: It's Time for a Redesign
Next generation high-performance RDMA-capable networks will require a
fundamental rethinking of the design and architecture of modern distributed
DBMSs. These systems are commonly designed and optimized under the assumption
that the network is the bottleneck: the network is slow and "thin", and thus
needs to be avoided as much as possible. Yet this assumption no longer holds
true. With InfiniBand FDR 4x, the bandwidth available to transfer data across
the network is in the same ballpark as the bandwidth of one memory channel, and it
increases even further with the most recent EDR standard. Moreover, with the
continuing advances in RDMA, latency is improving similarly fast. In this
paper, we first argue that the "old" distributed database design is not capable
of taking full advantage of the network. Second, we propose architectural
redesigns for OLTP, OLAP and advanced analytical frameworks to take better
advantage of the improved bandwidth, latency and RDMA capabilities. Finally,
for each of the workload categories, we show that remarkable performance
improvements can be achieved.
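The "same ballpark" claim is easy to sanity-check with public link-rate figures; a minimal back-of-the-envelope sketch (the numbers below are standard InfiniBand and DDR3 specifications, not data from the paper):

```python
# InfiniBand FDR 4x: 4 lanes x 14.0625 Gb/s signaling, 64b/66b encoding.
fdr_gbs = 4 * 14.0625 * (64 / 66) / 8   # payload bandwidth, ~6.8 GB/s

# InfiniBand EDR 4x: 4 lanes x 25 Gb/s signaling, 64b/66b encoding.
edr_gbs = 4 * 25 * (64 / 66) / 8        # ~12.1 GB/s

# One DDR3-1600 memory channel: 1600 MT/s x 8-byte bus.
ddr3_gbs = 1600e6 * 8 / 1e9             # 12.8 GB/s

print(f"FDR {fdr_gbs:.1f} GB/s, EDR {edr_gbs:.1f} GB/s, "
      f"DDR3 channel {ddr3_gbs:.1f} GB/s")
```

On these figures a single FDR 4x port delivers roughly half of one memory channel's bandwidth, and EDR reaches near-parity, which is what makes the "network is the bottleneck" assumption questionable.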
Programming MPSoC platforms: Road works ahead
This paper summarizes a special session on multicore/multi-processor system-on-chip (MPSoC) programming challenges. The current trend towards MPSoC platforms in most computing domains does not only mean a radical change in computer architecture. Even more important from a SW developer's viewpoint, the classical sequential von Neumann programming model needs to be overcome at the same time. Efficient utilization of the MPSoC HW resources demands radically new models and corresponding SW development tools, capable of exploiting the available parallelism and guaranteeing bug-free parallel SW. While several standards are established in the high-performance computing domain (e.g. OpenMP), it is clear that more innovations are required for successful deployment of heterogeneous embedded MPSoCs. On the other hand, at least for the coming years, the freedom for disruptive programming technologies is limited by the huge amount of certified sequential code that demands a more pragmatic, gradual tool and code replacement strategy.
Parallel and Distributed Simulation from Many Cores to the Public Cloud (Extended Version)
In this tutorial paper, we first review some basic simulation concepts
and then introduce parallel and distributed simulation techniques in view
of some new challenges of today and tomorrow. In particular, in recent
years there has been a wide diffusion of many-core architectures and we can
expect this trend to continue. On the other hand, the success of cloud
computing is strongly promoting the everything as a service paradigm. Is
parallel and distributed simulation ready for these new challenges? The current
approaches present many limitations in terms of usability and adaptivity: there
is a strong need for new evaluation metrics and for revising the currently
implemented mechanisms. In the last part of the paper, we propose a new
approach based on multi-agent systems for the simulation of complex systems. It
is possible to implement advanced techniques such as the migration of simulated
entities in order to build mechanisms that are both adaptive and very easy to
use. Adaptive mechanisms are able to significantly reduce the communication
cost in parallel/distributed architectures, to implement load-balancing
techniques and to cope with execution environments that are both variable and
dynamic. Finally, such mechanisms will be used to build simulations on top of
unreliable cloud services.
Comment: Tutorial paper published in the Proceedings of the International Conference on High Performance Computing and Simulation (HPCS 2011), Istanbul (Turkey), IEEE, July 2011. ISBN 978-1-61284-382-
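One adaptive mechanism of the kind described above is migrating a simulated entity to the logical process (LP) it communicates with most. A minimal sketch (a hypothetical illustration of the idea, not an implementation from the paper; the hysteresis `threshold` is an assumed parameter):

```python
from collections import Counter

def choose_lp(traffic, current_lp, threshold=2.0):
    """Pick the LP a simulated entity should be hosted on.

    traffic: Counter mapping LP id -> messages the entity exchanged with
    entities on that LP during the last observation window.
    Migrate only when the busiest remote LP sees `threshold` times more
    traffic than the local LP; the hysteresis avoids ping-pong migrations.
    """
    if not traffic:
        return current_lp
    best_lp, best = max(traffic.items(), key=lambda kv: kv[1])
    local = traffic.get(current_lp, 0)
    if best_lp != current_lp and best > threshold * max(local, 1):
        return best_lp  # migrating turns remote messages into local ones
    return current_lp
```

Each migration converts the entity's dominant message flow from network traffic into intra-LP communication, which is where the reduction in communication cost comes from.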
Development of 2MASS Catalog Server Kit
We develop a software kit called "2MASS Catalog Server Kit" to easily
construct a high-performance database server for the 2MASS Point Source Catalog
(containing 470,992,970 objects) and several all-sky catalogs. Users can perform
fast radial search and rectangular search using provided stored functions in
SQL similar to SDSS SkyServer. Our software kit utilizes open-source RDBMS, and
therefore any astronomers and developers can install our kit on their personal
computers for research, observation, etc. Our kit is tuned for optimal
coordinate search performance. We implement an effective radial search using an
orthogonal coordinate system, which does not need any techniques that depend on
HTM or HEALpix. Applying the xyz coordinate system to the database index, we
can easily implement a system of fast radial search for relatively small (less
than several million rows) catalogs. To enable high-speed search of huge
catalogs on RDBMS, we apply three additional techniques: table partitioning,
composite expression index, and optimization in stored functions. As a result,
we obtain satisfactory performance of radial search for the 2MASS catalog. Our
system can also perform fast rectangular search. It is implemented using
techniques similar to those applied for radial search. Our way of
implementation enables a compact system and will give important hints for a
low-cost development of other huge catalog databases.
Comment: 2011 PASP accepted
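The xyz technique described above can be sketched in a few lines; this is a hypothetical Python illustration of the geometry, not the kit's actual SQL stored functions. A cone search of angular radius r around center c reduces to the dot-product test p · c >= cos(r) on unit vectors:

```python
import math

def radec_to_xyz(ra_deg, dec_deg):
    """Convert equatorial coordinates (degrees) to a unit vector."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def radial_search(catalog, ra0_deg, dec0_deg, radius_deg):
    """Cone search: keep objects whose unit vector p satisfies p . c >= cos(r).

    catalog is a list of (obj_id, x, y, z) rows with precomputed unit
    vectors; in an RDBMS the same test runs as SQL over an (x, y, z) index.
    """
    cx, cy, cz = radec_to_xyz(ra0_deg, dec0_deg)
    cos_r = math.cos(math.radians(radius_deg))
    return [obj_id for obj_id, x, y, z in catalog
            if x * cx + y * cy + z * cz >= cos_r]
```

Because the test is plain linear algebra on the unit sphere, it needs no HTM or HEALPix tessellation; a range prefilter on x, y and z (the bounding box of the cone) is what makes the database index applicable.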
Large scale ab initio calculations based on three levels of parallelization
We suggest and implement a parallelization scheme based on an efficient
multiband eigenvalue solver, called the locally optimal block preconditioned
conjugate gradient (LOBPCG) method, and using an optimized three-dimensional (3D)
fast Fourier transform (FFT) in the ab initio plane-wave code ABINIT. In
addition to the standard data partitioning over processors corresponding to
different k-points, we introduce data partitioning with respect to blocks of
bands as well as spatial partitioning in the Fourier space of coefficients over
the plane-wave basis set used in ABINIT. This k-points-multiband-FFT
parallelization avoids any collective communications on the whole set of
processors relying instead on one-dimensional communications only. For a single
k-point, super-linear scaling is achieved for up to 100 processors due to an
extensive use of hardware optimized BLAS, LAPACK, and SCALAPACK routines,
mainly in the LOBPCG routine. We observe good performance up to 200 processors.
With 10 k-points our three-way data partitioning results in linear scaling up
to 1000 processors for a practical system used for testing.
Comment: 8 pages, 5 figures. Accepted to Computational Materials Science
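The three-level decomposition can be pictured as indexing a flat MPI rank into a (k-point, band-block, FFT-slab) grid. The sketch below is a hypothetical Python illustration of that idea (the grid sizes and function names are assumptions, not ABINIT's interface):

```python
def rank_to_coords(rank, n_kpt, n_band, n_fft):
    """Map a flat process rank onto a (k-point, band-block, FFT-slab) grid."""
    assert 0 <= rank < n_kpt * n_band * n_fft
    k, rest = divmod(rank, n_band * n_fft)
    band, fft = divmod(rest, n_fft)
    return k, band, fft

def fft_group(rank, n_kpt, n_band, n_fft):
    """Ranks sharing this rank's (k, band) coordinates.

    They form the one-dimensional communicator used for the distributed
    3D FFT, so no collective ever spans the whole processor set.
    """
    k, band, _ = rank_to_coords(rank, n_kpt, n_band, n_fft)
    base = (k * n_band + band) * n_fft
    return list(range(base, base + n_fft))
```

Each of the three partitioning levels gets its own small communicator in this way, which is how the scheme restricts communication to 1D groups rather than the full set of processors.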
Scalable Persistent Storage for Erlang
The many-core revolution makes scalability a key property. The RELEASE project aims to improve the scalability of Erlang on emergent commodity architectures with 100,000 cores. Such architectures require scalable and available persistent storage on up to 100 hosts. We enumerate the requirements for scalable and available persistent storage, and evaluate four popular Erlang DBMSs against these requirements. This analysis shows that Mnesia and CouchDB are not suitable as persistent storage at our target scale, but Dynamo-like NoSQL Database Management Systems (DBMSs) such as Cassandra and Riak potentially are. We investigate the current scalability limits of the Riak 1.1.1 NoSQL DBMS in practice on a 100-node cluster. We establish for the first time scientifically the scalability limit of Riak as 60 nodes on the Kalkyl cluster, thereby confirming developer folklore. We show that resources like memory, disk, and network do not limit the scalability of Riak. By instrumenting Erlang/OTP and Riak libraries we identify a specific Riak functionality that limits scalability. We outline how later releases of Riak are refactored to eliminate the scalability bottlenecks. We conclude that Dynamo-style NoSQL DBMSs provide scalable and available persistent storage for Erlang in general, and for our RELEASE target architecture in particular.