Matrix Distributed Processing: A set of C++ Tools for implementing generic lattice computations on parallel systems
We present a set of programming tools (classes and functions written in C++
and based on Message Passing Interface) for fast development of generic
parallel (and non-parallel) lattice simulations. They are collectively called
MDP 1.2.
These programming tools include classes and algorithms for matrices, random
number generators, distributed lattices (with arbitrary topology), fields and
parallel iterations. No previous knowledge of MPI is required in order to use
them.
Some applications in electromagnetism, electronics, condensed matter and
lattice QCD are presented.
Comment: Minor aesthetic modifications from the previous version
Distributed Parallel Computing for Visual Cryptography Algorithms
Proceedings of: Second International Workshop on Sustainable Ultrascale Computing Systems (NESUS 2015), Krakow (Poland), September 10-11, 2015.

The recent efforts to construct exascale and ultrascale distributed computational systems open the possibility of applying parallel and distributed computing techniques to applied problems that were previously considered unsolvable with standard computational resources. In this paper we consider a global optimization problem whose set of feasible solutions is discrete and very large. No a priori estimation technique, e.g. a branch-and-bound type method, can exclude an essential part of these elements from the computational analysis, so a full search is required to solve such global optimization problems. The problem considered describes visual cryptography algorithms. The main goal is to find optimal perfect gratings, which can guarantee high quality and security of the visual cryptography method. The full-search parallel algorithm is based on the master-slave paradigm. We present a library of C++ templates that allows a developer to implement parallel master-slave algorithms for their application without writing any parallel code or knowing a parallel programming API. These templates automatically produce parallel solvers tailored for clusters of computers using the MPI API and for distributed computing applications using the BOINC API. Results of some computational experiments are presented.

The work presented in this paper has been partially supported by the EU under the COST programme Action IC1305, 'Network for Sustainable Ultrascale Computing (NESUS)'.