GPU accelerated maximum cardinality matching algorithms for bipartite graphs
We design, implement, and evaluate GPU-based algorithms for the maximum
cardinality matching problem in bipartite graphs. Such algorithms have a
variety of applications in computer science, scientific computing,
bioinformatics, and other areas. To the best of our knowledge, ours is the
first study that focuses on GPU implementations of maximum cardinality
matching algorithms. We compare the proposed algorithms with serial and
multicore implementations from the literature on a large set of real-life
problems, where in the majority of cases one of our GPU-accelerated algorithms
is demonstrated to be faster than both the sequential and multicore
implementations.
Comment: 14 pages, 5 figures
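For readers less familiar with the underlying problem, the sketch below shows a minimal sequential Python implementation of maximum cardinality bipartite matching via augmenting paths (Kuhn's algorithm). It is only an illustrative CPU baseline, not the GPU algorithm of the paper, and the small graph at the bottom is a made-up example.

# Minimal augmenting-path (Kuhn's) algorithm for maximum cardinality
# bipartite matching. Illustrative CPU baseline only, not the GPU
# algorithm described in the paper above.

def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side neighbors of left vertex u."""
    match_right = [-1] * n_right   # match_right[v] = left vertex matched to v, or -1

    def try_augment(u, visited):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if visited[v]:
                continue
            visited[v] = True
            # v is free, or its current partner can be re-matched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching_size += 1
    return matching_size, match_right

# Toy example (hypothetical data): 3 left and 3 right vertices.
adj = [[0, 1], [0], [1, 2]]
size, match_right = max_bipartite_matching(adj, 3, 3)
print(size, match_right)   # expected: 3 [1, 0, 2]

Each call to try_augment searches for one augmenting path from a free left vertex, giving the usual O(V*E) bound for this baseline.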
Multilevel Threshold Secret and Function Sharing based on the Chinese Remainder Theorem
A recent work of Harn and Fuyou presents the first multilevel (disjunctive)
threshold secret sharing scheme based on the Chinese Remainder Theorem. In this
work, we first show that the proposed method is not secure and also fails to
work with a certain natural setting of the threshold values on compartments. We
then propose a secure scheme that works for all threshold settings. In this
scheme, we employ a refined version of Asmuth-Bloom secret sharing with a
special and generic Asmuth-Bloom sequence called the {\it anchor sequence}.
Based on this idea, we also propose the first multilevel conjunctive threshold
secret sharing scheme based on the Chinese Remainder Theorem. Lastly, we
discuss how the proposed schemes can be used for multilevel threshold function
sharing by employing them in a threshold RSA cryptosystem as an example.
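As background for the construction, the sketch below illustrates plain (single-level) Asmuth-Bloom secret sharing in Python with a tiny, hand-picked modulus sequence; the multilevel schemes and the anchor sequence of the paper are not reproduced, and all concrete numbers (M0, the moduli, the secret) are illustrative assumptions only.

# Minimal sketch of (single-level) Asmuth-Bloom secret sharing via the
# Chinese Remainder Theorem. The moduli below are a small hand-picked
# example satisfying m1*m2 > M0*m3 for threshold t = 2; a real scheme
# uses much larger, carefully generated sequences.
import random
from functools import reduce

M0 = 11                 # secret domain: 0 <= secret < M0
MODULI = [53, 59, 61]   # pairwise coprime, increasing; n = 3 shares, t = 2

def share(secret, t=2):
    prod_t = reduce(lambda a, b: a * b, MODULI[:t])  # product of t smallest moduli
    # Blind the secret: y = secret + A*M0 must stay below prod_t.
    A = random.randrange((prod_t - secret) // M0)
    y = secret + A * M0
    return [(m, y % m) for m in MODULI]

def crt(residues):
    # Combine congruences x = r_i (mod m_i) pairwise.
    m1, r1 = residues[0]
    for m2, r2 in residues[1:]:
        diff = (r2 - r1) * pow(m1, -1, m2) % m2
        r1, m1 = r1 + m1 * diff, m1 * m2
    return r1 % m1

def reconstruct(subset_of_shares):
    return crt(subset_of_shares) % M0

secret = 7
shares = share(secret)
print(reconstruct(shares[:2]))   # any 2 of the 3 shares recover 7
print(reconstruct(shares[1:]))   # 7 again

Any two shares determine the blinded value y uniquely by the CRT because y is kept below the product of the two smallest moduli; the condition m1*m2 > M0*m3 is what limits how much a single share reveals.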
Threshold cryptography with Chinese remainder theorem
Ankara: The Department of Computer Engineering and the Institute of Engineering and Science of Bilkent University, 2009. Thesis (Master's) -- Bilkent University, 2009. Includes bibliographical references (leaves 84-91).
Information security has become much more important since electronic communication
started to be used in our daily life. The content of the term information
security varies according to the type and the requirements of the area. However,
no matter which algorithms are used, security depends on the secrecy of a key
which is supposed to be known only by the intended agents in the first place.
The requirement that the key be secret brings several problems. Storing
a secret key with only one person, server, or database reduces the security of the
system to the security and credibility of that agent. Moreover, not having a backup
of the key introduces the problem of losing the key if a software or hardware failure
occurs. On the other hand, if the key is held by more than one agent, an adversary
seeking the key has more flexibility in choosing a target. Hence the
security is reduced to the security of the least secure or least credible of these
agents.
Secret sharing schemes are introduced to solve the problems above. The main
idea of these schemes is to share the secret among the agents such that only
predefined coalitions can come together and reveal the secret, while no other
coalition can obtain any information about the secret. Thus, the keys used in the
areas requiring vital secrecy, such as large-scale finance applications and command-and-control
mechanisms of nuclear systems, can be stored by using secret sharing
schemes.
Threshold cryptography deals with a particular type of secret sharing scheme.
In these threshold schemes, any coalition whose size exceeds a bound t can
reveal the key, while smaller coalitions can reveal no information
about the key. In fact, the first secret sharing scheme in the literature
is the threshold scheme of Shamir where he considered the secret as the constant
of a polynomial of degree t − 1, and distributed the points on the polynomial to
the group of users. Thus, a coalition of size t can recover the polynomial and
reveal the key, but a smaller coalition cannot. This scheme is widely accepted by
researchers and used in several applications. Shamir’s secret sharing scheme
is not the only one in the literature. For example, almost concurrently, Blakley
proposed another secret sharing scheme based on planar geometry, and
Asmuth and Bloom proposed a scheme based on the Chinese Remainder
Theorem. Although these schemes satisfy the necessary and sufficient conditions
for security, they have received little attention in applications requiring a
secret sharing scheme.
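Since Shamir's construction is described above only in words, the following minimal Python sketch makes it concrete over a small prime field; the prime, the threshold, and the secret are illustrative choices, not values tied to the thesis.

# Minimal sketch of Shamir's (t, n) secret sharing over GF(p).
# P, T, N and the secret are illustrative; real deployments use a
# large prime and a cryptographic random number generator.
import random

P = 2087          # small prime field for illustration
T, N = 3, 5       # any 3 of the 5 shares reconstruct the secret

def share(secret):
    # The secret is the constant term of a random polynomial of degree T - 1.
    coeffs = [secret] + [random.randrange(P) for _ in range(T - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, N + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 using exactly T points.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(1234)
print(reconstruct(shares[:3]))                         # 1234
print(reconstruct([shares[0], shares[2], shares[4]]))  # 1234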
Secret sharing schemes also constitute a building block in several applications
other than the ones mentioned above. These applications typically involve a
standard problem in the literature, the function sharing problem. In a function
sharing scheme, each user has its own secret as an input to a function, and the
scheme computes the outcome of the function without revealing the secrets. In
the literature, the encryption or signature functions of public key algorithms such as
RSA, ElGamal, and Paillier can be given as examples of functions shared by
using a secret sharing scheme. Even new-generation applications like electronic
voting require a function sharing scheme.
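To make the function sharing idea concrete, the toy Python example below splits an RSA signing exponent additively between two parties so that a full signature can be assembled from partial signatures without either party holding the whole key. The parameters are deliberately tiny, and this additive split is only an illustration of the concept, not the threshold RSA construction developed in the thesis.

# Toy illustration of function sharing: RSA signing with the private
# exponent d split additively as d = d1 + d2. Each party outputs a
# partial signature; multiplying them yields the full signature, and
# neither party ever sees d. Toy parameters only, not a secure setup.
import random

p, q = 1009, 1013                 # toy primes
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)               # full private exponent (never given to the parties)

d1 = random.randrange(1, d)       # party 1's additive share
d2 = d - d1                       # party 2's additive share

msg = 424242 % n                  # "hashed" message, toy value

sig1 = pow(msg, d1, n)            # partial signature from party 1
sig2 = pow(msg, d2, n)            # partial signature from party 2
sig = (sig1 * sig2) % n           # the combiner multiplies the partial signatures

assert sig == pow(msg, d, n)      # same as signing with the full key
assert pow(sig, e, n) == msg      # verifies under the public key (e, n)
print("combined signature verifies:", pow(sig, e, n) == msg)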
As mentioned before, Shamir’s secret sharing scheme has attracted most of the
attention in the literature, while other schemes have received far less consideration. However,
as this thesis shows, secret sharing schemes based on the Chinese Remainder
Theorem can be used practically in these applications. Since each application has
different needs, Shamir’s secret sharing scheme is used in applications with several
extensions. Basically, this thesis investigates how to adapt Chinese Remainder
Theorem based secret sharing schemes to the applications in the literature. We
first propose some modifications to the Asmuth-Bloom secret sharing scheme, and
then, by using this modified scheme, we design provably secure function sharing
schemes and security extensions.
Kaya, Kamer. M.S.
Two approximation algorithms for bipartite matching on multicore architectures
We propose two heuristics for the bipartite matching problem that are amenable to shared-memory parallelization. The first heuristic is very intriguing from a parallelization perspective. It has no significant algorithmic synchronization overhead, and no conflict resolution is needed across threads. We show that this heuristic has an approximation ratio of around 0.632 under some common conditions. The second heuristic is designed to obtain a larger matching by employing the well-known Karp-Sipser heuristic on a judiciously chosen subgraph of the original graph. We show that the Karp-Sipser heuristic always finds a maximum cardinality matching in the chosen subgraph. Although the Karp-Sipser heuristic is hard to parallelize for general graphs, we exploit the structure of the selected subgraphs to propose a specialized implementation which demonstrates very good scalability. We prove that this second heuristic has an approximation guarantee of around 0.866 under the same conditions as in the first algorithm. We discuss parallel implementations of the proposed heuristics on a multicore architecture. Experimental results demonstrating speed-ups and verifying the theoretical results in practice are provided.
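For reference, the following sequential Python sketch shows the classical Karp-Sipser heuristic: always take the edge of a degree-1 vertex when one exists, otherwise match an arbitrary remaining edge. The subgraph selection, the parallelization, and the approximation analysis of the paper are not reproduced, and the example graph is made up.

# Sequential sketch of the Karp-Sipser matching heuristic: repeatedly
# match a degree-1 vertex with its only neighbor; if none exists, match
# the endpoints of an arbitrary remaining edge. Not the parallel,
# subgraph-restricted variant analyzed in the paper above.
import random

def karp_sipser(adj):
    """adj: dict mapping each vertex to the set of its neighbors (undirected)."""
    adj = {u: set(nbrs) for u, nbrs in adj.items()}
    match = {}

    def remove_vertex(u):
        for w in adj.pop(u, set()):
            adj[w].discard(u)

    while any(adj.values()):
        # Prefer a vertex of degree exactly 1 (its edge is always safe to take).
        deg1 = [u for u, nbrs in adj.items() if len(nbrs) == 1]
        if deg1:
            u = deg1[0]
        else:
            u = random.choice([x for x, nbrs in adj.items() if nbrs])
        v = next(iter(adj[u]))
        match[u], match[v] = v, u
        remove_vertex(u)
        remove_vertex(v)
    return match

# Toy bipartite graph: left vertices 'a'-'c', right vertices 1-3.
graph = {'a': {1, 2}, 'b': {1}, 'c': {2, 3}, 1: {'a', 'b'}, 2: {'a', 'c'}, 3: {'c'}}
print(karp_sipser(graph))   # e.g. {'b': 1, 1: 'b', 'a': 2, 2: 'a', 'c': 3, 3: 'c'}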
On partitioning problems with complex objectives
Hypergraph and graph partitioning tools are used to partition work for efficient parallelization of many sparse matrix computations. Most of the time, the objective function minimized by these tools relates to reducing the communication requirements, and the balancing constraints satisfied by these tools relate to balancing the work or memory requirements. Sometimes, however, the balance objective itself is a complex function of the partition. We describe some important classes of parallel sparse matrix computations that have such balance objectives. For these cases, the current state-of-the-art partitioning tools fall short of being adequate. To the best of our knowledge, there is only a single algorithmic framework in the literature that addresses such balance objectives. We propose another algorithmic framework to tackle complex objectives and experimentally investigate the proposed framework.
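To illustrate what a balance objective that is a complex function of the partition can look like, the hypothetical Python snippet below charges each part both its computational load and the communication induced by its cut edges, so the per-part cost cannot be evaluated from vertex weights alone. The cost model and the toy data are assumptions for illustration, not taken from the report.

# Hypothetical illustration of a balance objective that is a complex
# function of the partition: each part's cost combines its computational
# load with the communication its boundary generates, so the cost cannot
# be computed from vertex weights alone.

def part_costs(edges, vertex_work, part, alpha=1.0):
    """edges: list of (u, v); vertex_work: work per vertex; part: vertex -> part id."""
    parts = set(part.values())
    work = {p: 0.0 for p in parts}
    comm = {p: 0.0 for p in parts}
    for v, w in vertex_work.items():
        work[part[v]] += w
    for u, v in edges:
        if part[u] != part[v]:              # cut edge: both parts pay communication
            comm[part[u]] += 1.0
            comm[part[v]] += 1.0
    return {p: work[p] + alpha * comm[p] for p in parts}

# Toy example: 6 vertices split into 2 parts.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4)]
vertex_work = {v: 1.0 for v in range(6)}
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
costs = part_costs(edges, vertex_work, part)
print(costs)                                 # {0: 5.0, 1: 5.0}
imbalance = max(costs.values()) / (sum(costs.values()) / len(costs))
print(imbalance)                             # 1.0 here; partitioners try to keep this low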
Investigations on push-relabel based algorithms for the maximum transversal problem
We investigate the push-relabel algorithm for solving the problem of finding a maximum cardinality matching in a bipartite graph in the context of the maximum transversal problem. We describe in detail an optimized yet easy-to-implement version of the algorithm and fine-tune its parameters. We also introduce new performance-enhancing techniques. On a wide range of real-world instances, we compare the push-relabel algorithm with state-of-the-art augmenting path-based algorithms and the recently proposed pseudoflow approach. We conclude that a carefully tuned push-relabel algorithm is competitive with all known augmenting path-based algorithms, and superior to the pseudoflow-based ones.
Enumerator: an efficient approach for enumerating all valid t-tuples
In this paper, we present an efficient approach for enumerating all valid
t-tuples for a given configuration space model, which is an important
task in computing covering arrays. The results of our experiments suggest that the proposed approach scales better than existing approaches.
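As a rough illustration of the task (not of the Enumerator algorithm itself), the Python sketch below enumerates the valid t-tuples of a small configuration space model by brute force. Validity is simplified here to a predicate on the tuple itself, whereas in covering array practice a t-tuple is usually called valid when it can be extended to a full configuration satisfying the constraints; the model and the constraint are made-up examples.

# Brute-force enumeration of valid t-tuples for a configuration space
# model: every combination of t parameters, every assignment of values
# to them, filtered by a constraint predicate. Illustrative only; the
# paper's Enumerator avoids this exhaustive filtering.
from itertools import combinations, product

def valid_t_tuples(model, t, is_valid):
    """model: dict parameter -> list of values; yields dicts of t (param, value) pairs."""
    for params in combinations(sorted(model), t):
        for values in product(*(model[p] for p in params)):
            tup = dict(zip(params, values))
            if is_valid(tup):
                yield tup

# Hypothetical configuration space model with one constraint:
# the 'compress' feature is unavailable on the 'mac' OS.
model = {'os': ['linux', 'mac'], 'browser': ['ff', 'chrome'], 'compress': ['on', 'off']}
no_mac_compress = lambda t: not (t.get('os') == 'mac' and t.get('compress') == 'on')

for tup in valid_t_tuples(model, 2, no_mac_compress):
    print(tup)
# 11 valid 2-tuples are printed; the pair (os='mac', compress='on') is excluded.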
Understanding Coarsening for Embedding Large-Scale Graphs
A significant portion of the data today, e.g., social networks, web
connections, etc., can be modeled by graphs. A proper analysis of graphs with
Machine Learning (ML) algorithms has the potential to yield far-reaching
insights into many areas of research and industry. However, the irregular
structure of graph data constitutes an obstacle for running ML tasks on graphs
such as link prediction, node classification, and anomaly detection. Graph
embedding is a compute-intensive process of representing graphs as a set of
vectors in a d-dimensional space, which in turn makes it amenable to ML tasks.
Many approaches have been proposed in the literature to improve the performance
of graph embedding, e.g., using distributed algorithms, accelerators, and
pre-processing techniques. Graph coarsening, which can be considered a
pre-processing step, is a structural approximation of a given, large graph with
a smaller one. As the literature suggests, the cost of embedding significantly
decreases when coarsening is employed. In this work, we thoroughly analyze the
impact of the coarsening quality on the embedding performance both in terms of
speed and accuracy. Our experiments with a state-of-the-art, fast graph
embedding tool show that there is an interplay between the coarsening decisions
taken and the embedding quality.
Comment: 10 pages, 6 figures, submitted to 2020 IEEE International Conference
on Big Data
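As a concrete reference for what a single coarsening step does, here is a minimal Python sketch of matching-based (heavy-edge style) coarsening: vertices are greedily paired with their heaviest unmatched neighbors and each pair is collapsed into one coarse vertex. It is a generic illustration, not one of the specific coarsening schemes evaluated in the paper.

# One level of matching-based graph coarsening: greedily match each
# vertex to its heaviest unmatched neighbor, then collapse every matched
# pair (or lone vertex) into a single coarse vertex and accumulate edge
# weights. Generic illustration, not a specific tool's implementation.

def coarsen_once(adj):
    """adj: dict u -> dict v -> edge weight (undirected, stored both ways)."""
    matched, coarse_of = set(), {}
    next_id = 0
    for u in adj:                                   # greedy heavy-edge matching
        if u in matched:
            continue
        candidates = [v for v in adj[u] if v not in matched and v != u]
        if candidates:
            v = max(candidates, key=lambda w: adj[u][w])
            matched.update((u, v))
            coarse_of[u] = coarse_of[v] = next_id
        else:
            matched.add(u)
            coarse_of[u] = next_id
        next_id += 1
    coarse = {c: {} for c in range(next_id)}        # build the coarse graph
    for u, nbrs in adj.items():
        for v, w in nbrs.items():
            cu, cv = coarse_of[u], coarse_of[v]
            if cu != cv:
                coarse[cu][cv] = coarse[cu].get(cv, 0) + w
    return coarse, coarse_of

# Toy weighted graph on 4 vertices.
adj = {0: {1: 3, 2: 1}, 1: {0: 3, 3: 1}, 2: {0: 1, 3: 2}, 3: {1: 1, 2: 2}}
print(coarsen_once(adj))   # 2 coarse vertices; cross-edge weights accumulated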
Fast and high quality topology-aware task mapping
Considering the large number of processors and the size of the interconnection networks on exascale-capable supercomputers, mapping the concurrently executable and communicating tasks of an application is a complex problem that needs to be handled with care. For parallel applications, communication overhead can be a significant bottleneck on scalability. Topology-aware task-mapping methods, which map the tasks to processors (i.e., cores) by exploiting the underlying network information, are very effective for avoiding, or at least mitigating, this limitation. We propose novel, efficient, and effective task mapping algorithms employing a graph model. The experiments show that the methods are faster than the existing approaches proposed for the same task, and on 4096 processors, the algorithms improve the communication hops and link contentions by 16% and 32%, respectively, on average. In addition, they improve the average execution time of a parallel SpMV kernel and a communication-only application by 9% and 14%, respectively.
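To give a feel for what topology-aware mapping optimizes, the Python sketch below greedily places tasks so that heavily communicating pairs end up on processors that are few hops apart. The 1-D hop distance, the greedy ordering, and the toy task graph are simplifying assumptions for illustration and do not correspond to the algorithms of the paper.

# Greedy sketch of topology-aware task mapping: process tasks in order of
# total communication volume and place each one on the free processor that
# minimizes volume-weighted hop distance to its already-placed neighbors.
# The linear (1-D) processor topology is a deliberate simplification.

def greedy_map(comm, n_procs):
    """comm: dict (task_a, task_b) -> communication volume."""
    tasks = sorted({t for pair in comm for t in pair},
                   key=lambda t: -sum(v for p, v in comm.items() if t in p))
    hops = lambda p, q: abs(p - q)          # hop distance on a 1-D processor line
    placement, free = {}, set(range(n_procs))

    def cost(task, proc):
        total = 0
        for (a, b), vol in comm.items():
            other = b if a == task else a if b == task else None
            if other in placement:
                total += vol * hops(proc, placement[other])
        return total

    for task in tasks:
        best = min(free, key=lambda p: cost(task, p))
        placement[task] = best
        free.remove(best)
    return placement

# Hypothetical task graph: tasks A-D, volumes on communicating pairs.
comm = {('A', 'B'): 10, ('B', 'C'): 8, ('C', 'D'): 1, ('A', 'D'): 2}
print(greedy_map(comm, 4))   # e.g. {'B': 0, 'A': 1, 'C': 2, 'D': 3}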