    GPU accelerated maximum cardinality matching algorithms for bipartite graphs

    We design, implement, and evaluate GPU-based algorithms for the maximum cardinality matching problem in bipartite graphs. Such algorithms have a variety of applications in computer science, scientific computing, bioinformatics, and other areas. To the best of our knowledge, ours is the first study that focuses on GPU implementations of maximum cardinality matching algorithms. We compare the proposed algorithms with serial and multicore implementations from the literature on a large set of real-life problems; in the majority of cases, one of our GPU-accelerated algorithms is demonstrated to be faster than both the sequential and multicore implementations. (14 pages, 5 figures)
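
    For context, the sketch below shows the classical serial baseline such GPU algorithms are typically measured against: Kuhn's augmenting-path algorithm for maximum cardinality bipartite matching. This is not the authors' code; the graph layout and function names are illustrative.

```python
# Serial baseline: maximum cardinality bipartite matching via augmenting
# paths (Kuhn's algorithm). A minimal sketch, not the paper's implementation.

def max_bipartite_matching(adj, n_left, n_right):
    """adj[u] lists the right-side neighbors of left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, set()):
            matching_size += 1
    return matching_size, match_right

# Example: a 3x3 bipartite graph that admits a perfect matching.
adj = [[0, 1], [0], [1, 2]]
size, match = max_bipartite_matching(adj, 3, 3)
print(size, match)  # 3 [1, 0, 2]
```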

    Constraints Propagation on GPU: A Case Study for AllDifferent

    The AllDifferent constraint is a fundamental tool in Constraint Programming. It naturally arises in many problems, from puzzles to scheduling and routing applications. Such popularity has prompted an extensive literature on filtering and propagation for this constraint. Motivated by the benefits that GPUs offer to other branches of AI, this paper investigates the use of GPUs to accelerate filtering and propagation. In particular, we present an efficient parallelization of the AllDifferent constraint on GPU; we analyze different design and implementation choices and evaluate the performance of the resulting system on medium to large instances of the Travelling Salesman Problem, with encouraging results.
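
    To make "filtering" concrete, here is a minimal sketch of the weakest AllDifferent filtering level (forward checking): whenever a variable's domain shrinks to a single value, that value is removed from every other domain. The paper parallelizes stronger, matching-based propagation on GPU; this sketch, with illustrative names, only shows what a propagation step does.

```python
# Forward-checking level of AllDifferent filtering: a fixed variable's
# value is deleted from all other domains, cascading until fixpoint.
# A sketch for illustration, not the paper's GPU propagator.

def alldifferent_forward_check(domains):
    """domains: list of sets. Returns False on a domain wipe-out."""
    queue = [i for i, d in enumerate(domains) if len(d) == 1]
    while queue:
        i = queue.pop()
        (value,) = domains[i]
        for j, d in enumerate(domains):
            if j != i and value in d:
                d.discard(value)
                if not d:
                    return False          # empty domain: inconsistent
                if len(d) == 1:
                    queue.append(j)       # newly fixed variable propagates
    return True

domains = [{1}, {1, 2}, {2, 3}]
print(alldifferent_forward_check(domains), domains)  # True [{1}, {2}, {3}]
```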

    Vanishingly Sparse Matrices and Expander Graphs, With Application to Compressed Sensing

    We revisit the probabilistic construction of sparse random matrices where each column has a fixed number of nonzeros whose row indices are drawn uniformly at random with replacement. These matrices have a one-to-one correspondence with the adjacency matrices of fixed left-degree expander graphs. We present formulae for the expected cardinality of the set of neighbors for these graphs, and present tail bounds on the probability that this cardinality will be less than the expected value. Deducible from these bounds are similar bounds for the expansion of the graph, which is of interest in many applications. These bounds are derived through a more detailed analysis of collisions in unions of sets; key to this analysis is a novel "dyadic splitting" technique. The analysis leads to better order constants that allow for quantitative theorems on the existence of lossless expander graphs (and hence of the sparse random matrices we consider), as well as quantitative compressed sensing sampling theorems when using sparse non-mean-zero measurement matrices. (17 pages, 12 PostScript figures)
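
    The construction itself is simple to state in code. The sketch below draws each column's d row indices uniformly with replacement and estimates the neighbor-set cardinality |N(S)| of a random set S of s columns by Monte Carlo, comparing it against the closed form n(1 - (1 - 1/n)^{ds}) for ds independent uniform draws. All parameter names are illustrative; the paper's formulae and tail bounds are the rigorous counterpart of this experiment.

```python
# Sparse random matrix construction analyzed in the paper: each column has
# d nonzeros whose row indices are drawn uniformly *with* replacement, so a
# column may touch fewer than d distinct rows. A sketch with made-up sizes.

import random

def sparse_column_supports(n_rows, n_cols, d, rng):
    """Row-index support of each column, drawn with replacement."""
    return [[rng.randrange(n_rows) for _ in range(d)] for _ in range(n_cols)]

def neighbor_set_size(supports, columns):
    """|N(S)|: number of distinct rows touched by the columns in S."""
    return len({r for c in columns for r in supports[c]})

rng = random.Random(0)
n, N, d, s = 1024, 4096, 8, 32
supports = sparse_column_supports(n, N, d, rng)

# Monte Carlo estimate of the mean of |N(S)| over random column sets of
# size s, versus the closed form n*(1-(1-1/n)^(d*s)) for d*s uniform draws.
trials = [neighbor_set_size(supports, rng.sample(range(N), s))
          for _ in range(200)]
print(sum(trials) / len(trials), n * (1 - (1 - 1 / n) ** (d * s)))
```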

    Two approximation algorithms for bipartite matching on multicore architectures

    We propose two heuristics for the bipartite matching problem that are amenable to shared-memory parallelization. The first heuristic is very intriguing from a parallelization perspective: it has no significant algorithmic synchronization overhead, and no conflict resolution is needed across threads. We show that this heuristic has an approximation ratio of around 0.632 under some common conditions. The second heuristic is designed to obtain a larger matching by employing the well-known Karp-Sipser heuristic on a judiciously chosen subgraph of the original graph. We show that the Karp-Sipser heuristic always finds a maximum cardinality matching in the chosen subgraph. Although the Karp-Sipser heuristic is hard to parallelize for general graphs, we exploit the structure of the selected subgraphs to propose a specialized implementation which demonstrates very good scalability. We prove that this second heuristic has an approximation guarantee of around 0.866 under the same conditions as for the first algorithm. We discuss parallel implementations of the proposed heuristics on a multicore architecture. Experimental results demonstrating the speedups and verifying the theoretical results in practice are provided.
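
    For reference, the textbook Karp-Sipser rule the second heuristic builds on is short: while some vertex has degree one, match it with its unique neighbor (such edges are safe, i.e., contained in some maximum matching); otherwise match an arbitrary edge and delete both endpoints. The sketch below applies the rule to a plain undirected graph with illustrative names; the paper's contribution is running it on a special subgraph where it is both exact and parallelizable.

```python
# Serial Karp-Sipser heuristic, sketched on a generic undirected graph.
# Not the paper's specialized parallel implementation.

def karp_sipser(adj):
    """adj: dict vertex -> set of neighbors. Returns a list of matched edges."""
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    matching = []

    def remove(u):
        for w in adj.pop(u, set()):
            adj[w].discard(u)

    while any(adj.values()):
        # Prefer a degree-1 vertex (its edge is safe); else pick any edge.
        deg1 = [u for u, vs in adj.items() if len(vs) == 1]
        u = deg1[0] if deg1 else next(x for x, vs in adj.items() if vs)
        v = next(iter(adj[u]))
        matching.append((u, v))
        remove(u)
        remove(v)
    return matching

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # the path graph 0-1-2-3
print(karp_sipser(path))  # [(0, 1), (2, 3)]: a maximum matching
```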

    Load Balanced Demand Distribution under Overload Penalties

    Input to the Load Balanced Demand Distribution (LBDD) problem consists of the following: (a) a set of public service centers (e.g., schools); (b) a set of demand (people) units; and (c) a cost matrix containing the cost of assignment for all demand unit-service center pairs. In addition, each service center is associated with a capacity and a penalty, which is incurred if the center is overloaded. Given this input, the LBDD problem determines a mapping from the set of demand units to the set of service centers. The objective is to determine a mapping that minimizes the sum of two terms: (i) the total assignment cost between demand units and their allotted service centers, and (ii) the total penalties incurred. The LBDD problem finds application in the domain of urban planning. An instance of the LBDD problem can be reduced to an instance of the min-cost bipartite matching problem; however, this approach cannot scale to large real-world problem instances. The current state of the art related to LBDD makes simplifying assumptions such as infinite capacity or total capacity being equal to the total demand. This paper proposes a novel allotment subspace re-adjustment based approach (ASRAL) for the LBDD problem. We analyze ASRAL theoretically and present its asymptotic time complexity. We also evaluate ASRAL experimentally on large problem instances and compare it with alternative approaches. Our results indicate that ASRAL is able to scale up while maintaining significantly better solution quality than the alternative approaches. In addition, we extend ASRAL to para-ASRAL, which uses GPU and CPU cores to speed up execution while maintaining the same solution quality as ASRAL. (arXiv admin note: text overlap with arXiv:2009.0176)
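
    The reduction mentioned above can be sketched as a min-cost flow, here assuming a per-unit overload penalty (the paper's exact penalty model may differ): each demand unit ships one unit of flow, and a service center forwards up to its capacity at zero extra cost, with any excess routed through a penalty arc. Names and the networkx dependency are illustrative; the paper's point is precisely that this exact method does not scale, which is what ASRAL addresses.

```python
# LBDD as min-cost flow, under an assumed per-unit overload penalty.
# An illustrative sketch of the exact (non-scalable) baseline.

import networkx as nx

def lbdd_min_cost_flow(cost, capacity, penalty):
    """cost[i][j]: demand i -> center j; capacity[j], penalty[j] per center."""
    n_demand, n_center = len(cost), len(capacity)
    G = nx.DiGraph()
    for i in range(n_demand):
        G.add_node(("d", i), demand=-1)           # each unit supplies 1
    G.add_node("sink", demand=n_demand)
    for j in range(n_center):
        for i in range(n_demand):
            G.add_edge(("d", i), ("c", j), capacity=1, weight=cost[i][j])
        G.add_edge(("c", j), "sink", capacity=capacity[j], weight=0)
        # Overload arc: extra units admitted at penalty[j] per unit.
        G.add_edge(("c", j), "overflow", capacity=n_demand, weight=penalty[j])
    G.add_edge("overflow", "sink", capacity=n_demand, weight=0)
    flow = nx.min_cost_flow(G)
    assignment = {i: j for i in range(n_demand) for j in range(n_center)
                  if flow[("d", i)].get(("c", j), 0) == 1}
    return assignment, nx.cost_of_flow(G, flow)

# 3 demand units, 2 unit-capacity centers: one unit must overload a center.
cost = [[1, 4], [2, 3], [5, 1]]
print(lbdd_min_cost_flow(cost, capacity=[1, 1], penalty=[10, 2]))
```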

    A Fast Dynamic Assignment Algorithm for Solving Resource Allocation Problems

    The assignment problem is one of the fundamental problems in the field of combinatorial optimization. The Hungarian algorithm can be adapted to solve various assignment problems according to different optimality criteria. The assignment problem solved in this paper is a dynamic one: finding a maximum-weight assignment in resource allocation problems where weights can change after the optimal solution has been obtained. The Hungarian algorithm can be applied directly, but its initialization must then be repeated from scratch every time a change occurs, which costs substantial time and memory. This paper proposes a fast dynamic assignment algorithm based on the Hungarian algorithm that obtains an optimal solution without redoing the initialization after each change. In our tests, the proposed algorithm has an average running time of 0.146 s and an average memory usage of 4.62 MB, whereas the Hungarian algorithm averages 2.806 s and 4.65 MB. The running time of the fast dynamic assignment algorithm grows linearly with the number of change operations and quadratically with the number of vertices.
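
    The from-scratch baseline the paper improves on can be sketched with SciPy's Hungarian-style solver: after every weight change the whole assignment is recomputed, which is exactly the repeated initialization cost the proposed dynamic algorithm avoids. Maximization is handled by negating the weight matrix; the matrix below is illustrative.

```python
# Static re-solve baseline for the dynamic maximum-weight assignment
# problem, using SciPy's Hungarian-style solver. A sketch, not the
# paper's fast dynamic algorithm.

import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_assignment(weights):
    rows, cols = linear_sum_assignment(-weights)   # negate to maximize
    return list(zip(rows, cols)), weights[rows, cols].sum()

W = np.array([[4.0, 1.0, 3.0],
              [2.0, 0.0, 5.0],
              [3.0, 2.0, 2.0]])
print(max_weight_assignment(W))      # initial optimum

W[1, 2] = 0.5                        # a dynamic weight change arrives...
print(max_weight_assignment(W))      # ...and the baseline re-solves fully
```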

    Statistical learning for predictive targeting in online advertising

