
    Work-Efficient Query Evaluation with PRAMs

    The paper studies query evaluation in parallel constant time in the PRAM model. While it is well known that all relational algebra queries can be evaluated in constant time on an appropriate CRCW-PRAM, this paper is interested in the efficiency of evaluation algorithms, that is, in the number of processors or, asymptotically equivalently, in the work. Naive evaluation in the parallel setting results in huge (polynomial) bounds on the work of such algorithms and in presentations of the result sets that can be extremely scattered in memory. The paper first discusses some obstacles to constant-time PRAM query evaluation. It presents algorithms for relational operators that are considerably more efficient than the naive approaches. Further, it explores three settings in which efficient sequential query evaluation algorithms exist: acyclic queries, semi-join algebra queries, and join queries, the latter in the worst-case optimal framework. Under natural assumptions on the representation of the database, the work of the given algorithms matches the best sequential algorithms in the case of semi-join queries, and it comes close in the other two settings. An important tool is the compaction technique from Hagerup (1992).
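
    To make one ingredient of the abstract concrete, the sketch below compacts a result set that is scattered in memory using an exclusive prefix sum over occupancy flags. It runs sequentially here; on a CRCW-PRAM the prefix sum and the writes parallelize, although Hagerup's actual compaction technique cited above is considerably more refined. The code and names are illustrative assumptions, not taken from the paper.

        # Illustrative sketch (assumption, not the paper's algorithm): compacting a
        # scattered result array via an exclusive prefix sum over occupancy flags.

        def compact(cells):
            """Move the non-None entries of `cells` into a contiguous list, in order."""
            flags = [0 if c is None else 1 for c in cells]
            pos, total = [], 0
            for f in flags:              # exclusive prefix sum: target slot per entry
                pos.append(total)
                total += f
            out = [None] * total
            for i, c in enumerate(cells):
                if c is not None:
                    out[pos[i]] = c      # independent writes, hence parallelizable
            return out

        # A join result scattered over a large candidate array:
        print(compact([None, ('a', 1), None, None, ('b', 2), ('c', 3), None]))
        # -> [('a', 1), ('b', 2), ('c', 3)]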

    On Dynamic Algorithms for Algebraic Problems

    In this paper, we examine the problem of incrementally evaluating algebraic functions. In particular, if f(x1, x2, …, xn) = (y1, y2, …, ym) is an algebraic problem, we consider answering on-line requests of the form "change input xi to value v" or "what is the value of output yj?" We first present lower bounds for some simply stated algebraic problems such as multipoint polynomial evaluation, polynomial reciprocal, and extended polynomial GCD, proving an Ω(n) lower bound for the incremental evaluation of these functions. In addition, we prove two time-space trade-off theorems that apply to incremental algorithms for almost all algebraic functions. We then derive several general-purpose algorithm design techniques and apply them to several fundamental algebraic problems. For example, we give an O(√n) time per request algorithm for incremental DFT. We also present a design technique for serving incremental requests using a parallel machine, giving a choice of either optimal work with respect to the sequential incremental algorithm or superfast algorithms with O(log log n) time per request using a sublinear number of processors.
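
    To illustrate the request model, here is a generic lazy-batching sketch for incremental DFT: buffer coefficient updates, answer output queries by correcting the stored transform with the buffered deltas, and rebuild the full transform after roughly √n changes. This only shows the flavor of √n-per-request schemes under the standard DFT definition; it is not the paper's algorithm, and the class name and parameters are assumptions.

        # Lazy-batching sketch (assumption: y_j = sum_i x_i * w^(i*j), w = exp(-2*pi*1j/n)).

        import cmath, math

        class IncrementalDFT:
            def __init__(self, x):
                self.n = len(x)
                self.x = list(x)
                self.pending = []                       # buffered (index, delta) updates
                self.limit = max(1, math.isqrt(self.n))
                self._rebuild()

            def _rebuild(self):
                w = cmath.exp(-2j * cmath.pi / self.n)
                self.y = [sum(self.x[i] * w ** (i * j) for i in range(self.n))
                          for j in range(self.n)]       # naive O(n^2); an FFT gives O(n log n)
                self.pending.clear()

            def update(self, i, v):                     # "change input x_i to value v"
                self.pending.append((i, v - self.x[i]))
                self.x[i] = v
                if len(self.pending) > self.limit:
                    self._rebuild()

            def query(self, j):                         # "what is the value of output y_j?"
                w = cmath.exp(-2j * cmath.pi / self.n)
                return self.y[j] + sum(d * w ** (i * j) for i, d in self.pending)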

    How proofs are prepared at Camelot

    We study a design framework for robust, independently verifiable, and workload-balanced distributed algorithms working on a common input. An algorithm based on the framework is essentially a distributed encoding procedure for a Reed–Solomon code, which enables (a) robustness against Byzantine failures with intrinsic error-correction and identification of failed nodes, and (b) independent randomized verification to check the entire computation for correctness, which takes essentially no more resources than each node individually contributes to the computation. The framework builds on recent Merlin–Arthur proofs of batch evaluation of Williams [Electron. Colloq. Comput. Complexity, Report TR16-002, January 2016] with the observation that Merlin's magic is not needed for batch evaluation: mere Knights can prepare the proof, in parallel, and with intrinsic error-correction. The contribution of this paper is to show that in many cases the verifiable batch evaluation framework admits algorithms that match in total resource consumption the best known sequential algorithm for solving the problem. As our main result, we show that the k-cliques in an n-vertex graph can be counted and verified in per-node O(n^((ω+ε)k/6)) time and space on O(n^((ω+ε)k/6)) compute nodes, for any constant ε > 0 and positive integer k divisible by 6, where 2 ≤ ω < 2.3728639 is the exponent of matrix multiplication. This matches in total running time the best known sequential algorithm, due to Nešetřil and Poljak [Comment. Math. Univ. Carolin. 26 (1985) 415–419], and considerably improves its space usage and parallelizability. Further results include novel algorithms for counting triangles in sparse graphs, computing the chromatic polynomial of a graph, and computing the Tutte polynomial of a graph.
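
    The verification ingredient can be illustrated with a toy version of the Reed–Solomon idea: if the batch being proved is the value table of a single low-degree polynomial over a prime field, any deg+1 correct values determine the polynomial, so a verifier can interpolate from a basis and spot-check the remaining claimed values at random, and a decoder could additionally correct a bounded number of faulty contributions. The field, names, and parameters below are illustrative assumptions, not the paper's construction.

        # Toy spot-check of a claimed batch of evaluations of one polynomial of
        # degree <= deg over a prime field (illustrative assumption, not the paper).

        import random

        P = 2**31 - 1                                   # prime modulus (assumption)

        def lagrange_eval(points, x):
            """Evaluate the unique polynomial through `points` [(xi, yi), ...] at x, mod P."""
            total = 0
            for i, (xi, yi) in enumerate(points):
                num, den = 1, 1
                for j, (xj, _) in enumerate(points):
                    if i != j:
                        num = num * (x - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, P - 2, P)) % P
            return total

        def verify_batch(claimed, deg, trials=20):
            """`claimed` maps evaluation point -> claimed value; `deg` bounds the degree."""
            pts = sorted(claimed.items())
            basis = pts[:deg + 1]                       # enough points to pin the polynomial down
            return all(lagrange_eval(basis, x) == y % P
                       for x, y in random.sample(pts, min(trials, len(pts))))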

    MATSuMoTo: The MATLAB Surrogate Model Toolbox For Computationally Expensive Black-Box Global Optimization Problems

    MATSuMoTo is the MATLAB Surrogate Model Toolbox for computationally expensive, black-box, global optimization problems that may have continuous, mixed-integer, or pure integer variables. Due to the black-box nature of the objective function, derivatives are not available; hence, surrogate models are used as computationally cheap approximations of the expensive objective function in order to guide the search for improved solutions. Because a single function evaluation is computationally expensive, the goal is to find optimal solutions within very few expensive evaluations. The multimodality of the expensive black-box function requires an algorithm that is able to search locally as well as globally. MATSuMoTo addresses these challenges and offers various choices for surrogate models and surrogate model mixtures, initial experimental design strategies, and sampling strategies. It can also run several function evaluations in parallel by exploiting MATLAB's Parallel Computing Toolbox.
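
    For orientation, the sketch below shows the generic surrogate-model loop such toolboxes implement: fit a cheap model to the points evaluated so far, propose candidates, score them by predicted value and distance to existing samples, and spend the next expensive evaluation on the best candidate. The cubic RBF surrogate, scoring weights, and toy objective are illustrative assumptions, not MATSuMoTo's defaults or API.

        # Generic surrogate-model optimization loop (illustrative assumptions throughout).

        import numpy as np

        def expensive_objective(x):                     # stand-in for a costly black-box simulation
            return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(20 * x)))

        def rbf_fit(X, y):
            """Fit weights of a cubic RBF interpolant s(z) = sum_i w_i * ||z - X_i||^3."""
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
            return np.linalg.solve(d ** 3 + 1e-10 * np.eye(len(X)), y)

        def rbf_predict(X, w, cand):
            d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
            return (d ** 3) @ w

        def surrogate_optimize(dim=2, n_init=5, budget=25, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(0, 1, (n_init, dim))        # initial experimental design (random here)
            y = np.array([expensive_objective(x) for x in X])
            while len(X) < budget:
                w = rbf_fit(X, y)
                cand = rng.uniform(0, 1, (200, dim))    # random candidate points
                score = rbf_predict(X, w, cand)         # exploit: prefer low predicted value ...
                dist = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2).min(axis=1)
                xnew = cand[np.argmin(score - 0.5 * dist)]   # ... but explore far from samples
                X = np.vstack([X, xnew])
                y = np.append(y, expensive_objective(xnew))
            return X[np.argmin(y)], y.min()

        print(surrogate_optimize())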