
    Solving k-SUM using few linear queries

    The k-SUM problem asks, given n input real numbers, whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < ⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^3 n) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in Õ(n^(⌈k/2⌉+8)) time and performs O(n^3 log^3 n) linear queries on the input. Thus, it is possible to have an algorithm whose runtime is almost identical (up to the +8 in the exponent) to that of the best known algorithm, while for the first time the number of queries on the input is bounded by a polynomial independent of k. The O(n^3 log^3 n) bound on the number of linear queries is also tighter than that of any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem, which is a cornerstone of complexity-within-P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth o(n^4).
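
    As a point of reference for the problem statement and the ⌈k/2⌉ baseline mentioned above, here is a minimal Python sketch (not the paper's decision-tree construction; the function names are illustrative): a brute-force checker that simply defines k-SUM, and the classical meet-in-the-middle idea for k = 4, whose pair-sum table is where the ⌈k/2⌉ exponent comes from.

```python
# Hedged sketch: a brute-force checker that defines k-SUM, and a classical
# meet-in-the-middle routine for k = 4. Neither is the paper's construction.
from collections import defaultdict
from itertools import combinations


def k_sum_bruteforce(xs, k):
    """Return True iff some k distinct entries of xs sum to zero. O(n^k) time."""
    return any(sum(c) == 0 for c in combinations(xs, k))


def four_sum_meet_in_middle(xs):
    """4-SUM via pair sums, illustrating the ceil(k/2) = 2 exponent.

    Every pair sum is tabulated, then we look for two index-disjoint pairs
    whose sums cancel. The disjointness check can degenerate on adversarial
    inputs, but on typical inputs this behaves like O(n^2) pair enumeration.
    """
    pair_sums = defaultdict(list)  # sum -> list of index pairs (i, j)
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            pair_sums[xs[i] + xs[j]].append((i, j))
    for s, pairs in pair_sums.items():
        for (i, j) in pairs:
            for (p, q) in pair_sums.get(-s, ()):
                if len({i, j, p, q}) == 4:  # four distinct indices
                    return True
    return False


if __name__ == "__main__":
    data = [3.0, -1.0, 4.0, -6.0, 2.5, 1.5]
    print(k_sum_bruteforce(data, 3))      # False: no three entries cancel
    print(four_sum_meet_in_middle(data))  # True: 3.0 + (-1.0) + 4.0 + (-6.0) = 0
```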

    Sorting under Forbidden Comparisons

    In this paper we study the problem of sorting under forbidden comparisons, where some pairs of elements may not be compared (forbidden pairs). Along with the set of elements V, the input to our problem is a graph G(V, E) whose edges represent the pairs that can be compared in constant time. Given a graph with n vertices and m = (n choose 2) - q edges, we propose the first non-trivial deterministic algorithm, which makes O((q + n) log n) comparisons with a total complexity of O(n^2 + q^(ω/2)), where ω is the exponent in the complexity of matrix multiplication. We also propose a simple randomized algorithm for the problem which makes Õ(n^2/√q + n + n√q) probes with high probability. When the input graph is random, we show that Õ(min(n^(3/2), pn^2)) probes suffice, where p is the edge probability.
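
    To make the probe model concrete, the hedged sketch below (illustrative only, not the paper's algorithm; the name naive_sort_with_forbidden_pairs is made up) compares exactly the pairs that are edges of G and topologically sorts the resulting comparison DAG. It spends one probe per allowed edge, the naive O(m)-probe bound that the O((q + n) log n) result improves on.

```python
# Hedged sketch of the probe model only (not the paper's algorithm): compare
# just the pairs that are edges of G, then topologically sort the comparison
# DAG. This spends one probe per allowed edge, i.e. O(m) probes overall.
from collections import defaultdict, deque


def naive_sort_with_forbidden_pairs(values, allowed_edges):
    """values: hidden keys indexed 0..n-1; allowed_edges: pairs we may compare.

    Returns (order, probes), where order is one linear extension consistent
    with every probed comparison (unique whenever the probes determine it).
    """
    n = len(values)
    probes = 0
    succ = defaultdict(list)
    indeg = [0] * n
    for u, v in allowed_edges:  # probe every allowed pair once
        probes += 1
        lo, hi = (u, v) if values[u] < values[v] else (v, u)
        succ[lo].append(hi)
        indeg[hi] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)  # Kahn's algorithm
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return order, probes


if __name__ == "__main__":
    vals = [30, 10, 20, 40]
    edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]  # (0, 3) is forbidden
    print(naive_sort_with_forbidden_pairs(vals, edges))  # ([1, 2, 0, 3], 5)
```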

    Near-Optimal Online Multiselection in Internal and External Memory

    We introduce an online version of the multiselection problem, in which q selection queries are requested on an unsorted array of n elements. We provide the first online algorithm that is 1-competitive with Kaligosi et al. [ICALP 2005] in terms of comparison complexity. Our algorithm also supports online search queries efficiently. We then extend our algorithm to the dynamic setting, while retaining online functionality, by supporting arbitrary insertions and deletions on the array. Assuming that the insertion of an element is immediately preceded by a search for that element, we show that our dynamic online algorithm performs an optimal number of comparisons, up to lower order terms and an additive O(n) term. For the external memory model, we describe the first online multiselection algorithm that is O(1)-competitive. This result improves upon the work of Sibeyn [Journal of Algorithms 2006] when q > m, where m is the number of blocks that can be stored in main memory. We also extend it to support searches, insertions, and deletions of elements efficiently.
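
    The basic mechanism behind online multiselection, pivoting work that is shared across queries, can be sketched as follows. This is a hedged illustration of the lazy-quicksort idea in internal memory only; the class name OnlineMultiselect is made up, and this is not the paper's 1-competitive algorithm.

```python
# Hedged sketch of the online-multiselection idea (lazy quicksort), not the
# paper's 1-competitive algorithm: each select(k) partitions only the segment
# still containing rank k, and every pivot boundary is remembered so that
# later queries reuse earlier comparisons.
import bisect
import random


class OnlineMultiselect:
    def __init__(self, items):
        self.a = list(items)
        self.fences = [0, len(self.a)]  # boundaries between resolved segments

    def select(self, k):
        """Return the element of rank k (0-based), as in the sorted array."""
        a, fences = self.a, self.fences
        i = bisect.bisect_right(fences, k)        # segment that still holds rank k
        lo, hi = fences[i - 1], fences[i]
        while hi - lo > 1:
            p = random.randrange(lo, hi)          # random pivot
            a[p], a[hi - 1] = a[hi - 1], a[p]
            pivot, store = a[hi - 1], lo
            for j in range(lo, hi - 1):           # Lomuto partition of a[lo:hi]
                if a[j] < pivot:
                    a[j], a[store] = a[store], a[j]
                    store += 1
            a[store], a[hi - 1] = a[hi - 1], a[store]
            bisect.insort(fences, store)          # pivot's final rank is now known
            bisect.insort(fences, store + 1)
            if k < store:
                hi = store
            elif k > store:
                lo = store + 1
            else:
                return a[store]
        return a[lo]


if __name__ == "__main__":
    ms = OnlineMultiselect([9, 3, 7, 1, 8, 2, 6])
    print(ms.select(0), ms.select(3), ms.select(6))  # 1 6 9
```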

    Combinatorial algorithms in the approximate computing paradigm

    Data-intensive computing has led to the emergence of data centers with massive processor counts and main memory sizes. However, the demand for shared resources has surpassed their capacity, resulting in increased costs and limited access. Commodity hardware, although accessible, has limited computational resources. This poses a challenge when performing computationally intensive tasks with large amounts of data on systems with restricted memory. To address these issues, Approximate Computing offers a solution by allowing selective solution approximation, leading to improved resource efficiency. This dissertation focuses on the trade-off between output quality and computational resource usage in sorting and searching problems. It introduces the concept of Approximate Sorting, which aims to reduce resource usage while maintaining an acceptable level of sorting quality. Quality metrics are defined to assess the "sortedness" of approximately sorted arrays. The dissertation also proposes a general framework for incorporating approximate computing into sorting algorithms, presenting an algorithm for approximate sorting with guaranteed upper bounds. These algorithms operate under a constraint on the number of comparisons performed. The dissertation then explores searching algorithms, specifically binary search on approximately sorted arrays, addressing both the case where metrics for the input array are given and the case where they are not. Efficient and optimal algorithms are developed for multidimensional range searches and catalog searches on approximately sorted input. The dissertation further proposes algorithms that analyze patterns in the input order to optimize sorting; these algorithms identify underlying patterns and sequences, enabling faster sorting approaches. Additionally, the dissertation discusses the growing popularity of approximate computing in High-Performance Computing (HPC) and presents a novel approach to comparison-based sorting that incorporates parallel approximate computing. It also proposes algorithms for various queries on approximately sorted arrays, such as determining the rank or position of an element; the time complexity of these querying algorithms is proportional to the input metric. The dissertation concludes by emphasizing the wide range of applications for sorting and searching algorithms. In the context of packet classification in router buffers, approximate sorting offers advantages by reducing the time-consuming sorting step: by capping the number of comparisons, it becomes a practical way to handle the large volume of incoming packets efficiently. This dissertation contributes to the field of approximate computing by addressing resource limitations and cost issues in data-intensive computing. It provides insights into approximate sorting and searching algorithms and their application in various domains, offering a valuable contribution to the advancement of efficient, scalable, and accessible data processing.
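
    One concrete reading of the comparison-budget trade-off described above is sketched below (an illustrative assumption about how such a budget might be enforced, not the dissertation's algorithms or metrics): a bottom-up merge sort that stops comparing once its budget is exhausted, with the inversion count standing in for a "sortedness" measure. The larger the budget, the closer the output is to fully sorted.

```python
# Hedged illustration of a comparison budget (not the dissertation's
# algorithms or metrics): bottom-up merge sort that stops comparing once the
# budget is spent, with the inversion count as a simple "sortedness" measure.

def budgeted_mergesort(xs, budget):
    """Return (result, comparisons_used); result is partially sorted."""
    runs = [[x] for x in xs]
    if not runs:
        return [], 0
    used = 0
    while len(runs) > 1:
        merged = []
        for i in range(0, len(runs) - 1, 2):
            a, b, out = runs[i], runs[i + 1], []
            ia = ib = 0
            while ia < len(a) and ib < len(b) and used < budget:
                used += 1  # one comparison
                if a[ia] <= b[ib]:
                    out.append(a[ia]); ia += 1
                else:
                    out.append(b[ib]); ib += 1
            out.extend(a[ia:]); out.extend(b[ib:])  # budget gone: concatenate rest
            merged.append(out)
        if len(runs) % 2:
            merged.append(runs[-1])
        runs = merged
    return runs[0], used


def inversions(xs):
    """Out-of-order pairs: 0 when sorted, n*(n-1)/2 when reversed. O(n^2)."""
    return sum(1 for i in range(len(xs))
               for j in range(i + 1, len(xs)) if xs[i] > xs[j])


if __name__ == "__main__":
    data = [5, 1, 4, 2, 8, 7, 3, 6]
    for budget in (0, 8, 100):
        out, used = budgeted_mergesort(data, budget)
        print(f"budget={budget} used={used} inversions={inversions(out)}")
```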

    15th Scandinavian Symposium and Workshops on Algorithm Theory: SWAT 2016, June 22-24, 2016, Reykjavik, Iceland
