6 research outputs found

    A Simple Polynomial Time Algorithm for Max Cut on Laminar Geometric Intersection Graphs

    In a geometric intersection graph, each of the n input geometric objects corresponds to a vertex, and there is an edge between two vertices if and only if the corresponding objects intersect. In this work, we present a somewhat surprising result: a polynomial time algorithm for max cut on laminar geometric intersection graphs. In a laminar geometric intersection graph, if two objects intersect, then one of them lies completely inside the other. To the best of our knowledge, this is the first non-trivial class of geometric intersection graphs for which max cut can be solved exactly in polynomial time. Our algorithm uses a simple greedy strategy; however, proving its correctness requires non-trivial ideas. Next, we design almost-linear time algorithms (in terms of n) for laminar axis-aligned boxes by combining the properties of laminar objects with vertical ray shooting data structures. Note that the edge set of the graph is not given explicitly as input; only the n geometric objects are given.
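    As a rough illustration of the setting (and not the paper's algorithm), the sketch below builds the intersection graph of a laminar family of intervals, where intersection coincides with nesting, and runs a generic greedy cut that places each object, from largest to smallest, on the side that cuts more of its already-placed neighbours. The names laminar_interval_graph and greedy_cut are invented for this example.

    # Illustrative sketch, not the algorithm from the paper.
    def laminar_interval_graph(intervals):
        """intervals: list of (lo, hi), assumed laminar (any two are nested or disjoint)."""
        n = len(intervals)
        adj = [set() for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                (a, b), (c, d) = intervals[i], intervals[j]
                if (a <= c and d <= b) or (c <= a and b <= d):  # laminar: intersect <=> nested
                    adj[i].add(j)
                    adj[j].add(i)
        return adj

    def greedy_cut(intervals, adj):
        # Process objects from largest (outermost) to smallest and place each one
        # on the side that cuts more edges to its already-placed neighbours.
        order = sorted(range(len(intervals)),
                       key=lambda i: intervals[i][1] - intervals[i][0],
                       reverse=True)
        side = {}
        for v in order:
            cut_if_zero = sum(1 for u in adj[v] if side.get(u) == 1)
            cut_if_one = sum(1 for u in adj[v] if side.get(u) == 0)
            side[v] = 0 if cut_if_zero >= cut_if_one else 1
        return side

    intervals = [(0, 10), (1, 4), (2, 3), (5, 9), (6, 7)]
    print(greedy_cut(intervals, laminar_interval_graph(intervals)))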

    The Cardinal Complexity of Comparison-based Online Algorithms

    We consider ordinal online problems, i.e., tasks that depend only on the pairwise comparisons between elements of the input, e.g., the secretary problem and the game of googol. The natural approach to such tasks is to use ordinal online algorithms that at each step consider only the relative ranking among the elements that have arrived, without looking at the numerical values of the input. We formally study the question of how much cardinal algorithms (which can use the numerical values of the input) can improve upon ordinal algorithms. We give a universal construction of the input distribution for any ordinal online problem, such that the advantage of cardinal algorithms over ordinal algorithms is at most $1+\varepsilon$ for arbitrarily small $\varepsilon > 0$. However, the value range of the input elements in this construction is huge: $O\left(\frac{n^3 \cdot n!}{\varepsilon}\right) \uparrow\uparrow (n-1)$ for an input sequence of length $n$. Surprisingly, we also identify a natural family of hardcore problems that achieve a matching advantage of $1 + \Omega\left(\frac{1}{\log^{(c)} N}\right)$, where $\log^{(c)} N = \log\log\ldots\log N$ with $c$ iterated logarithms and $c \le n-2$ is an arbitrary constant. We also consider a simpler variant of the hardcore problem, which we call maximum guessing and which is closely related to the game of googol. For this easier task we provide a much more efficient construction with cardinal complexity $O\left(\frac{1}{\varepsilon}\right)^{n-1}$. Finally, we study the dependency of the hardcore problem on $n$: we provide an efficient construction of size $O(n)$ if cardinal algorithms are allowed a constant factor advantage over ordinal algorithms.
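    For context, a minimal example of an ordinal online algorithm is the classical secretary rule sketched below: it observes roughly the first n/e elements and then accepts the first element that beats everything seen so far, relying only on comparisons between arrived elements and never on their numerical values.

    import math, random

    def ordinal_secretary(stream):
        """Classical 1/e rule: uses only comparisons between already-seen elements."""
        n = len(stream)
        k = int(n / math.e)                     # length of the observation phase
        best_seen = max(stream[:k]) if k else None
        for x in stream[k:]:
            if best_seen is None or x > best_seen:
                return x                        # first element beating all previous ones
        return stream[-1]                       # otherwise forced to take the last element

    values = random.sample(range(1000), 20)
    print(ordinal_secretary(values), max(values))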

    Is There an Oblivious RAM Lower Bound?

    An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), is a (probabilistic) RAM that hides its access pattern, i.e., for every input the observed sequence of accessed locations is similarly distributed. Great progress has been made in recent years in minimizing the overhead of ORAM constructions, with the goal of obtaining the smallest overhead possible. We revisit the lower bound on the overhead required to obliviously simulate programs, due to Goldreich and Ostrovsky. While the lower bound is fairly general, and in particular covers the offline case in which the simulator is given the reads and writes ahead of time, it does assume that the simulator behaves in a “balls and bins” fashion: the simulator must act by shuffling data items around and is not allowed to use sophisticated encodings of the data. We prove that for the offline case, showing a lower bound without the above restriction is related to the size of circuits for sorting. Our proof is constructive and uses a bit-slicing approach which manipulates the bit representations of data in the simulation. This implies that without first obtaining (currently unknown) superlinear lower bounds on the size of such circuits, we cannot hope to get lower bounds on offline (unrestricted) ORAMs.
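    To make the notion of an access pattern concrete, the toy sketch below (assumed for illustration, not a construction discussed in the paper) implements the trivial ORAM in which every logical access scans all of memory, so the physical trace is identical no matter which address is requested; the cost is a linear overhead per access, which efficient ORAM constructions aim to beat.

    class TrivialORAM:
        """Toy ORAM: every logical access touches every physical cell, hiding the address."""
        def __init__(self, n):
            self.mem = [0] * n
            self.trace = []                     # physical locations touched (the observable pattern)

        def read(self, addr):
            result = None
            for i in range(len(self.mem)):      # scan all of memory on every access
                self.trace.append(i)
                if i == addr:
                    result = self.mem[i]
            return result

        def write(self, addr, value):
            for i in range(len(self.mem)):
                self.trace.append(i)
                if i == addr:
                    self.mem[i] = value

    oram = TrivialORAM(8)
    oram.write(3, 42)
    print(oram.read(3), oram.trace[:8] == oram.trace[8:])  # value 42, identical traces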

    Algorithms for Order-Preserving Matching

    String matching is a widely studied problem in Computer Science, and there have been many recent developments in this field. One fascinating problem considered lately is the order-preserving matching (OPM) problem. The task is to find all the substrings in the text which have the same length and relative order as the pattern, where the relative order is the numerical order of the numbers in a string. The problem finds its applications in areas involving time series or series of numbers. More specifically, it is useful for those who are interested in the relative order of the pattern and not in the pattern itself. For example, it can be used by analysts in a stock market to study movements of prices. In addition to the OPM problem, we also studied its approximate variant. In approximate order-preserving matching, we search for those substrings in the text whose relative order matches that of the pattern with at most k mismatches. With respect to applications of order-preserving matching, approximate search is more meaningful than exact search. We developed various advanced solutions for the problem and its variant, with special emphasis on the practical efficiency of the solutions. In particular, we introduced a simple solution for the OPM problem using filtration, and showed experimentally that our method was effective and faster than the previous solutions for the problem. In addition, we combined the Single Instruction Multiple Data (SIMD) instruction set architecture with filtration to develop solutions that were faster than our previous solution. Moreover, we proposed another efficient solution without filtration using the SIMD architecture. We also presented an offline solution based on the FM-index scheme. Furthermore, we proposed practical solutions for the approximate order-preserving matching problem, one of which was the first sublinear solution on average for the problem.
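    A minimal, naive order-preserving matching check is sketched below for illustration (an assumed baseline, not the thesis' filtration or SIMD-based algorithms): a window of the text matches the pattern if the relative order of its values equals that of the pattern.

    def rank_signature(seq):
        """Map each value to its rank within the sequence (ties broken by position)."""
        order = sorted(range(len(seq)), key=lambda i: (seq[i], i))
        ranks = [0] * len(seq)
        for r, i in enumerate(order):
            ranks[i] = r
        return tuple(ranks)

    def opm_naive(text, pattern):
        """Report all positions where a text window has the same relative order as the pattern."""
        m, sig = len(pattern), rank_signature(pattern)
        return [i for i in range(len(text) - m + 1)
                if rank_signature(text[i:i + m]) == sig]

    text = [1, 30, 10, 40, 20, 5, 25, 15]
    pattern = [3, 1, 4, 2]              # shape: 3rd smallest, smallest, largest, 2nd smallest
    print(opm_naive(text, pattern))     # matches at positions 1 and 4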

    Foundations of Differentially Oblivious Algorithms

    It is well-known that a program's memory access pattern can leak information about its input. To thwart such leakage, most existing works adopt the solution of oblivious RAM (ORAM) simulation. Such a notion has stimulated much debate. Some have argued that the notion of ORAM is too strong, and suffers from a logarithmic lower bound on simulation overhead; despite encouraging progress in designing efficient ORAM algorithms, it would nonetheless be desirable to avoid the oblivious simulation overhead. Others have argued that obliviousness, without protection against length leakage, is too weak, and have demonstrated examples where entire databases can be reconstructed merely from length leakage. Inspired by the elegant notion of differential privacy, we initiate the study of a new notion of access pattern privacy, which we call ``$(\epsilon, \delta)$-differential obliviousness''. We separate the notion of $(\epsilon, \delta)$-differential obliviousness from classical obliviousness by considering several fundamental algorithmic abstractions, including sorting small-length keys, merging two sorted lists, and range query data structures (akin to binary search trees). We show that by adopting differential obliviousness with reasonable choices of $\epsilon$ and $\delta$, not only can one circumvent several impossibilities pertaining to the classical obliviousness notion, but also, in several cases, obtain meaningful privacy with little overhead relative to the non-private baselines (i.e., having privacy ``almost for free''). On the other hand, we show that for very demanding choices of $\epsilon$ and $\delta$, the same lower bounds for oblivious algorithms are preserved for $(\epsilon, \delta)$-differential obliviousness.
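    As a toy illustration of the flavour of the definition (assumed for exposition, not a construction from the paper), the sketch below pads the observable number of memory touches of a query with noise from a shifted, truncated discrete Laplace (two-sided geometric) distribution, so that the observable count is differentially private with respect to the true answer length; the truncation to non-negative padding is what contributes the $\delta$ failure probability.

    import math, random

    def geometric0(a):
        """Geometric on {0, 1, 2, ...} with P(k) = (1 - a) * a**k, via inverse CDF."""
        return int(math.log(1.0 - random.random()) / math.log(a))

    def discrete_laplace(eps):
        """Difference of two i.i.d. geometrics: discrete Laplace with parameter exp(-eps)."""
        a = math.exp(-eps)
        return geometric0(a) - geometric0(a)

    def padded_accesses(true_count, eps, shift=20):
        """Observable access count: true count plus a noisy, non-negative amount of padding."""
        pad = shift + discrete_laplace(eps)
        return true_count + max(pad, 0)     # truncation at 0 accounts for the delta term

    random.seed(0)
    print([padded_accesses(100, eps=0.5) for _ in range(5)])
    print([padded_accesses(101, eps=0.5) for _ in range(5)])   # neighbouring input: similar distribution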

    Can We Overcome the $n \log n$ Barrier for Oblivious Sorting?

    It is well-known that non-comparison-based techniques can allow us to sort $n$ elements in $o(n \log n)$ time on a Random-Access Machine (RAM). On the other hand, it is a long-standing open question whether (non-comparison-based) circuits can sort $n$ elements from the domain $[1..2^k]$ with $o(k n \log n)$ boolean gates. We consider weakened forms of this question: first, we consider a restricted class of sorting where the number of distinct keys is much smaller than the input length; and second, we explore Oblivious RAMs and probabilistic circuit families, i.e., computational models that are somewhat more powerful than circuits but much weaker than RAMs. We show that Oblivious RAMs and probabilistic circuit families can sort $o(\log n)$-bit keys in $o(n \log n)$ time or with $o(k n \log n)$ circuit complexity, where $n$ is the input length. Our algorithms work in the balls-and-bins model, i.e., not only can they sort an array of numerical keys, but if each key additionally carries an opaque ball, our algorithms can also move the balls into the correct order. We further show that in such a balls-and-bins model, it is impossible to sort $\Omega(\log n)$-bit keys in $o(n \log n)$ time, and thus the $o(\log n)$-bit-key assumption is necessary for overcoming the $n \log n$ barrier. Finally, we optimize the IO efficiency of our oblivious algorithms for RAMs: we show that even the $1$-bit special case of our algorithm can solve open questions regarding whether there exist oblivious algorithms for tight compaction and selection in linear IO.
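    For background, the sketch below is classic counting sort (not the paper's oblivious algorithm): with keys from a small domain it sorts in linear time without comparisons, and it naturally works in the balls-and-bins sense by moving an opaque payload together with each key. Note, however, that its memory access pattern depends on the keys, which is exactly what an oblivious algorithm must avoid.

    def counting_sort(items, key_bits):
        """Stable sort of (key, ball) pairs with keys in [0, 2**key_bits)."""
        buckets = [[] for _ in range(1 << key_bits)]
        for key, ball in items:
            buckets[key].append((key, ball))    # distribute each ball into its key's bin
        # concatenating the bins yields the stably sorted order
        return [pair for bucket in buckets for pair in bucket]

    items = [(3, 'a'), (1, 'b'), (3, 'c'), (0, 'd'), (1, 'e')]
    print(counting_sort(items, key_bits=2))
    # [(0, 'd'), (1, 'b'), (1, 'e'), (3, 'a'), (3, 'c')]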