Zig-zag Sort: A Simple Deterministic Data-Oblivious Sorting Algorithm Running in O(n log n) Time
We describe and analyze Zig-zag Sort, a deterministic data-oblivious sorting
algorithm running in O(n log n) time that is arguably simpler than previously
known algorithms with similar properties, which are based on the AKS sorting
network. Because it is data-oblivious and deterministic, Zig-zag Sort can be
implemented as a simple O(n log n)-size sorting network, thereby providing a
solution to an open problem posed by Incerpi and Sedgewick in 1985. In
addition, Zig-zag Sort is a variant of Shellsort, and is, in fact, the first
deterministic Shellsort variant running in O(n log n) time. The existence of
such an algorithm was posed as an open problem by Plaxton et al. in 1992 and
also by Sedgewick in 1996. More relevant for today, however, is the fact that
the existence of a simple data-oblivious deterministic sorting algorithm
running in O(n log n) time simplifies the inner-loop computation in several
proposed oblivious-RAM simulation methods (which utilize AKS sorting networks),
and this, in turn, implies simplified mechanisms for privacy-preserving data
outsourcing in several cloud computing applications. We provide both
constructive and non-constructive implementations of Zig-zag Sort, based on the
existence of a circuit known as an epsilon-halver, such that the constant
factors in our constructive implementations are orders of magnitude smaller
than those for constructive variants of the AKS sorting network, which are also
based on the use of epsilon-halvers.
Comment: Appearing in ACM Symp. on Theory of Computing (STOC) 201
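Data-obliviousness, the property at the heart of this result, is what lets an algorithm be unrolled into a fixed sorting network: the sequence of compare-exchange operations depends only on the input length, never on the input values. The sketch below illustrates just that property with odd-even transposition sort, a simple oblivious network using O(n^2) comparisons; it is a minimal illustration, not Zig-zag Sort itself, whose O(n log n) schedule relies on epsilon-halvers, and the function names are ours.

```python
def compare_exchange(a, i, j):
    # The data-oblivious primitive: positions i and j are always touched;
    # only whether the swap happens depends on the values.
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_transposition_sort(a):
    # The schedule of (i, j) pairs is a function of len(a) alone, never of
    # the contents of a; that is what lets the algorithm unroll into a
    # sorting network. n alternating odd/even rounds suffice to sort.
    n = len(a)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            compare_exchange(a, i, i + 1)
    return a

if __name__ == "__main__":
    print(odd_even_transposition_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```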
Sieve algorithms for the shortest vector problem are practical
The most famous lattice problem is the Shortest Vector Problem (SVP), which has many applications in cryptology. The best approximation algorithms known for SVP in high dimension rely on a subroutine for exact SVP in low dimension. In this paper, we assess the practicality of the best (theoretical) algorithm known for exact SVP in low dimension: the sieve algorithm proposed by Ajtai, Kumar and Sivakumar (AKS) in 2001. AKS is a randomized algorithm of time and space complexity 2^(O(n)), which is theoretically much lower than the super-exponential complexity of all alternative SVP algorithms. Surprisingly, no implementation and no practical analysis of AKS has ever been reported. It was in fact widely believed that AKS was impractical: for instance, Schnorr claimed in 2003 that the constant hidden in the 2^(O(n)) complexity was at least 30. In this paper, we show that AKS can actually be made practical: we present a heuristic variant of AKS whose running time is (4/3+ϵ)^n polynomial-time operations, and whose space requirement is (4/3+ϵ)^(n/2) polynomially many bits. Our implementation can experimentally find shortest lattice vectors up to dimension 50, but is slower than classical alternative SVP algorithms in these dimensions.
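As a rough intuition for how sieving trades space for time, the toy sketch below samples many random lattice points and repeatedly replaces a vector by its difference with another whenever that difference is shorter, in the spirit of heuristic sieve variants. It is a didactic sketch only, not the AKS sieve or the (4/3+ϵ)^n variant of the paper; the example basis and all parameter values are hypothetical.

```python
import random

def norm2(v):
    return sum(x * x for x in v)

def toy_sieve(basis, n_samples=500, coeff_bound=4, seed=0):
    # Sample random integer combinations of the basis vectors.
    rng = random.Random(seed)
    dim = len(basis[0])
    vecs = []
    for _ in range(n_samples):
        c = [rng.randint(-coeff_bound, coeff_bound) for _ in basis]
        v = tuple(sum(ci * b[k] for ci, b in zip(c, basis)) for k in range(dim))
        if any(v):
            vecs.append(v)
    # Sieve step: keep shortening vectors by pairwise differences
    # until no vector can be shortened any further.
    changed = True
    while changed:
        changed = False
        for i in range(len(vecs)):
            for j in range(len(vecs)):
                if i == j:
                    continue
                w = tuple(a - b for a, b in zip(vecs[i], vecs[j]))
                if any(w) and norm2(w) < norm2(vecs[i]):
                    vecs[i] = w
                    changed = True
    return min(vecs, key=norm2)

if __name__ == "__main__":
    # Hypothetical 3-dimensional example basis.
    basis = [(7, 2, 0), (1, 8, 3), (2, 1, 9)]
    print(toy_sieve(basis))
```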
An Empirical Study towards Refining the AKS Primality Testing Algorithm
The AKS (Agrawal-Kayal-Saxena) algorithm is the first deterministic polynomial-time primality-proving algorithm; its original asymptotic running time is Õ(log^12 n), where n is the number being tested and Õ suppresses polylogarithmic factors. Despite this theoretical breakthrough, the algorithm sees no practical use in conventional cryptologic applications, as existing probabilistic primality tests like ECPP, in conjunction with conditional use of sub-exponential-time deterministic tests, are found to have better practical running times. Later, the authors of the AKS test improved the algorithm so that it runs in Õ(log^10.5 n) time. A variant of the AKS test demonstrated by Carl Pomerance and H. W. Lenstra runs in almost half the number of operations required by AKS; this algorithm likewise suffers from impracticality. Attempts have been made to implement the AKS algorithm efficiently, but beyond slight performance improvements targeting specific machine architectures, these efforts have mainly highlighted the algorithm's limitations. In this paper we present our analysis of and observations on the AKS algorithm, based on empirical results and statistics for certain parameters that control its asymptotic running time. From this analysis we refine AKS so as to reduce its running time.
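For reference, here is a direct, unoptimized Python rendering of the textbook AKS test: a perfect-power check, a search for a suitable modulus r, and then the polynomial congruences (X + a)^n ≡ X^n + a (mod X^r - 1, n). It follows the published algorithm, not the refinement proposed in this paper, and is meant only to make the parameters r and a, which control the running time, concrete.

```python
import math
from math import gcd

def is_perfect_power(n):
    # True if n = a^b for integers a > 1, b > 1 (binary search on a).
    for b in range(2, n.bit_length() + 1):
        lo, hi = 2, 1 << (n.bit_length() // b + 1)
        while lo <= hi:
            mid = (lo + hi) // 2
            p = mid ** b
            if p == n:
                return True
            if p < n:
                lo = mid + 1
            else:
                hi = mid - 1
    return False

def poly_mul_mod(p, q, r, n):
    # Multiply polynomials modulo (X^r - 1, n), as coefficient lists of length r.
    res = [0] * r
    for i, pi in enumerate(p):
        if pi:
            for j, qj in enumerate(q):
                if qj:
                    k = (i + j) % r
                    res[k] = (res[k] + pi * qj) % n
    return res

def poly_pow_mod(p, e, r, n):
    # Square-and-multiply exponentiation modulo (X^r - 1, n).
    res = [0] * r
    res[0] = 1
    while e:
        if e & 1:
            res = poly_mul_mod(res, p, r, n)
        p = poly_mul_mod(p, p, r, n)
        e >>= 1
    return res

def aks_is_prime(n):
    if n < 2:
        return False
    if is_perfect_power(n):
        return False
    log2n = math.log2(n)
    max_k = int(log2n ** 2)
    # Find the smallest r such that the order of n modulo r exceeds log^2 n.
    r = 2
    while True:
        g = gcd(r, n)
        if 1 < g < n:
            return False              # found a nontrivial factor of n
        if g == 1:
            k, v = 1, n % r
            while v != 1 and k <= max_k:
                v = (v * n) % r
                k += 1
            if v != 1:                # order of n mod r exceeds max_k
                break
        r += 1
    if n <= r:
        return True
    phi_r = sum(1 for a in range(1, r) if gcd(a, r) == 1)
    limit = int(math.isqrt(phi_r) * log2n)
    # Check (X + a)^n == X^(n mod r) + a modulo (X^r - 1, n) for each a.
    rhs_exp = n % r
    for a in range(1, limit + 1):
        lhs = poly_pow_mod([a % n, 1] + [0] * (r - 2), n, r, n)
        rhs = [0] * r
        rhs[0] = a % n
        rhs[rhs_exp] = (rhs[rhs_exp] + 1) % n
        if lhs != rhs:
            return False
    return True
```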
Robust Exponential Worst Cases for Divide-et-Impera Algorithms for Parity Games
The McNaughton-Zielonka divide et impera algorithm is the simplest and most
flexible approach available in the literature for determining the winner in a
parity game. Despite its exponential theoretical worst-case complexity and its
negative reputation as a poorly effective algorithm in practice, it has been
shown to rank among the best techniques for the solution of such games. It
also proved resistant to lower-bound attacks, even more so than the
strategy-improvement approaches, and only recently has Friedmann provided a
family of games on which the algorithm requires exponential time. An easy
analysis of this family shows that a simple memoization technique can help the
algorithm solve the family in polynomial time. The same result can also be
achieved by exploiting an approach based on the dominion-decomposition
techniques proposed in the literature. These observations raise the question
of whether a suitable combination of dynamic programming and
game-decomposition techniques can improve on the exponential worst case of the
original algorithm. In this paper we answer this question negatively, by
providing a robustly exponential worst case: we show that no intertwining of
the above-mentioned techniques can help mitigate the exponential nature of the
divide et impera approaches.
Comment: In Proceedings GandALF 2017, arXiv:1709.0176
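For concreteness, the sketch below gives the McNaughton-Zielonka recursion over a hypothetical adjacency-dict encoding of a total parity game (every node keeps a successor after each attractor removal); the memoization mentioned above would amount to caching zielonka calls keyed by, say, frozenset(nodes). All names and the toy example are ours, not the paper's notation.

```python
def attractor(nodes, succ, owner, player, target):
    # Least superset of `target` from which `player` can force play into it:
    # a player-owned node joins if some surviving successor is attracted,
    # an opponent-owned node joins if all its surviving successors are.
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in nodes - attr:
            succs = [w for w in succ[v] if w in nodes]
            if (owner[v] == player and any(w in attr for w in succs)) or \
               (owner[v] != player and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(nodes, succ, owner, priority):
    # Returns (win0, win1), the winning regions of players 0 and 1.
    if not nodes:
        return set(), set()
    d = max(priority[v] for v in nodes)
    p = d % 2                      # the player favoured by the top priority
    o = 1 - p
    top = {v for v in nodes if priority[v] == d}
    a = attractor(nodes, succ, owner, p, top)
    sub = zielonka(nodes - a, succ, owner, priority)
    if not sub[o]:                 # opponent wins nothing in the subgame
        win = [set(), set()]
        win[p] = set(nodes)
        return tuple(win)
    b = attractor(nodes, succ, owner, o, sub[o])
    sub2 = zielonka(nodes - b, succ, owner, priority)
    win = [set(), set()]
    win[o] = sub2[o] | b
    win[p] = set(sub2[p])
    return tuple(win)

if __name__ == "__main__":
    # Toy 3-node cycle whose highest priority is even, so player 0 wins all.
    succ = {0: [1], 1: [2], 2: [0]}
    owner = {0: 0, 1: 1, 2: 0}
    priority = {0: 2, 1: 1, 2: 0}
    print(zielonka({0, 1, 2}, succ, owner, priority))  # ({0, 1, 2}, set())
```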