
    Why Is Dual-Pivot Quicksort Fast?

    I discuss the new dual-pivot Quicksort that is nowadays used to sort arrays of primitive types in Java. I sketch theoretical analyses of this algorithm that offer a possible, and in my opinion plausible, explanation of why (a) dual-pivot Quicksort is faster than the previously used (classic) Quicksort and (b) this improvement was not found much earlier.
    Comment: extended abstract for Theorietage 2015 (https://www.uni-trier.de/index.php?id=55089) (v2 fixes a small bug in the pseudocode).
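
    To make the algorithm under discussion concrete, here is a minimal Java sketch of dual-pivot Quicksort with a three-way partition: each call splits the array into the parts "< p", "between p and q", and "> q", and recurses on all three. This is our own illustration under those assumptions, not the tuned implementation shipped in the JDK.

        static void dualPivotQuicksort(int[] a, int lo, int hi) {
            if (lo >= hi) return;
            if (a[lo] > a[hi]) swap(a, lo, hi);
            int p = a[lo], q = a[hi];                    // two pivots, p <= q
            int lt = lo + 1, i = lo + 1, gt = hi - 1;
            while (i <= gt) {
                if (a[i] < p)      swap(a, i++, lt++);   // grow the "< p" part
                else if (a[i] > q) swap(a, i, gt--);     // grow the "> q" part
                else               i++;                  // p <= a[i] <= q
            }
            swap(a, lo, --lt);                           // pivots into final place
            swap(a, hi, ++gt);
            dualPivotQuicksort(a, lo, lt - 1);           // three recursive calls
            dualPivotQuicksort(a, lt + 1, gt - 1);
            dualPivotQuicksort(a, gt + 1, hi);
        }

        static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }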

    Comparison of Data Partitioning Schema of Parallel Pairwise Alignment on Shared Memory System

    The pairwise alignment (PA) algorithm is widely used in bioinformatics to analyze biological sequences. With advances in sequencer technology, massive amounts of DNA fragments are sequenced much more quickly and cheaply, and the alignment algorithm needs to be parallelized to align them in a shorter time. Many previous studies have parallelized the PA algorithm using various data partitioning schemas, but it is unclear which one is the best. The data partitioning schema is important for parallel PA performance because this algorithm uses a dynamic programming technique that requires intense inter-thread communication. In this paper, we compare four partitioning schemas to find the best-performing one on a shared memory system: blocked columnwise, rowwise, antidiagonal, and blocked columnwise with manual scheduling and loop unrolling. The last schema gave the best performance, with 89% efficiency on 4 threads. This result provides fine-grained parallelism that can be exploited further to develop parallel multiple sequence alignment (MSA).
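
    As an illustration of why the partitioning schema matters, here is a hedged Java sketch of the antidiagonal (wavefront) schema for a Needleman-Wunsch-style PA score matrix: all cells on one antidiagonal depend only on earlier antidiagonals, so they can be filled concurrently. This is our own minimal reconstruction (using a parallel stream rather than the paper's threading setup), and the scoring parameters are assumptions.

        import java.util.stream.IntStream;

        static int[][] align(String s, String t, int match, int mismatch, int gap) {
            int n = s.length(), m = t.length();
            int[][] H = new int[n + 1][m + 1];
            for (int i = 0; i <= n; i++) H[i][0] = i * gap;  // boundary row/column:
            for (int j = 0; j <= m; j++) H[0][j] = j * gap;  // gaps only
            for (int d = 2; d <= n + m; d++) {               // antidiagonal i + j = d
                final int dd = d;
                int lo = Math.max(1, d - m), hi = Math.min(n, d - 1);
                IntStream.rangeClosed(lo, hi).parallel().forEach(i -> {
                    int j = dd - i;                          // cells are independent
                    int diag = H[i - 1][j - 1]
                             + (s.charAt(i - 1) == t.charAt(j - 1) ? match : mismatch);
                    H[i][j] = Math.max(diag,
                              Math.max(H[i - 1][j] + gap, H[i][j - 1] + gap));
                });
            }
            return H;                                        // H[n][m]: alignment score
        }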

    Lucky Cars and the Quicksort Algorithm

    Quicksort is a classical divide-and-conquer sorting algorithm. It is a comparison sort that makes an average of $2(n+1)H_n - 4n$ comparisons on an array of size $n$ ordered uniformly at random, where $H_n = \sum_{i=1}^n \frac{1}{i}$ is the $n$th harmonic number. Therefore, it makes $n!\left[2(n+1)H_n - 4n\right]$ comparisons to sort all possible orderings of the array. In this article, we prove that this count also enumerates the parking preference lists of $n$ cars parking on a one-way street with $n$ parking spots resulting in exactly $n-1$ lucky cars (i.e., cars that park in their preferred spot). For $n \geq 2$, both counts satisfy the second-order recurrence relation $f_n = 2n f_{n-1} - n(n-1) f_{n-2} + 2(n-1)!$ with $f_0 = f_1 = 0$.
    Comment: 8 pages and 2 figures; to appear in The American Mathematical Monthly.
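
    Both counts can be checked numerically. The Java snippet below (our addition, exact integer arithmetic) computes $f_n$ from the recurrence and compares it with the closed form $n!\left[2(n+1)H_n - 4n\right]$, rewritten as $2(n+1)\sum_{i=1}^{n} n!/i - 4n \cdot n!$ so that every term stays an integer.

        import java.math.BigInteger;

        public class LuckyCars {
            public static void main(String[] args) {
                BigInteger f2 = BigInteger.ZERO, f1 = BigInteger.ZERO; // f_0 = f_1 = 0
                BigInteger fact = BigInteger.ONE;                      // will hold (n-1)!
                for (int n = 2; n <= 10; n++) {
                    fact = fact.multiply(BigInteger.valueOf(n - 1));   // (n-1)!
                    // recurrence: f_n = 2n f_{n-1} - n(n-1) f_{n-2} + 2(n-1)!
                    BigInteger fn = BigInteger.valueOf(2L * n).multiply(f1)
                            .subtract(BigInteger.valueOf((long) n * (n - 1)).multiply(f2))
                            .add(fact.multiply(BigInteger.valueOf(2)));
                    // closed form: 2(n+1) * sum_{i=1}^{n} (n!/i) - 4n * n!
                    BigInteger nFact = fact.multiply(BigInteger.valueOf(n)); // n!
                    BigInteger sum = BigInteger.ZERO;
                    for (int i = 1; i <= n; i++)
                        sum = sum.add(nFact.divide(BigInteger.valueOf(i)));
                    BigInteger closed = BigInteger.valueOf(2L * (n + 1)).multiply(sum)
                            .subtract(nFact.multiply(BigInteger.valueOf(4L * n)));
                    System.out.println("n=" + n + "  recurrence=" + fn
                            + "  closed form=" + closed);
                    f2 = f1;
                    f1 = fn;
                }
            }
        }

    For example, at $n = 2$ both expressions give 2: the recurrence yields $2 \cdot 1! + 0 - 0 = 2$, and the closed form gives $2!\,[2 \cdot 3 \cdot H_2 - 8] = 2$.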

    Mixing Bandt-Pompe and Lempel-Ziv approaches: another way to analyze the complexity of continuous-states sequences

    In this paper, we propose to mix the approach underlying Bandt-Pompe permutation entropy with Lempel-Ziv complexity, to design what we call Lempel-Ziv permutation complexity. The principle consists of two steps: (i) transformation of a continuous-state series, either intrinsically multivariate or arising from embedding, into a sequence of permutation vectors, whose components are the positions of the components of the initial vector when rearranged; (ii) computation of the Lempel-Ziv complexity of this series of 'symbols', drawn from a discrete finite-size alphabet. On the one hand, the permutation entropy of Bandt-Pompe studies the entropy of such a sequence, i.e., the entropy of patterns in a sequence (e.g., local increases or decreases). On the other hand, the Lempel-Ziv complexity of a discrete-state sequence studies the temporal organization of the symbols (i.e., the rate of compressibility of the sequence). The Lempel-Ziv permutation complexity thus aims to take advantage of both methods. The potential of this combined approach, a permutation procedure followed by a complexity analysis, is evaluated on simulated data and on real data. In both cases, we compare the individual approaches and the combined approach.
    Comment: 30 pages, 4 figures.
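
    A hedged Java sketch of the two steps, with simplified choices of our own: ordinal patterns of an assumed embedding dimension m for step (i), and a basic LZ76-style greedy phrase count for step (ii). The paper's exact encoding, normalization, and variants are not reproduced here.

        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.Random;

        public class LZPermutation {
            // Step (i): replace each length-m window by the permutation that
            // sorts it (a Bandt-Pompe ordinal pattern), encoded as one integer.
            static int[] ordinalSymbols(double[] x, int m) {
                int[] s = new int[x.length - m + 1];
                for (int k = 0; k < s.length; k++) {
                    final int off = k;
                    Integer[] idx = new Integer[m];
                    for (int i = 0; i < m; i++) idx[i] = i;
                    Arrays.sort(idx, Comparator.comparingDouble(i -> x[off + i]));
                    int code = 0;                      // unique code per permutation
                    for (int i = 0; i < m; i++) code = code * m + idx[i];
                    s[k] = code;
                }
                return s;
            }

            // Step (ii): LZ76-style complexity = number of phrases in a greedy
            // parsing; each phrase is a longest past match plus one new symbol.
            static int lz76(int[] s) {
                int phrases = 0, i = 0;
                while (i < s.length) {
                    int step = 1;
                    for (int j = 0; j < i; j++) {      // longest match starting at j
                        int l = 0;
                        while (i + l < s.length && s[j + l] == s[i + l]) l++;
                        step = Math.max(step, l + 1);
                    }
                    i += step;
                    phrases++;
                }
                return phrases;
            }

            public static void main(String[] args) {
                double[] series = new Random(42).doubles(1000).toArray();
                System.out.println(lz76(ordinalSymbols(series, 3)));
            }
        }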

    Reoptimization in Lagrangian methods for the quadratic knapsack problem

    The 0-1 quadratic knapsack problem consists in maximizing a quadratic objective function subject to a linear capacity constraint. To solve large instances of this problem exactly with a tree search algorithm (e.g., a branch-and-bound method), knowledge of good lower and upper bounds is crucial, both for pruning the tree and for fixing as many variables as possible in a preprocessing phase. The upper bounds used in the best known exact approaches are based on Lagrangian relaxation and decomposition, and computing these Lagrangian dual bounds involves solving numerous 0-1 linear knapsack subproblems. Given this huge number of solves, we propose to embed reoptimization techniques to improve the efficiency of the preprocessing phase of the 0-1 quadratic knapsack resolution. Namely, reoptimization is introduced to accelerate each independent sequence of 0-1 linear knapsack problems induced by the Lagrangian relaxation as well as the Lagrangian decomposition. Numerous numerical experiments validate the relevance of our approach.
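
    The repeated inner computation that reoptimization targets is the 0-1 linear knapsack subproblem. Below is a minimal Java sketch (ours) of that solver, a standard dynamic program over capacities; in the Lagrangian dual resolution such a routine is invoked once per multiplier update, on instances whose profits change only slightly, which is what makes reusing earlier work attractive. The sketch shows only the subproblem, not the paper's reoptimization techniques.

        // 0-1 linear knapsack by dynamic programming over capacities.
        static int knapsack(int[] profit, int[] weight, int capacity) {
            int[] best = new int[capacity + 1];   // best[c]: max profit within capacity c
            for (int i = 0; i < profit.length; i++)
                for (int c = capacity; c >= weight[i]; c--)  // descending: item used once
                    best[c] = Math.max(best[c], best[c - weight[i]] + profit[i]);
            return best[capacity];
        }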

    Modular smoothed analysis of median-of-three Quicksort

    Spielman's smoothed complexity, a hybrid between worst- and average-case complexity measures, relies on perturbations of input instances to determine where average-case behavior turns to worst-case. This approach simplifies the smoothed analysis and achieves greater precision in the expression of the smoothed complexity, where a recurrence equation is obtained as opposed to bounds. Moreover, the approach addresses, in this context, the formation of input instances, an open problem in smoothed complexity. In [23], we proposed a method supporting modular smoothed analysis and illustrated the method by determining the modular smoothed complexity of Quicksort. Here, we use the modular approach to analyze the median-of-three variant and compare the results with those in [23].
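
    For reference, a small Java sketch (ours) of the pivot selection in the median-of-three variant analyzed here, using the common convention of sampling the first, middle, and last elements of the subarray:

        // Returns the index of the median of a[lo], a[mid], a[hi].
        static int medianOfThreeIndex(int[] a, int lo, int hi) {
            int mid = lo + (hi - lo) / 2;               // avoids (lo + hi) overflow
            int x = a[lo], y = a[mid], z = a[hi];
            if ((x <= y && y <= z) || (z <= y && y <= x)) return mid;
            if ((y <= x && x <= z) || (z <= x && x <= y)) return lo;
            return hi;                                  // a[hi] is the median
        }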

    On smoothed analysis of quicksort and Hoare's find

    We provide a smoothed analysis of Hoare's find algorithm, and we revisit the smoothed analysis of quicksort. Hoare's find, often called quickselect or one-sided quicksort, is an easy-to-implement algorithm for finding the $k$-th smallest element of a sequence. While the worst-case number of comparisons that Hoare's find needs is $\Theta(n^2)$, the average-case number is $\Theta(n)$. We analyze what happens between these two extremes by providing a smoothed analysis. In the first perturbation model, an adversary specifies a sequence of $n$ numbers in $[0,1]$, and then, to each number of the sequence, we add a random number drawn independently from the interval $[0,d]$. We prove that Hoare's find needs $\Theta\left(\frac{n}{d+1}\sqrt{n/d} + n\right)$ comparisons in expectation if the adversary may also specify the target element (even after seeing the perturbed sequence), and slightly fewer comparisons for finding the median. In the second perturbation model, each element is marked with probability $p$, and then a random permutation is applied to the marked elements. We prove that the expected number of comparisons to find the median is $\Omega\left((1-p)\frac{n}{p}\log n\right)$. Finally, we provide lower bounds for the smoothed number of comparisons of quicksort and Hoare's find under the median-of-three pivot rule, which usually yields faster algorithms than always selecting the first element: the pivot is the median of the first, middle, and last elements of the sequence. We show that median-of-three does not yield a significant improvement over the classic rule.
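
    For concreteness, a compact Java sketch (ours) of Hoare's find with a crossing Hoare-style partition; k is 0-based, and the pivot is simply the middle element here, i.e., neither of the pivot rules analyzed in the paper.

        // Returns the k-th smallest element of a (0-based k); reorders a in place.
        static int quickselect(int[] a, int k) {
            int lo = 0, hi = a.length - 1;
            while (lo < hi) {
                int p = a[lo + (hi - lo) / 2];           // pivot value
                int i = lo, j = hi;
                while (i <= j) {                         // Hoare-style partition
                    while (a[i] < p) i++;
                    while (a[j] > p) j--;
                    if (i <= j) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; j--; }
                }
                if (k <= j) hi = j;                      // recurse into one side only
                else if (k >= i) lo = i;
                else return a[k];                        // k sits between j and i: a[k] == p
            }
            return a[lo];
        }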