3 research outputs found

    Partition Sort Revisited: Reconfirming the Robustness in Average Case and much more!

    In our previous work there was some indication that Partition Sort might have a more robust average-case O(n log n) complexity than the popular Quick Sort. In the first study in this paper, we reconfirm this through computer experiments for inputs from the Cauchy distribution, for which the expectation theoretically does not exist. Additionally, the algorithm is found to be sensitive to the parameters of the input probability distribution, demanding further investigation into parameterized complexity. The results on this algorithm for Binomial inputs in our second study are very encouraging in that direction.
    Comment: 8 pages
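
    The abstract does not reproduce the Partition Sort code or the experimental setup, so the sketch below only illustrates the kind of computer experiment described: a generic quicksort-style partition sort (a stand-in, not the paper's algorithm) is timed on standard Cauchy inputs, and a roughly constant t/(n log n) ratio across doublings of n would support the claimed robust average case. Input sizes are arbitrary choices.

        import math, random, time

        def partition_sort(a):
            # Stand-in for the paper's Partition Sort: a plain recursive
            # partition-based sort with a middle-element pivot.
            if len(a) <= 1:
                return a
            pivot = a[len(a) // 2]
            left = [x for x in a if x < pivot]
            mid = [x for x in a if x == pivot]
            right = [x for x in a if x > pivot]
            return partition_sort(left) + mid + partition_sort(right)

        def cauchy_sample(n):
            # Standard Cauchy by inverse CDF: tan(pi * (U - 1/2)).
            # Its mean does not exist, which is what defeats the usual
            # expectation-based average-case analysis.
            return [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

        for n in (10_000, 20_000, 40_000, 80_000):
            data = cauchy_sample(n)
            t0 = time.perf_counter()
            partition_sort(data)
            t = time.perf_counter() - t0
            print(f"n={n:6d}  t={t:.4f}s  t/(n log n) = {t / (n * math.log(n)):.3e}")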

    How robust is quicksort average complexity?

    The paper questions the robustness of the average-case time complexity of the fast and popular quicksort algorithm. Among the six standard probability distributions examined in the paper, only the continuous uniform, exponential, and standard normal support it, whereas the others support the worst-case complexity measure. To the question "why do we get the worst-case complexity measure each time the average-case measure is discredited?" one logical answer is that average-case complexity under the universal distribution equals worst-case complexity. This answer, which is hard to challenge, nevertheless gives no idea as to which of the standard probability distributions come under the umbrella of universality. The moral is that average-case complexity measures, in cases where they differ from the worst-case ones, should be deemed robust only if they are supported by at least the standard probability distributions, both discrete and continuous. Regretfully, this is not the case with quicksort.
    Comment: 15 pages; 12 figures; 2 tables
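
    One plausible reading of why discrete inputs can push quicksort toward its worst case is ties: a distribution concentrated on few values yields many equal keys, and a textbook two-way partition degrades on equal keys. The sketch below is an illustration under that reading, not the paper's exact quicksort variant or its six distributions: it counts key comparisons under a few input models, and the discrete ones drift visibly from n log n toward n^2/2.

        import math, random

        def qsort_comparisons(a):
            # Iterative textbook quicksort (Lomuto partition, last-element
            # pivot) that counts key comparisons. Ties are not handled
            # three-way, which is exactly what hurts on discrete inputs.
            count = 0
            stack = [(0, len(a) - 1)]
            while stack:
                lo, hi = stack.pop()
                if lo >= hi:
                    continue
                pivot = a[hi]
                i = lo
                for j in range(lo, hi):
                    count += 1
                    if a[j] < pivot:
                        a[i], a[j] = a[j], a[i]
                        i += 1
                a[i], a[hi] = a[hi], a[i]
                stack.append((lo, i - 1))
                stack.append((i + 1, hi))
            return count

        n = 2_000
        inputs = {
            "uniform(0,1)": [random.random() for _ in range(n)],
            "exponential(1)": [random.expovariate(1.0) for _ in range(n)],
            "normal(0,1)": [random.gauss(0.0, 1.0) for _ in range(n)],
            # Discrete models: few distinct values, hence many ties.
            "binomial(10,0.5)": [sum(random.random() < 0.5 for _ in range(10))
                                 for _ in range(n)],
            "discrete uniform {0..9}": [random.randint(0, 9) for _ in range(n)],
        }
        print(f"n log2 n ~ {n * math.log2(n):,.0f}   n^2/2 = {n * n // 2:,}")
        for name, data in inputs.items():
            print(f"{name:24s} comparisons = {qsort_comparisons(list(data)):,}")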

    A Statistical Peek into Average Case Complexity

    The present paper takes a statistical adventure into exploring the average-case complexity behavior of computer algorithms. Rather than following the traditional count-based analytical (pen-and-paper) approach, we instead work in terms of a weight-based analysis that permits mixing distinct operations into a conceptual bound called the statistical bound, and its empirical estimate, the so-called "empirical O". Based on careful analysis of the results obtained, we introduce two new conjectures in the domain of algorithmic analysis. The analytical way of average-case analysis falls flat when it comes to a data model for which the expectation does not exist (e.g., the Cauchy distribution for continuous input data, and certain discrete distribution inputs such as those studied in the paper). The empirical side of our approach, with its thrust on computer experiments and applied statistics, lends a helping hand by complementing and supplementing its theoretical counterpart. Computer science is, or at least has aspects of, an experimental science, and hence we hope our statistical findings will be equally recognized among theoretical scientists.
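
    A minimal sketch of the "empirical O" idea, under the assumption that it can be caricatured as fitting observed costs against candidate growth rates (the paper's weight-based statistical-bound machinery is richer than this): time a library sort at doubling sizes and regress log t on log n; a fitted exponent near 1 is consistent with O(n log n), near 2 with O(n^2).

        import math, random, time

        def time_sort(n, trials=3):
            # Best-of-trials wall-clock time of a library sort on random input.
            best = float("inf")
            for _ in range(trials):
                data = [random.random() for _ in range(n)]
                t0 = time.perf_counter()
                sorted(data)
                best = min(best, time.perf_counter() - t0)
            return best

        def loglog_slope(ns, ts):
            # Ordinary least squares of log t on log n: the slope b in
            # t ~ C * n^b serves as a crude empirical growth exponent.
            xs = [math.log(n) for n in ns]
            ys = [math.log(t) for t in ts]
            xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
            num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
            den = sum((x - xbar) ** 2 for x in xs)
            return num / den

        ns = [20_000, 40_000, 80_000, 160_000]
        ts = [time_sort(n) for n in ns]
        print(f"fitted exponent b = {loglog_slope(ns, ts):.2f}")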