
    A low-cost neural sorting network with O(1) time complexity

    In this paper, we present an O(1)-time neural network with O(n^(1+ε)) neurons and links to sort n data items, for any ε > 0. For large-size problems, it is desirable to have low-cost hardware solutions. In order to solve the sorting problem in constant time and at low hardware cost, we adopt Leighton's column sort [5] as the main architecture and use Chen and Hsieh's neural network [3], of O(n³) complexity, as the lowest-level sub-networks. By using recursive techniques properly, we are able to obtain constant-time, low-complexity neural sorting networks.
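The top-level architecture here is Leighton's column sort. As a point of reference, its standard eight-step procedure can be sketched serially in Python; the function name is ours, and plain `sorted()` stands in for the neural column-sorting sub-networks the paper actually uses.

```python
NEG, POS = float("-inf"), float("inf")

def columnsort(a, r, s):
    """Leighton's columnsort on n = r*s items, stored column-major.
    Requires r % s == 0 and r >= 2*(s-1)**2; output is sorted column-major."""
    assert len(a) == r * s and r % s == 0 and r >= 2 * (s - 1) ** 2

    def sort_columns(flat, rows, ncols):
        for c in range(ncols):
            flat[c * rows:(c + 1) * rows] = sorted(flat[c * rows:(c + 1) * rows])
        return flat

    f = sort_columns(list(a), r, s)                          # 1. sort columns
    f = [f[i * s + c] for c in range(s) for i in range(r)]   # 2. "transpose": read
                                                             #    column-major, write row-major
    f = sort_columns(f, r, s)                                # 3. sort columns
    g = [None] * (r * s)
    for c in range(s):
        for i in range(r):
            g[i * s + c] = f[c * r + i]                      # 4. untranspose (inverse of 2)
    f = sort_columns(g, r, s)                                # 5. sort columns
    h = [NEG] * (r // 2) + f + [POS] * (r // 2)              # 6. shift by r/2 with sentinels
    h = sort_columns(h, r, s + 1)                            # 7. sort columns
    return h[r // 2: r // 2 + r * s]                         # 8. unshift
```

Because every step is either a fixed data movement or an independent column sort, the scheme parallelizes naturally, which is what makes it a good frame for a constant-time hardware sorter.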

    Constant Time Sorting and Searching

    Title from PDF of title page, viewed January 4, 2023. Thesis advisor: Yijie Han. Vita. Includes bibliographical references (pages 25-28). Thesis (M.S.)--Department of Computer Science and Electrical Engineering, University of Missouri--Kansas City, 2022.
    We study the sorting of real numbers into a linked list on the Parallel Random Access Machine (PRAM) model and show that an input array of n real numbers can be sorted into a linked list in constant time using n²/logᶜn processors for any positive constant c. The searching problem studied is locating, among n sorted real numbers, the interval into which a query real number should be inserted. Given an input of n real numbers, we organize them in sorted order to facilitate searching: we first sort the n input real numbers and then convert them into integers such that their relative order is preserved. The query real number is likewise converted to a query integer, and the interval for its insertion point among these n integers is found in constant time.
    Introduction -- Sorting in constant time -- Searching in constant time -- Theorem -- Conclusion
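The pipeline described above — sort the reals, convert them to order-preserving integers, then locate a query's interval — has a straightforward serial analogue. A minimal sketch (names are ours; the thesis's contribution is performing each phase in O(1) PRAM time, which the `sorted()` and `bisect` calls below do not model):

```python
import bisect

def insertion_interval(a, q):
    """Serial stand-in for the thesis's pipeline: sort the reals, replace each
    by an order-preserving integer (its rank), then locate the query interval."""
    xs = sorted(a)                  # phase 1: sort the n reals
    ints = list(range(len(xs)))     # phase 2: ranks preserve relative order
    i = bisect.bisect_left(xs, q)   # phase 3: interval for the query
    return i                        # q lies between xs[i-1] and xs[i]
```

For example, `insertion_interval([3.5, 1.2, 2.7], 3.0)` reports interval index 2, i.e. the query falls between 2.7 and 3.5.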

    Zig-zag Sort: A Simple Deterministic Data-Oblivious Sorting Algorithm Running in O(n log n) Time

    We describe and analyze Zig-zag Sort--a deterministic data-oblivious sorting algorithm running in O(n log n) time that is arguably simpler than previously known algorithms with similar properties, which are based on the AKS sorting network. Because it is data-oblivious and deterministic, Zig-zag Sort can be implemented as a simple O(n log n)-size sorting network, thereby providing a solution to an open problem posed by Incerpi and Sedgewick in 1985. In addition, Zig-zag Sort is a variant of Shellsort, and is, in fact, the first deterministic Shellsort variant running in O(n log n) time. The existence of such an algorithm was posed as an open problem by Plaxton et al. in 1992 and also by Sedgewick in 1996. More relevant for today, however, is the fact that the existence of a simple data-oblivious deterministic sorting algorithm running in O(n log n) time simplifies the inner-loop computation in several proposed oblivious-RAM simulation methods (which utilize AKS sorting networks), and this, in turn, implies simplified mechanisms for privacy-preserving data outsourcing in several cloud computing applications. We provide both constructive and non-constructive implementations of Zig-zag Sort, based on the existence of a circuit known as an epsilon-halver, such that the constant factors in our constructive implementations are orders of magnitude smaller than those for constructive variants of the AKS sorting network, which are also based on the use of epsilon-halvers.Comment: Appearing in ACM Symp. on Theory of Computing (STOC) 201
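Zig-zag Sort itself is intricate, but the property it shares with every sorting network — data-obliviousness — is easy to illustrate with a much simpler (and asymptotically worse, O(n²)-size) network, odd-even transposition sort. The sequence of compare-exchange positions is fixed in advance and never depends on the data, which is exactly what oblivious-RAM simulations require:

```python
def compare_exchange(a, i, j):
    # data-oblivious primitive: which positions are compared is fixed,
    # only the swap outcome depends on the values
    if a[i] > a[j]:
        a[i], a[j] = a[j], a[i]

def odd_even_transposition_sort(a):
    """Not Zig-zag Sort: a simple O(n^2)-size data-oblivious network used
    here only to illustrate data-obliviousness."""
    n = len(a)
    for rnd in range(n):
        # alternate even and odd phases; the (i, i+1) pairs are data-independent
        for i in range(rnd % 2, n - 1, 2):
            compare_exchange(a, i, i + 1)
    return a
```

An adversary observing only the memory addresses touched learns nothing about the input, since the same compare-exchange schedule runs on every input of size n.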

    Sorting Real Numbers in Constant Time Using n^2/log^cn Processors

    We study the sorting of real numbers into a linked list on the PRAM (Parallel Random-Access Machine) model. We show that n real numbers can be sorted into a linked list in constant time using n² processors; previously, n numbers could be sorted into a linked list using n² processors in O(log log n) time. We also study the time-processor trade-off for sorting real numbers into a linked list on the PRAM model and show that n real numbers can be sorted into a linked list with n²/t processors in O(log t) time. Previously, n real numbers could be sorted into a linked list using n³ processors in constant time, or n² processors in O(log log n) time. We then show that an input array of n real numbers can be sorted into a linked list in constant time using n²/logᶜn processors for any positive constant c. We believe that further reducing the number of processors for sorting real numbers in constant time will be very difficult, if not impossible.
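The flavor of constant-time PRAM sorting with about n² processors can be conveyed serially: conceptually, every pairwise comparison runs at once, each element's rank is the number of comparisons it wins, and the output is a linked list of indices. A sketch (a serial stand-in with hypothetical names, not the paper's construction):

```python
def sort_into_linked_list(a):
    """Rank by all-pairs comparison (what n^2 processors do simultaneously),
    then emit the result as a linked list of indices into a."""
    n = len(a)
    # rank[i] = number of elements strictly preceding a[i] (ties broken by index)
    rank = [sum(1 for j in range(n) if (a[j], j) < (a[i], i)) for i in range(n)]
    pos = [0] * n
    for i in range(n):
        pos[rank[i]] = i              # element of rank r sits at position r
    head = pos[0]
    nxt = [None] * n
    for r in range(n - 1):
        nxt[pos[r]] = pos[r + 1]      # link rank r to rank r+1
    return head, nxt
```

On a PRAM the ranks, positions, and links can each be computed in parallel, which is why the linked-list output format fits constant-time bounds better than writing a sorted array.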

    Hybridizing Non-dominated Sorting Algorithms: Divide-and-Conquer Meets Best Order Sort

    Many production-grade algorithms benefit from combining an asymptotically efficient algorithm, which solves big problem instances by splitting them into smaller ones, with an asymptotically inefficient algorithm that has a very small implementation constant for solving small subproblems. A well-known example is stable sorting, where mergesort is often combined with insertion sort to achieve a constant but noticeable speed-up. We apply this idea to non-dominated sorting. Namely, we combine the divide-and-conquer algorithm, which has the currently best known asymptotic runtime of O(N (log N)^(M-1)), with the Best Order Sort algorithm, which has a runtime of O(N² M) but demonstrates the best practical performance among quadratic algorithms. Empirical evaluation shows that the hybrid's running time is typically no worse than that of either original algorithm, while for large numbers of points it outperforms both by at least 20%. For smaller numbers of objectives, the speed-up can be as large as four times. Comment: A two-page abstract of this paper will appear in the proceedings companion of the 2017 Genetic and Evolutionary Computation Conference (GECCO 2017)
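The mergesort/insertion-sort combination the abstract cites as the well-known example of this hybrid pattern can be sketched directly; the cutoff value below is an assumption (in practice it is tuned empirically):

```python
CUTOFF = 16  # assumed switch-over point; tuned empirically in real libraries

def hybrid_mergesort(a):
    """Asymptotically fast mergesort delegating small subproblems to
    insertion sort, which has a tiny constant factor."""
    if len(a) <= CUTOFF:
        # insertion sort: O(n^2) comparisons but very low overhead when n is small
        for i in range(1, len(a)):
            x, j = a[i], i - 1
            while j >= 0 and a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = x
        return a
    mid = len(a) // 2
    left, right = hybrid_mergesort(a[:mid]), hybrid_mergesort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:      # take from left on ties: keeps the sort stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]
```

The paper applies the same divide-at-the-top, cheap-at-the-bottom structure to non-dominated sorting, with Best Order Sort playing the role of insertion sort.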

    VLSI-sorting evaluated under the linear model

    There are several different models of computation on which evaluations of VLSI sorting algorithms can be based, and there are different measures of complexity. This paper revises complexity results under the linear model that were obtained under the constant model. This approach is motivated by expected technological development (see Mangir, 1983; Thompson and Raghavan, 1984; Vitanyi, 1984a, 1984b).
    For the constant model we know that for medium-sized keys there are AT² and AP² optimal sorting algorithms with T ranging from ω(log n) to O(√(nk)) and P ranging from Ω(1) to O(√(nk)) (Bilardi, 1984). The main results of the asymptotic analysis of sorting algorithms under the linear model are that the lower bounds allow AT² optimal sorting algorithms only for T = Θ(√(nk)), but allow AP² optimal algorithms in the same range as under the constant model. Furthermore, the sorting algorithms presented in this paper meet these lower bounds, which proves that the bounds cannot be improved for k = Θ(log n). The building block for the realization of these sorting algorithms is a comparison-exchange module that compares r × s bit matrices in time T_C = Θ(r + s) on an area A_C = Θ(r²) (not including the storage area for the keys).
    For problem sizes that exceed realistic chip capacities, chip-external sorting algorithms can be used. In this paper two different chip-external sorting algorithms, BBB(S) and TWB(S), are presented. They are designed to be implemented on a single board and use a sorting chip S to perform the sort-split operation on blocks of data. BBB(S) and TWB(S) are systolic algorithms using local communication only, so their evaluation does not depend on whether the constant or the linear model is used. Furthermore, their design is technically feasible whenever the sorting chip S is technically feasible.
    TWB(S) has optimal asymptotic time complexity, so its existence proves that under the linear model external sorting can be done asymptotically as fast as under the constant model. The time complexity of TWB(S) depends linearly on the speed g_S = n_S·t_S. It is shown that this speed, viewed as a function of the chip capacity C, is asymptotically maximal for AT² optimal sorting algorithms; thus S should be a sorting algorithm similar to the M-M sorter presented in this paper. A major disadvantage of TWB(S) is that it cannot exploit the maximal throughput d_S = n_S/p_S of a systolic sorting algorithm S. Therefore the algorithm BBB(S) is introduced. The time complexity of BBB(S) depends linearly on d_S. It is shown that the throughput is maximal for AP² optimal algorithms. There is a wide range of such sorting algorithms, including algorithms that can be realized in a way that is independent of the length of the keys; for example, BBB(S) with S being a highly parallel version of odd-even transposition sort has this kind of flexibility. A disadvantage of BBB(S) is that it is asymptotically slower than TWB(S).
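The sort-split operation that both chip-external algorithms delegate to the sorting chip S can be sketched in software: merge two sorted blocks and return the lower and upper halves. Here `sorted()` stands in for the hardware sorter, and the function name is ours:

```python
def sort_split(block_a, block_b):
    """Sort-split on two sorted blocks: merge them (the chip S's job) and
    return the lower and upper halves as two sorted blocks."""
    merged = sorted(block_a + block_b)   # stand-in for the sorting chip S
    k = len(block_a)
    return merged[:k], merged[k:]
```

Repeatedly applying sort-split to neighboring blocks is what lets a board-level systolic algorithm sort data sets far larger than one chip's capacity.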