
    LRM-Trees: Compressed Indices, Adaptive Sorting, and Compressed Permutations

    LRM-Trees are an elegant way to partition a sequence of values into sorted consecutive blocks, and to express the relative position of the first element of each block within a previous block. They were used to encode ordinal trees and to index integer arrays in order to support range minimum queries on them. We describe how they yield many other convenient results in a variety of areas, from data structures to algorithms: some compressed succinct indices for range minimum queries; a new adaptive sorting algorithm; and a compressed succinct data structure for permutations supporting direct and indirect application, in time that is shorter the more compressible the permutation is. Comment: 13 pages, 1 figure
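
    A minimal Python sketch may make the block decomposition concrete: it greedily cuts the sequence into maximal non-decreasing runs ("sorted consecutive blocks") and, for each block head, records the nearest earlier position holding a strictly smaller value. The function name and this particular construction are illustrative assumptions, not the paper's exact LRM-Tree encoding.

    # Illustrative sketch only: split a sequence into maximal non-decreasing
    # blocks and anchor each block head to its nearest smaller predecessor.
    # This mirrors the flavour of the LRM partition, not the paper's encoding.
    def lrm_blocks(values):
        blocks = []                         # (start, end) index pairs, end exclusive
        start = 0
        for i in range(1, len(values)):
            if values[i] < values[i - 1]:   # a descent closes the current block
                blocks.append((start, i))
                start = i
        blocks.append((start, len(values)))

        # For each block head, find the rightmost earlier position holding a
        # strictly smaller value (the "previous block" it points into).
        anchors = {}
        for head, _ in blocks[1:]:
            j = head - 1
            while j >= 0 and values[j] >= values[head]:
                j -= 1
            anchors[head] = j               # -1 means no smaller predecessor
        return blocks, anchors

    if __name__ == "__main__":
        print(lrm_blocks([1, 4, 6, 3, 5, 2, 7]))
        # -> ([(0, 3), (3, 5), (5, 7)], {3: 0, 5: 0})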

    QuickHeapsort: Modifications and improved analysis

    We present a new analysis for QuickHeapsort, splitting it into the analysis of the partition-phases and the analysis of the heap-phases. This enables us to consider samples of non-constant size for the pivot selection and leads to better theoretical bounds for the algorithm. Furthermore, we introduce some modifications of QuickHeapsort, both in-place and using n extra bits. We show that on every input the expected number of comparisons is n lg n - 0.03n + o(n) (in-place) and n lg n - 0.997n + o(n) (with n extra bits), respectively. Both estimates improve the previously known best results. (It is conjectured in Wegener93 that the in-place algorithm Bottom-Up-Heapsort uses at most n lg n + 0.4n comparisons on average, and for Weak-Heapsort, which uses n extra bits, the average number of comparisons is at most n lg n - 0.42n in EdelkampS02.) Moreover, our non-in-place variant can even compete with index-based Heapsort variants (e.g. Rank-Heapsort in WangW07) and Relaxed-Weak-Heapsort (n lg n - 0.9n + o(n) comparisons in the worst case), for which no O(n) bound on the number of extra bits is known.
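
    A rough Python sketch of the sample-based pivot idea follows. The sqrt(n) sample size, the cutoff value and the use of Python's heapq as a stand-in for the heap-phases are illustrative assumptions; this is not the in-place QuickHeapsort analyzed in the paper and makes no attempt to match its comparison counts.

    import heapq
    import math
    import random

    def sampled_pivot(segment):
        # Median of a random sample of roughly sqrt(len(segment)) elements.
        k = max(1, math.isqrt(len(segment)))
        sample = sorted(random.choice(segment) for _ in range(k))
        return sample[k // 2]

    def quick_then_heap(a, cutoff=32):
        # Quicksort-style recursion with sampled pivots; segments at or below
        # the cutoff are finished with a binary heap (a stand-in for the
        # heap-phases, not the paper's external heapsort).
        if len(a) <= cutoff:
            heap = list(a)
            heapq.heapify(heap)
            return [heapq.heappop(heap) for _ in range(len(heap))]
        pivot = sampled_pivot(a)
        less = [x for x in a if x < pivot]
        equal = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return quick_then_heap(less, cutoff) + equal + quick_then_heap(greater, cutoff)

    if __name__ == "__main__":
        data = [random.randrange(10_000) for _ in range(1_000)]
        assert quick_then_heap(data) == sorted(data)
        print("sorted 1000 values correctly")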

    Informational Externalities, Strategic Delay, and the Search for Optimal Policy

    This paper examines optimal policy when agents, private investors and a government, can learn about the economy by observing others. Investors can delay investment in order to exploit future information. Importantly, investors ignore the informational value of their actions to others when deciding: this externality results in inefficiently high delay, motivating government intervention. The government searches for the optimal policy, while learning about the economy. Complications arise since investors are aware of any systematic component to policy and may respond perversely to government initiatives. The paper characterizes the optimal government policy and shows that the government achieves its desired outcome.

    An Ordinal Approach to Characterizing Efficient Allocations

    The invisible hand theorem says nothing about the attributes of the optimal allocation vector. In this paper, we identify a convex cone of functions such that order on vectors of exogenous heterogeneity parameters induces component-wise order on allocation vectors for firms in an efficient market. By use of functional analysis, we then replace the vectors of heterogeneities with asymmetries in function attributes such that the induced component-wise order on efficient allocations still pertains. We do so through integration over a kernel in which the requisite asymmetries are embedded. Likelihood ratio order on the measures of integration is both necessary and sufficient to ensure component-wise order on efficient factor allocations across firms. Upon specializing to supermodular functions, familiar stochastic dominance orders on normalized measures of integration provide necessary and sufficient conditions for this component-wise order on efficient allocations. The analysis engaged in throughout the paper is ordinal in the sense that all conclusions drawn are robust to monotone transformations of the arguments in production. Keywords: arrangement monotone; functional analysis; market structure; ordinal analysis; simplex; symmetry

    Alignments with non-overlapping moves, inversions and tandem duplications in O(n^4) time

    Sequence alignment is a central problem in bioinformatics. The classical dynamic programming algorithm aligns two sequences by optimizing over possible insertions, deletions and substitutions. However, other evolutionary events can be observed, such as inversions, tandem duplications or moves (transpositions). It has been established that the extension of the problem to move operations is NP-complete. Previous work has shown that an extension restricted to non-overlapping inversions can be solved in O(n^3) time with a restricted scoring scheme. In this paper, we show that the alignment problem extended to non-overlapping moves can be solved in O(n^5) for general scoring schemes, O(n^4 log n) for concave scoring schemes and O(n^4) for restricted scoring schemes. Furthermore, we show that the alignment problem extended to non-overlapping moves, inversions and tandem duplications can be solved with the same time complexities. Finally, an example of an alignment with non-overlapping moves is provided.
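
    For reference, the classical insertion/deletion/substitution dynamic program mentioned above can be sketched in a few lines of Python. The unit costs are an assumed scoring scheme, and the non-overlapping moves, inversions and tandem duplications treated in the paper are not handled here.

    def edit_alignment_cost(s, t, sub=1, indel=1):
        # dp[i][j] = minimum cost of aligning s[:i] with t[:j] using
        # substitutions (cost sub) and insertions/deletions (cost indel).
        n, m = len(s), len(t)
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = i * indel
        for j in range(1, m + 1):
            dp[0][j] = j * indel
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = dp[i - 1][j - 1] + (0 if s[i - 1] == t[j - 1] else sub)
                delete = dp[i - 1][j] + indel
                insert = dp[i][j - 1] + indel
                dp[i][j] = min(match, delete, insert)
        return dp[n][m]

    if __name__ == "__main__":
        print(edit_alignment_cost("ACGTACGT", "ACGTTACG"))   # -> 2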

    On Time-Reversal Imaging by Statistical Testing

    This letter is focused on the design and analysis of computational wideband time-reversal imaging algorithms, designed to be adaptive with respect to the noise levels pertaining to the frequencies being employed for scene probing. These algorithms are based on the concept of cell-by-cell processing and are obtained as theoretically-founded decision statistics for testing the hypothesis of single-scatterer presence (absence) at a specific location. These statistics are also validated in comparison with the maximal invariant statistic for the proposed problem. Comment: Reduced form accepted in IEEE Signal Processing Letters

    Optimal and efficient crossover designs for comparing test treatments with a control treatment

    This paper deals exclusively with crossover designs for the purpose of comparing t test treatments with a control treatment when the number of periods is no larger than t+1. Among other results it specifies sufficient conditions for a crossover design to be simultaneously A-optimal and MV-optimal in a very large and appealing class of crossover designs. It is expected that these optimal designs are highly efficient in the entire class of crossover designs. Some computationally useful tools are given and used to build assorted small optimal and efficient crossover designs. The model robustness of these newly discovered crossover designs is discussed. Comment: Published at http://dx.doi.org/10.1214/009053604000000887 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)

    A Framework for Tiered Service in MPLS Networks


    An improved procedure for gene selection from microarray experiments using false discovery rate criterion

    BACKGROUND: A large number of genes usually show differential expression in a microarray experiment with two types of tissues, and the p-values of a proper statistical test are often used to quantify the significance of these differences. The genes with small p-values are then picked as the genes responsible for the differences in the tissue RNA expressions. One key question is what threshold should be used to consider a p-value small. There is always a trade-off between this threshold and the rate of false claims. Recent statistical literature shows that the false discovery rate (FDR) criterion is a powerful and reasonable criterion for picking the genes with differential expression. Moreover, the power of detection can be increased by knowing the number of non-differentially expressed genes. While this number is unknown in practice, there are methods to estimate it from data. The purpose of this paper is to present a new method of estimating this number and to use it in the construction of the FDR procedure. RESULTS: A combination of test functions is used to estimate the number of differentially expressed genes. Simulation study shows that the proposed method has a higher power to detect these genes than other existing methods, while still keeping the FDR under control. The improvement can be substantial if the proportion of truly differentially expressed genes is large. This procedure has also been tested with good results using a real dataset. CONCLUSION: For a given expected FDR, the method proposed in this paper has better power to pick genes that show differentiation in their expression than two other well-known methods.
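
    A hedged Python sketch of this kind of adaptive FDR gene selection: a Benjamini-Hochberg step-up rule whose threshold is sharpened by a plug-in estimate of the proportion of non-differentially expressed genes. The lambda = 0.5 estimator and the example p-values below are illustrative assumptions, not the estimator or data used in the paper.

    def estimate_pi0(pvalues, lam=0.5):
        # Rough plug-in estimate of the proportion of non-differentially
        # expressed genes: the fraction of p-values above lam, rescaled.
        m = len(pvalues)
        return min(1.0, sum(p > lam for p in pvalues) / ((1.0 - lam) * m))

    def adaptive_bh(pvalues, fdr=0.05):
        # Indices of genes declared differentially expressed at the given FDR.
        m = len(pvalues)
        pi0 = max(estimate_pi0(pvalues), 1.0 / m)   # guard against division by zero
        order = sorted(range(m), key=lambda i: pvalues[i])
        cutoff = 0
        for rank, idx in enumerate(order, start=1):
            # Step-up rule with the pi0-adjusted Benjamini-Hochberg threshold.
            if pvalues[idx] <= rank * fdr / (pi0 * m):
                cutoff = rank
        return sorted(order[:cutoff])

    if __name__ == "__main__":
        pvals = [0.0004, 0.009, 0.013, 0.041, 0.10, 0.21, 0.49, 0.60, 0.74, 0.90]
        print(adaptive_bh(pvals, fdr=0.05))   # -> [0, 1, 2] with these numbers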