
    Using TPA to count linear extensions

    A linear extension of a poset $P$ is a permutation of the elements of the set that respects the partial order. Let $L(P)$ denote the number of linear extensions. Determining $L(P)$ exactly for an arbitrary poset is #P-complete, so randomized approximation algorithms that draw randomly from the set of linear extensions are used instead. In this work, the set of linear extensions is embedded in a larger state space with a continuous parameter. The introduction of a continuous parameter allows for the use of a more efficient method for approximating $L(P)$ called TPA. Our primary result is that it is possible to sample from this continuous embedding in time that is as fast as or faster than the best known methods for sampling uniformly from linear extensions. For a poset containing $n$ elements, this means we can approximate $L(P)$ to within a factor of $1 + \epsilon$ with probability at least $1 - \delta$ using an expected number of random bits and comparisons in the poset that is at most $O(n^3 (\ln n)(\ln L(P))\, \epsilon^{-2} \ln \delta^{-1})$.
    Comment: 12 pages, 4 algorithms
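
    To make the quantity $L(P)$ concrete, here is a minimal brute-force count of linear extensions for a tiny poset (the "diamond" on four elements; the poset and all names are illustrative). The paper's point is precisely that TPA gives an approximation when enumerating all $n!$ permutations like this is infeasible.

```python
from itertools import permutations

# Covering relations (a < b) of the diamond poset: 0 < 1 < 3 and 0 < 2 < 3.
relations = [(0, 1), (0, 2), (1, 3), (2, 3)]

def is_linear_extension(perm):
    # A permutation is a linear extension iff every a appears before b.
    pos = {v: i for i, v in enumerate(perm)}
    return all(pos[a] < pos[b] for a, b in relations)

L = sum(is_linear_extension(p) for p in permutations(range(4)))
print(L)  # 2: the extensions are (0, 1, 2, 3) and (0, 2, 1, 3)
```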

    Optimizing Ranking Models in an Online Setting

    Online Learning to Rank (OLTR) methods optimize ranking models by directly interacting with users, which allows them to be very efficient and responsive. All OLTR methods introduced during the past decade have extended the original OLTR method: Dueling Bandit Gradient Descent (DBGD). Recently, a fundamentally different approach was introduced with the Pairwise Differentiable Gradient Descent (PDGD) algorithm. To date, the only comparisons of the two approaches have been limited to simulations with cascading click models and low levels of noise. The main outcome so far is that PDGD converges at higher levels of performance and learns considerably faster than DBGD-based methods. However, the PDGD algorithm assumes cascading user behavior, potentially giving it an unfair advantage. Furthermore, the robustness of both methods to high levels of noise has not been investigated. Therefore, it is unclear whether the reported advantages of PDGD over DBGD generalize to different experimental conditions. In this paper, we investigate whether the previous conclusions about the PDGD and DBGD comparison generalize from ideal to worst-case circumstances. We do so in two ways. First, we compare the theoretical properties of PDGD and DBGD by taking a critical look at previously proven properties in the context of ranking. Second, we estimate an upper and a lower bound on the performance of the methods by simulating both ideal user behavior and extremely difficult behavior, i.e., almost-random non-cascading user models. Our findings show that the theoretical bounds of DBGD do not apply to any common ranking model and, furthermore, that the performance of DBGD is substantially worse than that of PDGD in both ideal and worst-case circumstances. These results reproduce previously published findings about the relative performance of PDGD vs. DBGD and generalize them to extremely noisy and non-cascading circumstances.
    Comment: European Conference on Information Retrieval (ECIR) 2019
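
    For reference, the sketch below shows the basic DBGD step that the extended methods build on: perturb the current linear ranker in a random unit direction, compare the current and perturbed rankers, and take a small step toward the perturbation if it wins. The interleaved click-based comparison is replaced here by a hypothetical oracle against a hidden ideal ranker, purely to keep the example self-contained; a real OLTR system infers the winner from user interactions.

```python
import numpy as np

rng = np.random.default_rng(3)
dim = 5
w_true = rng.standard_normal(dim)  # hidden "ideal" ranker (simulation stand-in)
w = np.zeros(dim)                  # current production ranker

def candidate_wins(w_cur, w_cand):
    # Stand-in for an interleaved comparison: the candidate "wins" when it is
    # closer to the hidden ideal ranker. Real systems decide this from clicks.
    return np.linalg.norm(w_cand - w_true) < np.linalg.norm(w_cur - w_true)

delta, eta = 1.0, 0.1              # exploration radius, learning rate
for _ in range(500):               # one simulated user interaction per step
    u = rng.standard_normal(dim)
    u /= np.linalg.norm(u)         # uniform random unit direction
    if candidate_wins(w, w + delta * u):
        w += eta * u               # move toward the winning perturbation

print(np.linalg.norm(w - w_true))  # distance shrinks over the run
```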

    Classical sampling theorems in the context of multirate and polyphase digital filter bank structures

    The recovery of a signal from so-called generalized samples is a problem of designing appropriate linear filters called reconstruction (or synthesis) filters. This relationship is reviewed and explored. Novel theorems for the subsampling of sequences are derived by direct use of the digital-filter-bank framework. These results are related to the theory of perfect reconstruction in maximally decimated digital-filter-bank systems. One of the theorems pertains to the subsampling of a sequence and its first few differences and its subsequent stable reconstruction at finite cost with no error; the reconstruction filters turn out to be multiplierless and of the FIR (finite impulse response) type. These ideas are extended to the case of two-dimensional signals by use of a Kronecker formalism. The subsampling of bandlimited sequences is also considered: a sequence $x(n)$ whose Fourier transform vanishes for $|\omega| \ge L\pi/M$, where $L$ and $M$ are integers with $L < M$, can in principle be represented by reducing the data rate by the factor $M/L$. The digital polyphase framework is used as a convenient tool for the derivation as well as the mechanization of these sampling theorems.
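
    The first-difference theorem can be checked numerically in a few lines: keep a sequence and its first difference, each decimated by two, and recover the original exactly using only subtractions, i.e., multiplierless FIR reconstruction. This is a minimal sketch of the idea, not the paper's polyphase derivation; the test sequence and lengths are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(17)            # arbitrary test sequence
d = np.diff(x, prepend=0.0)            # d(n) = x(n) - x(n-1), with d(0) = x(0)

x_even = x[::2]                        # branch 1: x at even n (half rate)
d_even = d[::2]                        # branch 2: d at even n (half rate)

x_hat = np.empty_like(x)
x_hat[::2] = x_even                    # even samples pass straight through
x_hat[1::2] = x_even[1:] - d_even[1:]  # x(2k-1) = x(2k) - d(2k): subtraction only

assert np.allclose(x, x_hat)           # exact, stable, finite-cost recovery
```

    Note that the two half-rate branches together carry the original data rate; the interest here is not compression but that the reconstruction is exact, stable, and multiplierless.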

    Recursive Compressed Sensing

    We introduce a recursive algorithm for performing compressed sensing on streaming data. The approach consists of (a) recursive encoding, where we sample the input stream via overlapping windowing and make use of the previous measurement in obtaining the next one, and (b) recursive decoding, where the signal estimate from the previous window is used to achieve faster convergence in the iterative optimization scheme applied to decode the new one. To remove estimation bias, a two-step estimation procedure is proposed, comprising support-set detection and signal-amplitude estimation. Estimation accuracy is enhanced by a non-linear voting method and by averaging estimates over multiple windows. We analyze the computational complexity and estimation error, and show that the normalized error variance asymptotically goes to zero for sublinear sparsity. Our simulation results show a speedup of an order of magnitude over traditional CS, while obtaining significantly lower reconstruction error under mild conditions on the signal magnitudes and the noise level.
    Comment: Submitted to IEEE Transactions on Information Theory
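
    A minimal sketch of the recursive-encoding step under one natural construction (assumed here, not taken verbatim from the paper): if the sensing matrix of the next overlapping window is a circular column shift of the current one, the new measurement follows from the old one in $O(m)$ operations instead of a full $O(mn)$ matrix-vector product. All dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 4                            # window length, measurements per window
A = rng.standard_normal((m, n))        # sensing matrix for the first window
stream = rng.standard_normal(n + 1)    # enough samples for two overlapping windows

y0 = A @ stream[:n]                    # direct encoding of window [0, n)

# Window [1, n+1) sensed with A' = columns of A circularly shifted left:
A_shift = np.roll(A, -1, axis=1)
y1_direct = A_shift @ stream[1:n + 1]  # O(m*n) from scratch

# Recursive update in O(m): y1 = y0 + a0 * (x[n] - x[0]), a0 = first column of A.
y1_recursive = y0 + A[:, 0] * (stream[n] - stream[0])

assert np.allclose(y1_direct, y1_recursive)
```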

    Image formation in synthetic aperture radio telescopes

    Next-generation radio telescopes will be much larger and more sensitive, have much larger observation bandwidths, and be capable of pointing multiple beams simultaneously. Obtaining the sensitivity, resolution, and dynamic range supported by the receivers requires the development of new signal processing techniques for array and atmospheric calibration, as well as new imaging techniques that are both more accurate and more computationally efficient, since data volumes will be much larger. This paper provides a tutorial overview of existing image formation techniques and outlines some of the future directions needed for information extraction from future radio telescopes. We describe the imaging process from the measurement equation to deconvolution, both as a Fourier inversion problem and as an array processing estimation problem. The latter formulation enables the development of more advanced techniques based on state-of-the-art array processing. We demonstrate the techniques on simulated and measured radio telescope data.
    Comment: 12 pages
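
    As a toy instance of the Fourier-inversion view, the sketch below forms a "dirty image" by direct Fourier transform of visibilities from a single point source sampled at random $(u, v)$ points; all parameters are illustrative, and practical pipelines use gridding plus FFTs followed by a deconvolution step such as CLEAN.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 200
u = rng.uniform(-50, 50, K)            # baseline coordinates (in wavelengths)
v = rng.uniform(-50, 50, K)
l0, m0 = 0.01, -0.02                   # true source direction cosines
V = np.exp(-2j * np.pi * (u * l0 + v * m0))   # noiseless point-source visibilities

grid = np.linspace(-0.05, 0.05, 101)   # small patch of sky in (l, m)
L, M = np.meshgrid(grid, grid, indexing="ij")

# Dirty image: I(l, m) = Re[ (1/K) * sum_k V_k exp(2*pi*i*(u_k l + v_k m)) ]
phase = 2j * np.pi * (u[:, None, None] * L + v[:, None, None] * M)
dirty = (V[:, None, None] * np.exp(phase)).sum(axis=0).real / K

i, j = np.unravel_index(dirty.argmax(), dirty.shape)
print(grid[i], grid[j])                # peak recovers (l0, m0) = (0.01, -0.02)
```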