    Approximately Sampling Elements with Fixed Rank in Graded Posets

    Graded posets frequently arise throughout combinatorics, where it is natural to try to count the number of elements of a fixed rank. These counting problems are often $\#\textbf{P}$-complete, so we consider approximation algorithms for counting and uniform sampling. We show that for certain classes of posets, biased Markov chains that walk along edges of their Hasse diagrams allow us to approximately generate samples with any fixed rank in expected polynomial time. Our arguments do not rely on the typical proofs of log-concavity, which are used to construct a stationary distribution with a specific mode in order to give a lower bound on the probability of outputting an element of the desired rank. Instead, we infer this directly from bounds on the mixing time of the chains through a method we call \textit{balanced bias}. A noteworthy application of our method is sampling restricted classes of integer partitions of $n$. We give the first provably efficient Markov chain algorithm to uniformly sample integer partitions of $n$ from general restricted classes. Several observations allow us to improve the efficiency of this chain to require $O(n^{1/2}\log(n))$ space and, for unrestricted integer partitions, expected $O(n^{9/4})$ time. Related applications include sampling permutations with a fixed number of inversions and lozenge tilings on the triangular lattice with a fixed average height. Comment: 23 pages, 12 figures
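
    The integer-partition application lends itself to a small illustration. Below is a minimal Python sketch, not the authors' exact chain or bias schedule: a Metropolis walk on Young's lattice, restricted to partitions of size at most n, whose stationary weight is q raised to the partition's size. Because that weight depends only on the rank, the chain is uniform within each rank, so reading off the state whenever its size equals n gives approximately uniform partitions of n. The function names and the choices q = 1.05 and burn_in are illustrative assumptions.

    import random

    def addable_corners(parts):
        """Row indices at which a box can be added to the Young diagram."""
        if not parts:
            return [0]
        corners = [0] + [i for i in range(1, len(parts)) if parts[i] < parts[i - 1]]
        corners.append(len(parts))      # start a new row of length 1
        return corners

    def removable_corners(parts):
        """Row indices at which a box can be removed."""
        k = len(parts)
        return [i for i in range(k) if i == k - 1 or parts[i] > parts[i + 1]]

    def step(parts, size, n, q):
        """One Metropolis move on Young's lattice restricted to partitions of
        size <= n, with stationary weight q**size (uniform within each rank)."""
        if random.random() < 0.5:                  # propose adding a box
            if size == n:                          # would leave the state space
                return parts, size
            adds = addable_corners(parts)
            i = random.choice(adds)
            new = parts[:]
            if i == len(parts):
                new.append(1)
            else:
                new[i] += 1
            if random.random() < min(1.0, q * len(adds) / len(removable_corners(new))):
                return new, size + 1
        elif size > 0:                             # propose removing a box
            rems = removable_corners(parts)
            i = random.choice(rems)
            new = parts[:]
            new[i] -= 1
            if new[i] == 0:
                new.pop()
            if random.random() < min(1.0, len(rems) / (q * len(addable_corners(new)))):
                return new, size - 1
        return parts, size

    def sample_partition(n, q=1.05, burn_in=200_000):
        """Return an approximately uniform random partition of n."""
        parts, size = [], 0
        for _ in range(burn_in):
            parts, size = step(parts, size, n, q)
        while size != n:                           # wait until the walk hits rank n
            parts, size = step(parts, size, n, q)
        return parts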

    Segmented compressed sampling for analog-to-information conversion: Method and performance analysis

    A new segmented compressed sampling method for analog-to-information conversion (AIC) is proposed. An analog signal measured by a number of parallel branches of mixers and integrators (BMIs), each characterized by a specific random sampling waveform, is first segmented in time into $M$ segments. Then the sub-samples collected on different segments and different BMIs are reused so that a larger number of samples than the number of BMIs is collected. This technique is shown to be equivalent to extending the measurement matrix, which consists of the BMI sampling waveforms, by adding new rows without actually increasing the number of BMIs. We prove that the extended measurement matrix satisfies the restricted isometry property with overwhelming probability if the original measurement matrix of BMI sampling waveforms satisfies it. We also show that the signal recovery performance can be improved significantly if our segmented AIC is used for sampling instead of the conventional AIC. Simulation results verify the effectiveness of the proposed segmented compressed sampling method and the validity of our theoretical studies. Comment: 32 pages, 5 figures, submitted to the IEEE Transactions on Signal Processing in April 201
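
    As an illustration of the sample-reuse idea, the NumPy sketch below discretizes the signal, computes each BMI's per-segment sub-samples, and then stitches segments taken from different branches into extra measurements, which amounts to appending new rows to the measurement matrix. The cyclic reuse pattern and the name segmented_measurements are illustrative assumptions, not the paper's exact scheme.

    import numpy as np

    def segmented_measurements(x, Phi, M, num_shifts):
        """x: length-N discretized signal; Phi: K x N matrix of BMI sampling
        waveforms; M: number of time segments (N divisible by M).
        Returns the extended measurement vector and the equivalent
        extended measurement matrix."""
        K, N = Phi.shape
        assert N % M == 0
        seg = N // M

        # Sub-samples: integrator output of BMI k over segment m.
        sub = np.zeros((K, M))
        for m in range(M):
            sl = slice(m * seg, (m + 1) * seg)
            sub[:, m] = Phi[:, sl] @ x[sl]

        # Conventional AIC samples: each BMI integrates over the whole window.
        y = sub.sum(axis=1)

        # Reused samples: stitch segments taken from *different* BMIs.
        # Extra sample (s, k) sums segment m of branch (k + s*m) mod K.
        rows, extra = [], []
        for s in range(1, num_shifts + 1):
            for k in range(K):
                extra.append(sum(sub[(k + s * m) % K, m] for m in range(M)))
                row = np.zeros(N)
                for m in range(M):
                    sl = slice(m * seg, (m + 1) * seg)
                    row[sl] = Phi[(k + s * m) % K, sl]
                rows.append(row)

        y_ext = np.concatenate([y, np.array(extra)])
        Phi_ext = np.vstack([Phi] + rows)
        # By construction, Phi_ext @ x reproduces y_ext exactly.
        return y_ext, Phi_ext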

    Efficient generation of random derangements with the expected distribution of cycle lengths

    We show how to generate random derangements efficiently by two different techniques: random restricted transpositions and sequential importance sampling. The algorithm employing restricted transpositions can also be used to generate random fixed-point-free involutions only, a.k.a. random perfect matchings on the complete graph. Our data indicate that the algorithms generate random samples with the expected distribution of cycle lengths, which we derive, and for relatively small samples, which can actually be very large in absolute numbers, we argue that they generate samples indistinguishable from the uniform distribution. Both algorithms are simple to understand and implement, and their performance is comparable to or better than that of currently known methods. Simulations suggest that the mixing time of the algorithm based on random restricted transpositions (in the total variation distance with respect to the distribution of cycle lengths) is $O(n^{a}\log{n}^{2})$ with $a \simeq \frac{1}{2}$ and $n$ the length of the derangement. We prove that the sequential importance sampling algorithm generates random derangements in $O(n)$ time with probability $O(1/n)$ of failing. Comment: This version corrected and updated; 14 pages, 2 algorithms, 2 tables, 4 figures
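
    A short Python sketch of the restricted-transpositions technique: start from a cyclic shift and repeatedly swap the images of two random positions, keeping a swap only when it creates no fixed point. The default step count is a heuristic placeholder, not the paper's prescription.

    import random

    def random_derangement(n, steps=None):
        """Random derangement of 0..n-1 via random restricted transpositions."""
        if n < 2:
            raise ValueError("no derangement exists for n < 2")
        sigma = [(i + 1) % n for i in range(n)]    # start from a cyclic shift
        if steps is None:
            steps = 10 * n                         # heuristic; tune for your n
        for _ in range(steps):
            i, j = random.randrange(n), random.randrange(n)
            # After swapping, position i maps to sigma[j] and j maps to sigma[i];
            # a fixed point appears only if sigma[j] == i or sigma[i] == j.
            if i != j and sigma[j] != i and sigma[i] != j:
                sigma[i], sigma[j] = sigma[j], sigma[i]
        return sigma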

    Reparameterizing the Birkhoff Polytope for Variational Permutation Inference

    Many matching, tracking, sorting, and ranking problems require probabilistic reasoning about possible permutations, a set that grows factorially with dimension. Combinatorial optimization algorithms may enable efficient point estimation, but fully Bayesian inference poses a severe challenge in this high-dimensional, discrete space. To surmount this challenge, we start with the usual step of relaxing a discrete set (here, of permutation matrices) to its convex hull, which here is the Birkhoff polytope: the set of all doubly-stochastic matrices. We then introduce two novel transformations: first, an invertible and differentiable stick-breaking procedure that maps unconstrained space to the Birkhoff polytope; second, a map that rounds points toward the vertices of the polytope. Both transformations include a temperature parameter that, in the limit, concentrates the densities on permutation matrices. We then exploit these transformations and reparameterization gradients to introduce variational inference over permutation matrices, and we demonstrate its utility in a series of experiments.
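
    The sketch below illustrates a stick-breaking-style map of this kind in NumPy: an unconstrained (N-1) x (N-1) matrix is squashed, entry by entry, into the interval permitted by the remaining row and column budgets, with the last row and column absorbing the remainders, so the output is doubly stochastic; dividing the inputs by a temperature pushes the result toward the polytope's vertices. The exact bounds, the temperature handling, and the name stick_breaking_birkhoff are assumptions for illustration, not the authors' construction.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def stick_breaking_birkhoff(Z, temperature=1.0):
        """Map an unconstrained (N-1) x (N-1) real matrix Z to an N x N doubly
        stochastic matrix by filling entries left to right, top to bottom."""
        N = Z.shape[0] + 1
        B = np.zeros((N, N))
        col_rem = np.ones(N)                    # unused budget of every column
        for i in range(N - 1):
            row_rem = 1.0                       # unused budget of row i
            for j in range(N - 1):
                ub = min(row_rem, col_rem[j])   # cannot overspend row or column
                # must leave enough for columns j+1..N-1 to complete the row
                lb = max(0.0, row_rem - col_rem[j + 1:].sum())
                B[i, j] = lb + (ub - lb) * sigmoid(Z[i, j] / temperature)
                row_rem -= B[i, j]
                col_rem[j] -= B[i, j]
            B[i, -1] = row_rem                  # last column takes the rest of the row
            col_rem[-1] -= row_rem
        B[-1, :] = col_rem                      # last row takes the rest of each column
        return B

    # Example: row and column sums of the output are (numerically) all ones.
    B = stick_breaking_birkhoff(np.random.randn(4, 4), temperature=0.5)
    assert np.allclose(B.sum(axis=0), 1.0) and np.allclose(B.sum(axis=1), 1.0)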