Random variate generation and connected computational issues for the Poisson–Tweedie distribution
After providing a systematic outline of the stochastic genesis of the Poisson–Tweedie distribution, some computational issues are considered. More specifically, we introduce a closed form for the probability function, as well as a corresponding integral representation which may be useful for large argument values. Several algorithms for generating Poisson–Tweedie random variates are also suggested. Finally, count data connected to the citation profiles of two statistical journals are modeled and analyzed by means of the Poisson–Tweedie distribution.
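As a hedged illustration of the two-stage stochastic genesis mentioned above (our sketch, not the paper's algorithms): the Poisson–Tweedie family arises by mixing a Poisson rate over a Tweedie law, and taking the mixing law to be a gamma distribution, a Tweedie special case, yields the negative binomial member of the family. All names and parameter values below are ours.

```python
import numpy as np

def poisson_gamma_mixture(shape, scale, size, seed=None):
    """Sample from a Poisson mixture with a gamma mixing law.

    The gamma is a special case of the Tweedie family, so this
    two-stage scheme yields a Poisson-Tweedie (here: negative
    binomial) variate: draw a random rate, then a Poisson count.
    """
    rng = np.random.default_rng(seed)
    lam = rng.gamma(shape, scale, size=size)  # Tweedie (gamma) mixing draw
    return rng.poisson(lam)                   # conditional Poisson count

# The mixture mean is shape * scale; a quick sanity check:
x = poisson_gamma_mixture(shape=2.0, scale=3.0, size=100_000, seed=0)
print(x.mean())  # close to 6.0
```

Other Tweedie mixing laws (e.g. inverse Gaussian or positive stable) give the remaining members of the family; only the first draw changes.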
A PRG for Lipschitz Functions of Polynomials with Applications to Sparsest Cut
We give improved pseudorandom generators (PRGs) for Lipschitz functions of low-degree polynomials over the hypercube. These are functions of the form psi(P(x)), where P is a low-degree polynomial and psi is a function with small Lipschitz constant. PRGs for smooth functions of low-degree polynomials have received a lot of attention recently and play an important role in constructing PRGs for the natural class of polynomial threshold functions. In spite of the recent progress, no nontrivial PRGs were known for fooling Lipschitz functions of degree-O(log n) polynomials, even for constant error. In this work, we give the first such generator, obtaining a seed length of (log n)\tilde{O}(d^2/eps^2) for fooling Lipschitz functions of degree-d polynomials with error eps. Previous generators had an exponential dependence on the degree.
We use our PRG to obtain better integrality-gap instances for sparsest cut, a fundamental problem in graph theory with many applications in graph optimization. We give an instance of uniform sparsest cut for which a powerful semidefinite relaxation (SDP), first introduced by Goemans and Linial and studied in the seminal work of Arora, Rao and Vazirani, has an integrality gap of exp(\Omega((log log n)^{1/2})). Understanding the performance of the Goemans–Linial SDP for uniform sparsest cut is an important open problem in approximation algorithms and metric embeddings, and our work gives a near-exponential improvement over previous lower bounds, which achieved a gap of \Omega(log log n).
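For concreteness, here is a toy instance of the function class being fooled (our illustration only; the polynomial, clamp, and all parameters are ours, not the paper's construction): a degree-2 polynomial over the hypercube composed with a 1-Lipschitz clamp. A PRG fools such an f if the mean of f over the generator's outputs is eps-close to its mean over uniform hypercube points.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# A random degree-2 polynomial P(x) = sum_{i<j} a_ij x_i x_j over {-1,1}^n.
A = np.triu(rng.standard_normal((n, n)), k=1)

def P(x):
    return x @ A @ x

def psi(t):
    # A 1-Lipschitz "soft threshold": clamps t into [-1, 1].
    return np.clip(t, -1.0, 1.0)

# f = psi(P(.)) is the kind of function a PRG for this class must fool.
# Estimate its mean over uniform hypercube points:
xs = rng.choice([-1.0, 1.0], size=(4096, n))
vals = np.array([psi(P(x)) for x in xs])
print(vals.mean())
```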
CSM-349 - Benford's Law: An Empirical Investigation and a Novel Explanation
This report describes an investigation into Benford's Law for the distribution of leading digits in real data sets. A large number of such data sets have been examined and it was found that only a small fraction of them conform to the law. Three classes of mathematical model of processes that might account for such a leading digit distribution have also been investigated. We found the model based on the notion of taking the product of many random factors to be the most credible. This led to the identification of a class of lognormal distributions, those whose shape parameter exceeds 1, which satisfy Benford's Law. This in turn led us to a novel explanation for the law: that it is fundamentally a consequence of the fact that many physical quantities cannot meaningfully take negative values. This enabled us to develop a simple set of rules for determining whether a given data set is likely to conform to Benford's Law. Our explanation has an important advantage over previous attempts to account for the law: it also explains which data sets will not have logarithmically distributed leading digits. Some techniques for generating data that satisfy Benford's Law are described and the report concludes with a summary and a discussion of the practical implications.
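The report's lognormal claim can be checked with a short simulation (an illustrative sketch, not the report's code; the parameter values are ours): sample a lognormal whose shape parameter exceeds 1 and compare its leading-digit frequencies with Benford's log10(1 + 1/d) probabilities.

```python
import numpy as np

def leading_digit(x):
    """First significant digit of each positive value in x."""
    # Divide out each value's power of ten, shifting it into [1, 10).
    exponent = np.floor(np.log10(x))
    return (x / 10.0 ** exponent).astype(int)

rng = np.random.default_rng(42)
# Lognormal with shape parameter sigma = 1.5 > 1, as the report requires.
data = rng.lognormal(mean=0.0, sigma=1.5, size=200_000)

digits = leading_digit(data)
empirical = np.bincount(digits, minlength=10)[1:10] / digits.size
benford = np.log10(1.0 + 1.0 / np.arange(1, 10))

# Maximum absolute deviation from Benford's law should be small.
print(np.abs(empirical - benford).max())
```

Rerunning with a shape parameter well below 1 (e.g. sigma = 0.3) makes the deviation large, matching the report's dividing line.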
Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding
Despite its reduced complexity, lattice-reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancellation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points, and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point and only needs to be run once during pre-processing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding, resulting in better error performance than Klein's original algorithm, and we propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive while lattice-reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate that near-ML performance is achieved by a moderate number of samples, even when the dimension is as high as 32.
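To make the baseline concrete, the following is a minimal sketch (ours, not the paper's implementation) of Babai's nearest plane algorithm; Klein's sampler is obtained by replacing the deterministic rounding at each step with a draw from a discrete Gaussian centred at the same coefficient. The basis and target below are illustrative.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Babai's nearest plane algorithm (columns of B form the basis).

    Proceeds from the last Gram-Schmidt direction back to the first,
    deterministically rounding the coefficient at each step.  Klein's
    sampler replaces this rounding with a discrete-Gaussian draw
    centred at c, which is what makes the decoder randomized.
    """
    B = np.asarray(B, dtype=float)
    n = B.shape[1]
    # QR gives the (normalized) Gram-Schmidt directions as columns of Q;
    # any sign flips in Q cancel in the ratio below.
    Q, _ = np.linalg.qr(B)
    r = np.asarray(t, dtype=float).copy()
    coeffs = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Coefficient of r along the i-th Gram-Schmidt direction.
        c = (Q[:, i] @ r) / (Q[:, i] @ B[:, i])
        coeffs[i] = round(c)            # Klein: sample around c instead
        r -= coeffs[i] * B[:, i]
    return B @ coeffs                   # decoded lattice point

# Illustrative 2-D example: decode a perturbed lattice point.
B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
v = B @ np.array([3.0, -2.0])           # true lattice point (4, -4)
x = babai_nearest_plane(B, v + np.array([0.1, -0.1]))
print(x)  # recovers the lattice point (4, -4)
```

Sampling decoding repeats the randomized variant several times and keeps the sample closest to the target, which is what buys the fixed gain in decoding radius over this deterministic baseline.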