The complexity of generating an exponentially distributed variate
Abstract available in the attached files.
Optimal Discrete Uniform Generation from Coin Flips, and Applications
This article introduces an algorithm to draw random discrete uniform
variables within a given range of size n from a source of random bits. The
algorithm aims to be simple to implement and optimal both with regard to the
number of random bits consumed and from a computational perspective, allowing
for faster and more efficient Monte Carlo simulations in computational physics
and biology. I also provide a detailed analysis of the number of bits spent per
variate, and offer some extensions and applications, in particular to the
optimal random generation of permutations.
Comment: first draft, 22 pages, 5 figures, C code implementation of algorithm
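The listing does not reproduce the algorithm itself; as a rough illustration of the kind of bit-recycling scheme such a generator can use (the function names below are mine, not the paper's), a Python sketch:

    import random

    def flip():
        # one unbiased random bit from the entropy source
        return random.getrandbits(1)

    def discrete_uniform(n):
        # Keep a value c that is uniform on {0, ..., v-1}; feed in one random
        # bit at a time to double the range, return c once the range covers
        # {0, ..., n-1}, and otherwise recycle the leftover randomness
        # instead of throwing it away.
        v, c = 1, 0
        while True:
            v, c = 2 * v, 2 * c + flip()
            if v >= n:
                if c < n:
                    return c
                v, c = v - n, c - n

When n is a power of two this sketch consumes exactly log2(n) bits per call; for other n it still stays close to the log2(n) entropy lower bound because rejected randomness is reused rather than discarded.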
Limited Feedback-based Block Diagonalization for the MIMO Broadcast Channel
Block diagonalization is a linear precoding technique for the multiple
antenna broadcast (downlink) channel that involves transmission of multiple
data streams to each receiver such that no multi-user interference is
experienced at any of the receivers. This low-complexity scheme operates only a
few dB away from capacity but requires very accurate channel knowledge at the
transmitter. We consider a limited feedback system where each receiver knows
its channel perfectly, but the transmitter is only provided with a finite
number of channel feedback bits from each receiver. Using a random quantization
argument, we quantify the throughput loss due to imperfect channel knowledge as
a function of the feedback level. The quality of channel knowledge must improve
in proportion to the SNR in order to prevent the system from becoming
interference-limited, and we show that scaling the number of feedback bits
linearly with the system SNR (in dB) is sufficient to maintain a bounded rate
loss. Finally, we compare our
quantization strategy to an analog feedback scheme and show the superiority of
quantized feedback.
Comment: 20 pages, 4 figures, submitted to IEEE JSAC November 200
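As a rough sketch of the random quantization idea (not the paper's exact scheme; names and parameters are illustrative), each receiver can pick the best-aligned entry of a randomly generated, commonly known codebook of 2**B unit vectors and feed back only its index:

    import numpy as np

    def rvq_feedback(h, B, rng):
        # Random vector quantization: compare the channel direction against a
        # random codebook of 2**B unit-norm vectors (shared with the
        # transmitter, e.g. via a common seed) and feed back the B-bit index
        # of the best-aligned codeword.
        M = h.shape[0]
        codebook = rng.standard_normal((2**B, M)) + 1j * rng.standard_normal((2**B, M))
        codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
        direction = h / np.linalg.norm(h)
        idx = int(np.argmax(np.abs(codebook @ direction.conj())))
        return idx, codebook[idx]

    rng = np.random.default_rng(0)
    h = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # one receiver, 4 transmit antennas
    idx, h_hat = rvq_feedback(h, B=10, rng=rng)

The transmitter then treats h_hat as that receiver's channel direction when forming the block-diagonalizing precoder; the residual angle between h and h_hat is what drives the rate loss quantified in the paper.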
On Buffon Machines and Numbers
The well-known needle experiment of Buffon can be regarded as an analog (i.e.,
continuous) device that stochastically "computes" the number 2/pi ~ 0.63661,
which is the experiment's probability of success. Generalizing the experiment
and simplifying the computational framework, we consider probability
distributions that can be produced perfectly from a discrete source of
unbiased coin flips. We describe and analyse a few simple Buffon machines that
generate geometric, Poisson, and logarithmic-series distributions. We provide
human-accessible Buffon machines, which require a dozen coin flips or less, on
average, and produce experiments whose probabilities of success are expressible
in terms of numbers such as exp(-1), log 2, sqrt(3), cos(1/4), and zeta(5).
Generally, we develop a collection of constructions based on simple
probabilistic mechanisms that enable one to design Buffon experiments involving
compositions of exponentials and logarithms, polylogarithms, direct and inverse
trigonometric functions, algebraic and hypergeometric functions, as well as
functions defined by integrals, such as the Gaussian error function.
Comment: Largely revised version with references and figures added. 12 pages.
In ACM-SIAM Symposium on Discrete Algorithms (SODA'2011)
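The constructions themselves are not given in the abstract, but two toy Buffon machines in this spirit can be sketched in Python (helper names are mine): a geometric(1/2) generator, and a von Neumann-style run-length experiment whose success probability is exp(-1), both driven only by fair coin flips.

    import random

    def flip():
        # one unbiased coin flip, the only randomness source used below
        return random.getrandbits(1)

    class LazyUniform:
        # a uniform number in [0, 1) whose binary digits are flipped on demand
        def __init__(self):
            self.bits = []
        def bit(self, i):
            while len(self.bits) <= i:
                self.bits.append(flip())
            return self.bits[i]

    def less_than(u, v):
        # compare two lazy uniforms digit by digit; they differ with probability 1
        i = 0
        while True:
            a, b = u.bit(i), v.bit(i)
            if a != b:
                return a < b
            i += 1

    def geometric_half():
        # Buffon machine for the geometric(1/2) law: tails before the first head
        k = 0
        while flip() == 0:
            k += 1
        return k

    def bernoulli_one_over_e():
        # von Neumann-style machine: extend a strictly decreasing run of lazy
        # uniforms as far as it goes; its length is even with probability exp(-1)
        run = [LazyUniform()]
        while True:
            nxt = LazyUniform()
            if not less_than(nxt, run[-1]):
                return len(run) % 2 == 0
            run.append(nxt)

Averaging many calls of bernoulli_one_over_e() gives a value close to exp(-1) ~ 0.3679, and only a handful of coin flips are consumed per experiment on average.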
Tensor Networks for Dimensionality Reduction and Large-Scale Optimization: Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions.
Comment: 232 pages
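As a small illustration of the tensor train format discussed above (a generic TT-SVD-style sketch under my own naming, not code from the monograph), a d-way array can be factored into a train of 3-way cores by repeated reshaping and truncated SVD:

    import numpy as np

    def tt_svd(x, max_rank):
        # Factor a d-way array into a "train" of 3-way cores by alternately
        # reshaping and applying a truncated SVD; max_rank is a single
        # illustrative cap on all TT ranks.
        dims = x.shape
        cores, r_prev = [], 1
        mat = x.reshape(dims[0], -1)
        for k in range(len(dims) - 1):
            mat = mat.reshape(r_prev * dims[k], -1)
            u, s, vt = np.linalg.svd(mat, full_matrices=False)
            r = min(max_rank, len(s))
            cores.append(u[:, :r].reshape(r_prev, dims[k], r))
            mat = s[:r, None] * vt[:r]
            r_prev = r
        cores.append(mat.reshape(r_prev, dims[-1], 1))
        return cores

    def tt_reconstruct(cores, shape):
        # contract the cores back into a full array to check the approximation
        full = cores[0]
        for core in cores[1:]:
            full = np.tensordot(full, core, axes=([full.ndim - 1], [0]))
        return full.reshape(shape)

    x = np.random.default_rng(1).standard_normal((4, 5, 6, 7))
    cores = tt_svd(x, max_rank=50)  # cap large enough for an exact factorization here
    err = np.linalg.norm(tt_reconstruct(cores, x.shape) - x) / np.linalg.norm(x)
    print(err)

With the rank cap large enough the reconstruction is exact up to floating-point error; lowering max_rank trades accuracy for the compression that underlies the super-compressed representations described above.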