Fighting Bandits with a New Kind of Smoothness
We define a novel family of algorithms for the adversarial multi-armed bandit
problem, and provide a simple analysis technique based on convex smoothing. We
prove two main results. First, we show that regularization via the
\emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the
minimax regret. Second, we show that a wide class of perturbation methods
achieve near-optimal regret provided the perturbation distribution has a
bounded hazard rate. For example, the Gumbel, Weibull, Fréchet, Pareto, and
Gamma distributions all satisfy this key property.
Comment: In Proceedings of NIPS, 201
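To make the perturbation idea concrete, here is a minimal follow-the-perturbed-leader sketch for the adversarial bandit setting, with Gumbel perturbations and geometric resampling for the importance-weighted loss estimates. This is illustrative NumPy code, not the paper's algorithm: the function name, learning rate, and resampling cap are all assumptions chosen for the sketch.

```python
import numpy as np

def ftpl_bandit(losses, rng, cap=1000):
    """Illustrative FTPL bandit sketch (not the paper's exact method).
    Each round plays the arm minimising perturbed cumulative loss
    estimates; geometric resampling gives an unbiased (capped) estimate
    of 1/P(arm) without computing arm probabilities in closed form."""
    T, N = losses.shape
    eta = np.sqrt(np.log(N) / (T * N))   # illustrative tuning, not tuned
    est = np.zeros(N)                    # importance-weighted loss estimates
    total = 0.0
    for t in range(T):
        draw = lambda: int(np.argmin(eta * est - rng.gumbel(size=N)))
        arm = draw()
        total += losses[t, arm]
        # geometric resampling: redraw perturbations until `arm` reappears;
        # the number of draws k estimates 1 / P(arm), capped at `cap`
        k = next((i for i in range(1, cap + 1) if draw() == arm), cap)
        est[arm] += k * losses[t, arm]
    return total
```

A side note on why Gumbel noise is a natural example here: by the Gumbel-max trick, perturbing with i.i.d. Gumbel noise and taking the argmin samples each arm with probability proportional to exp(-eta * est), i.e. the exponential-weights distribution.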
A Proposal to Measure the Quasiparticle Poisoning Time of Majorana Bound States
We propose a method of measuring the fermion parity lifetime of Majorana
fermion modes due to quasiparticle poisoning. We model quasiparticle poisoning
by coupling the Majorana modes to electron reservoirs, explicitly breaking
parity conservation in the system. This poisoning broadens and shortens the
resonance peak associated with Majorana modes. In a two-lead geometry, the
poisoning decreases the correlation in current noise between the two leads from
the maximal value characteristic of crossed Andreev reflection. The latter
measurement allows calculation of the poisoning rate even if the temperature is
much higher than the resonance width.
Comment: 5 pages, 5 figures
Ramsey numbers of cubes versus cliques
The cube graph Q_n is the skeleton of the n-dimensional cube. It is an
n-regular graph on 2^n vertices. The Ramsey number r(Q_n, K_s) is the minimum N
such that every graph of order N contains the cube graph Q_n or an independent
set of order s. Burr and Erdős in 1983 asked whether the simple lower bound
r(Q_n, K_s) >= (s-1)(2^n - 1)+1 is tight for s fixed and n sufficiently large.
We make progress on this problem, obtaining the first upper bound which is
within a constant factor of the lower bound.
Comment: 26 pages
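For context, the lower bound quoted in the abstract comes from a standard construction, which can be stated in one line:

```latex
% Lower bound r(Q_n, K_s) >= (s-1)(2^n - 1) + 1:
% let G be the disjoint union of s-1 cliques, each on 2^n - 1 vertices.
% Then |V(G)| = (s-1)(2^n - 1); G contains no copy of Q_n, since Q_n is
% connected with 2^n vertices while each component of G has only 2^n - 1;
% and any independent set meets each clique in at most one vertex, so
% \alpha(G) \le s-1 < s.
```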
Parallel resampling in the particle filter
Modern parallel computing devices, such as the graphics processing unit
(GPU), have gained significant traction in scientific and statistical
computing. They are particularly well-suited to data-parallel algorithms such
as the particle filter, or more generally Sequential Monte Carlo (SMC), which
are increasingly used in statistical inference. SMC methods carry a set of
weighted particles through repeated propagation, weighting and resampling
steps. The propagation and weighting steps are straightforward to parallelise,
as they require only independent operations on each particle. The resampling
step is more difficult, as standard schemes require a collective operation,
such as a sum, across particle weights. Focusing on this resampling step, we
analyse two alternative schemes that do not involve a collective operation
(Metropolis and rejection resamplers), and compare them to standard schemes
(multinomial, stratified and systematic resamplers). We find that, in certain
circumstances, the alternative resamplers can perform significantly faster on a
GPU, and to a lesser extent on a CPU, than the standard approaches. Moreover,
in single precision, the standard approaches are numerically biased for upwards
of hundreds of thousands of particles, while the alternatives are not. This is
particularly important given greater single- than double-precision throughput
on modern devices, and the consequent temptation to use single precision with a
greater number of particles. Finally, we provide auxiliary functions useful for
implementation, such as for the permutation of ancestry vectors to enable
in-place propagation.
Comment: 21 pages, 6 figures
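The contrast the abstract draws can be sketched in a few lines of NumPy (illustrative code under my own naming, not the authors' implementation): the systematic resampler needs a collective cumulative sum over all weights, whereas the Metropolis resampler chooses each ancestor by an independent chain of pairwise weight comparisons, with only a chain length `b` to tune.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Standard systematic resampler: requires collective operations
    (a sum and a cumulative sum) across all particle weights."""
    n = len(weights)
    cumsum = np.cumsum(weights) / np.sum(weights)   # collective step
    positions = (rng.random() + np.arange(n)) / n   # one stratified draw
    return np.searchsorted(cumsum, positions)

def metropolis_resample(weights, rng, b=32):
    """Collective-free Metropolis resampler: each ancestor index runs an
    independent Metropolis chain over particles, accepting a proposed
    index j over the current index i with probability min(1, w_j / w_i).
    Only pairwise weight ratios are needed; b is a tuning parameter."""
    n = len(weights)
    ancestors = np.arange(n)
    for _ in range(b):
        proposals = rng.integers(0, n, size=n)
        accept = rng.random(n) * weights[ancestors] < weights[proposals]
        ancestors = np.where(accept, proposals, ancestors)
    return ancestors
```

Each of the `b` sweeps of the Metropolis resampler is embarrassingly parallel across particles, which is what makes it attractive on a GPU; the price is that the output is only approximately distributed according to the weights, with the bias shrinking as `b` grows.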