2,019 research outputs found
Batched bandit problems
Motivated by practical applications, chiefly clinical trials, we study the
regret achievable for stochastic bandits under the constraint that the employed
policy must split trials into a small number of batches. We propose a simple
policy, and show that a very small number of batches gives close to minimax
optimal regret bounds. As a byproduct, we derive optimal policies with low
switching cost for stochastic bandits.
Comment: Published at http://dx.doi.org/10.1214/15-AOS1381 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
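As a rough illustration of the batching constraint, here is a minimal sketch of a two-batch explore-then-commit policy; the arm means, batch sizes, and the policy itself are illustrative assumptions, not the specific policy proposed in the paper:

```python
import random

rng = random.Random(0)

def batched_etc(arms, explore_per_arm, horizon):
    """Two-batch policy: one exploration batch, then one commit batch."""
    # Batch 1: pull every arm a fixed number of times.
    totals = [0.0] * len(arms)
    for i, arm in enumerate(arms):
        for _ in range(explore_per_arm):
            totals[i] += arm()
    best = max(range(len(arms)), key=lambda i: totals[i])
    # Batch 2: commit to the empirically best arm for all remaining pulls,
    # so the policy switches arms only a constant number of times.
    reward = sum(totals)
    for _ in range(horizon - explore_per_arm * len(arms)):
        reward += arms[best]()
    return best, reward

# Two Bernoulli arms with means 0.9 and 0.1 (illustrative).
arms = [lambda: float(rng.random() < 0.9), lambda: float(rng.random() < 0.1)]
best, reward = batched_etc(arms, explore_per_arm=100, horizon=1000)
```

The two-batch structure also makes the low-switching-cost byproduct visible: the arm is changed only between batches, never within one.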
On Some Integrated Approaches to Inference
We present arguments for the formulation of a unified approach to different
standard continuous inference methods from partial information. It is claimed
that an explicit partition of information into a priori (prior knowledge) and a
posteriori information (data) is an important way of standardizing inference
approaches so that they can be compared on a normative scale, and so that
notions of optimal algorithms become farther-reaching. The inference methods
considered include neural network approaches, information-based complexity, and
Monte Carlo, spline, and regularization methods. The model is an extension of
currently used continuous complexity models, with a class of algorithms in the
form of optimization methods, in which an optimization functional (involving
the data) is minimized. This extends the family of current approaches in
continuous complexity theory, which include the use of interpolatory algorithms
in worst and average case settings.
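A classical instance of such an optimization functional, combining a data term with an a priori penalty, is Tikhonov regularization; the matrix, data, and regularization weight below are illustrative choices, not taken from the paper:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Minimize ||Ax - b||^2 + lam * ||x||^2.

    The a posteriori information (the data b) enters through the fit term;
    the a priori knowledge (prefer small-norm solutions) through the penalty.
    The minimizer solves the normal equations (A^T A + lam I) x = A^T b.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 2.0, 2.0])
x = tikhonov(A, b, lam=0.1)
# At the minimizer, the gradient A^T(Ax - b) + lam * x vanishes.
grad = A.T @ (A @ x - b) + 0.1 * x
```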
Approximate Profile Maximum Likelihood
We propose an efficient algorithm for approximate computation of the profile
maximum likelihood (PML), a variant of maximum likelihood maximizing the
probability of observing a sufficient statistic rather than the empirical
sample. The PML has appealing theoretical properties, but is difficult to
compute exactly. Inspired by observations gleaned from exactly solvable cases,
we look for an approximate PML solution, which, intuitively, clumps comparably
frequent symbols into one symbol. This amounts to lower-bounding a certain
matrix permanent by summing over a subgroup of the symmetric group rather than
the whole group during the computation. We extensively experiment with the
approximate solution, and find that the empirical performance of our approach
is competitive with, and sometimes significantly better than, the
state-of-the-art for various estimation problems.
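For intuition, exact PML can be brute-forced in a tiny, exactly solvable case. The sketch below (alphabet of size 2 and a grid search over p, both illustrative assumptions) maximizes the probability of the profile, i.e. the multiset of symbol multiplicities, rather than of the sample itself:

```python
from itertools import product
from math import prod

def profile(seq):
    """Multiset of symbol multiplicities, e.g. 'aab' -> (1, 2)."""
    counts = {}
    for s in seq:
        counts[s] = counts.get(s, 0) + 1
    return tuple(sorted(counts.values()))

def profile_prob(p, phi, n):
    """Probability that an i.i.d. sample of size n from p has profile phi."""
    support = range(len(p))
    return sum(prod(p[x] for x in seq)
               for seq in product(support, repeat=n)
               if profile(seq) == phi)

# PML for the profile of 'aab' over a 2-symbol alphabet, by grid search:
# maximize 3p(1-p), which peaks at the uniform distribution p = 1/2.
phi = profile('aab')
grid = [i / 100 for i in range(1, 100)]
best_p = max(grid, key=lambda q: profile_prob((q, 1 - q), phi, 3))
```

The enumeration over all sequences is the part that becomes intractable at scale, which is what motivates approximations such as the one in the paper.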
Unbiased Shape Compactness for Segmentation
We propose to constrain segmentation functionals with a dimensionless,
unbiased and position-independent shape compactness prior, which we solve
efficiently with an alternating direction method of multipliers (ADMM).
Involving a squared sum of pairwise potentials, our prior results in a
challenging high-order optimization problem, which involves dense (fully
connected) graphs. We split the problem into a sequence of easier sub-problems,
each performed efficiently at each iteration: (i) a sparse-matrix inversion
based on the Woodbury identity, (ii) a closed-form solution of a cubic equation and
(iii) a graph-cut update of a sub-modular pairwise sub-problem with a sparse
graph. We deploy our prior in an energy minimization, in conjunction with a
supervised classifier term based on CNNs and standard regularization
constraints. We demonstrate the usefulness of our energy in several medical
applications. In particular, we report comprehensive evaluations of our fully
automated algorithm over 40 subjects, showing a competitive performance for the
challenging task of abdominal aorta segmentation in MRI.
Comment: Accepted at MICCAI 201
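Sub-problem (i) rests on the Woodbury identity; a minimal numerical check of the identity itself (with small random matrices, not the paper's ADMM sub-problem) looks like this:

```python
import numpy as np

# Woodbury identity:
#   (A + U C V)^(-1) = A^(-1) - A^(-1) U (C^(-1) + V A^(-1) U)^(-1) V A^(-1)
# It lets a low-rank update of an easily invertible A be inverted cheaply:
# here A is diagonal, so A^(-1) is trivial, and only a k x k system remains.
rng = np.random.default_rng(0)
n, k = 6, 2
A = np.diag(rng.uniform(1.0, 2.0, n))
U = rng.standard_normal((n, k))
C = np.eye(k)
V = rng.standard_normal((k, n))

Ainv = np.diag(1.0 / np.diag(A))
woodbury = Ainv - Ainv @ U @ np.linalg.inv(
    np.linalg.inv(C) + V @ Ainv @ U) @ V @ Ainv
direct = np.linalg.inv(A + U @ C @ V)
```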
Simplified Energy Landscape for Modularity Using Total Variation
Networks capture pairwise interactions between entities and are frequently
used in applications such as social networks, food networks, and protein
interaction networks, to name a few. Communities, cohesive groups of nodes,
often form in these applications, and identifying them gives insight into the
overall organization of the network. One common quality function used to
identify community structure is modularity. In Hu et al. [SIAM J. App. Math.,
73(6), 2013], it was shown that modularity optimization is equivalent to
minimizing a particular nonconvex total variation (TV) based functional over a
discrete domain. They solve this problem, assuming the number of communities is
known, using a Merriman, Bence, Osher (MBO) scheme.
We show that modularity optimization is equivalent to minimizing a convex
TV-based functional over a discrete domain, again, assuming the number of
communities is known. Furthermore, we show that modularity has no convex
relaxation satisfying certain natural conditions. We therefore find a
manageable non-convex approximation using a Ginzburg-Landau functional, which
provably converges to the correct energy in the limit of a certain parameter.
We then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et
al. and which is 7 times faster at solving the associated diffusion equation
due to the fact that the underlying discretization is unconditionally stable.
Our numerical tests include a hyperspectral video whose associated graph has
2.9x10^7 edges, which is roughly 37 times larger than was handled in the paper
of Hu et al.
Comment: 25 pages, 3 figures, 3 tables, submitted to SIAM J. App. Math.
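For reference, the modularity being optimized can be computed directly from its standard definition; the toy graph and partition below are illustrative, and none of the TV/MBO machinery from the paper is reproduced:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) [c_i = c_j],

    where A is the adjacency matrix, k the degree vector, and 2m the total
    edge-endpoint count. High Q means many more intra-community edges than
    a degree-matched random graph would have.
    """
    k = A.sum(axis=1)
    two_m = A.sum()
    same = np.equal.outer(labels, labels)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single edge; the natural partition puts one
# triangle in each community.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
labels = np.array([0, 0, 0, 1, 1, 1])
q = modularity(A, labels)  # 5/14 for this graph and partition
```

Community-detection schemes such as the MBO algorithm in the paper search over `labels` to maximize this quantity.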