Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier
This paper explores a surprising equivalence between two seemingly distinct
convex optimization methods. We show that simulated annealing, a well-studied
random walk algorithm, is directly equivalent, in a certain sense, to the
central path interior point algorithm for the entropic universal barrier
function. This connection yields several benefits. First, we are able to improve
the state of the art time complexity for convex optimization under the
membership oracle model. We improve the analysis of the randomized algorithm of
Kalai and Vempala by utilizing tools developed by Nesterov and Nemirovskii that
underlie the central path following interior point algorithm. We are able to
tighten the temperature schedule for simulated annealing, which yields an
improved running time, reducing it by a factor of the square root of the
dimension in certain instances. Second, we obtain an efficient randomized
interior point method with an
efficiently computable universal barrier for any convex set described by a
membership oracle. Previously, efficiently computable barriers were known only
for particular convex sets.
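To make the random-walk side of this equivalence concrete, the following is a minimal sketch, not the paper's algorithm: simulated annealing of a linear objective over a convex body given only by a membership oracle, with a Kalai-Vempala-style geometric cooling schedule. The toy oracle, step sizes, and round counts are all illustrative assumptions.

```python
import numpy as np

def membership_oracle(x):
    """Toy oracle: the unit Euclidean ball. Any convex-set test works here."""
    return np.linalg.norm(x) <= 1.0

def anneal(c, n, rounds=100, steps_per_round=100, step_size=0.05, seed=0):
    """Minimize c.x over the set by Metropolis sampling from densities
    proportional to exp(-c.x / t), cooling t geometrically. The
    (1 - 1/sqrt(n)) factor mimics the Kalai-Vempala schedule that the
    paper's analysis tightens; constants here are illustrative."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)                    # assumed interior starting point
    t = 1.0                            # initial temperature
    cooling = 1.0 - 1.0 / np.sqrt(n)
    for _ in range(rounds):
        for _ in range(steps_per_round):
            y = x + step_size * rng.standard_normal(n)   # ball-walk proposal
            if membership_oracle(y):
                delta = c @ y - c @ x
                # Metropolis accept/reject for the Boltzmann density
                if delta <= 0 or rng.random() < np.exp(-delta / t):
                    x = y
        t *= cooling                   # lower the temperature
    return x

c = np.array([1.0, -2.0, 0.5])
x_star = anneal(c, n=3)
print(x_star, c @ x_star)   # approaches the minimizer -c/|c| on the ball
```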
A Collaborative Mechanism for Crowdsourcing Prediction Problems
Machine Learning competitions such as the Netflix Prize have proven
reasonably successful as a method of "crowdsourcing" prediction tasks. But
these competitions have a number of weaknesses, particularly in the incentive
structure they create for the participants. We propose a new approach, called a
Crowdsourced Learning Mechanism, in which participants collaboratively "learn"
a hypothesis for a given prediction task. The approach draws heavily from the
concept of a prediction market, where traders bet on the likelihood of a future
event. In our framework, the mechanism continues to publish the current
hypothesis, and participants can modify this hypothesis by wagering on an
update. The critical incentive property is that a participant profits in
proportion to how much her update improves performance on a released test set.
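The incentive property lends itself to a toy illustration. The sketch below is an assumed, simplified payoff rule in the spirit of the mechanism, not the paper's exact contract: the mechanism publishes the current hypothesis, and a wager on an update pays in proportion to the improvement in test-set loss. All class and function names are hypothetical.

```python
import numpy as np

def test_loss(w, X, y):
    """Mean squared error of a linear hypothesis on the released test set."""
    return float(np.mean((X @ w - y) ** 2))

class CrowdsourcedLearningMechanism:
    """Toy market: the posted hypothesis is public; each wager pays in
    proportion to how much the proposed update improves test loss.
    Illustrative payoff rule only, not the paper's exact contract."""
    def __init__(self, w0, X_test, y_test):
        self.w = w0
        self.X, self.y = X_test, y_test

    def wager(self, w_new, stake):
        old = test_loss(self.w, self.X, self.y)
        new = test_loss(w_new, self.X, self.y)
        payoff = stake * (old - new)   # profit iff the update helps
        if new < old:
            self.w = w_new             # publish the improved hypothesis
        return payoff

rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 3)), rng.standard_normal(50)
m = CrowdsourcedLearningMechanism(np.zeros(3), X, y)
better = np.linalg.lstsq(X, y, rcond=None)[0]   # a participant's update
print(m.wager(better, stake=1.0))               # positive payoff
```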
Rate of Price Discovery in Iterative Combinatorial Auctions
We study a class of iterative combinatorial auctions which can be viewed as
subgradient descent methods for the problem of pricing bundles to balance
supply and demand. We provide concrete convergence rates for auctions in this
class, bounding the number of auction rounds needed to reach clearing prices.
Our analysis allows for a variety of pricing schemes, including item, bundle,
and polynomial pricing, and the respective convergence rates confirm that more
expressive pricing schemes come at the cost of slower convergence. We consider
two models of bidder behavior. In the first model, bidders behave
stochastically according to a random utility model, which includes standard
best-response bidding as a special case. In the second model, bidders behave
arbitrarily (even adversarially), and meaningful convergence relies on properly
designed activity rules.
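As a rough illustration of the subgradient-descent view, here is a sketch under strong simplifying assumptions (item pricing, unit-demand bidders, best-response bidding): each round, bidders demand their favorite item at the current prices, and prices take a subgradient step in the direction of excess demand. All names and parameters are illustrative.

```python
import numpy as np

def tatonnement(valuations, supply, eta=0.1, rounds=500):
    """Item-pricing sketch: unit-demand bidders best-respond to posted
    prices, and prices move by a subgradient step on excess demand.
    Illustrative only; the paper also covers bundle and polynomial
    pricing and stochastic bidder behavior."""
    prices = np.zeros(len(supply))
    for _ in range(rounds):
        demand = np.zeros_like(prices)
        for v in valuations:                 # each bidder's best response
            utilities = v - prices
            j = int(np.argmax(utilities))
            if utilities[j] > 0:             # buy the best item if profitable
                demand[j] += 1
        # Subgradient step toward balancing supply and demand
        prices = np.maximum(0.0, prices + eta * (demand - supply))
    return prices

vals = np.array([[3.0, 1.0], [2.5, 2.0], [1.0, 3.0]])  # 3 bidders, 2 items
print(tatonnement(vals, supply=np.array([1.0, 1.0])))
```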
Fighting Bandits with a New Kind of Smoothness
We define a novel family of algorithms for the adversarial multi-armed bandit
problem, and provide a simple analysis technique based on convex smoothing. We
prove two main results. First, we show that regularization via the
\emph{Tsallis entropy}, which includes EXP3 as a special case, achieves the
minimax regret. Second, we show that a wide class of
perturbation methods achieves near-optimal regret as low as $O(\sqrt{TN \log N})$
if the perturbation distribution has a bounded hazard rate. For example, the
Gumbel, Weibull, Fréchet, Pareto, and Gamma distributions all satisfy this key
property.
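One way to see the hazard-rate condition in action: with Gumbel perturbations, follow-the-perturbed-leader has a closed-form arm distribution (a softmax over estimated losses), recovering exponential-weights-style play. The sketch below is illustrative rather than the paper's presentation; loss_fn and all parameters are assumptions.

```python
import numpy as np

def gumbel_bandit(loss_fn, n_arms, T, eta=0.1, seed=0):
    """Follow-the-perturbed-leader for adversarial bandits. With Gumbel
    perturbations the arm-choice distribution is exactly
    softmax(-eta * L_hat), so the sampling probabilities needed for
    unbiased loss estimates are available in closed form. Other
    bounded-hazard-rate perturbations fit the same template.
    Assumes losses in [0, 1]; loss_fn(t, arm) is hypothetical."""
    rng = np.random.default_rng(seed)
    L_hat = np.zeros(n_arms)            # importance-weighted loss estimates
    for t in range(T):
        # Perturbed-leader draw (Gumbel-max trick)
        scores = -eta * L_hat + rng.gumbel(size=n_arms)
        arm = int(np.argmax(scores))
        # The same draw's distribution, computed exactly for weighting
        z = -eta * L_hat
        p = np.exp(z - z.max()); p /= p.sum()
        loss = loss_fn(t, arm)          # observe only the played arm
        L_hat[arm] += loss / p[arm]     # unbiased loss estimate
    return L_hat

losses = np.array([0.2, 0.5, 0.8])
print(gumbel_bandit(lambda t, a: losses[a], n_arms=3, T=2000))
```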
Low-Cost Learning via Active Data Procurement
We design mechanisms for online procurement of data held by strategic agents
for machine learning tasks. The challenge is to use past data to actively price
future data and give learning guarantees even when an agent's cost for
revealing her data may depend arbitrarily on the data itself. We achieve this
goal by showing how to convert a large class of no-regret algorithms into
online posted-price and learning mechanisms. Our results in a sense parallel
classic sample complexity guarantees, but with the key resource being money
rather than quantity of data: with a budget constraint $B$, we give robust risk
(predictive error) bounds on the order of $1/\sqrt{B}$. Because we use an
active approach, we can often guarantee to do significantly better by
leveraging correlations between costs and data.
Our algorithms and analysis go through a model of no-regret learning with $T$
arriving pairs (cost, data) and a budget constraint of $B$. Our regret bounds
for this model are on the order of $T/\sqrt{B}$, and we give lower bounds on the
same order.
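A minimal sketch may help fix the model, assuming a much-simplified static posted price (the paper prices adaptively from past data): each arriving agent sells her data point only if her cost is at most the posted price, and the learner takes an online-gradient step while the budget is tracked. All names and parameters are hypothetical.

```python
import numpy as np

def procure_and_learn(stream, budget, price, lr=0.1, dim=3):
    """Posted-price sketch: each arriving agent holds (cost, x, y). If
    her cost is at most the posted price, she sells; we pay the posted
    price and take an online-gradient step on squared loss.
    Illustrative only; the paper converts no-regret algorithms into
    adaptive pricing mechanisms."""
    w = np.zeros(dim)
    spent = 0.0
    for cost, x, y in stream:
        if spent + price > budget:
            break                       # budget exhausted
        if cost <= price:               # agent accepts the posted price
            spent += price
            grad = 2 * (w @ x - y) * x  # squared-loss gradient
            w -= lr * grad
    return w, spent

rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0, 0.5])
def stream(n):
    for _ in range(n):
        x = rng.standard_normal(3)
        yield rng.uniform(0, 1), x, w_true @ x + 0.1 * rng.standard_normal()

w, spent = procure_and_learn(stream(1000), budget=50.0, price=0.3)
print(w, spent)
```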
A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization
We present a general approach for collaborative filtering (CF) using spectral
regularization to learn linear operators from "users" to the "objects" they
rate. Recent low-rank type matrix completion approaches to CF are shown to be
special cases. However, unlike existing regularization-based CF methods, our
approach can also incorporate side information such as attributes of the users
or the objects. We then provide novel representer theorems that we use to
develop new
estimation methods. We provide learning algorithms based on low-rank
decompositions, and test them on a standard CF dataset. The experiments
indicate the advantages of generalizing existing regularization-based CF
methods to incorporate related information about users and objects. Finally, we
show that certain multi-task learning methods can also be seen as special cases
of our proposed approach.
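To illustrate the low-rank special case, here is a minimal sketch of spectrally regularized matrix completion via iterated SVD soft-thresholding (in the style of Soft-Impute); it omits the paper's operator framework and attribute kernels. Names and parameters are assumptions.

```python
import numpy as np

def soft_impute(R, mask, lam=0.5, iters=100):
    """Spectral regularization in its simplest form: fill the rating
    matrix by iterating SVD soft-thresholding (nuclear-norm shrinkage).
    A sketch of the low-rank special case only; the paper's operator
    view also folds in user/object attributes."""
    Z = np.zeros_like(R)
    for _ in range(iters):
        # Impute unobserved entries with the current estimate
        filled = np.where(mask, R, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)      # shrink the spectrum
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(0)
true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))  # rank 2
mask = rng.random(true.shape) < 0.5                                 # observed
print(np.abs(soft_impute(true, mask) - true)[~mask].mean())
```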