The Assistive Multi-Armed Bandit
Learning the preferences implicit in the choices humans make is a well-studied
problem in both economics and computer science. However, most work makes the
assumption that humans are acting (noisily) optimally with respect to their
preferences. Such approaches can fail when people are themselves learning about
what they want. In this work, we introduce the assistive multi-armed bandit,
where a robot assists a human playing a bandit task to maximize cumulative
reward. In this problem, the human does not know the reward function but can
learn it through the rewards received from arm pulls; the robot only observes
which arms the human pulls but not the reward associated with each pull. We
offer sufficient and necessary conditions for successfully assisting the human
in this framework. Surprisingly, better human performance in isolation does not
necessarily lead to better performance when assisted by the robot: a human
policy can do better by effectively communicating its observed rewards to the
robot. We conduct proof-of-concept experiments that support these results. We
see this work as contributing towards a theory behind algorithms for
human-robot interaction.
Comment: Accepted to HRI 201
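A minimal simulation sketch of this setting in Python/NumPy. The interaction protocol below (the human proposes an arm from its running reward estimates, the robot selects the arm to pull from the proposal history alone, and only the human observes the reward) is an assumption made for illustration, not the paper's protocol; all names and parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    K, T = 5, 2000
    true_means = rng.uniform(0.0, 1.0, K)     # unknown to both agents

    # Human: epsilon-greedy learner that observes rewards of pulled arms.
    h_counts = np.zeros(K)
    h_means = np.zeros(K)

    # Robot: only sees which arms the human proposed, never the rewards.
    proposal_counts = np.zeros(K)

    total_reward = 0.0
    for t in range(T):
        # Human proposes an arm from its current reward estimates.
        if rng.random() < 0.1:
            proposal = int(rng.integers(K))
        else:
            proposal = int(np.argmax(h_means))
        proposal_counts[proposal] += 1

        # Robot picks the arm to pull using only the proposal history:
        # here, simply the arm the human has favoured most often so far
        # (an assumed, deliberately naive assistance rule).
        pulled = int(np.argmax(proposal_counts))

        # Environment returns a noisy reward; only the human sees it.
        reward = true_means[pulled] + 0.1 * rng.standard_normal()
        total_reward += reward
        h_counts[pulled] += 1
        h_means[pulled] += (reward - h_means[pulled]) / h_counts[pulled]

    print("best arm:", int(np.argmax(true_means)),
          " robot's final choice:", int(np.argmax(proposal_counts)),
          " avg reward:", total_reward / T)

The sketch only illustrates the information asymmetry at the heart of the problem: the human's proposals are the robot's sole evidence about the reward function.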
Inference via low-dimensional couplings
We investigate the low-dimensional structure of deterministic transformations
between random variables, i.e., transport maps between probability measures. In
the context of statistics and machine learning, these transformations can be
used to couple a tractable "reference" measure (e.g., a standard Gaussian) with
a target measure of interest. Direct simulation from the desired measure can
then be achieved by pushing forward reference samples through the map. Yet
characterizing such a map---e.g., representing and evaluating it---grows
challenging in high dimensions. The central contribution of this paper is to
establish a link between the Markov properties of the target measure and the
existence of low-dimensional couplings, induced by transport maps that are
sparse and/or decomposable. Our analysis not only facilitates the construction
of transformations in high-dimensional settings, but also suggests new
inference methodologies for continuous non-Gaussian graphical models. For
instance, in the context of nonlinear state-space models, we describe new
variational algorithms for filtering, smoothing, and sequential parameter
inference. These algorithms can be understood as the natural
generalization---to the non-Gaussian case---of the square-root
Rauch-Tung-Striebel Gaussian smoother.
Comment: 78 pages, 25 figures
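A minimal sketch, in Python/NumPy, of pushforward sampling through a sparse, lower-triangular transport map in the spirit of the couplings discussed above. The reference measure, the toy target, and the particular map are illustrative assumptions, not the paper's construction.

    import numpy as np

    rng = np.random.default_rng(0)

    # Reference measure: standard Gaussian in three dimensions.
    z = rng.standard_normal((10000, 3))

    # A lower-triangular (Knothe-Rosenblatt-style) transport map whose
    # sparsity mirrors a chain graph x1 - x2 - x3: each component depends
    # only on a few reference variables (an assumed toy target).
    def transport(z):
        x1 = z[:, 0]
        x2 = 0.5 * x1 + np.sqrt(0.75) * z[:, 1]      # depends on z1, z2
        x3 = x2 ** 2 + 0.3 * z[:, 2]                 # non-Gaussian component
        return np.column_stack([x1, x2, x3])

    # Direct simulation from the target: push reference samples through the map.
    x = transport(z)
    print("sample mean:", x.mean(axis=0))
    print("sample covariance:\n", np.cov(x.T))

The triangular, sparse structure of the map is exactly the kind of low-dimensional coupling whose existence the paper ties to the Markov properties of the target measure.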
Projections Onto Convex Sets (POCS) Based Optimization by Lifting
Two new optimization techniques based on the projections onto convex sets (POCS)
framework for solving convex and some non-convex optimization problems are
presented. The dimension of the minimization problem is lifted by one and sets
corresponding to the cost function are defined. If the cost function is a
convex function in R^N the corresponding set is a convex set in R^(N+1). The
iterative optimization approach starts with an arbitrary initial estimate in
R^(N+1) and an orthogonal projection is performed onto one of the sets in a
sequential manner at each step of the algorithm. The method provides
globally optimal solutions for total-variation, filtered-variation, l1, and
entropic cost functions. It is also experimentally observed that cost functions
based on lp, p < 1, can be handled by using the supporting hyperplane concept.
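A minimal sketch of the lifting idea in Python/NumPy, for a simple cost whose epigraph projection has a closed form. The choice f(x) = ||x - b||_2, whose minimum value is zero so that the epigraph set meets the hyperplane w = 0, is an illustrative assumption; the sets used for total-variation, filtered-variation, or l1 costs in the paper are different.

    import numpy as np

    b = np.array([2.0, -1.0, 0.5])       # f(x) = ||x - b||_2, global minimum at x = b

    def project_epigraph(x, w):
        """Orthogonal projection of (x, w) onto {(x, w) in R^(N+1): w >= ||x - b||_2}."""
        v = x - b
        n = np.linalg.norm(v)
        if n <= w:                        # already inside the epigraph set
            return x, w
        if n <= -w:                       # projects onto the cone's apex
            return b.copy(), 0.0
        a = 0.5 * (n + w)                 # standard second-order-cone projection
        return b + a * v / n, a

    # Lifted iteration: start from an arbitrary point in R^(N+1) and project
    # alternately onto the cost-function set and onto the hyperplane {w = 0}.
    x, w = np.array([5.0, 4.0, -3.0]), 10.0
    for _ in range(200):
        x, w = project_epigraph(x, w)
        w = 0.0

    print("estimate:", x, "  true minimizer:", b)

Because both sets are convex in R^(N+1) and intersect at the lifted global minimizer, the alternating projections converge to it.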
A Stochastic Majorize-Minimize Subspace Algorithm for Online Penalized Least Squares Estimation
Stochastic approximation techniques play an important role in solving many
problems encountered in machine learning or adaptive signal processing. In
these contexts, the statistics of the data are often unknown a priori, or their
direct computation is too costly, and they thus have to be estimated online
from the observed signals. For batch optimization of an objective function
being the sum of a data fidelity term and a penalization (e.g. a sparsity
promoting function), Majorize-Minimize (MM) methods have recently attracted
much interest since they are fast, highly flexible, and effective in ensuring
convergence. The goal of this paper is to show how these methods can be
successfully extended to the case when the data fidelity term corresponds to a
least squares criterion and the cost function is replaced by a sequence of
stochastic approximations of it. In this context, we propose an online version
of an MM subspace algorithm and we study its convergence by using suitable
probabilistic tools. Simulation results illustrate the good practical
performance of the proposed algorithm, combined with a memory-gradient
subspace, when applied to both non-adaptive and adaptive filter identification
problems.
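A minimal sketch, in Python/NumPy, of an online MM step of this flavour: a diagonal quadratic majorant of a smoothed-l1 penalty is minimized over a two-dimensional memory-gradient subspace while the least-squares statistics are averaged online. The streaming model, the penalty, and all parameter values are illustrative assumptions, not the algorithm as specified in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Streaming linear model y_t = x_t . h_true + noise, with a sparse h_true.
    N, T, lam, delta = 20, 5000, 0.05, 1e-2
    h_true = np.zeros(N)
    h_true[[2, 7, 11]] = [1.0, -0.5, 0.8]

    h = np.zeros(N)                  # current estimate
    d_prev = np.zeros(N)             # previous search direction (memory)
    R = np.zeros((N, N))             # running estimate of E[x x^T]
    p = np.zeros(N)                  # running estimate of E[y x]

    for t in range(1, T + 1):
        x = rng.standard_normal(N)
        y = x @ h_true + 0.1 * rng.standard_normal()

        # Online averaging of the least-squares statistics.
        R += (np.outer(x, x) - R) / t
        p += (y * x - p) / t

        # Quadratic majorant of the penalized criterion at the current h:
        # the smoothed-l1 penalty lam * sum sqrt(h_j^2 + delta^2) is majorized
        # by a diagonal quadratic with weights lam / sqrt(h_j^2 + delta^2).
        w = lam / np.sqrt(h ** 2 + delta ** 2)
        A = R + 0.5 * np.diag(w)

        # Memory-gradient subspace: span{ -gradient, previous direction }.
        grad = 2.0 * (A @ h - p)
        D = np.column_stack([-grad, d_prev]) if t > 1 else (-grad)[:, None]

        # Minimize the majorant over the subspace (a tiny linear system).
        M = D.T @ A @ D + 1e-12 * np.eye(D.shape[1])
        c = np.linalg.solve(M, D.T @ (p - A @ h))
        step = D @ c
        h, d_prev = h + step, step

    print("estimate on the true support:", np.round(h[[2, 7, 11]], 3))
    print("largest magnitude off the support:",
          round(float(np.max(np.abs(np.delete(h, [2, 7, 11])))), 3))

Restricting each MM update to the memory-gradient subspace keeps the per-sample cost low while retaining the monotone-decrease behaviour of the majorize-minimize scheme on the running criterion.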