Online Learning and Profit Maximization from Revealed Preferences
We consider the problem of learning from revealed preferences in an online
setting. In our framework, each period a consumer buys an optimal bundle of
goods from a merchant according to her (linear) utility function and current
prices, subject to a budget constraint. The merchant observes only the
purchased goods, and seeks to adapt prices to optimize his profits. We give an
efficient algorithm for the merchant's problem that consists of a learning
phase in which the consumer's utility function is (perhaps partially) inferred,
followed by a price optimization step. We also consider an alternative online
learning algorithm for the setting where prices are set exogenously, but the
merchant would still like to predict the bundle that will be bought by the
consumer for purposes of inventory or supply chain management. In contrast with
most prior work on the revealed preferences problem, we demonstrate that by
making stronger assumptions on the form of utility functions, efficient
algorithms for both learning and profit maximization are possible, even in
adaptive, online settings.
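The consumer model this abstract assumes, a buyer with a linear utility maximizing subject to a budget, has a simple closed-form optimum for divisible goods: spend the budget in decreasing order of the utility-per-price ratio. A minimal sketch of that behavior (all names and the one-unit-per-good cap are illustrative, not from the paper):

```python
# Sketch of the consumer's problem: maximize u.x subject to p.x <= B.
# With divisible goods this is a fractional knapsack: buy goods in
# decreasing order of "bang per buck" u_i / p_i. The per-good cap of
# one unit is an added assumption to make the example non-degenerate.

def optimal_bundle(utilities, prices, budget):
    """Return the utility-maximizing bundle x (quantity of each good)."""
    order = sorted(range(len(prices)),
                   key=lambda i: utilities[i] / prices[i], reverse=True)
    x = [0.0] * len(prices)
    remaining = budget
    for i in order:
        spend = min(remaining, prices[i])  # cap each good at one unit
        x[i] = spend / prices[i]
        remaining -= spend
        if remaining <= 0:
            break
    return x

# Ratios are 3, 1, 1: the consumer buys all of good 0, then part of good 1.
bundle = optimal_bundle([6.0, 3.0, 1.0], [2.0, 3.0, 1.0], budget=4.0)
```

In the paper's setting the merchant observes only `bundle`, not the utilities, and must infer enough of them to reprice profitably.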
Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches
Imaging spectrometers measure electromagnetic energy scattered in their
instantaneous field view in hundreds or thousands of spectral channels with
higher spectral resolution than multispectral cameras. Imaging spectrometers
are therefore often referred to as hyperspectral cameras (HSCs). Higher
spectral resolution enables material identification via spectroscopic analysis,
which facilitates countless applications that require identifying materials in
scenarios unsuitable for classical spectroscopic analysis. Due to low spatial
resolution of HSCs, microscopic material mixing, and multiple scattering,
spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus,
accurate estimation requires unmixing. Pixels are assumed to be mixtures of a
few materials, called endmembers. Unmixing involves estimating all or some of:
the number of endmembers, their spectral signatures, and their abundances at
each pixel. Unmixing is a challenging, ill-posed inverse problem because of
model inaccuracies, observation noise, environmental conditions, endmember
variability, and data set size. Researchers have devised and investigated many
models searching for robust, stable, tractable, and accurate unmixing
algorithms. This paper presents an overview of unmixing methods from the time
of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models
are first discussed. Signal-subspace, geometrical, statistical, sparsity-based,
and spatial-contextual unmixing algorithms are described. Mathematical problems
and potential solutions are described. Algorithm characteristics are
illustrated experimentally.
Comment: This work has been accepted for publication in IEEE Journal of
Selected Topics in Applied Earth Observations and Remote Sensing
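The mixing model underlying most of the surveyed methods is linear: a pixel spectrum is a nonnegative, sum-to-one combination of endmember signatures plus noise. The solver below is a deliberately crude sketch (unconstrained least squares followed by clipping and renormalization), not one of the algorithms surveyed in the paper; the endmember matrix is synthetic:

```python
import numpy as np

# Linear mixing model: y = M a + n, where columns of M are endmember
# spectra, a is the abundance vector (a >= 0, sum(a) = 1), n is noise.
# This solver is a rough sketch, not a surveyed algorithm: plain least
# squares, then project onto the abundance constraints.

def unmix_pixel(M, y):
    a, *_ = np.linalg.lstsq(M, y, rcond=None)
    a = np.clip(a, 0.0, None)   # enforce nonnegativity
    return a / a.sum()          # enforce sum-to-one

# Two synthetic endmembers over four spectral channels, mixed 70/30.
M = np.array([[1.0, 0.0],
              [0.8, 0.2],
              [0.5, 0.5],
              [0.1, 0.9]])
y = M @ np.array([0.7, 0.3])
abundances = unmix_pixel(M, y)  # recovers [0.7, 0.3] in the noiseless case
```

Real unmixing must also estimate the number of endmembers and their signatures, which is what makes the problem ill-posed.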
Scalable Methods for Adaptively Seeding a Social Network
In recent years, social networking platforms have developed into
extraordinary channels for spreading and consuming information. Along with the
rise of such infrastructure, there is continuous progress on techniques for
spreading information effectively through influential users. In many
applications, one is restricted to select influencers from a set of users who
engaged with the topic being promoted, and due to the structure of social
networks, these users often rank low in terms of their influence potential. An
alternative approach one can consider is an adaptive method which selects users
in a manner which targets their influential neighbors. The advantage of such an
approach is that it leverages the friendship paradox in social networks: while
users are often not influential, they often know someone who is.
Despite the various complexities in such optimization problems, we show that
scalable adaptive seeding is achievable. In particular, we develop algorithms
for linear influence models with provable approximation guarantees that can be
gracefully parallelized. To show the effectiveness of our methods we collected
data from various verticals social network users follow. For each vertical, we
collected data on the users who responded to a certain post as well as their
neighbors, and applied our methods on this data. Our experiments show that
adaptive seeding is scalable, and importantly, that it obtains dramatic
improvements over standard approaches of information dissemination.
Comment: Full version of the paper appearing in WWW 201
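The friendship paradox the abstract leverages can be demonstrated on a toy graph: the average degree of a randomly chosen friend exceeds the average degree of a randomly chosen user, because high-degree nodes appear on many adjacency lists. The graph and numbers below are invented for illustration:

```python
import random

# Friendship paradox on a tiny star-like graph: one "hub" with four
# low-degree followers. Illustrative data only, not from the paper.
graph = {
    "hub": ["a", "b", "c", "d"],
    "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"],
}

random.seed(0)
users = list(graph)
avg_user_degree = sum(len(graph[u]) for u in users) / len(users)

# Degree of a uniformly chosen friend of a uniformly chosen user.
samples = [len(graph[random.choice(graph[u])])
           for u in users for _ in range(100)]
avg_friend_degree = sum(samples) / len(samples)
# avg_friend_degree > avg_user_degree: neighbors skew influential,
# which is why seeding the neighbors of accessible users can help.
```

Adaptive seeding exploits exactly this skew: the accessible users are the low-degree leaves, but their neighbors include the hub.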
Locally Adaptive Optimization: Adaptive Seeding for Monotone Submodular Functions
The Adaptive Seeding problem is an algorithmic challenge motivated by
influence maximization in social networks: One seeks to select among certain
accessible nodes in a network, and then select, adaptively, among neighbors of
those nodes as they become accessible in order to maximize a global objective
function. More generally, adaptive seeding is a stochastic optimization
framework where the choices in the first stage affect the realizations in the
second stage, over which we aim to optimize.
Our main result is a constant-factor approximation for the adaptive seeding
problem for any monotone submodular function. While adaptive policies are often
approximated via non-adaptive policies, our algorithm is based on a novel
method we call \emph{locally-adaptive} policies. These policies combine a
non-adaptive global structure, with local adaptive optimizations. This method
enables the constant-factor approximation for general monotone submodular functions
and circumvents some of the impossibilities associated with non-adaptive
policies.
We also introduce a fundamental problem in submodular optimization that may
be of independent interest: given a ground set of elements where every element
appears with some small probability, find a set of bounded expected size that
has the highest expected value over the realization of the elements. We
show a surprising result: there are classes of monotone submodular functions
(including coverage) that can be approximated almost optimally as the
probability vanishes. For general monotone submodular functions we show via a
reduction from \textsc{Planted-Clique} that approximations for this problem are
not likely to be obtainable. This optimization problem is an important tool for
adaptive seeding via non-adaptive policies, and its hardness motivates the
introduction of \emph{locally-adaptive} policies we use in the main result.
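Coverage functions, which the abstract singles out as an almost-optimally approximable class, are the canonical monotone submodular functions: f(S) is the size of the union of the sets indexed by S. As background only (this is the classic greedy baseline, not the paper's locally-adaptive method), a sketch of cardinality-constrained greedy maximization, which achieves a (1 - 1/e)-approximation for such functions:

```python
# Greedy maximization of a coverage function under a cardinality
# constraint k. Coverage is monotone submodular, so greedy is a
# (1 - 1/e)-approximation. The instance below is made up.

def greedy_coverage(sets, k):
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the set with the largest marginal coverage gain.
        best = max(sets, key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining gain: stop early
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {0: {1, 2, 3}, 1: {3, 4}, 2: {4, 5, 6}, 3: {1}}
picked, covered = greedy_coverage(sets, k=2)
```

The adaptive-seeding difficulty is that the ground set itself is random (elements appear with small probability), which is what the paper's locally-adaptive policies are designed to handle.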
Efficient method for measuring the parameters encoded in a gravitational-wave signal
Once upon a time, predictions for the accuracy of inference on
gravitational-wave signals relied on computationally inexpensive but often
inaccurate techniques. Recently, the approach has shifted to actual inference
on noisy signals with complex stochastic Bayesian methods, at the expense of
significant computational cost. Here, we argue that it is often possible to
have the best of both worlds: a Bayesian approach that incorporates prior
information and correctly marginalizes over uninteresting parameters, providing
accurate posterior probability distribution functions, but carried out on a
simple grid at a low computational cost, comparable to the inexpensive
predictive techniques.
Comment: 17 pages, 5 figures
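The grid-based approach the abstract advocates is easy to sketch in miniature: evaluate the likelihood on a parameter grid, normalize, and sum over the uninteresting parameter to marginalize. The two-parameter Gaussian model and all numbers below are invented for illustration; this is not the paper's gravitational-wave likelihood:

```python
import numpy as np

# Grid-based Bayesian inference sketch: Gaussian model with unknown
# mean mu and width sigma, flat prior, so posterior ~ likelihood.
# Synthetic data and grids are illustrative only.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 0.5, size=50)   # true mean is 2.0

mu_grid = np.linspace(0.0, 4.0, 200)
sigma_grid = np.linspace(0.1, 1.5, 150)
MU, SIG = np.meshgrid(mu_grid, sigma_grid, indexing="ij")

# Log-likelihood of all data at every grid point.
loglike = (-data.size * np.log(SIG)
           - ((data[:, None, None] - MU) ** 2).sum(0) / (2 * SIG**2))
post = np.exp(loglike - loglike.max())
post /= post.sum()                     # normalize over the grid

# Marginalize over the "uninteresting" parameter sigma.
mu_posterior = post.sum(axis=1)
mu_map = mu_grid[mu_posterior.argmax()]  # lands close to the true mean
```

The cost is one likelihood evaluation per grid point, with no stochastic sampling, which is the "best of both worlds" trade the abstract describes, as long as the parameter space is low-dimensional enough to grid.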