Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: while the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependency between the nonzero components of a signal, allowing increased
interpretability of the results and leading to better recovery performance. To
better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
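
As a concrete illustration of the convex side of this story (a minimal sketch, not taken from the chapter itself), the snippet below implements the proximal operator of the non-overlapping group-lasso norm, a standard convex relaxation of the group sparse model: each group of coefficients is either shrunk as a block or discarded entirely. The function and variable names (`group_soft_threshold`, `groups`, `lam`) are illustrative choices, not notation from the chapter.

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the non-overlapping group-lasso norm
    sum_g ||x_g||_2 -- a standard convex proxy for group sparsity.
    `groups` is a list of index arrays partitioning the coordinates."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            # shrink the whole group towards zero by a common factor
            out[g] = (1.0 - lam / norm) * x[g]
        # otherwise the entire group is set to zero (dropped as a block)
    return out

# Toy example: the weak second group is zeroed out jointly.
x = np.array([3.0, -2.0, 1.0, 0.2, 0.1, -0.1])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(group_soft_threshold(x, groups, lam=1.0))
```

Inside a proximal-gradient loop, this single group-wise operation is what distinguishes convex group-sparse recovery from plain coordinate-wise soft thresholding.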
Combinatorial Assortment Optimization
Assortment optimization refers to the problem of designing a slate of
products to offer potential customers, such as stocking the shelves in a
convenience store. The price of each product is fixed in advance, and a
probabilistic choice function describes which product a customer will choose
from any given subset. We introduce the combinatorial assortment problem, where
each customer may select a bundle of products. We consider a model of consumer
choice where the relative value of different bundles is described by a
valuation function, while individual customers may differ in their absolute
willingness to pay, and study the complexity of the resulting optimization
problem. We show that any sub-polynomial approximation to the problem requires
exponentially many demand queries when the valuation function is XOS, and that
no FPTAS exists even for succinctly-representable submodular valuations. On the
positive side, we show how to obtain constant approximations under a
"well-priced" condition, where each product's price is sufficiently high. We
also provide an exact algorithm for $k$-additive valuations, and show how to
extend our results to a learning setting where the seller must infer the
customers' preferences from their purchasing behavior.
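
To make the choice model above concrete, the sketch below encodes one plausible reading of it (an assumption about the intended semantics, not code from the paper): a customer with an individual willingness-to-pay factor `theta` scales a shared bundle valuation, subtracts the fixed item prices, and picks the utility-maximizing bundle by brute force. All names (`best_bundle`, `theta`, `value`) are hypothetical.

```python
from itertools import combinations

def best_bundle(assortment, prices, value, theta):
    """Brute-force choice for a customer who scales a shared bundle
    valuation `value(bundle)` by an individual willingness-to-pay
    factor `theta` and pays the sum of the fixed item prices.
    Returns the utility-maximizing bundle (() means buy nothing)."""
    items = list(assortment)
    best, best_utility = (), 0.0
    for r in range(1, len(items) + 1):
        for bundle in combinations(items, r):
            utility = theta * value(bundle) - sum(prices[i] for i in bundle)
            if utility > best_utility:
                best, best_utility = bundle, utility
    return best

# Toy example with mild diminishing returns in bundle size.
prices = {"a": 4.0, "b": 3.0, "c": 5.0}
base_values = {"a": 6.0, "b": 2.0, "c": 7.0}
value = lambda bundle: sum(base_values[i] for i in bundle) * (0.9 ** (len(bundle) - 1))
print(best_bundle(prices, prices, value, theta=1.2))  # -> ('a', 'c')
```

The exhaustive enumeration is only there to spell out the utility comparison a single customer performs; the algorithmic results in the paper concern how the seller optimizes the assortment against such choices.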