Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated {\em structured} sparsity models, which describe the
interdependencies among the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. To better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
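The contrast at the heart of this chapter, between a discrete group-sparse constraint (keep at most k groups active) and its convex relaxation (the group-lasso penalty), can be sketched with two small projections. This is an illustrative toy, not an example from the chapter; the signal and grouping are invented:

```python
import math

def group_norms(x, groups):
    """Euclidean norm of each group of coordinates."""
    return [math.sqrt(sum(x[i] ** 2 for i in g)) for g in groups]

def group_hard_threshold(x, groups, k):
    """Discrete group-sparse model: keep the k groups with the largest
    l2 norms and zero out the rest."""
    norms = group_norms(x, groups)
    keep = sorted(range(len(groups)), key=lambda j: -norms[j])[:k]
    out = [0.0] * len(x)
    for j in keep:
        for i in groups[j]:
            out[i] = x[i]
    return out

def group_soft_threshold(x, groups, lam):
    """Convex relaxation: proximal operator of the group-lasso penalty
    (block soft-thresholding), shrinking each group toward zero."""
    out = [0.0] * len(x)
    for g, n in zip(groups, group_norms(x, groups)):
        scale = max(0.0, 1.0 - lam / n) if n > 0 else 0.0
        for i in g:
            out[i] = scale * x[i]
    return out

# Toy signal with three non-overlapping groups of two coordinates each.
x = [3.0, 4.0, 0.1, 0.1, 1.0, 1.0]
groups = [[0, 1], [2, 3], [4, 5]]
print(group_hard_threshold(x, groups, k=2))  # keeps the two strongest groups
print(group_soft_threshold(x, groups, lam=1.0))
```

The hard threshold enforces the discrete model exactly, while the soft threshold is the proximal step that convex solvers iterate; both kill the weak middle group here, but the convex version also shrinks the surviving groups.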
Balancing Relevance and Diversity in Online Bipartite Matching via Submodularity
In bipartite matching problems, vertices on one side of a bipartite graph are
paired with those on the other. In its online variant, one side of the graph is
available offline, while the vertices on the other side arrive online. When a
vertex arrives, an irrevocable and immediate decision should be made by the
algorithm; either match it to an available vertex or drop it. Examples of such
problems include matching workers to firms, advertisers to keywords, organs to
patients, and so on. Much of the literature focuses on maximizing the total
relevance---modeled via total weight---of the matching. However, in many
real-world problems, it is also important to consider contributions of
diversity: hiring a diverse pool of candidates, displaying a relevant but
diverse set of ads, and so on. In this paper, we propose the Online Submodular
Bipartite Matching (\osbm) problem, where the goal is to maximize a submodular
function over the set of matched edges. This objective is general enough to
capture the notion of both diversity (\emph{e.g.,} a weighted coverage
function) and relevance (\emph{e.g.,} the traditional linear function)---as
well as many other natural objective functions occurring in practice
(\emph{e.g.,} limited total budget in advertising settings). We propose novel
algorithms that have provable guarantees and are essentially optimal when
restricted to various special cases. We also run experiments on real-world and
synthetic datasets to validate our algorithms.
Comment: To appear in AAAI 201
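The \osbm{} objective can be made concrete with a weighted-coverage diversity function. The sketch below is a plain greedy baseline that matches each arriving vertex to the free offline vertex with the largest marginal gain; it is not the paper's algorithm, and the instance and names are invented for illustration:

```python
def coverage_value(matched_edges, edge_topics):
    """Submodular diversity objective: number of distinct topics
    covered by the matched edges."""
    covered = set()
    for e in matched_edges:
        covered |= edge_topics[e]
    return len(covered)

def greedy_online_matching(offline, arrivals, edge_topics):
    """Greedy baseline (not the paper's algorithm): when an online vertex
    arrives, irrevocably match it to the free offline vertex with the
    largest marginal gain, or drop it if no edge improves the objective."""
    free = set(offline)
    matched = []
    for v in arrivals:
        base = coverage_value(matched, edge_topics)
        best, best_gain = None, 0
        for u in free:
            if (u, v) in edge_topics:
                gain = coverage_value(matched + [(u, v)], edge_topics) - base
                if gain > best_gain:
                    best, best_gain = u, gain
        if best is not None:
            matched.append((best, v))
            free.remove(best)
    return matched

# Hypothetical instance: two offline workers, jobs arriving online,
# each edge tagged with the skill topics it would cover.
edge_topics = {
    ("u1", "j1"): {"ml"}, ("u2", "j1"): {"ml", "opt"},
    ("u1", "j2"): {"opt"}, ("u2", "j2"): {"ml"},
}
m = greedy_online_matching(["u1", "u2"], ["j1", "j2"], edge_topics)
print(m, coverage_value(m, edge_topics))
```

Note how the coverage objective rewards diversity: the second job is dropped because its only available edge adds no new topic, something a purely weight-based objective could not express.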
Mixed-Integer Programming for a Class of Robust Submodular Maximization Problems
We consider robust submodular maximization problems (RSMs), where given a set
of monotone submodular objective functions, the robustness is with respect
to the worst-case (scaled) objective function. The model we consider
generalizes two variants of robust submodular maximization problems in the
literature, depending on the choice of the scaling vector. On one hand, by
using unit scaling, we obtain the usual robust submodular maximization problem.
On the other hand, by letting the scaling vector contain the optimal objective
value of each individual (NP-hard) submodular maximization problem, we
obtain a second variant. While the robust version of the objective is no longer
submodular, we reformulate the problem by exploiting the submodularity of each
function. We conduct a polyhedral study of the resulting formulation and
provide conditions under which the submodular inequalities are facet-defining
for a key mixed-integer set. We investigate several strategies for
incorporating these inequalities within a delayed cut generation framework to
solve the problem exactly. For the second variant, we provide an algorithm to
obtain a feasible solution along with its optimality gap. We apply the proposed
methods to a sensor placement optimization problem in water distribution
networks using real-world datasets to demonstrate the effectiveness of the
methods.
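A minimal sketch of the worst-case scaled objective, paired with a plain greedy heuristic. The paper's exact method relies on submodular inequalities inside a delayed cut generation framework, which is not reproduced here; the instance below is made up:

```python
def coverage_fn(cov):
    """Monotone submodular coverage function from an item -> elements map."""
    def f(S):
        covered = set()
        for e in S:
            covered |= cov[e]
        return len(covered)
    return f

def robust_value(S, funcs, scale):
    """Worst-case scaled objective: min_i f_i(S) / c_i."""
    return min(f(S) / c for f, c in zip(funcs, scale))

def greedy_robust(ground, funcs, scale, k):
    """Plain greedy on the robust objective (a heuristic baseline only;
    note the min of submodular functions is not itself submodular)."""
    S = set()
    for _ in range(k):
        best, best_val = None, -1.0
        for e in ground - S:
            val = robust_value(S | {e}, funcs, scale)
            if val > best_val:
                best, best_val = e, val
        if best is None:
            break
        S.add(best)
    return S

# Two coverage objectives over a three-item ground set (invented data);
# unit scaling recovers the first problem variant from the abstract.
f1 = coverage_fn({1: {"a"}, 2: {"b", "c"}, 3: {"a", "b"}})
f2 = coverage_fn({1: {"x", "y"}, 2: {"y"}, 3: {"z"}})
S = greedy_robust({1, 2, 3}, [f1, f2], [1.0, 1.0], k=2)
print(S, robust_value(S, [f1, f2], [1.0, 1.0]))
```

Swapping the unit scaling for each function's (separately computed) optimal value yields the second variant described above.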
Mixed-Integer Programming Approaches to Generalized Submodular Optimization and its Applications
Submodularity is an important concept in integer and combinatorial
optimization. A classical submodular set function models the utility of
selecting homogeneous items from a single ground set, and such selections can be
represented by binary variables. In practice, many problem contexts involve
choosing heterogeneous items from more than one ground set or selecting multiple
copies of homogeneous items, which call for extensions of submodularity. We
refer to the optimization problems associated with such generalized notions of
submodularity as Generalized Submodular Optimization (GSO). GSO is found in
wide-ranging applications, including infrastructure design, healthcare, online
marketing, and machine learning. Due to the often highly nonlinear (even
non-convex and non-concave) objective function and the mixed-integer decision
space, GSO is a broad subclass of challenging mixed-integer nonlinear
programming problems. In this tutorial, we first provide an overview of
classical submodularity. Then we introduce two subclasses of GSO, for which we
present polyhedral theory for the mixed-integer set structures that arise from
these problem classes. Our theoretical results lead to efficient and versatile
exact solution methods that demonstrate their effectiveness in practical
problems using real-world datasets.
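The classical submodularity this tutorial builds on is the diminishing-returns property of a set function, which can be verified by brute force on small ground sets. An illustrative sketch (the example functions are invented, not taken from the tutorial):

```python
from itertools import combinations

def subsets(ground):
    """All subsets of a set, as frozensets."""
    for r in range(len(ground) + 1):
        yield from (frozenset(c) for c in combinations(ground, r))

def is_submodular(f, ground):
    """Brute-force check of diminishing returns:
    f(A | {e}) - f(A) >= f(B | {e}) - f(B) for all A subset of B, e not in B."""
    ground = frozenset(ground)
    for B in subsets(ground):
        for A in subsets(B):
            for e in ground - B:
                if f(A | {e}) - f(A) < f(B | {e}) - f(B):
                    return False
    return True

cov = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}

def coverage(S):  # a classical submodular set function
    covered = set()
    for e in S:
        covered |= cov[e]
    return len(covered)

def pairs_bonus(S):  # |S|^2 is supermodular, so the check fails
    return len(S) ** 2

print(is_submodular(coverage, {1, 2, 3}))     # True
print(is_submodular(pairs_bonus, {1, 2, 3}))  # False
```

The generalized notions in the tutorial extend exactly this property beyond binary selections from a single ground set.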
Implementation in Advised Strategies: Welfare Guarantees from Posted-Price Mechanisms When Demand Queries Are NP-Hard
State-of-the-art posted-price mechanisms for submodular bidders with $m$
items achieve approximation guarantees of $O((\log \log m)^3)$ [Assadi and
Singla, 2019]. Their truthfulness, however, requires bidders to compute an
NP-hard demand-query. Some computational complexity of this form is
unavoidable, as it is NP-hard for truthful mechanisms to guarantee even an
$m^{1/2-\epsilon}$-approximation for any $\epsilon > 0$ [Dobzinski and
Vondr\'ak, 2016]. Together, these establish a stark distinction between
computationally-efficient and communication-efficient truthful mechanisms.
We show that this distinction disappears with a mild relaxation of
truthfulness, which we term implementation in advised strategies, and which has
been previously studied in relation to "Implementation in Undominated
Strategies" [Babaioff et al., 2009]. Specifically, advice maps a tentative
strategy either to that same strategy, or to one that dominates it. We say
that a player follows advice as long as they never play actions which are
dominated by advice. A poly-time mechanism guarantees an $\alpha$-approximation
in implementation in advised strategies if there exists poly-time advice for
each player such that an $\alpha$-approximation is achieved whenever all
players follow advice. Using an appropriate bicriterion notion of approximate
demand queries (which can be computed in poly-time), we establish that (a
slight modification of) the [Assadi and Singla, 2019] mechanism achieves the
same $O((\log \log m)^3)$-approximation in implementation in advised
strategies.
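A demand query asks a bidder to name a utility-maximizing bundle at given item prices. The sketch below answers it exactly by brute force over all bundles, which is exponential in the number of items, mirroring the hardness that motivates the paper's relaxation. The valuation and prices are invented for illustration:

```python
from itertools import combinations

def demand_query(valuation, prices, items):
    """Exact demand query by brute force: the bundle S maximizing
    v(S) - sum of prices, defaulting to the empty bundle (utility 0)
    when no bundle does better."""
    best, best_util = frozenset(), 0.0
    for r in range(1, len(items) + 1):
        for S in combinations(items, r):
            util = valuation(frozenset(S)) - sum(prices[i] for i in S)
            if util > best_util:
                best, best_util = frozenset(S), util
    return best, best_util

# Invented submodular (coverage) valuation over three items.
cov = {"a": {1, 2}, "b": {2, 3}, "c": {3}}

def v(S):
    covered = set()
    for i in S:
        covered |= cov[i]
    return len(covered)

prices = {"a": 0.5, "b": 0.7, "c": 0.9}
bundle, util = demand_query(v, prices, ["a", "b", "c"])
print(sorted(bundle), util)  # the utility-maximizing bundle at these prices
```

For succinctly represented submodular valuations no poly-time exact oracle of this kind is expected to exist, which is why the paper works with an approximate, bicriterion notion instead.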