Submodular Optimization under Noise
We consider the problem of maximizing a monotone submodular function under
noise. There has been a great deal of work on optimization of submodular
functions under various constraints, resulting in algorithms that provide
desirable approximation guarantees. In many applications, however, we do not
have access to the submodular function we aim to optimize, but rather to some
erroneous or noisy version of it. This raises the question of whether provable
guarantees are obtainable in the presence of error and noise. We provide initial
answers, by focusing on the question of maximizing a monotone submodular
function under a cardinality constraint when given access to a noisy oracle of
the function. We show that:
- For a cardinality constraint k ≥ 2, there is an approximation
algorithm whose approximation ratio is arbitrarily close to 1 - 1/e;
- For k = 1 there is an algorithm whose approximation ratio is arbitrarily
close to 1/2. No randomized algorithm can obtain an approximation ratio
better than 1/2 + o(1);
- If the noise is adversarial, no non-trivial approximation guarantee can be
obtained.
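As a concrete illustration of the noisy-oracle setting (not the paper's actual algorithm), here is a minimal sketch in which the classical greedy algorithm smooths an i.i.d. noisy oracle by averaging repeated queries before each comparison; the objective, noise model, and sample count are all illustrative assumptions.

```python
import random

def greedy_under_noise(noisy_f, ground, k, samples=50):
    """Cardinality-constrained greedy with a noisy value oracle.

    A minimal sketch, not the paper's algorithm: the variance of an
    i.i.d. noisy oracle is reduced by averaging repeated queries
    before each comparison.
    """
    def smoothed(S):
        return sum(noisy_f(S) for _ in range(samples)) / samples

    S = set()
    for _ in range(k):
        # Pick the element whose smoothed value looks best.
        e_best = max(ground - S, key=lambda e: smoothed(S | {e}))
        S.add(e_best)
    return S

# Toy monotone submodular objective (coverage) with multiplicative noise.
cover = {0: {1, 2}, 1: {2, 3}, 2: {3, 4}, 3: {5}, 4: set()}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
noisy = lambda S: f(S) * random.uniform(0.9, 1.1)
print(greedy_under_noise(noisy, set(cover), k=2))
```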
Near-Optimal Sparse Sensing for Gaussian Detection with Correlated Observations
Detection of a signal under noise is a classical signal processing problem.
When monitoring spatial phenomena under a fixed budget, i.e., physical,
economic, or computational constraints, it is highly desirable to select a
subset of the available sensors, referred to as sparse sensing, that meets
both the budget and the performance requirements. Unfortunately, the subset
selection problem for detection under dependent observations is combinatorial
in nature and suboptimal subset selection algorithms must be employed. In this
work, departing from the widely used convex relaxation of the problem, we
leverage submodularity, i.e., the diminishing-returns property, to provide
practical near-optimal algorithms suitable for large-scale subset selection. This is
achieved by means of low-complexity greedy algorithms, which incur a reduced
computational complexity compared to their convex counterparts.
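To make the greedy template concrete, the following sketch selects sensors by greedily maximizing f(S) = log det(I + Σ_S), a standard monotone submodular surrogate for informativeness under correlated Gaussian observations. The paper optimizes detection-specific objectives, so treat this purely as an illustration of the low-complexity greedy flow, with a toy covariance as the assumed input.

```python
import numpy as np

def greedy_sensing(Sigma, budget):
    """Greedy sparse sensing sketch.

    Illustrative stand-in objective: f(S) = log det(I + Sigma[S, S]) is
    monotone submodular, so plain greedy enjoys the classical (1 - 1/e)
    guarantee. The paper's detection-specific surrogates differ.
    """
    n = Sigma.shape[0]
    S = []
    for _ in range(budget):
        def value(e):
            idx = S + [e]
            sub = Sigma[np.ix_(idx, idx)]
            return np.linalg.slogdet(np.eye(len(idx)) + sub)[1]
        e_best = max((e for e in range(n) if e not in S), key=value)
        S.append(e_best)
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
Sigma = A @ A.T                 # correlated observation covariance (toy)
print(greedy_sensing(Sigma, budget=3))
```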
Maximizing Monotone DR-submodular Continuous Functions by Derivative-free Optimization
In this paper, we study the problem of monotone (weakly) DR-submodular
continuous maximization. While previous methods require the gradient
information of the objective function, we propose, for the first time, a
derivative-free algorithm, LDGM. We define two parameters to characterize how
close a function is to being continuous DR-submodular and submodular,
respectively. Under a convex polytope constraint, we prove that LDGM achieves,
within the same number of iterations, the same approximation guarantee as the
best previous gradient-based algorithm. Moreover, in some special cases, a
variant of LDGM achieves a provable approximation guarantee for (weakly)
submodular functions. We also compare LDGM with the gradient-based Frank-Wolfe
algorithm under noise, and show that LDGM can be more robust. Empirical
results on budget allocation verify the effectiveness of LDGM.
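LDGM itself is not reproduced here, but the following hedged sketch shows the general idea of derivative-free maximization over a polytope: a Frank-Wolfe (continuous-greedy) loop whose gradients are replaced by finite-difference estimates. The objective, polytope, and iteration counts are toy assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def zo_frank_wolfe(f, A_ub, b_ub, n, T=50, delta=1e-3):
    """Derivative-free Frank-Wolfe (continuous-greedy) sketch for
    monotone DR-submodular f over {0 <= x <= 1 : A_ub @ x <= b_ub}.

    Gradients are replaced by forward finite differences. LDGM in the
    paper is a different derivative-free scheme; this only illustrates
    that zeroth-order estimates can drive the same template.
    """
    x = np.zeros(n)
    for _ in range(T):
        fx = f(x)
        # Zeroth-order gradient estimate, one coordinate at a time.
        g = np.array([(f(x + delta * np.eye(n)[i]) - fx) / delta
                      for i in range(n)])
        # Linear maximization oracle (linprog minimizes, hence -g).
        v = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n).x
        x = x + v / T              # continuous-greedy step of size 1/T
    return x

# Toy instance: a separable concave (hence DR-submodular) objective
# over the budget polytope sum(x) <= 2. All values are assumptions.
f = lambda x: float(np.sum(np.log1p(x)))
A = np.ones((1, 4)); b = np.array([2.0])
print(zo_frank_wolfe(f, A, b, n=4))
```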
Optimal approximation for unconstrained non-submodular minimization
Submodular function minimization is a well studied problem; existing
algorithms solve it exactly or up to arbitrary accuracy. However, in many
applications, the objective function is not exactly submodular. No theoretical
guarantees exist in this case. While submodular minimization algorithms rely on
intricate connections between submodularity and convexity, we show that these
relations can be extended sufficiently to obtain approximation guarantees for
non-submodular minimization. In particular, we show that a projected
subgradient method performs provably well even for certain non-submodular functions.
This includes important examples, such as objectives for structured sparse
learning and variance reduction in Bayesian optimization. We also extend this
result to noisy function evaluations. Our algorithm works in the value oracle
model. We prove that in this model, the approximation result we obtain is the
best possible with a subexponential number of queries.
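A minimal sketch of the approach described above, assuming only a value oracle: compute a subgradient of the Lovász extension via Edmonds' greedy ordering and run projected subgradient descent on [0,1]^n, rounding by thresholding. Step size, iteration count, and the toy objective (a graph cut plus a modular term plus a small non-submodular perturbation) are illustrative assumptions.

```python
import numpy as np

def lovasz_subgradient(F, x):
    """Subgradient of the Lovász extension of set function F at
    x in [0,1]^n, via Edmonds' greedy ordering."""
    order = np.argsort(-x)              # coordinates in decreasing order
    g = np.zeros_like(x)
    S, prev = [], F([])
    for i in order:
        S.append(int(i))
        val = F(S)
        g[i] = val - prev               # marginal gain along the chain
        prev = val
    return g

def approx_min(F, n, T=200, step=0.1):
    """Projected subgradient on the Lovász extension; returns the best
    thresholded set found. A sketch of the paper's approach (whose
    analysis covers approximately submodular F)."""
    x = np.full(n, 0.5)
    best_S, best_val = [], F([])
    for _ in range(T):
        g = lovasz_subgradient(F, x)
        x = np.clip(x - step * g, 0.0, 1.0)
        S = [i for i in range(n) if x[i] > 0.5]   # simple rounding
        if F(S) < best_val:
            best_S, best_val = S, F(S)
    return best_S, best_val

# Toy: graph cut plus a modular term (submodular overall), with a small
# non-submodular perturbation to mimic the approximately submodular case.
edges = [(0, 1), (1, 2), (2, 3)]
weights = {0: -2.0, 1: -1.0, 2: 0.5, 3: 1.0}
def F(S):
    S = set(S)
    cut = sum((u in S) != (v in S) for u, v in edges)
    return cut + sum(weights[i] for i in S) + 0.05 * (len(S) % 2)
print(approx_min(F, n=4))
```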
On the Optimality of Simple Schedules for Networks with Multiple Half-Duplex Relays
This paper studies networks with N half-duplex relays assisting the
communication between a source and a destination. In ISIT'12, Brahma,
Özgür, and Fragouli conjectured that in Gaussian half-duplex diamond
networks (i.e., without a direct link between the source and the destination,
and with N non-interfering relays) an approximately optimal relay scheduling
policy (i.e., achieving the cut-set upper bound to within a constant gap) has
at most N+1 active states (i.e., at most N+1 out of the possible relay
listen-transmit states have a strictly positive probability). Such relay
scheduling policies were referred to as simple. In ITW'13 we conjectured that
simple approximately optimal relay scheduling policies exist for any Gaussian
half-duplex multi-relay network irrespective of the topology. This paper
formally proves this more general version of the conjecture and shows it holds
beyond Gaussian noise networks. In particular, for any memoryless half-duplex
N-relay network with independent noises and for which independent inputs are
approximately optimal in the cut-set upper bound, an approximately optimal
simple relay scheduling policy exists. A convergent iterative polynomial-time
algorithm, which alternates between minimizing a submodular function and
maximizing a linear program, is proposed to find the approximately optimal
simple relay schedule. As an example, for N-relay Gaussian networks with
independent noises, where each node is equipped with multiple antennas and
where each antenna can be configured to listen or transmit irrespective of
the others, the existence of an approximately optimal simple relay scheduling
policy with at most N+1 active states is proved. Through a line-network example
it is also shown that independently switching the antennas at each relay can
provide a strictly larger multiplexing gain compared to using the antennas for
the same purpose.
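The alternation described above can be read as constraint generation: maximize a linear program over schedule probabilities for the cuts found so far, then search for a violated (minimum-capacity) cut. In the sketch below the cut search is brute force purely for illustration; the paper's point is that this step is a submodular minimization, hence polynomial-time. The capacity table C is an assumed toy input.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def schedule_relays(C, tol=1e-9):
    """Constraint-generation sketch: alternate LP maximization over
    schedules with minimization over cuts.

    C[s][w] is the (assumed given) capacity of cut w when the relays
    are in listen/transmit state s; computing it is model-specific.
    """
    states = list(C)
    cuts = list(next(iter(C.values())))
    active = [cuts[0]]                    # start with a single cut
    while True:
        # LP over (p_1..p_S, t): maximize t s.t., for each active cut w,
        # sum_s p_s * C[s][w] >= t, with p a probability vector.
        S = len(states)
        c = np.zeros(S + 1); c[-1] = -1.0         # linprog minimizes -t
        A_ub = np.array([[-C[s][w] for s in states] + [1.0] for w in active])
        b_ub = np.zeros(len(active))
        A_eq = np.array([[1.0] * S + [0.0]]); b_eq = np.array([1.0])
        res = linprog(c, A_ub, b_ub, A_eq, b_eq,
                      bounds=[(0, 1)] * S + [(None, None)])
        p, t = res.x[:-1], res.x[-1]
        # Min-cut step (brute-force stand-in for submodular minimization).
        vals = {w: sum(p[i] * C[s][w] for i, s in enumerate(states))
                for w in cuts}
        w_min = min(vals, key=vals.get)
        if vals[w_min] >= t - tol:
            # A basic optimal LP solution has few nonzero p_s, which is
            # the source of the "simple schedule" structure.
            return dict(zip(states, p)), t
        active.append(w_min)

# Toy N=2 instance with random cut capacities (assumed values).
rng = np.random.default_rng(1)
states = list(itertools.product([0, 1], repeat=2))   # 0=listen, 1=transmit
cuts = [frozenset(w) for r in range(3)
        for w in itertools.combinations([0, 1], r)]
C = {s: {w: float(rng.uniform(1, 5)) for w in cuts} for s in states}
sched, rate = schedule_relays(C)
print(round(rate, 3), {s: round(q, 3) for s, q in sched.items() if q > 1e-6})
```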
Differentially Private Online Submodular Optimization
In this paper we develop the first algorithms for online submodular
minimization that preserve differential privacy under full information feedback
and bandit feedback. A sequence of submodular functions over a common ground
set arrives online, and at each timestep the algorithm must choose a subset of
the ground set before seeing the function. The algorithm incurs a cost equal
to the function evaluated on the chosen set, and seeks to choose a sequence of
sets that achieves low expected regret.
Our first result is in the full information setting, where the algorithm can
observe the entire function after making its decision at each timestep. We give
an algorithm in this setting that is ε-differentially private and achieves
sublinear expected regret. This algorithm works by relaxing each submodular
function to a convex function using its Lovász extension, and then simulating
an algorithm for differentially private online convex optimization.
Our second result is in the bandit setting, where the algorithm can only see
the cost incurred by its chosen set, and does not have access to the entire
function. This setting is significantly more challenging because the algorithm
does not receive enough information to compute the Lovász extension or its
subgradients. Instead, we construct an unbiased estimate using a single-point
estimation, and then simulate private online convex optimization using this
estimate. Our bandit-feedback algorithm is likewise ε-differentially private
and achieves sublinear expected regret.
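A minimal full-information sketch of the relax-then-privatize idea, under illustrative assumptions: threshold rounding to pick a set, online gradient descent on the Lovász extension, and Laplace noise added to each gradient step. The noise scale is not calibrated to any formal (ε, δ) guarantee, and the loss sequence is a toy.

```python
import numpy as np

def lovasz_grad(F, x):
    """Subgradient of the Lovász extension at x (Edmonds' ordering)."""
    order = np.argsort(-x)
    g, S, prev = np.zeros_like(x), [], F(set())
    for i in order:
        S.append(int(i))
        val = F(set(S))
        g[i], prev = val - prev, val
    return g

def dp_online_submod_min(F_seq, n, step=0.1, noise=1.0, seed=0):
    """Full-information sketch: relax each round's cost to its Lovász
    extension, run online gradient descent, and perturb every gradient
    step with Laplace noise. The noise scale is illustrative only and
    is NOT calibrated to a formal privacy guarantee.
    """
    rng = np.random.default_rng(seed)
    x, total = np.full(n, 0.5), 0.0
    for F in F_seq:
        theta = rng.uniform()
        # Threshold rounding: expected cost matches the Lovász
        # relaxation (up to the F(empty set) offset).
        S = {i for i in range(n) if x[i] >= theta}
        total += F(S)
        g = lovasz_grad(F, x) + rng.laplace(0.0, noise, size=n)
        x = np.clip(x - step * g, 0.0, 1.0)
    return total

# Toy losses: F_t(S) = |S symmetric-difference A_t| (modular, hence
# submodular); the targets are assumed for the demo.
targets = [{0, 2}, {0, 1}, {0, 2}, {2}]
losses = [lambda S, A=A: len(S ^ A) for A in targets]
print(dp_online_submod_min(losses, n=3))
```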
Efficient Capacity Computation and Power Optimization for Relay Networks
The capacity, or approximations to the capacity, of various single-source
single-destination relay network models have been characterized in terms of the
cut-set upper bound. In principle, a direct computation of this bound requires
evaluating the cut capacity over exponentially many cuts. We show that the
minimum cut capacity of a relay network under some special assumptions can be
cast as a minimization of a submodular function, and as a result, can be
computed efficiently. We use this result to show that the capacity, or an
approximation to the capacity within a constant gap for the Gaussian, wireless
erasure, and Avestimehr-Diggavi-Tse deterministic relay network models can be
computed in polynomial time. We present some empirical results showing that
computing constant-gap approximations to the capacity of Gaussian relay
networks with around 300 nodes can be done on the order of minutes.
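For intuition, the sketch below evaluates the minimum cut-set capacity of a tiny wireline network by enumerating all cuts; this enumeration is exactly the exponential step that the paper's submodularity observation replaces with polynomial-time submodular function minimization. The link-capacity table is an assumed toy input.

```python
import itertools

def min_cut_capacity(relays, cap):
    """Minimum cut-set capacity, brute force over cuts.

    cut(W) = total capacity of links from {source} ∪ W to the rest;
    this set function is submodular in W, which is what lets the paper
    replace this enumeration with polynomial-time submodular function
    minimization.
    """
    def cut(W):
        side = {"s"} | set(W)
        return sum(c for (u, v), c in cap.items()
                   if u in side and v not in side)
    return min(cut(W) for r in range(len(relays) + 1)
               for W in itertools.combinations(relays, r))

# Tiny line network s -> 1 -> 2 -> d with a weak direct s -> d link.
cap = {("s", 1): 3.0, (1, 2): 2.0, (2, "d"): 4.0, ("s", "d"): 0.5}
print(min_cut_capacity([1, 2], cap))   # 2.5: cut between relays 1 and 2
```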
For Gaussian networks, cut-set capacities are also functions of the powers
assigned to the nodes. We consider a family of power optimization problems and
show that they can be solved in polynomial time. In particular, we show that
the minimization of the sum of powers assigned to the nodes subject to a
minimum rate constraint (measured in terms of cut-set bounds) can be computed
in polynomial time. We propose a heuristic algorithm to solve this problem and
measure its performance through simulations on random Gaussian networks. We
observe that in the optimal allocations most of the power is assigned to a
small subset of relays, which suggests that network simplification may be
possible without excessive performance degradation.
Optimizing Beams and Bits: A Novel Approach for Massive MIMO Base-Station Design
We consider the problem of jointly optimizing ADC bit resolution and analog
beamforming over a frequency-selective massive MIMO uplink. We build upon a
popular model for incorporating the impact of low-bit-resolution ADCs, one that
hitherto has mostly been employed over flat-fading systems. We adopt weighted
sum rate (WSR) as our objective and show that WSR maximization under finite
buffer limits and important practical constraints on choices of beams and ADC
bit resolutions can equivalently be posed as constrained submodular set
function maximization. This enables us to design a constant-factor
approximation algorithm. Upon incorporating further enhancements we obtain an
efficient algorithm that significantly outperforms state-of-the-art ones.
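One standard template for such constrained problems is greedy selection under a partition matroid (e.g., at most one configuration per RF chain), which carries the classical 1/2-approximation guarantee for monotone submodular objectives. The sketch below uses a generic log-det stand-in for the paper's WSR objective, so the objective, parts, and numbers are all assumptions.

```python
import numpy as np

def greedy_partition_matroid(gain, parts):
    """Greedy for monotone submodular maximization under a partition
    matroid (at most one option per part); the classical analysis gives
    a 1/2-approximation in this setting.
    """
    chosen = []
    remaining = {p: list(opts) for p, opts in parts.items()}
    while remaining:
        # Pick the feasible (part, option) pair with best marginal gain.
        p, o = max(((p, o) for p, opts in remaining.items() for o in opts),
                   key=lambda po: gain(chosen, po[1]))
        chosen.append(o)
        del remaining[p]            # this part's choice is now fixed
    return chosen

# Toy stand-in for the WSR objective: log det(I + sum of outer
# products), a monotone submodular set function over option vectors.
rng = np.random.default_rng(2)
vecs = {(i, j): rng.standard_normal(3) for i in range(3) for j in range(2)}
def f(S):
    M = np.eye(3) + sum(np.outer(vecs[o], vecs[o]) for o in S)
    return float(np.linalg.slogdet(M)[1])
gain = lambda S, o: f(S + [o]) - f(S)
parts = {i: [(i, j) for j in range(2)] for i in range(3)}
print(greedy_partition_matroid(gain, parts))
```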
Submodular Observation Selection and Information Gathering for Quadratic Models
We study the problem of selecting the most informative subset of a large
observation set to enable accurate estimation of unknown parameters. This
problem arises in a variety of settings in machine learning and signal
processing including feature selection, phase retrieval, and target
localization. Since for quadratic measurement models the moment matrix of the
optimal estimator is generally unknown, the majority of prior work resorts to
approximation techniques such as linearization of the observation model to
optimize the alphabetical optimality criteria of an approximate moment matrix.
Conversely, by exploiting a connection to the classical Van Trees' inequality,
we derive new alphabetical optimality criteria without distorting the
relational structure of the observation model. We further show that under
certain conditions on parameters of the problem these optimality criteria are
monotone and (weak) submodular set functions. These results enable us to
develop an efficient greedy observation selection algorithm uniquely tailored
for quadratic models, and provide theoretical bounds on its achievable utility.
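To illustrate, the sketch below runs greedy on an A-optimality-style criterion for a linear measurement model; such criteria are monotone and, under conditions, weakly submodular with ratio γ, which yields a (1 - e^(-γ)) greedy guarantee. The paper's criteria for quadratic models (derived via the Van Trees inequality) are analogous but not identical, so this is an illustrative stand-in.

```python
import numpy as np

def greedy_a_optimal(H, k, reg=1.0):
    """Greedy observation selection for an A-optimality-style criterion:
    f(S) = tr((reg*I)^-1) - tr((reg*I + sum_{i in S} h_i h_i^T)^-1).

    f is monotone and, under conditions, weakly submodular with ratio
    gamma, so greedy achieves a (1 - e^{-gamma}) fraction of the optimum.
    """
    n, d = H.shape
    base = np.trace(np.linalg.inv(reg * np.eye(d)))
    def f(idx):
        M = (reg * np.eye(d) + H[idx].T @ H[idx]) if idx \
            else reg * np.eye(d)
        return base - np.trace(np.linalg.inv(M))
    S = []
    for _ in range(k):
        e = max((i for i in range(n) if i not in S),
                key=lambda i: f(S + [i]))
        S.append(e)
    return S

rng = np.random.default_rng(3)
H = rng.standard_normal((8, 3))   # 8 candidate observations, 3 parameters
print(greedy_a_optimal(H, k=4))
```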
Scaling Submodular Optimization Approaches for Control Applications in Networked Systems
In many design problems, there is a need to select a small set of informative
or representative elements from a large ground set of entities in an optimal
fashion. Submodular optimization, which provides a formal way to solve such
problems, has recently received significant attention from the controls
community, where such subset selection problems abound. However,
scaling these approaches to large systems can be challenging because of the
high computational complexity of the overall flow, in part due to the
high-complexity computation oracles used to determine the objective function
values. In this work, we explore a well-known paradigm, namely leader-selection
in a multi-agent networked environment to illustrate strategies for scalable
submodular optimization. We study the performance of the state-of-the-art
stochastic and distributed greedy algorithms as well as explore techniques that
accelerate the computation oracles within the optimization loop. We finally
present results combining accelerated greedy algorithms with accelerated
computation oracles and demonstrate significant speedups with little loss of
optimality when compared to the baseline ordinary greedy algorithm.
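One widely used accelerator of the greedy loop itself is lazy (CELF-style) evaluation, sketched below on a toy coverage objective: submodularity guarantees that marginal gains only shrink, so stale gains stored in a max-heap serve as upper bounds and most oracle re-evaluations can be skipped.

```python
import heapq

def lazy_greedy(f, ground, k):
    """Lazy (CELF-style) greedy for monotone submodular f.

    Submodularity implies marginal gains only shrink over time, so a
    stale gain in the max-heap upper-bounds the true gain: an element is
    selected only when its gain was recomputed in the current iteration.
    """
    S, fS = [], f([])
    # Heap entries: (-gain_upper_bound, element, iteration of the bound).
    heap = [(-(f([e]) - fS), e, 0) for e in ground]
    heapq.heapify(heap)
    for it in range(1, k + 1):
        while True:
            neg_gain, e, stamp = heapq.heappop(heap)
            if stamp == it:              # bound is fresh: select e
                S.append(e)
                fS += -neg_gain
                break
            gain = f(S + [e]) - fS       # refresh the stale bound
            heapq.heappush(heap, (-gain, e, it))
    return S

# Toy coverage objective (monotone submodular).
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: len(set().union(*(cover[e] for e in S))) if S else 0
print(lazy_greedy(f, list(cover), k=2))
```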