Locally Adaptive Optimization: Adaptive Seeding for Monotone Submodular Functions
The Adaptive Seeding problem is an algorithmic challenge motivated by
influence maximization in social networks: One seeks to select among certain
accessible nodes in a network, and then select, adaptively, among neighbors of
those nodes as they become accessible in order to maximize a global objective
function. More generally, adaptive seeding is a stochastic optimization
framework where the choices in the first stage affect the realizations in the
second stage, over which we aim to optimize.
Our main result is a $(1-1/e)^2$-approximation for the adaptive seeding
problem for any monotone submodular function. While adaptive policies are often
approximated via non-adaptive policies, our algorithm is based on a novel
method we call \emph{locally-adaptive} policies. These policies combine a
non-adaptive global structure with local adaptive optimizations. This method
enables the $(1-1/e)^2$-approximation for general monotone submodular functions
and circumvents some of the impossibilities associated with non-adaptive
policies.
We also introduce a fundamental problem in submodular optimization that may
be of independent interest: given a ground set of elements where every element
appears with some small probability, find a set of expected size at most $k$
that has the highest expected value over the realization of the elements. We
show a surprising result: there are classes of monotone submodular functions
(including coverage) that can be approximated almost optimally as the
probability vanishes. For general monotone submodular functions we show via a
reduction from \textsc{Planted-Clique} that approximations for this problem are
not likely to be obtainable. This optimization problem is an important tool for
adaptive seeding via non-adaptive policies, and its hardness motivates the
introduction of the \emph{locally-adaptive} policies we use in the main result.
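The non-adaptive building block behind such policies is the classical greedy algorithm for monotone submodular maximization under a cardinality constraint, illustrated here on coverage, one of the function classes the abstract singles out. This is a sketch of the standard primitive with hypothetical names, not the paper's locally-adaptive policy:

```python
def greedy_max_cover(universe_sets, k):
    """Plain greedy for monotone submodular maximization under a
    cardinality constraint, shown for coverage: repeatedly pick the
    set with the largest marginal coverage gain. The union of the
    chosen sets is a (1 - 1/e)-approximate maximum coverage."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(universe_sets, key=lambda s: len(s - covered))
        if not (best - covered):
            break  # no set adds any marginal coverage
        chosen.append(best)
        covered |= best
    return chosen, covered
```

Adaptive seeding layers a second, stochastic stage on top of this primitive: the sets chosen up front determine which neighbors become available for the second-stage optimization.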
Machine learning for ultrafast X-ray diffraction patterns on large-scale GPU clusters
The classical method of determining the atomic structure of complex molecules
by analyzing diffraction patterns is currently undergoing drastic developments.
Modern techniques produce extremely bright and coherent X-ray lasers, making
it possible to intercept a stream of injected particles with ultrashort,
high-energy X-ray pulses. Through machine learning methods, the data thus
collected can be transformed into a three-dimensional volumetric intensity map
of the particle itself. The computational cost of this reconstruction is so
high that clusters of data-parallel accelerators are required.
We have implemented a distributed and highly efficient algorithm for
inversion of large collections of diffraction patterns targeting clusters of
hundreds of GPUs. With the expected enormous amount of diffraction data to be
produced in the foreseeable future, this is the required scale to approach real
time processing of data at the beam site. Using both real and synthetic data we
look at the scaling properties of the application and discuss the overall
computational viability of this exciting and novel imaging technique.
Adaptive Greedy versus Non-adaptive Greedy for Influence Maximization
We consider the \emph{adaptive influence maximization problem}: given a
network and a budget $k$, iteratively select $k$ seeds in the network to
maximize the expected number of adopters. In the \emph{full-adoption feedback
model}, after selecting each seed, the seed-picker observes all the resulting
adoptions. In the \emph{myopic feedback model}, the seed-picker only observes
whether each neighbor of the chosen seed adopts. Motivated by the extreme
success of greedy-based algorithms/heuristics for influence maximization, we
propose the concept of \emph{greedy adaptivity gap}, which compares the
performance of the adaptive greedy algorithm to its non-adaptive counterpart.
Our first result shows that, for submodular influence maximization, the
adaptive greedy algorithm can perform up to a $(1-1/e)$-fraction worse than the
non-adaptive greedy algorithm, and that this ratio is tight. More specifically,
on one side we provide examples where the performance of the adaptive greedy
algorithm is only a $(1-1/e)$ fraction of the performance of the non-adaptive
greedy algorithm in four settings: for both feedback models and both the
\emph{independent cascade model} and the \emph{linear threshold model}. On the
other side, we prove that in any submodular cascade, the adaptive greedy
algorithm always outputs a $(1-1/e)$-approximation to the expected number of
adoptions in the optimal non-adaptive seed choice. Our second result shows
that, for the general submodular cascade model with full-adoption feedback, the
adaptive greedy algorithm can outperform the non-adaptive greedy algorithm by
an unbounded factor. Finally, we propose a risk-free variant of the adaptive
greedy algorithm that always performs no worse than the non-adaptive greedy
algorithm. Comment: 26 pages, 0 figures; accepted at AAAI'20: Thirty-Fourth
AAAI Conference on Artificial Intelligence.
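The contrast between the two greedy variants can be made concrete with a small Monte Carlo sketch of adaptive greedy under the independent cascade model with full-adoption feedback. All names here are hypothetical, and the marginal-gain estimates are simplified: they resample all edges rather than conditioning on the edge states already revealed by the feedback.

```python
import random

def spread(live_edges, seeds):
    """Nodes reachable from `seeds` along the realized (live) edges."""
    active, stack = set(seeds), list(seeds)
    while stack:
        u = stack.pop()
        for v in live_edges.get(u, []):
            if v not in active:
                active.add(v)
                stack.append(v)
    return active

def sample_live_edges(graph, rng):
    """One independent-cascade realization: edge (u, v, p) is live w.p. p."""
    return {u: [v for v, p in nbrs if rng.random() < p]
            for u, nbrs in graph.items()}

def adaptive_greedy(graph, nodes, k, live_edges, sims=100, rng=None):
    """Adaptive greedy with full-adoption feedback: after seeding a node
    we observe all its realized adoptions on `live_edges`, and each later
    seed is chosen by Monte Carlo estimates of the marginal gain given
    the currently active set. The non-adaptive counterpart is the same
    loop without the feedback update, committing to all k seeds up front."""
    rng = rng or random.Random(1)
    active = set()
    for _ in range(k):
        def gain(v):  # estimated marginal adoptions if v is seeded now
            total = 0
            for _ in range(sims):
                sim = sample_live_edges(graph, rng)
                total += len(spread(sim, active | {v}) - active)
            return total / sims
        best = max((u for u in nodes if u not in active), key=gain)
        active |= spread(live_edges, [best])  # full-adoption feedback
    return active
```

On graphs where early observations change which later seeds are worthwhile, this feedback loop is exactly what separates the adaptive run from its non-adaptive counterpart.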
Algorithms to Approximate Column-Sparse Packing Problems
Column-sparse packing problems arise in several contexts in both
deterministic and stochastic discrete optimization. We present two unifying
ideas, (non-uniform) attenuation and multiple-chance algorithms, to obtain
improved approximation algorithms for some well-known families of such
problems. As three main examples, we attain the integrality gap, up to
lower-order terms, for known LP relaxations for k-column sparse packing integer
programs (Bansal et al., Theory of Computing, 2012) and stochastic k-set
packing (Bansal et al., Algorithmica, 2012), and go "half the remaining
distance" to optimal for a major integrality-gap conjecture of Furedi, Kahn and
Seymour on hypergraph matching (Combinatorica, 1993). Comment: Extended
abstract appeared in SODA 2018; full version in ACM Transactions on
Algorithms.
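The attenuation idea can be illustrated with a minimal attenuate-then-alter rounding sketch: scale a fractional LP solution down by a factor alpha before sampling, so that each packing constraint is rarely violated, then repair the violations that remain. This is a uniform-attenuation sketch with hypothetical names; the paper's contribution is a more refined, non-uniform attenuation.

```python
import random

def attenuated_rounding(x, constraints, capacities, alpha, rng=None):
    """Attenuate-then-alter rounding for a column-sparse packing program
    (illustrative sketch, not the paper's exact algorithm): keep item i
    with probability alpha * x[i], then repair any violated constraint
    by dropping items. Attenuation (alpha < 1) makes repairs rare, so
    the value retained stays close to alpha times the LP value."""
    rng = rng or random.Random(0)
    chosen = [i for i, xi in enumerate(x) if rng.random() < alpha * xi]
    for c, items in enumerate(constraints):  # constraint -> its items
        picked = [i for i in items if i in chosen]
        while len(picked) > capacities[c]:   # alteration step
            drop = picked.pop()              # drop the last-listed item
            chosen.remove(drop)
    return chosen
```

Column sparsity is what makes this pattern effective: each item appears in at most k constraints, so attenuating its inclusion probability bounds the chance that any constraint containing it overflows.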