Computational Difficulty of Global Variations in the Density Matrix Renormalization Group
The density matrix renormalization group (DMRG) approach is arguably the most
successful method to numerically find ground states of quantum spin chains. It
amounts to iteratively locally optimizing matrix-product states, aiming at
better and better approximating the true ground state. To date, both a proof of
convergence to the globally best approximation and an assessment of its
complexity are lacking. Here we establish a result on the computational
complexity of an approximation with matrix-product states: The surprising
result is that when one optimizes globally over several sites for local
Hamiltonians, thereby avoiding local optima, one encounters in the worst case a
computationally difficult NP-hard problem (hard even in approximation). The
proof exploits a novel way of relating it to binary quadratic programming. We
discuss intriguing ramifications on the difficulty of describing quantum
many-body systems.
Comment: 5 pages, 1 figure, RevTeX, final version
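The optimization problem that the hardness proof relates global matrix-product-state optimization to can be stated concretely. Below is a minimal brute-force sketch of binary quadratic programming (maximizing x^T Q x over binary vectors); the function name and toy instance are illustrative, not from the paper:

```python
from itertools import product

def bqp_max(Q):
    """Brute-force maximum of x^T Q x over binary vectors x.

    Binary quadratic programming is NP-hard in general; this
    exhaustive search is feasible only for tiny instances and serves
    to illustrate the problem the reduction is built around.
    """
    n = len(Q)
    best = float("-inf")
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        best = max(best, val)
    return best

# A 3-variable toy instance; the optimum picks x = (0, 1, 1).
Q = [[1, -2, 0],
     [-2, 1, 3],
     [0, 3, 1]]
```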
Maximizing Welfare in Social Networks under a Utility Driven Influence Diffusion Model
Motivated by applications such as viral marketing, the problem of influence
maximization (IM) has been extensively studied in the literature. The goal is
to select a small number of users to adopt an item such that it results in a
large cascade of adoptions by others. Existing works have three key
limitations. (1) They do not account for economic considerations of a user in
buying/adopting items. (2) Most studies on multiple items focus on competition,
with complementary items receiving limited attention. (3) For the network
owner, maximizing social welfare is important to ensure customer loyalty, which
is not addressed in prior work in the IM literature. In this paper, we address
all three limitations and propose a novel model called UIC that combines
utility-driven item adoption with influence propagation over networks. Focusing
on the mutually complementary setting, we formulate the problem of social
welfare maximization in this novel setting. We show that while the objective
function is neither submodular nor supermodular, surprisingly a simple greedy
allocation algorithm achieves a constant factor of the optimum
expected social welfare. We develop \textsf{bundleGRD}, a scalable version of
this approximation algorithm, and demonstrate, with comprehensive experiments
on real and synthetic datasets, that it significantly outperforms all
baselines.
Comment: 33 pages
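The greedy allocation idea can be sketched generically: repeatedly add the (item, user) pair with the largest marginal gain in expected welfare. The welfare oracle below is a stand-in for the paper's diffusion-based estimate, and all names are ours:

```python
def greedy_allocate(users, items, welfare, budget):
    """Greedy seed allocation: repeatedly add the (item, user) pair
    with the largest marginal gain in expected social welfare.

    `welfare` maps a set of (item, user) pairs to a welfare value;
    in the paper it would be estimated by simulating diffusion.
    """
    alloc = set()
    for _ in range(budget):
        best_pair, best_gain = None, 0.0
        for item in items:
            for user in users:
                if (item, user) in alloc:
                    continue
                gain = welfare(alloc | {(item, user)}) - welfare(alloc)
                if gain > best_gain:
                    best_pair, best_gain = (item, user), gain
        if best_pair is None:  # no remaining pair improves welfare
            break
        alloc.add(best_pair)
    return alloc

# Toy modular welfare: each pair contributes a fixed amount.
values = {("a", 1): 3.0, ("a", 2): 1.0, ("b", 1): 2.0}
demo = greedy_allocate([1, 2], ["a", "b"],
                       lambda s: sum(values.get(p, 0.0) for p in s), 2)
```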
Parallel Repetition of Entangled Games with Exponential Decay via the Superposed Information Cost
In a two-player game $G$, two cooperating but non-communicating players, Alice
and Bob, receive inputs taken from a probability distribution. Each of them
produces an output and they win the game if they satisfy some predicate on
their inputs/outputs. The entangled value $\omega^*(G)$ of a game $G$ is the
maximum probability that Alice and Bob can win the game if they are allowed to
share an entangled state prior to receiving their inputs.
The $n$-fold parallel repetition $G^n$ of $G$ consists of $n$ instances of $G$
where the players receive all the inputs at the same time and produce all
the outputs at the same time. They win $G^n$ if they win each instance of $G$.
In this paper we show that for any game $G$ such that $\omega^*(G) < 1$,
$\omega^*(G^n)$ decreases exponentially in $n$. First, for any game $G$ on the
uniform distribution, we prove an explicit exponentially decaying upper bound
on $\omega^*(G^n)$ whose rate depends on the sizes of the input and output
sets. From this result, we derive a bound for any entangled game $G$ that
additionally involves the input distribution of $G$. This implies parallel
repetition with exponential decay for general games whenever the resulting
rate is nontrivial. To prove this parallel repetition, we introduce the
concept of \emph{Superposed Information Cost} for entangled games, which is
inspired by the information cost used in communication complexity.
Comment: In the first version of this paper we presented a different, stronger
Corollary 1 but due to an error in the proof we had to modify it in the
second version. This third version is a minor update. We correct some typos
and re-introduce a proof accidentally commented out in the second version
Replica Placement on Bounded Treewidth Graphs
We consider the replica placement problem: given a graph with clients and
nodes, place replicas on a minimum set of nodes to serve all the clients; each
client is associated with a request and maximum distance that it can travel to
get served and there is a maximum limit (capacity) on the amount of request a
replica can serve. The problem falls under the general framework of capacitated
set covering. It admits an O(\log n)-approximation, and it is NP-hard to
approximate within a factor of \Omega(\log n). We study the problem in terms of
the treewidth t of the graph and present an O(t)-approximation algorithm.
Comment: An abridged version of this paper is to appear in the proceedings of
WADS'1
Limitations to Frechet's Metric Embedding Method
Frechet's classical isometric embedding argument has evolved to become a
major tool in the study of metric spaces. An important example of a Frechet
embedding is Bourgain's embedding. The authors have recently shown that for
every e>0 any n-point metric space contains a subset of size at least n^(1-e)
which embeds into l_2 with distortion O(\log(2/e) /e). The embedding we used is
non-Frechet, and the purpose of this note is to show that this is not
coincidental. Specifically, for every e>0, we construct arbitrarily large
n-point metric spaces, such that the distortion of any Frechet embedding into
l_p on subsets of size at least n^{1/2 + e} is \Omega((\log n)^{1/p}).
Comment: 10 pages, 1 figure
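A Frechet embedding maps each point to its vector of distances to a family of reference sets. A minimal sketch of this construction (the function names are ours; Bourgain's embedding corresponds to choosing the reference sets as random subsets of geometrically varying sizes):

```python
def frechet_embedding(points, dist, reference_sets):
    """Map each point x to the vector (d(x, A_1), ..., d(x, A_k)),
    where d(x, A) = min over a in A of dist(x, a).

    Each coordinate is 1-Lipschitz, since |d(x, A) - d(y, A)| is at
    most dist(x, y), so the embedding never expands distances in the
    sup norm; the distortion comes from contraction.
    """
    def d_to_set(x, A):
        return min(dist(x, a) for a in A)
    return {x: tuple(d_to_set(x, A) for A in reference_sets)
            for x in points}

# Toy example: three points on the real line, two reference sets.
pts = [0, 1, 3]
emb = frechet_embedding(pts, lambda x, y: abs(x - y), [{0}, {0, 3}])
```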
Strong inapproximability of the shortest reset word
The \v{C}ern\'y conjecture states that every $n$-state synchronizing
automaton has a reset word of length at most $(n-1)^2$. We study the hardness
of finding short reset words. It is known that the exact version of the
problem, i.e., finding the shortest reset word, is NP-hard and coNP-hard, and
complete for the DP class, and that approximating the length of the shortest
reset word within a constant factor is NP-hard [Gerbush and Heeringa,
CIAA'10], even for the binary alphabet [Berlinkov, DLT'13]. We significantly
improve on these results by showing that, for every $\epsilon > 0$, it is
NP-hard to approximate the length of the shortest reset word within a factor
of $n^{1-\epsilon}$. This is essentially tight, since a simple
$O(n)$-approximation algorithm exists.
Comment: extended abstract to appear in MFCS 201
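While finding a short reset word is hard, verifying a candidate word is easy: apply it to all states and check that a single image remains. A small sketch, demonstrated on the classic 4-state Cerny automaton (the transition encoding is our own):

```python
def is_reset_word(delta, n, word):
    """Check whether `word` is a reset word for an n-state automaton.

    delta[q][a] is the successor of state q under letter a.  A reset
    word sends every state to one common state, so we apply the word
    to the full state set and test whether one image remains.
    """
    states = set(range(n))
    for a in word:
        states = {delta[q][a] for q in states}
    return len(states) == 1

# The 4-state Cerny automaton: 'a' rotates the states cyclically,
# 'b' maps state 3 to 0 and fixes the rest.
delta = {q: {"a": (q + 1) % 4, "b": 0 if q == 3 else q} for q in range(4)}
```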
Approximating the minimum directed tree cover
Given a directed graph $G$ with non-negative costs on the arcs, a directed
tree cover of $G$ is a rooted directed tree $T$ such that the head or the tail
(or both) of every arc in $G$ is touched by $T$. The minimum directed tree
cover problem (DTCP) is to find a directed tree cover of minimum cost. The
problem is known to be NP-hard. In this paper, we show that the weighted Set
Cover Problem (SCP) is a special case of DTCP. Hence, one can expect at best to
approximate DTCP with the same ratio as for SCP. We show that this expectation
can be satisfied in some way by designing a purely combinatorial approximation
algorithm for DTCP and proving that its approximation ratio is logarithmic in
the maximum out-degree of the nodes in $G$.
Comment: 13 pages
Thresholded Covering Algorithms for Robust and Max-Min Optimization
The general problem of robust optimization is this: one of several possible
scenarios will appear tomorrow, but things are more expensive tomorrow than
they are today. What should you anticipatorily buy today, so that the
worst-case cost (summed over both days) is minimized? Feige et al. and
Khandekar et al. considered the k-robust model where the possible outcomes
tomorrow are given by all demand-subsets of size k, and gave algorithms for the
set cover problem, and the Steiner tree and facility location problems in this
model, respectively.
In this paper, we give the following simple and intuitive template for
k-robust problems: "having built some anticipatory solution, if there exists a
single demand whose augmentation cost is larger than some threshold, augment
the anticipatory solution to cover this demand as well, and repeat". In this
paper we show that this template gives us improved approximation algorithms for
k-robust Steiner tree and set cover, and the first approximation algorithms for
k-robust Steiner forest, minimum-cut and multicut. All our approximation ratios
(except for multicut) are almost best possible.
As a by-product of our techniques, we also get algorithms for max-min
problems of the form: "given a covering problem instance, which k of the
elements are costliest to cover?".
Comment: 24 pages
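The quoted template can be sketched generically. The oracle signatures below are our stand-ins for the problem-specific pieces (augmentation cost and augmentation step, e.g. instantiated by greedy set cover):

```python
def thresholded_cover(demands, aug_cost, augment, threshold, max_rounds=1000):
    """Sketch of the thresholded template: starting from an empty
    anticipatory solution, while some single demand would cost more
    than `threshold` to cover tomorrow, fold that demand into the
    day-one solution, and repeat.

    `aug_cost(sol, d)` returns the cost of augmenting `sol` to cover
    demand d; `augment(sol, d)` returns the augmented solution.
    """
    sol = frozenset()
    for _ in range(max_rounds):
        expensive = [d for d in demands if aug_cost(sol, d) > threshold]
        if not expensive:
            break
        sol = augment(sol, expensive[0])
    return sol
```

With the right threshold (tied to a guess of the optimum), demands that remain uncovered are exactly those that are cheap to fix on day two, which is what drives the approximation guarantees.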
Approximation Algorithms for the Capacitated Domination Problem
We consider the {\em Capacitated Domination} problem, which models a
service-requirement assignment scenario and is also a generalization of the
well-known {\em Dominating Set} problem. In this problem, given a graph with
three parameters defined on each vertex, namely cost, capacity, and demand, we
want to find an assignment of demands to vertices of least cost such that the
demand of each vertex is satisfied subject to the capacity constraint of each
vertex providing the service. In terms of polynomial-time approximations, we
present logarithmic approximation algorithms for this problem on general
graphs with respect to different demand assignment models, matching the
well-known approximation guarantees for the traditional {\em Dominating Set}
problem. Together with our previous work, this closes the problem of
approximating the optimal solution in general. On the
other hand, from the perspective of parameterization, we prove that this
problem is {\it W[1]}-hard when parameterized by the treewidth of the graph.
Based on this hardness result, we present exact fixed-parameter tractable
algorithms when parameterized by treewidth and maximum capacity of the
vertices. This algorithm is further extended to obtain pseudo-polynomial time
approximation schemes for planar graphs.
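The assignment model described above is easy to make concrete with a feasibility checker: every vertex's demand must be served by itself or a neighbour, without exceeding any server's capacity. The encoding of the graph, demands, and capacities below is our own illustrative choice:

```python
def is_feasible_assignment(graph, demand, capacity, assignment):
    """Check a capacitated-domination assignment.

    `graph` maps each vertex to its neighbour set;
    `assignment[(u, v)]` is the amount of u's demand served by v.
    Feasible iff every unit is served by the vertex itself or a
    neighbour, capacities are respected, and demands are met.
    """
    served = {v: 0 for v in graph}
    for (u, v), amt in assignment.items():
        if v != u and v not in graph[u]:
            return False  # v is not adjacent to u, so cannot serve it
        served[v] += amt
    if any(served[v] > capacity[v] for v in graph):
        return False  # some server exceeds its capacity
    got = {u: 0 for u in graph}
    for (u, v), amt in assignment.items():
        got[u] += amt
    return all(got[u] >= demand[u] for u in graph)
```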
Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials
This paper is our third step towards developing a theory of testing monomials
in multivariate polynomials and concentrates on two problems: (1) How to
compute the coefficients of multilinear monomials; and (2) how to find a
maximum multilinear monomial when the input is a polynomial. We
first prove that the first problem is \#P-hard and then devise an
exponential-time upper bound for it for any polynomial represented by an
arithmetic circuit of size $s$; this upper bound is later improved for
restricted classes of polynomials. We then design fully polynomial-time
randomized approximation schemes for this problem for further restricted
polynomials. On the negative side, we prove that, even for polynomials with
terms of bounded degree, the first problem cannot be approximated at all for
any approximation factor, nor {\em "weakly approximated"} in a much relaxed
setting, unless P=NP. For the second problem, we first give a polynomial-time
approximation algorithm for polynomials whose terms have degree no more than a
constant. On the inapproximability side, we give a lower bound on the
achievable approximation factor for such polynomials. When the terms in these
polynomials are constrained to bounded degrees, we prove a lower bound
assuming $P \neq NP$, and a higher lower bound assuming the
Unique Games Conjecture.
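Computing a multilinear monomial coefficient by explicit expansion illustrates why the problem is expensive: the number of terms grows exponentially with the number of factors. A toy sketch for products of linear forms (our own encoding, not the paper's algorithm):

```python
from itertools import product
from collections import Counter

def multilinear_coefficient(factors, monomial):
    """Coefficient of a multilinear monomial in a product of linear
    forms, by brute-force expansion.

    `factors` is a list of linear forms, each a dict mapping a
    variable name (or the key 1 for the constant term) to its
    coefficient; `monomial` is the set of variables whose product we
    want.  The expansion has exponentially many terms, matching the
    point that exact coefficient computation is hard in general.
    """
    total = 0
    for choice in product(*[f.items() for f in factors]):
        vars_used = Counter(v for v, _ in choice if v != 1)
        # Keep only choices that multiply out to exactly `monomial`.
        if (set(vars_used) == set(monomial)
                and all(c == 1 for c in vars_used.values())):
            coef = 1
            for _, c in choice:
                coef *= c
            total += coef
    return total

# Example: (x + y)(x + 2), a product of two linear forms.
factors = [{"x": 1, "y": 1}, {"x": 1, 1: 2}]
```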