Bundling Equilibrium in Combinatorial Auctions
This paper analyzes individually-rational ex post equilibrium in the VC
(Vickrey-Clarke) combinatorial auctions. If $\Sigma$ is a family of bundles of
goods, the organizer may restrict the participants by requiring them to submit
their bids only for bundles in $\Sigma$. The $\Sigma$-VC combinatorial auctions
(multi-good auctions) obtained in this way are known to be
individually-rational truth-telling mechanisms. In contrast, this paper deals
with non-restricted VC auctions, in which the buyers restrict themselves to
bids on bundles in $\Sigma$, because it is rational for them to do so. That is,
it may be that when the buyers report their valuations of the bundles in
$\Sigma$, they are in an equilibrium. We fully characterize those $\Sigma$ that
induce individually rational equilibrium in every VC auction, and we refer to
the associated equilibrium as a bundling equilibrium. The number of bundles in
$\Sigma$ represents the communication complexity of the equilibrium. A special
case of bundling equilibrium is partition-based equilibrium, in which $\Sigma$
is a field, that is, it is generated by a partition. We analyze the tradeoff
between communication complexity and economic efficiency of bundling
equilibria, focusing in particular on partition-based equilibria.
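The VC mechanism the abstract builds on can be made concrete with a tiny
brute-force sketch (a Python stand-in with illustrative names, not the paper's
construction): buyers bid only on bundles in a family, the organizer maximizes
reported welfare, and each buyer pays the Clarke pivot term.

```python
from itertools import product

def best_allocation(bids, bundles, exclude=None):
    """Brute-force welfare-maximizing allocation: each bidder gets at
    most one bundle from `bundles`, and assigned bundles are disjoint.
    `exclude` drops one bidder (used for the Clarke pivot term)."""
    bidders = [b for b in bids if b != exclude]
    best, best_w = {}, 0.0
    for choice in product([None] + bundles, repeat=len(bidders)):
        taken = [g for c in choice if c is not None for g in c]
        if len(set(taken)) != len(taken):
            continue                      # overlapping bundles: infeasible
        w = sum(bids[b].get(c, 0.0) for b, c in zip(bidders, choice) if c)
        if w > best_w:
            best_w = w
            best = {b: c for b, c in zip(bidders, choice) if c}
    return best, best_w

def vc_outcome(bids, bundles):
    """VC allocation and pivot payments when bids are restricted to the
    bundle family `bundles` (the role of Sigma in the abstract)."""
    alloc, _ = best_allocation(bids, bundles)
    payments = {}
    for b in bids:
        _, w_without = best_allocation(bids, bundles, exclude=b)
        others_with = sum(bids[o].get(c, 0.0)
                          for o, c in alloc.items() if o != b)
        payments[b] = w_without - others_with   # Clarke pivot payment
    return alloc, payments
```

With a single good and bids of 5 and 3 on it, the winner pays 3: the familiar
second-price outcome.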
Simple Heuristics Yield Provable Algorithms for Masked Low-Rank Approximation
In masked low-rank approximation, one is given $A \in \mathbb{R}^{n \times n}$
and a binary mask matrix $W \in \{0,1\}^{n \times n}$. The goal is to find a
rank-$k$ matrix $L$ for which
$\|W \circ (A - L)\|_F^2 \leq OPT + \epsilon \|A\|_F^2$,
where $OPT = \min_{rank-k\ \hat{L}} \|W \circ (A - \hat{L})\|_F^2$ and
$\epsilon$ is a given error parameter. Depending on the choice of $W$, this
problem captures factor analysis, low-rank plus diagonal decomposition, robust
PCA, low-rank matrix completion, low-rank plus block matrix approximation, and
many other problems. Many of these problems are NP-hard, and while some
algorithms with provable guarantees are known, they either 1) run in time
$n^{\Omega(k^2/\epsilon)}$ or 2) make strong assumptions, e.g., that $A$ is
incoherent or that $W$ is random. In this work, we show that a common
polynomial time heuristic, which simply sets $A$ to $0$ where $W$ is $0$, and
then finds a standard low-rank approximation, yields bicriteria approximation
guarantees for this problem. In particular, for rank $k' > k$ depending on the
$public\ coin\ partition\ number$ of $W$, the heuristic outputs a rank-$k'$
matrix $L$ with $\|W \circ (A - L)\|_F^2 \leq OPT + \epsilon \|A\|_F^2$. This
partition number is in turn bounded by the $randomized\ communication\
complexity$ of $W$, and in many cases this gives bicriteria approximation
guarantees with $k' = k \cdot poly(\log n/\epsilon)$.
Further, we show that different models of communication yield algorithms for
natural variants of masked low-rank approximation. For example, multi-player
number-in-hand communication complexity connects to masked tensor decomposition
and non-deterministic communication complexity to masked Boolean low-rank
factorization.
Comment: ITCS 2021
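The heuristic the abstract analyzes is a few lines of NumPy: zero out the
unobserved entries, then take a truncated SVD (a minimal sketch with
illustrative names; per the abstract, the provable guarantee requires an
inflated rank $k' > k$, while here the rank is whatever the caller passes).

```python
import numpy as np

def masked_lowrank_heuristic(A, W, k):
    """Set A to 0 wherever the binary mask W is 0, then return the best
    rank-k approximation of the masked matrix via truncated SVD."""
    U, s, Vt = np.linalg.svd(A * W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]     # best rank-k approx of A*W

def masked_cost(A, W, L):
    """Masked Frobenius error ||W o (A - L)||_F^2."""
    return float(np.sum((W * (A - L)) ** 2))
```

On a fully observed matrix (W all ones) the heuristic reduces to an ordinary
truncated SVD, so a rank-1 input is recovered exactly at k = 1.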
Quantum vs. Classical Read-once Branching Programs
The paper presents the first nontrivial upper and lower bounds for
(non-oblivious) quantum read-once branching programs. It is shown that the
computational power of quantum and classical read-once branching programs is
incomparable in the following sense: (i) A simple, explicit boolean function on
2n input bits is presented that is computable by error-free quantum read-once
branching programs of size O(n^3), while each classical randomized read-once
branching program and each quantum OBDD for this function with bounded
two-sided error requires size 2^{\Omega(n)}. (ii) Quantum branching programs
reading each input variable exactly once are shown to require size
2^{\Omega(n)} for computing the set-disjointness function DISJ_n from
communication complexity theory with two-sided error bounded by a constant
smaller than 1/2-2\sqrt{3}/7. This function is trivially computable even by
deterministic OBDDs of linear size. The technically most involved part is the
proof of the lower bound in (ii). For this, a new model of quantum
multi-partition communication protocols is introduced and a suitable extension
of the information cost technique of Jain, Radhakrishnan, and Sen (2003) to
this model is presented.
Comment: 35 pages. Lower bound for disjointness: error in application of
information theory corrected, and regularity of quantum read-once BPs (each
variable read at least once) added as an additional assumption of the theorem.
Some more informal explanations added.
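The deterministic OBDD the abstract calls trivial is easy to spell out
(a hedged sketch with our own function names): under the interleaved variable
order x1, y1, x2, y2, ... the program only ever remembers the most recent
x-bit, giving width 2 and hence linear size.

```python
def disj(x, y):
    """DISJ_n: 1 iff the sets with characteristic vectors x, y are disjoint."""
    return int(all(not (xi and yi) for xi, yi in zip(x, y)))

def disj_obdd(bits):
    """Read-once evaluation of DISJ_n in the interleaved order
    x1, y1, x2, y2, ...: remember only the last x-bit (width 2),
    and reject as soon as a common element x_i = y_i = 1 is seen."""
    last_x = 0
    for i, b in enumerate(bits):
        if i % 2 == 0:
            last_x = b          # an x-variable: store it
        elif last_x and b:
            return 0            # matching y-variable: sets intersect
    return 1
```

This two-state determinism is exactly the contrast the lower bound in (ii)
draws: quantum programs reading each variable exactly once need exponential
size for the same function.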
NP-hardness of circuit minimization for multi-output functions
Can we design efficient algorithms for finding fast algorithms? This question is captured by various circuit minimization problems, and algorithms for the corresponding tasks have significant practical applications. Following the work of Cook and Levin in the early 1970s, a central question is whether minimizing the circuit size of an explicitly given function is NP-complete. While this is known to hold in restricted models such as DNFs, making progress with respect to more expressive classes of circuits has been elusive.
In this work, we establish the first NP-hardness result for circuit minimization of total functions in the setting of general (unrestricted) Boolean circuits. More precisely, we show that computing the minimum circuit size of a given multi-output Boolean function f : {0,1}^n → {0,1}^m is NP-hard under many-one polynomial-time randomized reductions. Our argument builds on a simpler NP-hardness proof for the circuit minimization problem for (single-output) Boolean functions under an extended set of generators.
Complementing these results, we investigate the computational hardness of minimizing communication. We establish that several variants of this problem are NP-hard under deterministic reductions. In particular, unless P = NP, no polynomial-time computable function can approximate the deterministic two-party communication complexity of a partial Boolean function up to a polynomial. This has consequences for the class of structural results that one might hope to show about the communication complexity of partial functions.
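For very small total functions, the deterministic two-party communication complexity discussed above can be computed exactly by exhaustive search over protocol trees, which also illustrates why nothing efficient is expected at scale (a brute-force sketch with our own names; each communicated bit splits one side's current set, and leaves must be monochromatic rectangles).

```python
from functools import lru_cache
from itertools import combinations

def det_cc(M):
    """Deterministic two-party communication complexity of the total
    function given by the 0/1 matrix M (rows = Alice's input, columns =
    Bob's input), by exhaustive search over protocol trees.
    Exponential time: tiny matrices only."""
    @lru_cache(maxsize=None)
    def solve(rows, cols):
        vals = {M[r][c] for r in rows for c in cols}
        if len(vals) <= 1:
            return 0                    # monochromatic rectangle: done
        best = float('inf')
        for rows_split in (True, False):
            side = rows if rows_split else cols
            items = sorted(side)
            # one bit splits the speaking side into two nonempty parts
            for k in range(1, len(items) // 2 + 1):
                for part in combinations(items, k):
                    a, b = frozenset(part), side - frozenset(part)
                    if rows_split:
                        cost = 1 + max(solve(a, cols), solve(b, cols))
                    else:
                        cost = 1 + max(solve(rows, a), solve(rows, b))
                    best = min(best, cost)
        return best

    return solve(frozenset(range(len(M))), frozenset(range(len(M[0]))))
```

For example, `det_cc([[1, 0], [0, 1]])` (one-bit equality) returns 2.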
On Characterizing the Data Movement Complexity of Computational DAGs for Parallel Execution
Technology trends are making the cost of data movement increasingly dominant,
both in terms of energy and time, over the cost of performing arithmetic
operations in computer systems. The fundamental ratio of aggregate data
movement bandwidth to the total computational power (also referred to the
machine balance parameter) in parallel computer systems is decreasing. It is
there- fore of considerable importance to characterize the inherent data
movement requirements of parallel algorithms, so that the minimal architectural
balance parameters required to support it on future systems can be well
understood. In this paper, we develop an extension of the well-known red-blue
pebble game to develop lower bounds on the data movement complexity for the
parallel execution of computational directed acyclic graphs (CDAGs) on parallel
systems. We model multi-node multi-core parallel systems, with the total
physical memory distributed across the nodes (that are connected through some
interconnection network) and in a multi-level shared cache hierarchy for
processors within a node. We also develop new techniques for lower bound
characterization of non-homogeneous CDAGs. We demonstrate the use of the
methodology by analyzing the CDAGs of several numerical algorithms, to develop
lower bounds on data movement for their parallel execution.
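The red-blue pebble game's I/O measure can be sketched for a single-node,
single-level setting (a simplified illustration with hypothetical names, not
the paper's multi-node model): given one topological schedule of a CDAG and S
red pebbles (fast-memory slots), count the slow-fast memory transfers. This
counts an upper bound for that schedule; the pebble-game lower bounds of the
paper hold over all schedules.

```python
from collections import OrderedDict

def io_cost(dag, schedule, S):
    """Count slow<->fast transfers for one topological `schedule` of
    `dag` (node -> list of predecessors), with S red pebbles and LRU
    eviction. Simplifications: nodes with no predecessors are computed
    in place, and every eviction is charged a write-back.
    Assumes S exceeds the maximum in-degree."""
    red = OrderedDict()                 # node -> None, kept in LRU order
    io = 0

    def touch(v, load):
        nonlocal io
        if v in red:
            red.move_to_end(v)          # refresh LRU position
            return
        if load:
            io += 1                     # read v from slow memory
        if len(red) >= S:
            red.popitem(last=False)     # evict least-recently-used node
            io += 1                     # ... and write it back
        red[v] = None

    for v in schedule:
        for p in dag[v]:
            touch(p, load=True)         # operands must be in fast memory
        touch(v, load=False)            # result occupies a red pebble
    return io
```

On the chain a -> b -> c, three pebbles incur no I/O, while two pebbles force
one eviction.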
Multi-criteria scheduling of pipeline workflows
Mapping workflow applications onto parallel platforms is a challenging
problem, even for simple application patterns such as pipeline graphs. Several
antagonistic criteria should be optimized, such as throughput and latency (or a
combination). In this paper, we study the complexity of the bi-criteria mapping
problem for pipeline graphs on communication homogeneous platforms. In
particular, we assess the complexity of the well-known chains-to-chains problem
for different-speed processors, which turns out to be NP-hard. We provide
several efficient polynomial bi-criteria heuristics, and their relative
performance is evaluated through extensive simulations.
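For identical-speed processors, the chains-to-chains problem mentioned above
is polynomial: partition a chain of task weights into at most p consecutive
intervals minimizing the maximum interval load, via binary search on the
bottleneck with a greedy feasibility probe (a standard sketch with our own
names, assuming integer weights; the abstract's different-speed variant is the
NP-hard one).

```python
def chains_to_chains(weights, p):
    """Minimum achievable bottleneck load when splitting `weights`
    (a chain of integer task weights) into at most p consecutive
    intervals, one per identical-speed processor."""
    def fits(cap):
        # Greedily open a new interval whenever `cap` would be exceeded.
        parts, load = 1, 0
        for w in weights:
            if load + w > cap:
                parts, load = parts + 1, w
            else:
                load += w
        return parts <= p

    lo, hi = max(weights), sum(weights)   # bottleneck is in this range
    while lo < hi:
        mid = (lo + hi) // 2
        if fits(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, splitting [7, 2, 5, 10, 8] over two processors gives bottleneck
18 ([7, 2, 5] versus [10, 8]).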