Query complexity of Boolean functions on the middle slice of the cube
We study the query complexity of slices of Boolean functions. Among other
results we show that there exists a Boolean function for which we need to query
all but 7 input bits to compute its value, even if we know beforehand that the
numbers of 0's and 1's in the input are the same, i.e., when our input is from
the middle slice. This answers a question of Byramji. Our proof is
non-constructive, but we also propose a concrete candidate function that might
have the above property. Our results are related to certain natural discrepancy
type questions that -- somewhat surprisingly -- have not been studied before.
Comment: 10 pages
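The restricted query-complexity measure studied in this abstract can be made concrete with a small brute-force search. The sketch below is a toy illustration (not the paper's construction): it computes the deterministic decision-tree depth of a Boolean function when inputs are promised to lie on the middle slice. Note how parity, which requires querying all n bits in general, becomes constant on the middle slice of {0,1}^4.

```python
from itertools import combinations

def middle_slice(n):
    """All n-bit inputs with exactly n/2 ones (n assumed even)."""
    inputs = []
    for ones in combinations(range(n), n // 2):
        x = [0] * n
        for i in ones:
            x[i] = 1
        inputs.append(tuple(x))
    return inputs

def depth(f, inputs):
    """Min-max depth of a decision tree computing f on the given input set:
    minimize over which bit to query, maximize over the answer."""
    if len({f(x) for x in inputs}) <= 1:
        return 0  # f is already determined
    n = len(inputs[0])
    best = None
    for i in range(n):
        zeros = [x for x in inputs if x[i] == 0]
        ones = [x for x in inputs if x[i] == 1]
        if not zeros or not ones:
            continue  # querying bit i reveals nothing on this input set
        cost = 1 + max(depth(f, zeros), depth(f, ones))
        best = cost if best is None else min(best, cost)
    return best

# On the middle slice of {0,1}^4, parity is constant (every input has two 1's),
# while AND of the first two bits still needs 2 queries.
print(depth(lambda x: sum(x) % 2, middle_slice(4)))   # parity on the slice
print(depth(lambda x: x[0] & x[1], middle_slice(4)))  # AND of first two bits
```

This brute force is only feasible for very small n; the abstract's result concerns functions where essentially all bits must be queried even under the slice promise.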
Quantum and Classical Strong Direct Product Theorems and Optimal Time-Space Tradeoffs
A strong direct product theorem says that if we want to compute k independent
instances of a function, using less than k times the resources needed for one
instance, then our overall success probability will be exponentially small in
k. We establish such theorems for the classical as well as quantum query
complexity of the OR function. This implies slightly weaker direct product
results for all total functions. We prove a similar result for quantum
communication protocols computing k instances of the Disjointness function.
Our direct product theorems imply a time-space tradeoff T^2*S=Omega(N^3) for
sorting N items on a quantum computer, which is optimal up to polylog factors.
They also give several tight time-space and communication-space tradeoffs for
the problems of Boolean matrix-vector multiplication and matrix multiplication.
Comment: 22 pages, LaTeX. 2nd version: some parts rewritten, results are essentially the same. A shorter version will appear in IEEE FOCS 0
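The quantitative content of a strong direct product theorem, and of the sorting tradeoff, can be checked with elementary arithmetic. A minimal illustration (hidden constants set to 1, purely for intuition; the function names are ours, not the paper's):

```python
import math

def joint_success(p, k):
    """Probability that k independent instances all succeed, when each
    succeeds with probability p: decays exponentially in k."""
    return p ** k

def min_sort_time(n, space):
    """Time lower bound for sorting n items on a quantum computer implied by
    T^2 * S = Omega(N^3), with the hidden constant taken as 1."""
    return math.sqrt(n ** 3 / space)

# Running a 2/3-success algorithm independently on 10 instances succeeds on
# all of them with probability (2/3)^10, under 2%.
print(joint_success(2 / 3, 10))
# With space S = 100, sorting N = 100 items needs time at least sqrt(10^6/10^2).
print(min_sort_time(100, 100))
```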
Levelable Sets and the Algebraic Structure of Parameterizations
Asking which sets are fixed-parameter tractable for a given parameterization
constitutes much of the current research in parameterized complexity theory.
This approach faces some of the core difficulties in complexity theory. By
focussing instead on the parameterizations that make a given set
fixed-parameter tractable, we circumvent these difficulties. We isolate
parameterizations as independent measures of complexity and study their
underlying algebraic structure. Thus we are able to compare parameterizations,
which establishes a hierarchy of complexity that is much stronger than that
present in typical parameterized algorithms races. Among other results, we find
that no practically fixed-parameter tractable sets have optimal
parameterizations.
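As background for the notion the abstract builds on: a set is fixed-parameter tractable under a parameterization kappa if membership is decidable in time f(kappa(x)) * |x|^O(1). A textbook example, not taken from this paper, is vertex cover parameterized by solution size k, solvable in O(2^k * |E|) time by a bounded search tree:

```python
def vertex_cover_fpt(edges, k):
    """Decide whether the graph given by its edge list has a vertex cover of
    size <= k. Classic bounded-search-tree algorithm: the parameterization
    'solution size k' makes the problem fixed-parameter tractable, with
    running time O(2^k * |E|)."""
    if not edges:
        return True   # nothing left to cover
    if k == 0:
        return False  # edges remain but no budget
    u, v = edges[0]
    # Any cover must contain u or v; branch on both choices.
    without_u = [e for e in edges if u not in e]
    without_v = [e for e in edges if v not in e]
    return vertex_cover_fpt(without_u, k - 1) or vertex_cover_fpt(without_v, k - 1)

triangle = [(1, 2), (2, 3), (1, 3)]
print(vertex_cover_fpt(triangle, 2))  # a triangle has a cover of size 2
print(vertex_cover_fpt(triangle, 1))  # but not of size 1
```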
WdW-patches in AdS and complexity change under conformal transformations II
We study the null-boundaries of Wheeler-de Witt (WdW) patches in three
dimensional Poincare-AdS, when the selected boundary timeslice is an arbitrary
(non-constant) function, presenting some useful analytic statements about them.
Special attention will be given to the piecewise smooth nature of the
null-boundaries, due to the emergence of caustics and null-null joint curves.
This is then applied, in the spirit of our previous paper arXiv:1806.08376, to
the problem of how complexity of the CFT groundstate changes under a small
local conformal transformation according to the action (CA) proposal. In stark
contrast to the volume (CV) proposal, where this change is only proportional to
the second order in the infinitesimal expansion parameter σ, we show that in
the CA case we obtain terms of order σ and even σ log(σ). This has strong
implications for the possible
field-theory duals of the CA proposal, ruling out an entire class of them.
Comment: 31 pages + appendices, 9 figures. v2: minor improvements, matches published version
Efficient Computation of Expected Hypervolume Improvement Using Box Decomposition Algorithms
In the field of multi-objective optimization algorithms, multi-objective
Bayesian Global Optimization (MOBGO) is an important branch, in addition to
evolutionary multi-objective optimization algorithms (EMOAs). MOBGO utilizes
Gaussian Process models learned from previous objective function evaluations to
decide the next evaluation site by maximizing or minimizing an infill
criterion. A common criterion in MOBGO is the Expected Hypervolume Improvement
(EHVI), which shows a good performance on a wide range of problems, with
respect to exploration and exploitation. However, so far it has been a
challenge to calculate exact EHVI values efficiently. In this paper, an
efficient algorithm for the computation of the exact EHVI for a generic case is
proposed. This efficient algorithm is based on partitioning the integration
volume into a set of axis-parallel slices. Theoretically, the upper bound time
complexities are improved from previously O(n^2) and O(n^3), for two- and
three-objective problems respectively, to Θ(n log n), which is
asymptotically optimal. This article generalizes the scheme to the
higher-dimensional case by utilizing a new hyperbox decomposition technique,
which was proposed by Dächert et al., EJOR, 2017. It also utilizes a generalization of
the multilayered integration scheme that scales linearly in the number of
hyperboxes of the decomposition. The speed comparison shows that the proposed
algorithm in this paper significantly reduces computation time. Finally, this
decomposition technique is applied in the calculation of the Probability of
Improvement (PoI).
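The box-decomposition idea is easiest to see in the plain hypervolume computation. The sketch below illustrates only the slicing principle, not the paper's EHVI algorithm (which additionally integrates Gaussian predictive densities over each box): it computes the 2-D hypervolume of a minimization Pareto set by summing axis-parallel strips.

```python
def hypervolume_2d(points, ref):
    """Hypervolume dominated by a set of 2-D points (minimization), relative
    to reference point `ref`, computed by slicing the dominated region into
    axis-parallel boxes -- the decomposition idea behind exact EHVI."""
    # Keep only the non-dominated points, sorted by the first objective.
    front = []
    best_y = float("inf")
    for x, y in sorted(points):
        if y < best_y:
            front.append((x, y))
            best_y = y
    # Each front point contributes one horizontal strip of the dominated area.
    hv = 0.0
    prev_y = ref[1]
    for x, y in front:
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

# Three mutually non-dominated points against reference (4, 4):
# strips of area 3 + 2 + 1.
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))
```

The EHVI replaces each strip's deterministic area with an expectation under the Gaussian Process predictive distribution, which is why an efficient partition of the integration region matters.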
Joint User-Association and Resource-Allocation in Virtualized Wireless Networks
In this paper, we consider a down-link transmission of multicell virtualized
wireless networks (VWNs) where users of different service providers (slices)
within a specific region are served by a set of base stations (BSs) through
orthogonal frequency division multiple access (OFDMA). In particular, we
develop a joint BS assignment, sub-carrier and power allocation algorithm to
maximize the network throughput, while satisfying the minimum required rate of
each slice. Under the assumption that each user at each transmission instance
can connect to no more than one BS, we introduce the user-association factor
(UAF) to represent the joint sub-carrier and BS assignment as the optimization
variable vector in the mathematical problem formulation. Sub-carrier reuse is
allowed in different cells, but not within one cell. As the proposed
optimization problem is inherently non-convex and NP-hard, by applying the
successive convex approximation (SCA) and complementary geometric programming
(CGP), we develop an efficient two-step iterative approach with low
computational complexity to solve the proposed problem. For a given
power-allocation, Step 1 derives the optimum user-association and subsequently,
for the obtained user-association, Step 2 finds the optimum power-allocation.
Simulation results demonstrate that the proposed iterative algorithm
outperforms the traditional approach in which each user is assigned to the BS
with the largest average value of signal strength, and then, joint sub-carrier
and power allocation is obtained for the assigned users of each cell.
Especially for the cell-edge users, simulation results reveal a coverage
improvement of up to 57% and 71% for uniform and non-uniform user
distributions, respectively, leading to more reliable transmission and higher
spectrum efficiency for VWNs.
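The two-step structure of the algorithm can be caricatured in a few lines. The toy below alternates a greedy user-association step with an equal-split power step; the actual paper solves each step with SCA and CGP over sub-carriers and slice rate constraints, so this is only a schematic of the alternation, with all names hypothetical:

```python
import math

def alternate(gains, p_max, iters=10):
    """Toy two-step iteration: (1) associate each user with the BS offering
    the largest gain-times-power product under the current powers, then
    (2) split each BS's power budget equally among its associated users.
    gains[u][b] is the channel gain of user u to base station b."""
    n_users = len(gains)
    n_bs = len(gains[0])
    power = [p_max] * n_bs  # start with the full budget at every BS
    assoc = [0] * n_users
    for _ in range(iters):
        # Step 1: user association for the fixed power allocation.
        assoc = [max(range(n_bs), key=lambda b: gains[u][b] * power[b])
                 for u in range(n_users)]
        # Step 2: per-user power for the fixed association (equal split).
        load = [assoc.count(b) for b in range(n_bs)]
        power = [p_max / max(load[b], 1) for b in range(n_bs)]
    # Sum-rate under a simple log2(1 + SNR) model, noise normalized to 1.
    rate = sum(math.log2(1 + gains[u][assoc[u]] * power[assoc[u]])
               for u in range(n_users))
    return assoc, rate

# Two users, two BSs, each user strongly prefers a different BS.
print(alternate([[1.0, 0.1], [0.1, 1.0]], p_max=1.0))
```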
Sum-Of-Squares Lower Bounds for the Minimum Circuit Size Problem
We prove lower bounds for the Minimum Circuit Size Problem (MCSP) in the Sum-of-Squares (SoS) proof system. Our main result is that for every Boolean function f: {0,1}^n → {0,1}, SoS requires degree Ω(s^{1-ε}) to prove that f does not have circuits of size s (for any s > poly(n)). As a corollary we obtain that there are no low degree SoS proofs of the statement NP ⊄ P/poly.
We also show that for any 0 < α < 1 there are Boolean functions with circuit complexity larger than 2^{n^α} but SoS requires size 2^{2^{Ω(n^α)}} to prove this. In addition we prove analogous results on the minimum monotone circuit size for monotone Boolean slice functions.
Our approach is quite general. Namely, we show that if a proof system Q has strong enough constraint satisfaction problem lower bounds that only depend on good expansion of the constraint-variable incidence graph and, furthermore, Q is expressive enough that variables can be substituted by local Boolean functions, then the MCSP problem is hard for Q.