Block Sensitivity of Minterm-Transitive Functions
Boolean functions with symmetry properties are interesting from a complexity
theory perspective; extensive research has shown that these functions, if
nonconstant, must have high `complexity' according to various measures.
In recent work of this type, Sun gave bounds on the block sensitivity of
nonconstant Boolean functions invariant under a transitive permutation group.
Sun showed that all such functions satisfy bs(f) = Omega(N^{1/3}), and that
there exists such a function for which bs(f) = O(N^{3/7}ln N). His example
function belongs to a subclass of transitively invariant functions called the
minterm-transitive functions (defined in earlier work by Chakraborty).
We extend these results in two ways. First, we show that nonconstant
minterm-transitive functions satisfy bs(f) = Omega(N^{3/7}). Thus Sun's example
function has nearly minimal block sensitivity for this subclass. Second, we
give an improved example: a minterm-transitive function for which bs(f) =
O(N^{3/7} ln^{1/7} N).
Comment: 10 pages.
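Block sensitivity can be checked exhaustively for tiny functions, which is a handy sanity check on bounds like the above. A brute-force sketch (our own illustration, not code from the paper), where an input is a bitmask and a block is a nonzero bitmask of flipped positions:

```python
def max_disjoint(blocks):
    """Size of the largest pairwise-disjoint family of bitmask blocks
    (exact branch-and-bound; fine for toy sizes)."""
    def rec(i, used):
        if i == len(blocks):
            return 0
        best = rec(i + 1, used)            # skip block i
        if blocks[i] & used == 0:          # take block i if disjoint so far
            best = max(best, 1 + rec(i + 1, used | blocks[i]))
        return best
    return rec(0, 0)

def block_sensitivity(f, n):
    """bs(f): max over inputs x of the largest number of pairwise
    disjoint blocks B with f(x XOR B) != f(x). Exponential in n."""
    best = 0
    for x in range(1 << n):
        fx = f(x)
        sensitive = [b for b in range(1, 1 << n) if f(x ^ b) != fx]
        best = max(best, max_disjoint(sensitive))
    return best
```

For example, OR on 2 bits has bs = 2 (either input bit of the all-zeros input is a sensitive block), and parity on n bits has bs = n.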
Low-Sensitivity Functions from Unambiguous Certificates
We provide new query complexity separations against sensitivity for total
Boolean functions: a power separation between deterministic (and even
randomized or quantum) query complexity and sensitivity, and a power
separation between certificate complexity and sensitivity. We get these
separations by using a new connection between sensitivity and a seemingly
unrelated measure called one-sided unambiguous certificate complexity.
We also show that this measure is lower-bounded by fractional block
sensitivity, which means we cannot use these techniques to get a
super-quadratic separation between it and sensitivity. We also provide a
quadratic separation between the tree-sensitivity and decision tree complexity
of Boolean functions, disproving a conjecture of Gopalan, Servedio, Tal, and
Wigderson (CCC 2016).
Along the way, we give a power separation between certificate
complexity and one-sided unambiguous certificate complexity, improving the
power separation due to Göös (FOCS 2015). As a consequence, we
obtain an improved lower-bound on the
co-nondeterministic communication complexity of the Clique vs. Independent Set
problem.
Comment: 25 pages. This version expands the results and adds Pooya Hatami and Avishay Tal as authors.
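Two of the measures compared above, sensitivity and certificate complexity, can likewise be computed exhaustively for tiny n. A minimal sketch under the standard definitions (our own illustration; all names are ours):

```python
from itertools import combinations

def sensitivity(f, n):
    """s(f): max over inputs x of the number of single-bit flips
    that change the function value."""
    return max(sum(f(x ^ (1 << i)) != f(x) for i in range(n))
               for x in range(1 << n))

def certificate_complexity(f, n):
    """C(f): max over inputs x of the smallest set of coordinates
    whose values on x already force the value of f."""
    def cert(x):
        for k in range(n + 1):
            for S in combinations(range(n), k):
                mask = sum(1 << i for i in S)
                # every y agreeing with x on S must satisfy f(y) == f(x)
                if all(f(y) == f(x) for y in range(1 << n)
                       if (y & mask) == (x & mask)):
                    return k
        return n
    return max(cert(x) for x in range(1 << n))
```

On OR over 2 bits both measures equal 2: the all-zeros input is sensitive to both bits, and certifying that the output is 0 requires fixing both coordinates.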
Lower Bounds on Quantum Query Complexity
Shor's and Grover's famous quantum algorithms for factoring and searching
show that quantum computers can solve certain computational problems
significantly faster than any classical computer. We discuss here what quantum
computers _cannot_ do, and specifically how to prove limits on their
computational power. We cover the main known techniques for proving lower
bounds, and exemplify and compare the methods.
Comment: survey, 23 pages.
Average Sensitivity of Graph Algorithms
In modern applications of graph algorithms, where the graphs of interest are
large and dynamic, it is unrealistic to assume that an input representation
contains the full information of a graph being studied. Hence, it is desirable
to use algorithms that, even when only a (large) subgraph is available, output
solutions that are close to the solutions output when the whole graph is
available. We formalize this idea by introducing the notion of average
sensitivity of graph algorithms, which is the average earth mover's distance
between the output distributions of an algorithm on a graph and its subgraph
obtained by removing an edge, where the average is over the edges removed and
the distance between two outputs is the Hamming distance.
In this work, we initiate a systematic study of average sensitivity. After
deriving basic properties of average sensitivity such as composition, we
provide efficient approximation algorithms with low average sensitivities for
concrete graph problems, including the minimum spanning forest problem, the
global minimum cut problem, the minimum s-t cut problem, and the maximum
matching problem. In addition, we prove that the average sensitivity of our
global minimum cut algorithm is almost optimal, by showing a nearly matching
lower bound. We also show that every algorithm for the 2-coloring problem has
average sensitivity linear in the number of vertices. One of the main ideas
involved in designing our algorithms with low average sensitivity is the
following fact: if the presence of a vertex or an edge in the solution output
by an algorithm can be decided locally, then the algorithm has a low average
sensitivity, allowing us to reuse the analyses of known sublinear-time
algorithms and local computation algorithms (LCAs). Using this connection, we
show that every LCA for 2-coloring has linear query complexity, thereby
answering an open question.
Comment: 39 pages, 1 figure.
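For a deterministic algorithm, the earth mover's distance between output distributions collapses to a plain Hamming distance between outputs, so the notion of average sensitivity can be illustrated with Kruskal's algorithm for minimum spanning forest. This is a simplified sketch of the definition, not the paper's algorithms; all names are ours, and we measure the Hamming distance as the size of the symmetric difference of the two edge sets:

```python
def kruskal(n, edges):
    """Deterministic Kruskal on n vertices; edges are (weight, u, v)
    with distinct weights. Returns the set of chosen edges."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    chosen = set()
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            chosen.add((w, u, v))
    return chosen

def average_sensitivity(n, edges):
    """Average Hamming distance between the forest computed on G and
    on G - e, averaged over the removed edge e."""
    base = kruskal(n, edges)
    total = sum(len(base ^ kruskal(n, [f for f in edges if f != e]))
                for e in edges)
    return total / len(edges)
```

On a weighted triangle, removing the heaviest edge leaves the forest unchanged, while removing either lighter edge swaps one edge for another (distance 2), giving average sensitivity 4/3.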
Beyond Reuse Distance Analysis: Dynamic Analysis for Characterization of Data Locality Potential
Emerging computer architectures will feature drastically decreased flops/byte
(ratio of peak processing rate to memory bandwidth) as highlighted by recent
studies on Exascale architectural trends. Further, flops are getting cheaper
while the energy cost of data movement is increasingly dominant. The
understanding and characterization of data locality properties of computations
is critical in order to guide efforts to enhance data locality. Reuse distance
analysis of memory address traces is a valuable tool to perform data locality
characterization of programs. A single reuse distance analysis can be used to
estimate the number of cache misses in a fully associative LRU cache of any
size, thereby providing estimates on the minimum bandwidth requirements at
different levels of the memory hierarchy to avoid being bandwidth bound.
However, such an analysis only holds for the particular execution order that
produced the trace. It cannot estimate potential improvement in data locality
through dependence preserving transformations that change the execution
schedule of the operations in the computation. In this article, we develop a
novel dynamic analysis approach to characterize the inherent locality
properties of a computation and thereby assess the potential for data locality
enhancement via dependence preserving transformations. The execution trace of a
code is analyzed to extract a computational directed acyclic graph (CDAG) of
the data dependences. The CDAG is then partitioned into convex subsets, and the
convex partitioning is used to reorder the operations in the execution trace to
enhance data locality. The approach enables us to go beyond reuse distance
analysis of a single specific order of execution of the operations of a
computation in characterization of its data locality properties. It can serve a
valuable role in identifying promising code regions for manual transformation,
as well as assessing the effectiveness of compiler transformations for data
locality enhancement. We demonstrate the effectiveness of the approach using a
number of benchmarks, including case studies where the potential shown by the
analysis is exploited to achieve lower data movement costs and better
performance.
Comment: ACM Transactions on Architecture and Code Optimization (2014).
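The reuse distance analysis described above is easy to illustrate: the reuse distance of an access is the number of distinct addresses touched since the previous access to the same address (infinite on a first access), and a fully associative LRU cache of size C misses exactly on accesses with reuse distance at least C. A toy sketch, assuming a list-based LRU stack (quadratic time, for illustration only; names are ours):

```python
def reuse_distances(trace):
    """LRU stack distance for each access in the address trace."""
    stack = []   # most recently used address at the end
    dists = []
    for addr in trace:
        if addr in stack:
            i = stack.index(addr)
            dists.append(len(stack) - 1 - i)  # distinct addrs since last use
            stack.pop(i)
        else:
            dists.append(float("inf"))        # cold (first) access
        stack.append(addr)
    return dists

def lru_misses(trace, cache_size):
    """Miss count of a fully associative LRU cache of the given size."""
    return sum(d >= cache_size for d in reuse_distances(trace))
```

On the trace a, b, c, a the final access has reuse distance 2, so it hits in any LRU cache of size at least 3 and misses in smaller ones.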