Lower Bounds on Query Complexity for Testing Bounded-Degree CSPs
In this paper, we consider lower bounds on the query complexity for testing
CSPs in the bounded-degree model.
First, for any ``symmetric'' predicate $P: \{0,1\}^k \to \{0,1\}$ except EQU,
where $k \geq 3$, we show that every (randomized) algorithm that distinguishes
satisfiable instances of CSP($P$) from instances $(|P^{-1}(0)|/2^k - \epsilon)$-far
from satisfiability requires $\Omega(n^{1/2+\delta})$ queries, where $n$ is the
number of variables and $\delta > 0$ is a constant that depends on $P$ and
$\epsilon$. This breaks a natural lower bound $\Omega(n^{1/2})$, which is
obtained by the birthday paradox. We also show that every one-sided error
tester requires $\Omega(n)$ queries for such $P$. These results are hereditary
in the sense that the same results hold for any predicate $Q$ such that
$P^{-1}(1) \subseteq Q^{-1}(1)$. For EQU, we give a one-sided error tester
whose query complexity is $\tilde{O}(n^{1/2})$. Also, for 2-XOR (or,
equivalently E2LIN2), we show an $\Omega(n^{1/2+\delta})$ lower bound for
distinguishing instances between $\epsilon$-close to and $(1/2-\epsilon)$-far
from satisfiability.
Next, for the general $k$-CSP over the binary domain, we show that every
algorithm that distinguishes satisfiable instances from instances
$(1 - 2k/2^k - \epsilon)$-far from satisfiability requires $\Omega(n)$ queries.
The matching NP-hardness is not known, even assuming the Unique Games
Conjecture or the $d$-to-$1$ Conjecture. As a corollary, for Maximum
Independent Set on graphs with $n$ vertices and a degree bound $d$, we show
that every approximation algorithm within a factor $d/\poly\log d$ and an
additive error of $\epsilon n$ requires $\Omega(n)$ queries. Previously, only
super-constant lower bounds were known.
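As an aside on the $\Omega(n^{1/2})$ bound broken above: the birthday-paradox
argument is standard, and a back-of-the-envelope version (with $q$ denoting the
number of queries, our notation) goes as follows.
\[
  \Pr[\text{two of } q \text{ random probes hit the same variable}]
  \;\leq\; \binom{q}{2} \cdot \frac{1}{n} \;\leq\; \frac{q^2}{2n},
\]
so unless $q = \Omega(\sqrt{n})$, a tester almost never probes the same
variable twice; roughly speaking, until such a collision occurs, a random
satisfiable instance and a random far-from-satisfiable instance induce the same
distribution on the answers the tester sees, so the two cases cannot be
distinguished.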
Optimal Constant-Time Approximation Algorithms and (Unconditional) Inapproximability Results for Every Bounded-Degree CSP
Raghavendra (STOC 2008) gave an elegant and surprising result: if Khot's
Unique Games Conjecture (STOC 2002) is true, then for every constraint
satisfaction problem (CSP), the best approximation ratio is attained by a
certain simple semidefinite programming relaxation and a rounding scheme for it. In this
paper, we show that similar results hold for constant-time approximation
algorithms in the bounded-degree model. Specifically, we present the
following: (i) For every CSP, we construct an oracle that provides access, in
constant time, to a nearly optimal solution to a basic LP relaxation of the
CSP. (ii) Using the oracle, we give a constant-time rounding scheme that
achieves an approximation ratio coincident with the integrality gap of the
basic LP. (iii) Finally, we give a generic conversion from integrality gaps of
basic LPs to hardness results. All of those results are \textit{unconditional}.
Therefore, for every bounded-degree CSP, we give the best constant-time
approximation algorithm among all. A CSP instance is called $\epsilon$-far from
satisfiability if we must remove at least an $\epsilon$-fraction of constraints
to make it satisfiable. A CSP is called testable if there is a constant-time
algorithm that distinguishes satisfiable instances from $\epsilon$-far
instances with probability at least $2/3$. Using the results above, we also
derive, under a technical assumption, an equivalent condition under which a CSP
is testable in the bounded-degree model.
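For reference, the basic LP relaxation in (i) admits a standard write-up; in
our own notation (domain $D$, constraints $C$ with payoff functions $P_C$ and
weights $w_C$; none of these symbols appear in the abstract), it reads:
\begin{align*}
  \text{maximize}   \quad & \sum_{C} w_C \sum_{\alpha} P_C(\alpha)\, y_{C,\alpha} \\
  \text{subject to} \quad & \sum_{a \in D} x_{v,a} = 1 \quad \text{for each variable } v, \\
                          & \sum_{\alpha :\, \alpha(v) = a} y_{C,\alpha} = x_{v,a} \quad \text{for each } C, \text{ each } v \text{ in } C, \text{ and each } a \in D, \\
                          & x, y \geq 0.
\end{align*}
Here $x_{v,a}$ plays the role of the probability that variable $v$ takes value
$a$, and $y_{C,\alpha}$ of a local distribution over assignments $\alpha$ to
the variables of $C$; the equality constraints only tie these local
distributions to their marginals, which is what makes constant-time access to
a near-optimal solution plausible in the first place.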
Testing List H-Homomorphisms
Let $H$ be an undirected graph. In the List $H$-Homomorphism Problem, given
an undirected graph $G$ with a list constraint $L(v) \subseteq V(H)$ for each
variable $v \in V(G)$, the objective is to find a list $H$-homomorphism
$f: V(G) \to V(H)$, that is, $f(v) \in L(v)$ for every $v \in V(G)$ and
$(f(u), f(v)) \in E(H)$ whenever $(u, v) \in E(G)$. We consider the following
problem: given a map $f: V(G) \to V(H)$ as an oracle access, the objective is
to decide with high probability whether $f$ is a list $H$-homomorphism or
\textit{far} from any list $H$-homomorphism. The efficiency of an algorithm is
measured by the number of accesses to $f$.
In this paper, we classify graphs $H$ with respect to the query complexity
for testing list $H$-homomorphisms and show that the following trichotomy
holds: (i) List $H$-homomorphisms are testable with a constant number of
queries if and only if $H$ is a reflexive complete graph or an irreflexive
complete bipartite graph. (ii) List $H$-homomorphisms are testable with a
sublinear number of queries if and only if $H$ is a bi-arc graph. (iii)
Testing list $H$-homomorphisms requires a linear number of queries if $H$ is
not a bi-arc graph.
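The abstract does not describe the testers themselves; as a point of contrast,
below is the naive sampler one would try first (Python; all identifiers are
ours, and this is not the paper's algorithm).

import random

# Naive one-sided sampler: accept unless a sampled vertex breaks its list
# constraint or a sampled edge of G is not mapped onto an edge of H.
# f is the oracle (a callable), H_adj maps each vertex of H to its
# neighbor set, and L maps each vertex of G to its allowed list.
def naive_sampler(G_vertices, G_edges, H_adj, L, f, trials=100):
    for _ in range(trials):
        v = random.choice(G_vertices)
        if f(v) not in L[v]:              # list constraint violated
            return False
        u, w = random.choice(G_edges)
        if f(w) not in H_adj[f(u)]:       # (f(u), f(w)) is not an edge of H
            return False
    return True                           # a true list H-homomorphism always passes

This sampler only catches maps that violate many constraints; the farness above
is Hamming distance to the nearest list $H$-homomorphism, which can be large
even when few constraints are violated, and closing that gap is exactly what
makes the trichotomy nontrivial.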
Average Sensitivity of Graph Algorithms
In modern applications of graph algorithms, where the graphs of interest are
large and dynamic, it is unrealistic to assume that an input representation
contains the full information of a graph being studied. Hence, it is desirable
to use algorithms that, even when only a (large) subgraph is available, output
solutions that are close to the solutions output when the whole graph is
available. We formalize this idea by introducing the notion of average
sensitivity of graph algorithms, which is the average earth mover's distance
between the output distributions of an algorithm on a graph and its subgraph
obtained by removing an edge, where the average is over the edges removed and
the distance between two outputs is the Hamming distance.
In this work, we initiate a systematic study of average sensitivity. After
deriving basic properties of average sensitivity such as composition, we
provide efficient approximation algorithms with low average sensitivities for
concrete graph problems, including the minimum spanning forest problem, the
global minimum cut problem, the minimum $s$-$t$ cut problem, and the maximum
matching problem. In addition, we prove that the average sensitivity of our
global minimum cut algorithm is almost optimal, by showing a nearly matching
lower bound. We also show that every algorithm for the 2-coloring problem has
average sensitivity linear in the number of vertices. One of the main ideas
involved in designing our algorithms with low average sensitivity is the
following fact: if the presence of a vertex or an edge in the solution output
by an algorithm can be decided locally, then the algorithm has a low average
sensitivity, allowing us to reuse the analyses of known sublinear-time
algorithms and local computation algorithms (LCAs). Using this connection, we
show that every LCA for 2-coloring has linear query complexity, thereby
answering an open question.
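In symbols (the notation is ours), the average sensitivity of a randomized
algorithm $A$ on a graph $G = (V, E)$ as defined above is
\[
  \frac{1}{|E|} \sum_{e \in E} d_{\mathrm{EMD}}\bigl(A(G),\, A(G - e)\bigr),
\]
where $A(\cdot)$ is viewed as a distribution over outputs and the earth mover's
distance $d_{\mathrm{EMD}}$ is taken with respect to the Hamming distance
between outputs.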
Spectral Norm Regularization for Improving the Generalizability of Deep Learning
We investigate the generalizability of deep learning based on the sensitivity
to input perturbation. We hypothesize that a model that is highly sensitive to
input perturbation generalizes poorly to unseen data. To reduce the sensitivity
to perturbation, we propose a simple and effective regularization method,
referred to as spectral norm regularization, which penalizes the high spectral
norm of weight matrices in neural networks. We provide supportive evidence for
this hypothesis by experimentally confirming that models trained with spectral
norm regularization generalize better than models trained with other baseline
methods.
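The abstract leaves the mechanics implicit; below is a minimal sketch of one
common way to implement such a penalty (assuming PyTorch; the power-iteration
estimator, the single iteration per step, and the coefficient lam are our
choices, not taken from the abstract).

import torch
import torch.nn.functional as F

def spectral_norm_sq(W, u, n_iters=1):
    # Power iteration: u approximates the leading left singular vector
    # of W and is carried over between training steps, so one iteration
    # per call is typically enough.
    for _ in range(n_iters):
        v = F.normalize(W.t() @ u, dim=0)
        u = F.normalize(W @ v, dim=0)
    u, v = u.detach(), v.detach()   # treat the singular vectors as constants
    sigma = u @ (W @ v)             # estimated largest singular value of W
    return sigma ** 2, u

def loss_with_penalty(task_loss, weight_matrices, u_vecs, lam=0.01):
    # Add (lam / 2) * sum of squared spectral norms over all layers.
    penalty = 0.0
    for i, W in enumerate(weight_matrices):
        s2, u_vecs[i] = spectral_norm_sq(W, u_vecs[i])
        penalty = penalty + s2
    return task_loss + 0.5 * lam * penalty

Each u_vecs[i] is initialized once as a random unit vector with as many entries
as rows of the corresponding weight matrix.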