Hardness of Finding Independent Sets in 2-Colorable Hypergraphs and of Satisfiable CSPs
This work revisits the PCP verifiers used in the works of Hastad [Has01], Guruswami et al. [GHS02], Holmerin [Hol02], and Guruswami [Gur00] for satisfiable Max-E3-SAT and Max-Ek-Set-Splitting, and for independent set in 2-colorable 4-uniform hypergraphs. We provide simpler and more efficient PCP verifiers to prove the following improved hardness results, assuming that NP \not\subseteq DTIME(N^{O(log log N)}):
* There is no polynomial-time algorithm that, given an n-vertex 2-colorable 4-uniform hypergraph, finds an independent set of n/(log n)^c vertices, for some constant c > 0.
* There is no polynomial-time algorithm that satisfies a 7/8 + 1/(log n)^c fraction of the clauses of a satisfiable Max-E3-SAT instance of size n, for some constant c > 0.
* For any fixed k >= 4, there is no polynomial-time algorithm that finds a partition splitting a (1 - 2^{-k+1}) + 1/(log n)^c fraction of the k-sets of a satisfiable Max-Ek-Set-Splitting instance of size n, for some constant c > 0.
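The thresholds 7/8 and 1 - 2^{-k+1} above are exactly what a uniformly random assignment achieves in expectation, so the results say one cannot beat random guessing by more than 1/(log n)^c. A short enumeration (an illustration added here, not part of the original abstract) confirms both numbers:

```python
from itertools import product

# An E3-SAT clause on 3 distinct variables is falsified only when all
# three literals are false, so a uniform random assignment satisfies it
# with probability 7/8.
satisfied = sum(1 for bits in product([0, 1], repeat=3) if any(bits))
assert satisfied / 8 == 7 / 8

# A random 2-partition splits a k-set unless all k elements land on the
# same side: probability (2^k - 2) / 2^k = 1 - 2^{-k+1}.
for k in range(4, 10):
    split = sum(1 for sides in product([0, 1], repeat=k)
                if 0 < sum(sides) < k)
    assert split / 2**k == 1 - 2**(-k + 1)
```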
Our hardness factor for independent set in 2-colorable 4-uniform hypergraphs is an exponential improvement over the previous results of Guruswami et al. [GHS02] and Holmerin [Hol02]. Similarly, our inapproximability of (log n)^{-c} beyond the random assignment threshold for Max-E3-SAT and Max-Ek-Set-Splitting is an exponential improvement over the previous bounds proved in [Has01], [Hol02], and [Gur00]. The PCP verifiers used in our results avoid the variable bias parameter used in previous works, which leads to the improved hardness thresholds in addition to substantially simplifying the analysis. Apart from standard techniques from Fourier analysis, for the first result we use a mixing estimate for Markov chains based on uniform reverse hypercontractivity over general product spaces, from the work of Mossel et al. [MOS13].
Relaxed Locally Correctable Codes
Locally decodable codes (LDCs) and locally correctable codes (LCCs) are error-correcting codes in which individual bits of the message and codeword, respectively, can be recovered by querying only few bits from a noisy codeword. These codes have found numerous applications both in theory and in practice.
A natural relaxation of LDCs, introduced by Ben-Sasson et al. (SICOMP, 2006), allows the decoder to reject (i.e., refuse to answer) in case it detects that the codeword is corrupt. They call such a decoder a relaxed decoder and construct a constant-query relaxed LDC with almost-linear blocklength, which is sub-exponentially better than what is known for (full-fledged) LDCs in the constant-query regime.
We consider an analogous relaxation for local correction. Thus, a relaxed local corrector reads only few bits from a (possibly) corrupt codeword and either recovers the desired bit of the codeword, or rejects in case it detects a corruption.
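To make the recover-or-reject behavior concrete, here is a toy sketch using the classical Hadamard code, which is locally correctable via linearity: C(x) = C(x XOR r) XOR C(r) for every r. This is only an illustration of the relaxed-corrector interface under a simple code, not the construction from this paper:

```python
import random

def hadamard_encode(m):
    """Hadamard codeword of message m (tuple of bits): one bit per x,
    namely the inner product <m, x> mod 2."""
    k = len(m)
    return [sum(mi & ((x >> i) & 1) for i, mi in enumerate(m)) % 2
            for x in range(2 ** k)]

def relaxed_correct(w, x, k, trials=2):
    """Recover codeword bit x from a possibly corrupt word w, using a
    constant number of queries, or return None ('reject') when the
    independent random votes disagree (a detected corruption)."""
    votes = set()
    for _ in range(trials):
        r = random.randrange(2 ** k)
        votes.add(w[x ^ r] ^ w[r])   # linearity: C(x) = C(x^r) ^ C(r)
    if len(votes) > 1:
        return None                  # inconsistency detected: reject
    return votes.pop()

m = (1, 0, 1)
w = hadamard_encode(m)
# On an uncorrupted codeword the corrector always returns the true bit.
assert all(relaxed_correct(w, x, 3) == w[x] for x in range(8))
```

On a corrupted word, each query pair either returns the correct bit (when both queries miss the corruption), a wrong bit, or triggers a rejection when the votes clash; the relaxation is precisely that rejection is an allowed answer.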
We give two constructions of relaxed LCCs in two regimes, where the first optimizes the query complexity and the second optimizes the rate:
1. Constant Query Complexity: A relaxed LCC with polynomial blocklength whose corrector only reads a constant number of bits of the codeword. This is a sub-exponential improvement over the best constant query (full-fledged) LCCs that are known.
2. Constant Rate: A relaxed LCC with constant rate (i.e., linear blocklength) with quasi-polylogarithmic query complexity. This is a nearly sub-exponential improvement over the query complexity of a recent (full-fledged) constant-rate LCC of Kopparty et al. (STOC, 2016).
Detecting communities is Hard (And Counting Them is Even Harder)
We consider the algorithmic problem of community detection in networks. Given an undirected friendship graph G, a subset S of vertices is an (a,b)-community if:
* every member of the community is friends with at least an a-fraction of the community; and
* every non-member is friends with at most a b-fraction of the community.
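The definition above is easy to check directly for a given set S. A minimal checker, assuming the reading in which a member's fraction is taken over the other vertices of S (vertices are not their own friends):

```python
def is_ab_community(adj, S, a, b):
    """Return True iff S is an (a, b)-community in the graph given by
    adjacency sets `adj`: every member of S has at least a*|S| friends
    inside S, and every non-member has at most b*|S| friends inside S."""
    S = set(S)
    for v in adj:
        inside = len(adj[v] & S)
        if v in S and inside < a * len(S):
            return False
        if v not in S and inside > b * len(S):
            return False
    return True

# A triangle {0, 1, 2} with a pendant vertex 3 attached to vertex 0:
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
assert is_ab_community(adj, {0, 1, 2}, 2/3, 1/3)        # each member: 2 of 3
assert not is_ab_community(adj, {0, 1, 2}, 1, 1/3)      # no member knows all 3
```

Note that verifying a candidate S is polynomial; the hardness results below concern finding or counting such sets.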
[Arora, Ge, Sachdeva, Schoenebeck 2012] gave a quasi-polynomial time algorithm for enumerating all the (a,b)-communities for any constants a > b.
Here, we prove that, assuming the Exponential Time Hypothesis (ETH), quasi-polynomial time is in fact necessary, and even for a much weaker approximation desideratum. Namely, distinguishing between:
* G contains a (1, o(1))-community; and
* G does not contain a (b, b+o(1))-community,
for any b.
We also prove that counting the number of (1, o(1))-communities requires quasi-polynomial time, assuming the weaker #ETH.
QPTAS and Subexponential Algorithm for Maximum Clique on Disk Graphs
A (unit) disk graph is the intersection graph of closed (unit) disks in the plane. Almost three decades ago, an elegant polynomial-time algorithm was found for Maximum Clique on unit disk graphs [Clark, Colbourn, Johnson; Discrete Mathematics '90]. Since then, it has been an intriguing open question whether or not tractability can be extended to general disk graphs.
We show the rather surprising structural result that a disjoint union of cycles is the complement of a disk graph if and only if at most one of those cycles is of odd length. From that, we derive the first QPTAS and a subexponential algorithm running in time 2^{O~(n^{2/3})} for Maximum Clique on disk graphs.
In stark contrast, Maximum Clique on intersection graphs of filled ellipses or filled triangles is unlikely to have such algorithms, even when the ellipses are close to unit disks. Indeed, we show that there is a constant ratio of approximation which cannot be attained even in time 2^{n^{1-epsilon}}, unless the Exponential Time Hypothesis fails.
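For readers unfamiliar with the setup: a disk graph puts an edge between two disks exactly when they intersect, i.e., when the distance between centres is at most the sum of the radii, and Maximum Clique asks for the largest set of pairwise-intersecting disks. The following sketch builds the intersection graph from (x, y, r) triples and finds a maximum clique by brute force; it illustrates the problem only, and is in no way the paper's QPTAS or subexponential algorithm:

```python
from itertools import combinations
from math import hypot

def disk_graph(disks):
    """Edge set of the intersection graph of closed disks given as
    (x, y, r) triples: i ~ j iff centre distance <= r_i + r_j."""
    return {(i, j) for i, j in combinations(range(len(disks)), 2)
            if hypot(disks[i][0] - disks[j][0],
                     disks[i][1] - disks[j][1]) <= disks[i][2] + disks[j][2]}

def max_clique(n, edges):
    """Brute-force maximum clique (exponential time; illustration only)."""
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            if all((i, j) in edges for i, j in combinations(cand, 2)):
                return cand
    return ()

# Three mutually intersecting unit disks and one far-away disk.
disks = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (10, 10, 1)]
edges = disk_graph(disks)
assert max_clique(4, edges) == (0, 1, 2)
```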
- …