2 research outputs found

    Complexity and Approximability of Parameterized MAX-CSPs

    We study the optimization version of constraint satisfaction problems (Max-CSPs) in the framework of parameterized complexity; the goal is to compute the maximum fraction of constraints that can be satisfied simultaneously. In standard CSPs, we want to decide whether this fraction equals one. The parameters we investigate are structural measures, such as the treewidth or the clique-width of the variable-constraint incidence graph of the CSP instance. We consider Max-CSPs with the constraint types AND, OR, PARITY, and MAJORITY, and with various parameters k, and we attempt to fully classify them into the following three cases:
    1. The exact optimum can be computed in FPT time.
    2. It is W[1]-hard to compute the exact optimum, but there is a randomized FPT approximation scheme (FPT-AS), which computes a (1−ϵ)-approximation in time f(k,ϵ)⋅poly(n).
    3. There is no FPT-AS unless FPT=W[1].
    For the corresponding standard CSPs, we establish FPT vs. W[1]-hardness results.
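    A minimal illustration of the quantity being optimized (not of the paper's FPT or approximation algorithms): the Python sketch below brute-forces the maximum satisfiable fraction for a tiny Boolean instance with AND, OR, PARITY, and MAJORITY constraints. The instance, the function names, and the PARITY/MAJORITY conventions (odd number of 1s, strict majority of 1s) are illustrative assumptions.

from itertools import product

def satisfied(ctype, values):
    """Evaluate one constraint on the Boolean values of its scope."""
    if ctype == "AND":
        return all(values)
    if ctype == "OR":
        return any(values)
    if ctype == "PARITY":           # assumed convention: odd number of 1s
        return sum(values) % 2 == 1
    if ctype == "MAJORITY":         # assumed convention: strictly more 1s than 0s
        return 2 * sum(values) > len(values)
    raise ValueError(ctype)

def max_fraction(num_vars, constraints):
    """Exhaustive search over all 2^n assignments (tiny instances only)."""
    best = 0.0
    for assignment in product((0, 1), repeat=num_vars):
        sat = sum(satisfied(ctype, [assignment[v] for v in scope])
                  for ctype, scope in constraints)
        best = max(best, sat / len(constraints))
    return best

# Toy instance: 4 Boolean variables, constraints given as (type, scope) pairs.
instance = [("OR", (0, 1)), ("AND", (1, 2)),
            ("PARITY", (0, 2, 3)), ("MAJORITY", (1, 2, 3))]
print(max_fraction(4, instance))    # 1.0: the all-ones assignment satisfies everything

    This exhaustive search takes 2^n time; the classification above asks when a structural parameter k, such as treewidth, lets the exact optimum or a (1−ϵ)-approximation be computed in FPT time instead.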

    From Weak to Strong LP Gaps for All CSPs

    We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by Ω(log n / log log n) levels of the Sherali-Adams hierarchy on instances of size n. It was proved by Chan et al. [FOCS 2013] (and recently strengthened by Kothari et al. [STOC 2017]) that for CSPs, any polynomial-size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combining this with our result also implies that any polynomial-size LP extended formulation is no stronger than simply the basic LP, which can be thought of as the base level of the Sherali-Adams hierarchy. This essentially gives a dichotomy result for the approximation of CSPs by polynomial-size LP extended formulations. Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which Ω(log log n) levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to Ω(log n / log log n) levels.
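    A hedged sketch of the "basic LP" relaxation the abstract refers to, for a toy Boolean CSP: per-variable marginals and per-constraint local distributions, tied together by consistency constraints, with the objective being the expected fraction of satisfied constraints. It is built and solved here with scipy.optimize.linprog; the instance, the function basic_lp_value, and the encoding details are assumptions for illustration, not taken from the paper.

from itertools import product

import numpy as np
from scipy.optimize import linprog

def basic_lp_value(num_vars, constraints):
    """constraints: list of (scope, predicate), predicate: tuple of 0/1 values -> bool."""
    # LP variables: marginals mu[v, a] first, then local distributions lam[j, alpha].
    mu_index = {(v, a): i for i, (v, a) in enumerate(product(range(num_vars), (0, 1)))}
    lam_index, col = {}, len(mu_index)
    for j, (scope, _) in enumerate(constraints):
        for alpha in product((0, 1), repeat=len(scope)):
            lam_index[(j, alpha)] = col
            col += 1

    # Objective: expected fraction of satisfied constraints (negated, linprog minimizes).
    c = np.zeros(col)
    for j, (scope, pred) in enumerate(constraints):
        for alpha in product((0, 1), repeat=len(scope)):
            if pred(alpha):
                c[lam_index[(j, alpha)]] = -1.0 / len(constraints)

    A_eq, b_eq = [], []
    for v in range(num_vars):                # each marginal is a probability distribution
        row = np.zeros(col)
        row[mu_index[(v, 0)]] = row[mu_index[(v, 1)]] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
    for j, (scope, _) in enumerate(constraints):
        for pos, v in enumerate(scope):      # local distributions agree with the marginals
            for a in (0, 1):
                row = np.zeros(col)
                row[mu_index[(v, a)]] = -1.0
                for alpha in product((0, 1), repeat=len(scope)):
                    if alpha[pos] == a:
                        row[lam_index[(j, alpha)]] += 1.0
                A_eq.append(row)
                b_eq.append(0.0)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, 1)] * col, method="highs")
    return -res.fun                          # LP optimum: an upper bound on the true optimum

# Toy instance: two Boolean variables with the contradictory constraints x0 != x1 and x0 == x1.
toy = [((0, 1), lambda a: a[0] != a[1]),
       ((0, 1), lambda a: a[0] == a[1])]
print(basic_lp_value(2, toy))                # 1.0, although any real assignment satisfies only 1/2

    On this toy instance the LP value is 1 even though no single assignment satisfies more than one of the two constraints, illustrating the kind of gap that Sherali-Adams levels are meant to narrow; the result above compares the worst-case approximation guarantees of the basic LP and of Ω(log n / log log n) levels of the hierarchy.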