Algorithms and complexity for approximately counting hypergraph colourings and related problems
The past decade has witnessed advances in the design of efficient algorithms for approximating the number of solutions to constraint satisfaction problems (CSPs), especially in the local lemma regime. However, the phase transition for computational tractability is not known. This thesis is dedicated to the prototypical problem of this kind of CSP, hypergraph colouring. Parameterised by the number of colours q, the arity of each hyperedge k, and the maximum vertex degree Δ, this problem falls into the regime of the Lovász local lemma when Δ ≲ qᵏ. Previously, however, fast approximate counting algorithms were known only when Δ ≲ qᵏ/³, and there was no known inapproximability result. Our contribution is twofold, stated as follows.
• When q, k ≥ 4 are even and Δ ≥ 5·qᵏ/², approximating the number of hypergraph colourings is NP-hard.
• When the input hypergraph is linear and Δ ≲ qᵏ/², a fast approximate counting algorithm does exist.
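The three degree regimes above can be compared numerically. A minimal sketch (not from the thesis; constants and polynomial factors are deliberately ignored):

```python
def regimes(q: int, k: int) -> dict:
    """Simplified degree thresholds for q-colourings of k-uniform
    hypergraphs with maximum vertex degree D (constants omitted)."""
    return {
        "prior_algorithms": q ** (k / 3),  # previously known algorithms: D ≲ q^(k/3)
        "this_thesis": q ** (k / 2),       # new hardness/algorithm threshold: D ≲ q^(k/2)
        "local_lemma": q ** k,             # Lovász local lemma regime: D ≲ q^k
    }

# With q = k = 4: q^(k/3) ≈ 6.3, q^(k/2) = 16, q^k = 256, so the new
# q^(k/2) threshold sits strictly between the old algorithmic bound
# and the local lemma regime.
```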
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Algebraic solutions of linear differential equations: an arithmetic approach
Given a linear differential equation with coefficients in , an
important question is to know whether its full space of solutions consists of
algebraic functions, or at least if one of its specific solutions is algebraic.
After presenting motivating examples coming from various branches of
mathematics, we advertise in an elementary way a beautiful local-global
arithmetic approach to these questions, initiated by Grothendieck in the late
sixties. This approach has deep ramifications and leads to the still unsolved
Grothendieck–Katz p-curvature conjecture.
Comment: 47 pages
Maximizing Neutrality in News Ordering
The detection of fake news has received increasing attention over the past
few years, but there are more subtle ways of deceiving one's audience. In
addition to the content of news stories, their presentation can also be made
misleading or biased. In this work, we study the impact of the ordering of news
stories on audience perception. We introduce the problems of detecting
cherry-picked news orderings and maximizing neutrality in news orderings. We
prove hardness results and present several algorithms for approximately solving
these problems. Furthermore, we provide extensive experimental results and
present evidence of potential cherry-picking in the real world.
Comment: 14 pages, 13 figures, accepted to KDD '2
Improved Approximation Algorithms for Steiner Connectivity Augmentation Problems
The Weighted Connectivity Augmentation Problem is the problem of augmenting
the edge-connectivity of a given graph by adding links of minimum total cost.
This work focuses on connectivity augmentation problems in the Steiner setting,
where we are not interested in the connectivity between all nodes of the graph,
but only the connectivity between a specified subset of terminals.
We consider two related settings. In the Steiner Augmentation of a Graph
problem (k-SAG), we are given a k-edge-connected subgraph H of a graph
G. The goal is to augment H by including links and nodes from G of
minimum cost so that the edge-connectivity between nodes of H increases by 1.
In the Steiner Connectivity Augmentation Problem (k-SCAP), we are given a
Steiner k-edge-connected graph connecting terminals R, and we seek to add
links of minimum cost to create a Steiner (k+1)-edge-connected graph for R.
Note that k-SAG is a special case of k-SCAP.
All of the above problems can be approximated to within a factor of 2 using,
e.g., Jain's iterative rounding algorithm for Survivable Network Design. In this
work, we leverage the framework of Traub and Zenklusen to give an improved
approximation for the Steiner Ring Augmentation Problem (SRAP): given a cycle
embedded in a larger graph and a subset of terminals, choose a subset of links
of minimum cost so that the augmented graph has 3 pairwise edge-disjoint paths
between every pair of terminals.
We show this yields a polynomial-time algorithm with an improved approximation
ratio for k-SCAP. We obtain a further improved approximation guarantee for SRAP
in a special case, which yields an improved approximation for k-SAG for any k
Parameterized Complexity of Binary CSP: Vertex Cover, Treedepth, and Related Parameters
We investigate the parameterized complexity of Binary CSP parameterized by the vertex cover number and the treedepth of the constraint graph, as well as by a selection of related modulator-based parameters. The main findings are as follows:
- Binary CSP parameterized by the vertex cover number is W[3]-complete. More generally, for every positive integer d, Binary CSP parameterized by the size of a modulator to a treedepth-d graph is W[2d+1]-complete. This provides a new family of natural problems that are complete for odd levels of the W-hierarchy.
- We introduce a new complexity class XSLP, defined so that Binary CSP parameterized by treedepth is complete for this class. We provide two equivalent characterizations of XSLP: the first one relates XSLP to a model of an alternating Turing machine with certain restrictions on co-nondeterminism and space complexity, while the second one links XSLP to the problem of model-checking first-order logic with suitably restricted universal quantification. Interestingly, the proof of the machine characterization of XSLP uses the concept of universal trees, which are prominently featured in recent work on parity games.
- We describe a new complexity hierarchy sandwiched between the W-hierarchy and the A-hierarchy: for every odd t, we introduce a parameterized complexity class S[t] with W[t] ⊆ S[t] ⊆ A[t], defined using a parameter that interpolates between the vertex cover number and the treedepth. We expect that many of the studied classes will be useful in the future for pinpointing the complexity of various structural parameterizations of graph problems.
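To make the vertex-cover parameterization concrete, here is a minimal brute-force sketch (illustrative only, not from the paper; all names are hypothetical). Once the cover vertices are assigned, the remaining vertices form an independent set in the constraint graph, so each can be filled in independently; this gives a |domain|^|C|·poly algorithm, i.e. XP in the cover size |C|, while the W[3]-completeness above indicates that fixed-parameter tractability in |C| alone is unlikely.

```python
from itertools import product

def solve_binary_csp(variables, domain, constraints, cover):
    """Brute force over a vertex cover C of the constraint graph.

    constraints maps ordered pairs (u, v) to the set of allowed value
    pairs; cover must touch every constraint.
    Runs in O(|domain|**len(cover) * poly) time.
    """
    cover = list(cover)
    free = [v for v in variables if v not in cover]

    def ok(assign, u, v):
        # Check the constraint between u and v (either orientation), if any.
        if (u, v) in constraints and (assign[u], assign[v]) not in constraints[(u, v)]:
            return False
        if (v, u) in constraints and (assign[v], assign[u]) not in constraints[(v, u)]:
            return False
        return True

    for values in product(domain, repeat=len(cover)):
        assign = dict(zip(cover, values))
        # Constraints with both endpoints inside the cover.
        if not all(ok(assign, u, v) for i, u in enumerate(cover)
                   for v in cover[i + 1:]):
            continue
        # Every remaining constraint joins a free vertex to the cover,
        # so each free vertex can be assigned independently.
        for x in free:
            choice = next((val for val in domain
                           if all(ok({**assign, x: val}, x, c) for c in cover)),
                          None)
            if choice is None:
                break
            assign[x] = choice
        else:
            return assign
    return None
```

For example, with variables a, b, c, domain {0, 1}, constraints "a ≠ b" and "a = c = 0", and cover {a}, the sketch finds the assignment a=0, b=1, c=0.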
Bounded Relativization
Relativization is one of the most fundamental concepts in complexity theory, which explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class C, we say that a statement is C-relativizing if the statement holds relative to every oracle A ∈ C. It is easy to see that every result that relativizes also C-relativizes for every complexity class C. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing.
First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant ε > 0, BPE^{MCSP}/2^{εn} ⊄ SIZE[2^n/n].
We prove this by PSPACE-relativizing the recent pseudodeterministic pseudorandom generator by Lu, Oliveira, and Santhanam (STOC 2021).
Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
- Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
- Oliveira and Santhanam (STOC 2017) proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
- Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.
Cherry picking in forests: A new characterization for the unrooted hybrid number of two phylogenetic trees
Phylogenetic networks are a special type of graph that generalize
phylogenetic trees and that are used to model non-treelike evolutionary
processes such as recombination and hybridization. In this paper, we consider
unrooted phylogenetic networks, i.e. simple, connected graphs N = (V, E)
with leaf set X, for some set X of species, in which
every internal vertex of N has degree three. One approach used to
construct such phylogenetic networks is to take as input a collection T
of phylogenetic trees and to look for a network N
that contains each tree in T and that minimizes the quantity
|E| − |V| + 1 over all such networks. Such a network always
exists, and this quantity for an optimal network
is called the hybrid number of T. In this paper, we give a new
characterization for the hybrid number in the case that T consists of two
trees. This characterization is given in terms of a cherry picking sequence for
the two trees, although to prove that our characterization holds we need to
define the sequence more generally for two forests. Cherry picking sequences
have been intensively studied for collections of rooted phylogenetic trees, but
our new sequences are the first variant of this concept that can be applied in
the unrooted setting. Since the hybrid number of two trees is equal to the
well-known tree bisection and reconnection (TBR) distance between the two trees,
our new characterization also provides an alternative way to understand this
important tree distance.
Consistency-Checking Problems: A Gateway to Parameterized Sample Complexity
Recently, Brand, Ganian and Simonov introduced a parameterized refinement of
the classical PAC-learning sample complexity framework. A crucial outcome of
their investigation is that for a very wide range of learning problems, there
is a direct and provable correspondence between fixed-parameter
PAC-learnability (in the sample complexity setting) and the fixed-parameter
tractability of a corresponding "consistency checking" search problem (in the
setting of computational complexity). The latter can be seen as generalizations
of classical search problems where instead of receiving a single instance, one
receives multiple yes- and no-examples and is tasked with finding a solution
which is consistent with the provided examples.
Apart from a few initial results, consistency checking problems are almost
entirely unexplored from a parameterized complexity perspective. In this
article, we provide an overview of these problems and their connection to
parameterized sample complexity, with the primary aim of facilitating further
research in this direction. Afterwards, we establish the fixed-parameter
(in)tractability of some of the arguably most natural consistency checking
problems on graphs, and show that their complexity-theoretic behavior is
surprisingly different from that of classical decision problems. Our new
results cover consistency checking variants of problems as diverse as (k-)Path,
Matching, 2-Coloring, Independent Set and Dominating Set, among others.
Variants of Pseudo-deterministic Algorithms and Duality in TFNP
We introduce a new notion of "faux-deterministic" algorithms for search problems in query complexity. Roughly, for a search problem S, a faux-deterministic algorithm is a probability distribution over deterministic algorithms such that no computationally bounded adversary making black-box queries to a sampled algorithm A can find an input x on which A fails to solve S (i.e., (x, A(x)) ∉ S). Faux-deterministic algorithms are a relaxation of pseudo-deterministic algorithms, which are randomized algorithms with the guarantee that for any given input x, the algorithm outputs a unique output with high probability. Pseudo-deterministic algorithms are statistically indistinguishable from deterministic algorithms, while faux-deterministic algorithms relax this statistical indistinguishability to computational indistinguishability.
We prove that in the query model, every verifiable search problem that has a randomized algorithm also has a faux-deterministic algorithm. By considering the pseudo-deterministic lower bound of Goldwasser et al. (CCC 2021), we immediately obtain an exponential gap between pseudo-deterministic and faux-deterministic query complexities. We additionally show that our faux-deterministic algorithm is also secure against quantum adversaries that can make black-box queries in superposition.
We highlight two reasons to study faux-deterministic algorithms. First, for practical purposes, one can use a faux-deterministic algorithm instead of a pseudo-deterministic one in most cases where the latter is required. Second, since efficient faux-deterministic algorithms exist even when pseudo-deterministic ones do not, their existence demonstrates a barrier to proving pseudo-deterministic lower bounds: lower bounds on pseudo-determinism must distinguish pseudo-determinism from faux-determinism.
Finally, changing our perspective to the adversary's viewpoint, we introduce a notion of a "dual problem" S* for search problems S. In the dual problem S*, the input is an algorithm A purporting to solve S, and our goal is to find an adverse input x on which A fails to solve S. We discuss several properties in the query and Turing machine models that show the new problem S* is analogous to a dual for S.