236 research outputs found
On the Computational Complexity of Non-dictatorial Aggregation
We investigate when non-dictatorial aggregation is possible from an
algorithmic perspective, where non-dictatorial aggregation means that the votes
cast by the members of a society can be aggregated in such a way that the
collective outcome is not simply the choices made by a single member of the
society. We consider the setting in which the members of a society take a
position on a fixed collection of issues, where for each issue several
different alternatives are possible, but the combination of choices must belong
to a given set of allowable voting patterns. Such a set is called a
possibility domain if there is an aggregator that is non-dictatorial, operates
separately on each issue, and returns values among those cast by the society on
each issue. We design a polynomial-time algorithm that decides, given a set
of voting patterns, whether or not is a possibility domain. Furthermore, if
is a possibility domain, then the algorithm constructs in polynomial time
such a non-dictatorial aggregator for . We then show that the question of
whether a Boolean domain is a possibility domain is in NLOGSPACE. We also
design a polynomial-time algorithm that decides whether a set of voting
patterns is a uniform possibility domain, that is, whether it admits an aggregator that is
non-dictatorial even when restricted to any two positions for each issue. As in
the case of possibility domains, the algorithm also constructs in polynomial
time a uniform non-dictatorial aggregator, if one exists. Then, we turn our
attention to the case where the domain is given implicitly, either as the set of
assignments satisfying a propositional formula, or as a set of consistent
evaluations of a sequence of propositional formulas. In both cases, we provide
bounds on the complexity of deciding whether the domain is a (uniform) possibility domain.
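The issue-wise, conservative aggregators discussed above can be made concrete with a small sufficient test: a Boolean domain that is closed under coordinate-wise majority of any three of its patterns admits a majority aggregator, which is conservative (on each issue it returns a value cast by one of the three voters) and non-dictatorial, so the domain is a possibility domain. The sketch below illustrates only this one sufficient condition, not the paper's decision algorithm; the names maj3 and closed_under_majority are ours.

```python
from itertools import product

def maj3(a, b, c):
    """Boolean majority: returns the value appearing at least twice."""
    return a if a in (b, c) else b

def closed_under_majority(domain):
    """Check whether a set of Boolean voting patterns (tuples over {0, 1})
    is closed under issue-wise majority of every three patterns."""
    return all(
        tuple(maj3(x[i], y[i], z[i]) for i in range(len(x))) in domain
        for x, y, z in product(domain, repeat=3)
    )

# Closed: issue-wise majority of any three patterns stays in the set.
print(closed_under_majority({(0, 0), (0, 1), (1, 0)}))   # True

# Not-all-equal patterns over three issues are not majority-closed:
# the majority of (0,0,1), (0,1,0), (1,0,0) is (0,0,0), which is excluded.
nae = {(0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}
print(closed_under_majority(nae))                        # False
```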
Tractability in Constraint Satisfaction Problems: A Survey
Even though the Constraint Satisfaction Problem (CSP) is NP-complete, many tractable classes of CSP instances have been identified. After discussing different forms and uses of tractability, we describe some landmark tractable classes and survey recent theoretical results. Although we concentrate on the classical CSP, we also cover its important extensions to infinite domains and optimisation, as well as #CSP and QCSP.
Backdoor Sets for CSP
A backdoor set of a CSP instance is a set of variables whose instantiation moves the instance into a fixed class of tractable instances (an island of tractability). An interesting algorithmic task is to find a small backdoor set efficiently: once it is found we can solve the instance by solving a number of tractable instances. Parameterized complexity provides an adequate framework for studying and solving this algorithmic task, where the size of the backdoor set provides a natural parameter. In this survey we present some recent parameterized complexity results on CSP backdoor sets, focusing on backdoor sets into islands of tractability that are defined in terms of constraint languages.
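The solving scheme described above can be sketched directly: enumerate all instantiations of the backdoor variables and hand each reduced instance to a solver for the island of tractability, for a total cost of |D|^|B| calls to a polynomial-time solver. In this minimal sketch the island solver is simulated by plain backtracking, and all names (solve_via_backdoor, violates, brute_force) are illustrative, not from the survey.

```python
from itertools import product

def violates(assignment, constraints):
    """True if some fully-assigned constraint scope falls outside its relation."""
    return any(
        all(v in assignment for v in scope)
        and tuple(assignment[v] for v in scope) not in relation
        for scope, relation in constraints
    )

def brute_force(variables, domains, constraints, fixed):
    """Stand-in for an island-of-tractability solver: here, plain
    backtracking search over the remaining (non-backdoor) variables."""
    if not variables:
        return dict(fixed)
    v, rest = variables[0], variables[1:]
    for value in domains[v]:
        trial = {**fixed, v: value}
        if not violates(trial, constraints):
            sol = brute_force(rest, domains, constraints, trial)
            if sol is not None:
                return sol
    return None

def solve_via_backdoor(variables, domains, constraints, backdoor):
    """Enumerate all instantiations of the backdoor variables; each one
    reduces the instance to a residual instance assumed to be tractable."""
    rest = [v for v in variables if v not in backdoor]
    for values in product(*(domains[v] for v in backdoor)):
        partial = dict(zip(backdoor, values))
        if violates(partial, constraints):
            continue  # this instantiation already breaks a constraint
        sol = brute_force(rest, domains, constraints, partial)
        if sol is not None:
            return sol
    return None

# Toy instance: x != y, y != z, x == z over domain {0, 1}, backdoor {x}.
neq, eq = {(0, 1), (1, 0)}, {(0, 0), (1, 1)}
constraints = [(("x", "y"), neq), (("y", "z"), neq), (("x", "z"), eq)]
domains = {"x": (0, 1), "y": (0, 1), "z": (0, 1)}
print(solve_via_backdoor(["x", "y", "z"], domains, constraints, ["x"]))
# {'x': 0, 'y': 1, 'z': 0}
```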
Combining Treewidth and Backdoors for CSP
We show that CSP is fixed-parameter tractable when parameterized by the treewidth of a backdoor into any tractable CSP problem over a finite constraint language. This result combines the two prominent approaches for achieving tractability for CSP: (i) structural restrictions on the interaction between the variables and the constraints and (ii) language restrictions on the relations that can be used inside the constraints. Apart from defining the notion of backdoor-treewidth and showing how backdoors of small treewidth can be used to efficiently solve CSP, our main technical contribution is a fixed-parameter algorithm that finds a backdoor of small treewidth.
A Graph Algorithmic Approach to Separate Direct from Indirect Neural Interactions
Network graphs have become a popular tool to represent complex systems
composed of many interacting subunits; especially in neuroscience, network
graphs are increasingly used to represent and analyze functional interactions
between neural sources. Interactions are often reconstructed using pairwise
bivariate analyses, overlooking their multivariate nature: such analyses
neglect that investigating the effect of one source on a target requires
taking all other sources into account as potential nuisance variables, and
that combinations of sources may act jointly on a given target. Bivariate analyses produce
networks that may contain spurious interactions, which reduce the
interpretability of the network and its graph metrics. A truly multivariate
reconstruction, however, is computationally intractable due to combinatorial
explosion in the number of potential interactions. Thus, we have to resort to
approximate methods to handle the intractability of multivariate interaction
reconstruction, and thereby enable the use of networks in neuroscience. Here,
we suggest such an approximate approach in the form of an algorithm that
extends fast bivariate interaction reconstruction by identifying potentially
spurious interactions post-hoc: the algorithm flags potentially spurious edges,
which may then be pruned from the network. This produces a statistically
conservative network approximation that is guaranteed to contain non-spurious
interactions only. We describe the algorithm and present a reference
implementation to test its performance. We discuss the algorithm in relation to
other approximate multivariate methods and highlight suitable application
scenarios. Our approach is a tractable and data-efficient way of reconstructing
approximate networks of multivariate interactions. It is preferable when
available data are limited or when fully multivariate approaches are
computationally infeasible.
Comment: 24 pages, 8 figures; published in PLOS ONE
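As a toy illustration of the post-hoc flagging step, suppose the only structural clue available is the reconstructed bivariate graph itself: an edge s → t can be flagged as potentially spurious whenever some third node drives both s and t, since that common driver could explain the bivariate link. This is a deliberately simplified stand-in for the paper's actual graph criterion; the function flag_spurious and the common-driver rule are our assumptions.

```python
def flag_spurious(edges):
    """Flag a directed edge (s, t) as potentially spurious whenever a third
    node u drives both s and t (u -> s and u -> t), i.e. u is a candidate
    common driver that could account for the bivariate link s -> t."""
    edge_set = set(edges)
    nodes = {n for e in edges for n in e}
    flagged = set()
    for (s, t) in edges:
        for u in nodes - {s, t}:
            if (u, s) in edge_set and (u, t) in edge_set:
                flagged.add((s, t))
                break
    return flagged

# u drives both s and t, so the bivariate edge s -> t is flagged; pruning
# all flagged edges yields the statistically conservative approximation.
edges = [("u", "s"), ("u", "t"), ("s", "t")]
print(flag_spurious(edges))  # {('s', 't')}
```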
Guarantees and Limits of Preprocessing in Constraint Satisfaction and Reasoning
We present a first theoretical analysis of the power of polynomial-time
preprocessing for important combinatorial problems from various areas in AI. We
consider problems from Constraint Satisfaction, Global Constraints,
Satisfiability, Nonmonotonic and Bayesian Reasoning under structural
restrictions. All these problems involve two tasks: (i) identifying the
structure in the input as required by the restriction, and (ii) using the
identified structure to solve the reasoning task efficiently. We show that for
most of the considered problems, task (i) admits a polynomial-time
preprocessing to a problem kernel whose size is polynomial in a structural
problem parameter of the input, in contrast to task (ii) which does not admit
such a reduction to a problem kernel of polynomial size, subject to a
complexity theoretic assumption. As a notable exception we show that the
consistency problem for the AtMost-NValue constraint admits a polynomial kernel
consisting of a quadratic number of variables and domain values. Our results
provide firm worst-case guarantees and theoretical boundaries for the
performance of polynomial-time preprocessing algorithms for the considered
problems.
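The notion of a polynomial problem kernel used above can be illustrated with a classic example outside the paper's scope: Buss's kernelization for k-Vertex-Cover, which in polynomial time shrinks any instance to at most k^2 edges. Any vertex of degree greater than k must be in every cover of size at most k, and once no such vertex remains, an instance with more than k^2 edges can be rejected outright.

```python
def buss_kernel(edges, k):
    """Buss's kernelization for k-Vertex-Cover. Returns a triple
    (reduced_edges, remaining_k, forced_vertices), or None when the
    instance provably has no vertex cover of size at most k."""
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        v = high[0]
        forced.add(v)                       # v is in every small cover
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return None                     # budget exhausted
    if len(edges) > k * k:                  # kernel size bound exceeded
        return None
    return edges, k, forced

# A star with 4 leaves and k = 3: the centre has degree 4 > 3 and is forced,
# leaving an empty kernel and budget 2.
star = [("c", x) for x in "abde"]
print(buss_kernel(star, 3))  # (set(), 2, {'c'})
```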