An Empirical Study of the Manipulability of Single Transferable Voting
Voting is a simple mechanism to combine the preferences of multiple
agents. Agents may try to manipulate the result of voting by misreporting
their preferences. One barrier that might exist to such manipulation is
computational complexity. In particular, it has been shown that it is NP-hard
to compute how to manipulate a number of different voting rules. However,
NP-hardness only bounds the worst-case complexity. Recent theoretical results
suggest that manipulation may often be easy in practice. In this paper, we
study empirically the manipulability of single transferable voting (STV) to
determine if computational complexity is really a barrier to manipulation. STV
was one of the first voting rules shown to be NP-hard. It also appears to be one of
the harder voting rules to manipulate. We sample a number of distributions of
votes, including uniform ones and real-world elections. In almost every election in
our experiments, it was easy to compute how a single agent could manipulate the
election or to prove that manipulation by a single agent was impossible.
Comment: To appear in Proceedings of the 19th European Conference on Artificial Intelligence (ECAI 2010).
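The brute-force computation the experiments rest on can be sketched in a few lines: enumerate every ballot the single manipulator could cast and run STV on each. The sketch below assumes the single-winner (instant-runoff) form of STV and lexicographic tie-breaking when eliminating; the function names are illustrative, not the paper's code.

```python
from itertools import permutations

def stv_winner(ballots, candidates):
    """Single-winner STV (instant runoff): repeatedly eliminate the
    candidate with the fewest first-place votes among those remaining.
    Ties broken lexicographically -- an assumption, not the paper's rule."""
    remaining = set(candidates)
    while len(remaining) > 1:
        tallies = {c: 0 for c in remaining}
        for ballot in ballots:
            for c in ballot:
                if c in remaining:
                    tallies[c] += 1
                    break
        loser = min(remaining, key=lambda c: (tallies[c], c))
        remaining.discard(loser)
    return remaining.pop()

def single_manipulation(honest_ballots, candidates, preferred):
    """Try every ballot the lone manipulator could cast; return one that
    elects `preferred`, or None as a proof that manipulation is impossible."""
    for fake in permutations(candidates):
        if stv_winner(honest_ballots + [list(fake)], candidates) == preferred:
            return list(fake)
    return None
```

The search is exponential in the number of candidates, so the abstract's point is precisely that, on sampled elections, this kind of search (or smarter variants of it) terminates quickly in practice.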
Complexity of and Algorithms for Borda Manipulation
We prove that it is NP-hard for a coalition of two manipulators to compute
how to manipulate the Borda voting rule. This resolves one of the last open
problems in the computational complexity of manipulating common voting rules.
Because of this NP-hardness, we treat computing a manipulation as an
approximation problem where we try to minimize the number of manipulators.
Based on ideas from bin packing and multiprocessor scheduling, we propose two
new approximation methods to compute manipulations of the Borda rule.
Experiments show that these methods significantly outperform the previous best
known approximation method. We are able to find optimal manipulations
in almost all the randomly generated elections tested. Our results suggest
that, whilst computing a manipulation of the Borda rule by a coalition is
NP-hard, computational complexity may provide only a weak barrier against
manipulation in practice.
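To give the flavor of such scheduling-style heuristics, here is a simple balancing greedy in that spirit (not one of the paper's two methods): each manipulator ranks the preferred candidate first and hands the smallest remaining Borda scores to the currently strongest rivals, much as a scheduler assigns the smallest jobs to the most loaded machines.

```python
def greedy_borda_manipulation(scores, preferred, num_manipulators):
    """Greedy coalition manipulation of Borda (illustrative sketch).
    `scores`: dict candidate -> Borda score from the honest voters.
    Each manipulator's ballot gives `preferred` the top score m-1 and the
    smallest remaining scores to the currently strongest rivals.
    Returns the ballots if `preferred` wins (ties allowed), else None."""
    totals = dict(scores)
    m = len(totals)
    ballots = []
    for _ in range(num_manipulators):
        # weakest rival first, so the strongest rival ends up with score 0
        rivals = sorted((c for c in totals if c != preferred),
                        key=lambda c: totals[c])
        ballot = [preferred] + rivals
        for rank, c in enumerate(ballot):
            totals[c] += m - 1 - rank
        ballots.append(ballot)
    if all(totals[preferred] >= totals[c] for c in totals if c != preferred):
        return ballots
    return None
```

Running it with an increasing number of manipulators yields an upper bound on the minimum coalition size, which is the quantity the approximation methods in the paper try to minimize.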
Preconditioning Kernel Matrices
The computational and storage complexity of kernel machines presents the
primary barrier to their scaling to large, modern, datasets. A common way to
tackle the scalability issue is to use the conjugate gradient algorithm, which
relieves the constraints on both storage (the kernel matrix need not be stored)
and computation (both stochastic gradients and parallelization can be used).
Even so, conjugate gradient is not without its own issues: the conditioning of
kernel matrices is often such that conjugate gradients will have poor
convergence in practice. Preconditioning is a common approach to alleviating
this issue. Here we propose preconditioned conjugate gradients for kernel
machines, and develop a broad range of preconditioners particularly useful for
kernel matrices. We describe a scalable approach to both solving kernel
machines and learning their hyperparameters. We show this approach is exact in
the limit of iterations and outperforms state-of-the-art approximations for a
given computational budget.
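A minimal sketch of the core loop, assuming a kernel ridge/GP-style system (K + σ²I)x = y and a Nyström-type preconditioner applied through the Woodbury identity; the inducing-point choice and jitter here are illustrative assumptions, not the paper's full method:

```python
import numpy as np

def rbf_kernel(X, Z, lengthscale=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def pcg(matvec, b, precond_solve, tol=1e-8, max_iter=500):
    """Preconditioned conjugate gradients for A x = b; A is accessed only
    through `matvec`, the preconditioner only through `precond_solve`."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = precond_solve(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = precond_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

def nystrom_precond_solve(K_nm, K_mm, noise):
    """Returns v -> P^{-1} v for P = K_nm K_mm^{-1} K_mn + noise*I,
    applied via the Woodbury identity: O(n m^2) per solve, never O(n^3)."""
    inner = K_mm + K_nm.T @ K_nm / noise
    def solve(v):
        tmp = np.linalg.solve(inner, K_nm.T @ v / noise)
        return (v - K_nm @ tmp) / noise
    return solve
```

In this sketch `matvec` multiplies a stored dense matrix; in a truly matrix-free setting it would recompute kernel rows on the fly, which is what removes the storage constraint mentioned above.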
Detecting Possible Manipulators in Elections
Manipulation is a problem of fundamental importance in the context of voting,
in which voters cast their votes strategically, rather than honestly, to
prevent the selection of a less preferred alternative. The
Gibbard-Satterthwaite theorem shows that there is no strategy-proof voting rule
that simultaneously satisfies certain combinations of desirable properties.
Researchers have attempted to get around the impossibility results in several
ways such as domain restriction and computational hardness of manipulation.
However, these approaches have been shown to have limitations. Since prevention
of manipulation seems to be elusive, an interesting research direction
therefore is detection of manipulation. Motivated by this, we initiate the
study of detection of possible manipulators in an election.
We formulate two pertinent computational problems - Coalitional Possible
Manipulators (CPM) and Coalitional Possible Manipulators given Winner (CPMW),
where a suspect group of voters is provided as input to compute whether they
can be a potential coalition of possible manipulators. In the absence of any
suspect group, we formulate two more computational problems namely Coalitional
Possible Manipulators Search (CPMS), and Coalitional Possible Manipulators
Search given Winner (CPMSW). We provide polynomial time algorithms for these
problems, for several popular voting rules. For a few other voting rules, we
show that these problems are NP-complete. We observe that detecting
manipulation may be easy even when manipulation is hard, as seen, for example,
in the case of the Borda voting rule.
Comment: Accepted in AAMAS 201
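Under one simplified reading of CPM for the plurality rule (a coalition is a possible manipulator if some truthful preferences, each different from the ballot actually cast, would have produced a winner that every member likes less than the current one), the check can be brute-forced on small elections. The definitions below are illustrative; the paper's formalization is more careful.

```python
from itertools import permutations, product

def plurality_winner(ballots):
    """Plurality winner with lexicographic tie-breaking (an assumption)."""
    tally = {}
    for b in ballots:
        tally[b[0]] = tally.get(b[0], 0) + 1
    return max(sorted(tally), key=lambda c: tally[c])

def is_possible_manipulator_coalition(others, cast, candidates):
    """Brute force over all 'true' preference profiles for the coalition:
    return True if some profile, ballot-wise different from `cast`, yields
    a winner every member strictly ranks below the current winner."""
    current = plurality_winner(others + cast)
    all_orders = list(permutations(candidates))
    for truths in product(all_orders, repeat=len(cast)):
        if any(list(t) == c for t, c in zip(truths, cast)):
            continue  # true preference must differ from the cast ballot
        honest = plurality_winner(others + [list(t) for t in truths])
        if honest != current and all(t.index(current) < t.index(honest)
                                     for t in truths):
            return True
    return False
```

The enumeration is exponential in coalition size and candidate count; the polynomial-time algorithms in the paper avoid exactly this blow-up.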
Analysis of multilevel Monte Carlo path simulation using the Milstein discretisation
The multilevel Monte Carlo path simulation method introduced by Giles
(Operations Research, 56(3):607-617, 2008) exploits strong convergence
properties to improve the computational complexity by combining simulations
with different levels of resolution. In this paper we analyse its efficiency
when using the Milstein discretisation; this has an improved order of strong
convergence compared to the standard Euler-Maruyama method, and it is proved
that this leads to an improved order of convergence of the variance of the
multilevel estimator. Numerical results are also given for basket options to
illustrate the relevance of the analysis.
Comment: 33 pages, 4 figures, to appear in Discrete and Continuous Dynamical Systems - Series
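The coupling at the heart of the method can be sketched for geometric Brownian motion with the identity payoff (the paper treats basket options): the coarse path at each level reuses the summed Brownian increments of the fine path, and the Milstein correction term supplies the first-order strong convergence the analysis exploits. Parameter values below are illustrative.

```python
import numpy as np

def milstein_gbm_paths(S0, r, sigma, T, n_steps, dW):
    """Milstein scheme for dS = r*S dt + sigma*S dW; the extra
    0.5*sigma^2*S*(dW^2 - dt) term gives first-order strong convergence."""
    dt = T / n_steps
    S = np.full(dW.shape[0], S0)
    for i in range(n_steps):
        dw = dW[:, i]
        S = S + r*S*dt + sigma*S*dw + 0.5*sigma**2*S*(dw*dw - dt)
    return S

def mlmc_level_estimate(level, n_paths, S0=1.0, r=0.05, sigma=0.2, T=1.0, M=2):
    """One MLMC level: coupled fine/coarse Milstein paths driven by the
    same Brownian increments. Returns the sample mean of P_l - P_{l-1}."""
    rng = np.random.default_rng(level)
    nf = M ** level
    dWf = rng.normal(0.0, np.sqrt(T / nf), size=(n_paths, nf))
    Pf = milstein_gbm_paths(S0, r, sigma, T, nf, dWf)
    if level == 0:
        return Pf.mean()
    # coarse path: sum each group of M consecutive fine increments
    dWc = dWf.reshape(n_paths, nf // M, M).sum(axis=2)
    Pc = milstein_gbm_paths(S0, r, sigma, T, nf // M, dWc)
    return (Pf - Pc).mean()

def mlmc_estimate(max_level, n_paths):
    return sum(mlmc_level_estimate(l, n_paths) for l in range(max_level + 1))
```

Because E[P_f - P_c] telescopes across levels, the summed estimates reproduce the finest-level expectation while most of the sampling cost can be shifted to the cheap coarse levels; the paper's contribution is bounding how fast the correction variance decays under Milstein.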
Hardness Amplification of Optimization Problems
In this paper, we prove a general hardness amplification scheme for optimization problems based on the technique of direct products.
We say that an optimization problem Π is direct product feasible if it is possible to efficiently aggregate any k instances of Π and form one large instance of Π such that given an optimal feasible solution to the larger instance, we can efficiently find optimal feasible solutions to all the k smaller instances. Given a direct product feasible optimization problem Π, our hardness amplification theorem may be informally stated as follows:
If there is a distribution D over instances of Π of size n such that every randomized algorithm running in time t(n) fails to solve Π on a 1/α(n) fraction of inputs sampled from D, then, assuming some relationships on α(n) and t(n), there is a distribution D' over instances of Π of size O(n·α(n)) such that every randomized algorithm running in time t(n)/poly(α(n)) fails to solve Π on a 99/100 fraction of inputs sampled from D'.
As a consequence of the above theorem, we show hardness amplification of problems in various classes such as NP-hard problems like Max-Clique, Knapsack, and Max-SAT, problems in P such as Longest Common Subsequence, Edit Distance, and Matrix Multiplication, and even problems in TFNP such as Factoring and computing a Nash equilibrium.
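Direct product feasibility is concrete for Max-SAT: aggregate k formulas on disjoint (shifted) variable sets, and any optimal assignment to the union decomposes into optimal assignments for the parts, since the variables do not interact. A small sketch under that reading, with a brute-force solver purely for illustration:

```python
from itertools import product

def max_sat_value(clauses, assignment):
    """Number of satisfied clauses; a literal is a (variable, polarity) pair."""
    return sum(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses)

def brute_force_max_sat(clauses, n_vars):
    """Exhaustive Max-SAT solver for tiny instances (illustration only)."""
    best = max(product([False, True], repeat=n_vars),
               key=lambda a: max_sat_value(clauses, a))
    return list(best)

def direct_product(instances):
    """Aggregate k Max-SAT instances, each (clauses, n_vars), onto disjoint
    shifted variable ranges -- the 'direct product feasible' aggregation."""
    combined, offsets, offset = [], [], 0
    for clauses, n in instances:
        offsets.append(offset)
        combined.extend([[(v + offset, p) for v, p in cl] for cl in clauses])
        offset += n
    return combined, offset, offsets

def split_solution(assignment, instances, offsets):
    """Recover optimal per-instance solutions from an optimal solution to
    the aggregated instance: just slice out each variable block."""
    return [assignment[o:o + n] for o, (_, n) in zip(offsets, instances)]
```

Because the objective of the combined instance is the sum of the parts' objectives, a combined optimum forces each slice to be optimal for its own instance, which is exactly the efficiency requirement in the definition above.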