Class dealignment and the neighbourhood effect: Miller revisited
The concept of a neighbourhood effect within British voting patterns has largely been discarded, because no data have been available for testing it at the appropriate spatial scales. To undertake such tests, bespoke neighbourhoods have been created around the home of each respondent to the 1997 British Election Study survey in England and Wales, and small-area census data have been assembled for these to depict the socio-economic characteristics of voters' local contexts.
Analyses of voting in these small areas, divided into five equal-sized status bands, provide very strong evidence that members of each social class were much more likely to vote Labour than Conservative in low-status areas than in high-status ones. This is entirely consistent with the concept of the neighbourhood effect, although alternative explanations are feasible. The data provide very strong evidence of micro-geographical variations in voting patterns; further research is necessary to identify the processes involved.
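The quintile analysis described above can be sketched in a few lines. The sketch below uses synthetic data and hypothetical column names, since the actual BES variables are not given in the abstract:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the survey data: each respondent has a social
# class, a vote, and a status score for their bespoke neighbourhood.
# All column names here are illustrative assumptions.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "social_class": rng.choice(["working", "middle"], size=n),
    "vote": rng.choice(["Labour", "Conservative"], size=n),
    "neighbourhood_status": rng.normal(size=n),
})

# Divide neighbourhoods into five equal-sized status bands (quintiles).
df["status_quintile"] = pd.qcut(
    df["neighbourhood_status"], 5,
    labels=["Q1 (low)", "Q2", "Q3", "Q4", "Q5 (high)"])

# Labour share within each class-by-quintile cell: the neighbourhood-effect
# hypothesis predicts a higher Labour share in low-status bands within
# every social class.
labour_share = (
    df.assign(labour=df["vote"] == "Labour")
      .groupby(["social_class", "status_quintile"], observed=True)["labour"]
      .mean()
      .unstack("status_quintile")
)
print(labour_share.round(2))
```

With the real survey data, `pd.qcut` reproduces the five equal-sized status areas, and each row of the resulting table traces one social class across the status gradient.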
Feng-Rao decoding of primary codes
We show that the Feng-Rao bound for dual codes and a similar bound by
Andersen and Geil [H.E. Andersen and O. Geil, Evaluation codes from order
domain theory, Finite Fields Appl., 14 (2008), pp. 92-123] for primary codes
are consequences of each other. This implies that the Feng-Rao decoding
algorithm can be applied to decode primary codes up to half their designed
minimum distance. The technique applies to any linear code for which
information on well-behaving pairs is available. Consequently we are able to
decode efficiently a large class of codes for which no non-trivial decoding
algorithm was previously known. Among those are important families of
multivariate polynomial codes. Matsumoto and Miura in [R. Matsumoto and S.
Miura, On the Feng-Rao bound for the L-construction of algebraic geometry
codes, IEICE Trans. Fundamentals, E83-A (2000), pp. 926-930] (See also [P.
Beelen and T. Høholdt, The decoding of algebraic geometry codes, in Advances
in algebraic geometry codes, pp. 49-98]) derived from the Feng-Rao bound a
bound for primary one-point algebraic geometric codes and showed how to decode
up to what is guaranteed by their bound. The exposition by Matsumoto and Miura
requires the use of differentials which was not needed in [Andersen and Geil
2008]. Nevertheless we demonstrate a very strong connection between Matsumoto
and Miura's bound and Andersen and Geil's bound when applied to primary
one-point algebraic geometric codes.
Clearing Contamination in Large Networks
In this work, we study the problem of clearing contamination spreading
through a large network where we model the problem as a graph searching game.
The problem can be summarized as constructing a search strategy that will leave
the graph clear of any contamination at the end of the searching process in as
few steps as possible. We show that this problem is NP-hard even on directed
acyclic graphs and provide an efficient approximation algorithm. We
experimentally observe the performance of our approximation algorithm in
relation to the lower bound on several large online networks including
Slashdot, Epinions and Twitter. The experiments reveal that in most cases our
algorithm performs near-optimally.
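The abstract does not spell out the model or the approximation algorithm, but the underlying graph-searching game can be illustrated with a toy simulator. The recontamination rule used here (a cleared, unguarded vertex with a contaminated neighbour becomes contaminated again) is a standard assumption in graph searching, not necessarily the paper's exact formulation:

```python
from collections import defaultdict

def simulate(edges, strategy):
    """Run a search strategy (one set of searcher positions per step) on an
    undirected graph and report whether it ends fully decontaminated."""
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    nodes = set(nbrs)
    contaminated = set(nodes)            # every vertex starts contaminated
    for guards in strategy:
        contaminated -= guards           # searched vertices are cleared
        while True:                      # recontamination until stable
            recon = {v for v in nodes - contaminated - guards
                     if nbrs[v] & contaminated}
            if not recon:
                break
            contaminated |= recon
    return not contaminated

# Sweeping a single searcher along a path clears it in n steps ...
path = [(i, i + 1) for i in range(4)]    # path on 5 vertices
print(simulate(path, [{i} for i in range(5)]))   # True
# ... while a single one-step placement leaves contamination behind.
print(simulate(path, [{2}]))                     # False
```

The objective studied in the paper is then to find a strategy of this kind that ends fully clear in as few steps as possible.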
Learning with many experts: model selection and sparsity
Experts classifying data are often imprecise. Recently, several models have
been proposed to train classifiers using the noisy labels generated by these
experts. How should one choose between these models? In such situations, the true labels are unavailable, so one cannot perform model selection using the standard versions of methods such as empirical risk minimization and cross-validation. To allow model selection, we present a surrogate loss and
provide theoretical guarantees that assure its consistency. Next, we discuss
how this loss can be used to tune a penalization which introduces sparsity in
the parameters of a traditional class of models. Sparsity provides more
parsimonious models and can avoid overfitting. Nevertheless, it has seldom been
discussed in the context of noisy labels due to the difficulty in model
selection and, therefore, in choosing tuning parameters. We apply these
techniques to several sets of simulated and real data.
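The surrogate loss itself is not reproduced here. As a hypothetical stand-in, the toy below scores a classifier against the majority vote of the noisy experts, which illustrates why some proxy for the unavailable true labels is needed for model selection:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_experts = 500, 5
y_true = rng.integers(0, 2, size=n)       # ground truth, never observed
# Each expert independently flips the true label with probability 0.2.
flips = rng.random((n_experts, n)) < 0.2
expert_labels = np.where(flips, 1 - y_true, y_true)

# Majority vote of the experts as a proxy target (an assumption made for
# this sketch, not the surrogate loss proposed in the paper).
proxy = (expert_labels.sum(axis=0) > n_experts / 2).astype(int)

def proxy_risk(pred):
    """Empirical risk measured against the proxy labels."""
    return float(np.mean(pred != proxy))

# Compare two candidate "models": copy the first expert, or always predict 0.
for name, pred in [("expert 0", expert_labels[0]),
                   ("always 0", np.zeros(n, dtype=int))]:
    print(name, round(proxy_risk(pred), 3))
```

Ranking models by such a proxy risk is exactly the kind of selection step whose consistency the paper's surrogate loss is designed to guarantee.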
On the Hardness of Bribery Variants in Voting with CP-Nets
We continue previous work by Mattei et al. (Mattei, N., Pini, M., Rossi, F.,
Venable, K.: Bribery in voting with CP-nets. Ann. of Math. and Artif. Intell.
pp. 1--26 (2013)) in which they study the computational complexity of bribery
schemes when voters have conditional preferences that are modeled by CP-nets.
For most of the cases they considered, they could show that the bribery problem
is solvable in polynomial time. Some cases remained open---we solve two of them
and extend the previous results to the case that voters are weighted. Moreover,
we consider negative (weighted) bribery in CP-nets, when the briber is not
allowed to pay voters to vote for his preferred candidate.
How Hard Is It to Control an Election by Breaking Ties?
We study the computational complexity of controlling the result of an
election by breaking ties strategically. This problem is equivalent to the
problem of deciding the winner of an election under parallel universes
tie-breaking. When the chair of the election is only asked to break ties to
choose between one of the co-winners, the problem is trivially easy. However,
in multi-round elections, we prove that it can be NP-hard for the chair to
compute how to break ties to ensure a given result. Additionally, we show that
the form of the tie-breaking function can increase the opportunities for
control. Indeed, we prove that it can be NP-hard to control an election by
breaking ties even with a two-stage voting rule.
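Parallel-universes tie-breaking can be made concrete with a small brute force over the chair's choices. The sketch below uses plurality with sequential elimination as a hypothetical multi-round rule; the rules analysed in the paper are not reproduced here:

```python
def can_win_by_tiebreaking(profile, target):
    """Under plurality with elimination (each round removes a candidate with
    the fewest first-place votes, the chair breaking ties), return True if
    some sequence of tie-breaks makes `target` the winner, i.e. whether
    `target` wins in some parallel universe."""
    def rec(remaining):
        if len(remaining) == 1:
            return remaining[0] == target
        counts = {c: 0 for c in remaining}
        for ballot in profile:
            top = next(c for c in ballot if c in remaining)
            counts[top] += 1
        low = min(counts.values())
        losers = [c for c in remaining if counts[c] == low]
        # Branch over every way the chair could break the elimination tie.
        return any(rec([c for c in remaining if c != loser])
                   for loser in losers)
    return rec(sorted({c for ballot in profile for c in ballot}))

# A Condorcet cycle: every candidate can be made the winner by tie-breaking.
cycle = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b")]
print([c for c in "abc" if can_win_by_tiebreaking(cycle, c)])  # ['a', 'b', 'c']
```

This exhaustive branching is exponential in the worst case, which is consistent with the NP-hardness results stated above for multi-round rules.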
On the Computational Complexity of Non-dictatorial Aggregation
We investigate when non-dictatorial aggregation is possible from an
algorithmic perspective, where non-dictatorial aggregation means that the votes
cast by the members of a society can be aggregated in such a way that the
collective outcome is not simply the choices made by a single member of the
society. We consider the setting in which the members of a society take a
position on a fixed collection of issues, where for each issue several
different alternatives are possible, but the combination of choices must belong
to a given set of allowable voting patterns. Such a set is called a
possibility domain if there is an aggregator that is non-dictatorial, operates
separately on each issue, and returns values among those cast by the society on
each issue. We design a polynomial-time algorithm that decides whether or not a given set of voting patterns is a possibility domain. Furthermore, if it is, the algorithm constructs such a non-dictatorial aggregator in polynomial time. We then show that the question of
whether a Boolean domain is a possibility domain is in NLOGSPACE. We also
design a polynomial-time algorithm that decides whether a given set of voting patterns is a uniform possibility domain, that is, whether it admits an aggregator that is non-dictatorial even when restricted to any two positions for each issue. As in
the case of possibility domains, the algorithm also constructs in polynomial
time a uniform non-dictatorial aggregator, if one exists. Then, we turn our attention to the case where the domain is given implicitly, either as the set of assignments satisfying a propositional formula or as the set of consistent evaluations of a sequence of propositional formulas. In both cases, we provide bounds on the complexity of deciding whether the domain is a (uniform) possibility domain.
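In the special case of two voters over binary issues, every supportive aggregation function on a single issue is one of the two projections, AND, or OR, so the definition can be checked by exhaustive search. The sketch below is only this exponential toy check, not the paper's polynomial-time algorithm:

```python
from itertools import product

def is_possibility_domain(domain):
    """Two voters, binary issues: does some non-dictatorial, issue-by-issue,
    supportive aggregator map every pair of allowable patterns back into the
    domain?  Brute force over the four supportive functions per issue."""
    m = len(next(iter(domain)))
    funcs = [lambda x, y: x,      # projection on voter 1
             lambda x, y: y,      # projection on voter 2
             lambda x, y: x & y,  # AND
             lambda x, y: x | y]  # OR
    for choice in product(range(4), repeat=m):
        # Skip the two dictators: the same projection on every issue.
        if all(c == 0 for c in choice) or all(c == 1 for c in choice):
            continue
        if all(tuple(funcs[choice[j]](a[j], b[j]) for j in range(m)) in domain
               for a in domain for b in domain):
            return True
    return False

# The even-parity (affine) domain admits no such aggregator, while a
# subcube with one fixed coordinate does.
parity = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
subcube = {(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)}
print(is_possibility_domain(parity), is_possibility_domain(subcube))  # False True
```

The paper's contribution is to replace this kind of exhaustive search with a polynomial-time decision procedure that also constructs an aggregator when one exists.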