6 research outputs found

    Plural Voting for the Twenty-First Century

    Recent political developments cast doubt on the wisdom of democratic decision-making. Brexit, the Colombian people's (initial) rejection of peace with the FARC, and the election of Donald Trump suggest that the time is right to explore alternatives to democracy. In this essay, I describe and defend the epistocratic system of government which is, given current theoretical and empirical knowledge, most likely to produce optimal political outcomes, or at least better outcomes than democracy produces. To wit, we should expand the suffrage as widely as possible and weight citizens' votes in accordance with their competence. As it turns out, the optimal system is closely related to J. S. Mill's plural voting proposal. I also explain how voters' competences can be precisely determined, without reference to an objective standard of correctness and without generating invidious comparisons between voters.
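    The competence-weighted voting this abstract describes can be sketched concretely. The sketch below uses a standard formalization (the Nitzan-Paroush log-odds rule, assumed here for illustration rather than taken from the essay): with independent binary votes, weighting each vote by log(p/(1-p)), where p is that voter's competence, maximizes the probability of a correct group decision.

```python
import math
import random

def optimal_weight(p):
    """Log-odds weight for a voter of competence p: under independent
    binary votes this weighting (Nitzan-Paroush) maximizes the probability
    of a correct group decision."""
    return math.log(p / (1 - p))

def weighted_majority(votes, competences):
    """Group decision (+1 or -1) under competence-weighted voting.
    votes: list of +1/-1; competences: probabilities in (0.5, 1)."""
    total = sum(v * optimal_weight(p) for v, p in zip(votes, competences))
    return 1 if total > 0 else -1

# Compare unweighted vs weighted majority when competences vary widely.
random.seed(0)
competences = [0.55, 0.6, 0.6, 0.7, 0.9]
trials = 20000
plain = weighted = 0
for _ in range(trials):
    votes = [1 if random.random() < p else -1 for p in competences]  # truth = +1
    plain += sum(votes) > 0
    weighted += weighted_majority(votes, competences) == 1
print(plain / trials, weighted / trials)
```

    In this run the weighted rule effectively defers to the most competent voter, whose weight exceeds the combined weight of the rest, which is exactly the plural-voting intuition: more reliable voters get more say.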

    Voting rules as statistical estimators

    We adopt an 'epistemic' interpretation of social decisions: there is an objectively correct choice, each voter receives a 'noisy signal' of the correct choice, and the social objective is to aggregate these signals to make the best possible guess about the correct choice. One epistemic method is to fix a probability model and compute the maximum likelihood estimator (MLE), maximum a posteriori estimator (MAP) or expected utility maximizer (EUM), given the data provided by the voters. We first show that an abstract voting rule can be interpreted as MLE or MAP if and only if it is a scoring rule. We then specialize to the case of distance-based voting rules, in particular, the use of the median rule in judgement aggregation. Finally, we show how several common 'quasiutilitarian' voting rules can be interpreted as EUM.

    Keywords: voting; maximum likelihood estimator; maximum a posteriori estimator; expected utility maximizer; statistics; epistemic democracy; Condorcet jury theorem; scoring rule
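    A minimal illustration of the MLE reading of a scoring rule, under a toy noise model assumed here for concreteness (simpler than the paper's general setting): each voter names the true alternative with probability p and each of the other m-1 alternatives with equal probability. Under this model the likelihood of an alternative is increasing in its vote count whenever p > 1/m, so plurality, the simplest scoring rule, coincides with the maximum likelihood estimator.

```python
import math
from collections import Counter

def plurality_winner(votes, alternatives):
    """Alternative with the most votes (the simplest scoring rule)."""
    counts = Counter(votes)
    return max(alternatives, key=lambda a: counts[a])

def mle_winner(votes, alternatives, p=0.6):
    """MLE of the true alternative under the toy model: each voter names
    the truth with probability p and each of the other m-1 alternatives
    with probability (1-p)/(m-1)."""
    m = len(alternatives)
    counts = Counter(votes)
    def log_likelihood(a):
        hits = counts[a]
        return hits * math.log(p) + (len(votes) - hits) * math.log((1 - p) / (m - 1))
    return max(alternatives, key=log_likelihood)

votes = ["A", "B", "A", "C", "A", "B"]
alts = ["A", "B", "C"]
print(plurality_winner(votes, alts), mle_winner(votes, alts))  # A A
```

    The two rules agree on every profile under this model; the paper's result generalizes this correspondence to the full class of scoring rules.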

    Voting with Limited Information and Many Alternatives

    The traditional axiomatic approach to voting is motivated by the problem of reconciling differences in subjective preferences. In contrast, a dominant line of work in the theory of voting over the past 15 years has considered a different kind of scenario, also fundamental to voting, in which there is a genuinely "best" outcome that voters would agree on if they only had enough information. This type of scenario has its roots in the classical Condorcet Jury Theorem; it includes cases such as jurors in a criminal trial who all want to reach the correct verdict but disagree in their inferences from the available evidence, or a corporate board of directors who all want to improve the company's revenue, but who have different information that favors different options. This style of voting leads to a natural set of questions: each voter has a private signal that provides probabilistic information about which option is best, and a central question is whether a simple plurality voting system, which tabulates votes for different options, can cause the group decision to arrive at the correct option. We show that plurality voting is powerful enough to achieve this: there is a way for voters to map their signals into votes for options in such a way that, with sufficiently many voters, the correct option receives the greatest number of votes with high probability. We show further, however, that any process for achieving this is inherently expensive in the number of voters it requires: succeeding in identifying the correct option with probability at least 1 − η requires Ω(n³ ε⁻² log η⁻¹) voters, where n is the number of options and ε is a distributional measure of the minimum difference between the options.
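    A quick simulation illustrates the positive result, the convergence of plurality to the correct option as the electorate grows. The signal model below is a hypothetical simplification (each voter's signal names the truth with some fixed probability), not the paper's general distributional setting, and the strategy shown is the naive "vote your signal" map.

```python
import random
from collections import Counter

def simulate_plurality(n_options, n_voters, signal_acc, trials=2000, seed=1):
    """Estimate how often 'vote your signal' plurality recovers the truth:
    each voter observes the correct option with probability signal_acc and
    a uniformly random wrong option otherwise."""
    rng = random.Random(seed)
    truth = 0
    wins = 0
    for _ in range(trials):
        counts = Counter(
            truth if rng.random() < signal_acc
            else rng.randrange(1, n_options)
            for _ in range(n_voters)
        )
        wins += max(counts, key=counts.get) == truth
    return wins / trials

few = simulate_plurality(n_options=10, n_voters=5, signal_acc=0.2)
many = simulate_plurality(n_options=10, n_voters=500, signal_acc=0.2)
print(few, many)
```

    Even though each signal is right only 20% of the time, the truth attracts more votes in expectation than any single wrong option (0.2 versus about 0.089 each), so with 500 voters plurality almost always identifies it, while with 5 voters it usually fails, matching the paper's point that success is expensive in the number of voters.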

    Optimizing Political Influence: A Jury Theorem with Dynamic Competence and Dependence

    The purpose of this paper is to illustrate, formally, an ambiguity in the exercise of political influence. To wit: a voter might exert influence with an eye toward maximizing the probability that the political system (1) obtains the correct (e.g. just) outcome, or (2) obtains the outcome that he judges to be correct (just). And these are two very different things. A variant of Condorcet's Jury Theorem which incorporates the effect of influence on group competence and interdependence is developed. Analytic and numerical results are obtained, the most important of which is that it is never optimal, from the point of view of collective accuracy, for a voter to exert influence without limit. He ought either to refrain from influencing other voters or else exert a finite amount of influence, depending on circumstance. Philosophical lessons are drawn from the model, including a solution to Wollheim's "Paradox in the Theory of Democracy".
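    The headline result, that unlimited influence is never optimal for collective accuracy, can be illustrated with a toy model (my own simplification for illustration, not the paper's variant of the Jury Theorem): when every voter simply copies an opinion leader, group accuracy collapses to the leader's individual competence, below what an independent majority achieves.

```python
import random

def group_accuracy(n_voters, p, leader_acc, alpha, trials=4000, seed=2):
    """Toy influence model: a leader is correct with probability leader_acc;
    each other voter copies the leader's vote with probability alpha and
    otherwise votes independently, being correct with probability p.
    Returns the estimated simple-majority accuracy (vote 1 = correct)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        leader_vote = 1 if rng.random() < leader_acc else 0
        votes = [leader_vote]
        for _ in range(n_voters - 1):
            if rng.random() < alpha:
                votes.append(leader_vote)  # influenced: copy the leader
            else:
                votes.append(1 if rng.random() < p else 0)  # independent
        correct += sum(votes) * 2 > len(votes)
    return correct / trials

no_influence = group_accuracy(25, p=0.6, leader_acc=0.7, alpha=0.0)
full_influence = group_accuracy(25, p=0.6, leader_acc=0.7, alpha=1.0)
print(no_influence, full_influence)
```

    At alpha = 1 the majority is just the leader's vote, so accuracy falls to about 0.7, while 25 independent voters at competence 0.6 reach roughly 0.85 by the Jury Theorem, which is the sense in which influence without limit harms collective accuracy even when the influencer is more competent than average.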
