
    Lessons of Election 2000

    Many people believe that Election 2000 proved only how divided the nation is over politics and policy. In contrast, this study draws six lessons from Election 2000. Congress should set up a commission to recommend changes in the electoral system; the states should have the choice of accepting the reforms and the obligation to pay for them. The Electoral College should be preserved. The framers designed the Electoral College to limit arbitrary power, and abolishing it would weaken the states and damage federalism. The United States is a constitutional republic, not a regime based on "the will of the people." Several politicians appealed to the will of the people during the Florida struggle, but that concept is alien to the American political tradition of limited constitutional government. Underlying public attitudes strongly supported limited government in Election 2000: both the candidates' platforms and public opinion polls indicate that the public's skepticism about government remains high. Campaign spending enhanced turnout and participation in Election 2000; both the NAACP and unions spent lavishly on getting out the vote. If campaign spending is restricted, turnout will fall, contrary to the professed desire of advocates of campaign finance restrictions. Congress should not hold hearings about media mistakes; any punishment for errors or bias by the networks on election night should be left to public opinion.

    A meta-analysis of state-of-the-art electoral prediction from Twitter data

    Electoral prediction from Twitter data is an appealing research topic: it seems relatively straightforward, and the prevailing view is overly optimistic. This is problematic because while simple approaches are assumed to be good enough, core problems go unaddressed. This paper therefore aims to (1) provide a balanced and critical review of the state of the art; (2) cast light on the presumed predictive power of Twitter data; and (3) sketch a roadmap to push the field forward. To that end, a scheme for characterizing Twitter prediction methods is proposed. It covers every aspect from data collection to performance evaluation, through data processing and vote inference. Using that scheme, prior research is analyzed and organized to explain the main approaches taken to date as well as their weaknesses. This is the first meta-analysis of the whole body of research on electoral prediction from Twitter data. It reveals that the presumed predictive power of Twitter data has been rather exaggerated: although social media may provide a glimpse of electoral outcomes, current research does not provide strong evidence that it can replace traditional polls. Finally, future lines of research are provided, along with a set of requirements they must fulfill.
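    The simple approaches the review critiques typically infer vote shares directly from relative mention volume. A minimal sketch of that naive mention-count baseline, assuming tweets arrive as plain strings (the tweet texts and candidate names here are invented for illustration):

```python
from collections import Counter

def mention_share(tweets, candidates):
    """Naive baseline: predicted vote share is each candidate's
    fraction of total candidate mentions across the tweets."""
    counts = Counter({c: 0 for c in candidates})
    for tweet in tweets:
        text = tweet.lower()
        for c in candidates:
            if c.lower() in text:
                counts[c] += 1
    total = sum(counts.values()) or 1  # avoid division by zero
    return {c: counts[c] / total for c in candidates}

tweets = ["Vote Smith!", "Smith rally downtown", "Jones for mayor"]
print(mention_share(tweets, ["Smith", "Jones"]))  # Smith ~0.67, Jones ~0.33
```

    Note that this baseline conflates mention volume with support (a tweet attacking a candidate still counts for them), which is exactly the kind of unaddressed core problem the review highlights.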

    Assessing candidate preference through web browsing history

    Predicting election outcomes is of considerable interest to candidates, political scientists, and the public at large. We propose the use of Web browsing history as a new indicator of candidate preference among the electorate, one with the potential to overcome a number of the drawbacks of election polls. However, several challenges must be overcome to use Web browsing effectively for assessing candidate preference, including the lack of suitable ground-truth data and the heterogeneity of user populations in time and space. We address these challenges and show that the resulting methods can shed considerable light on the dynamics of voters' candidate preferences in ways that are difficult to achieve using polls.

    Maximum Likelihood Approach to Vote Aggregation with Variable Probabilities

    Condorcet (1785) initiated the statistical approach to vote aggregation. Two centuries later, Young (1988) showed that a correct application of the maximum likelihood principle leads to the selection of rankings called Kemeny orders, which have the minimal total number of disagreements with the rankings of the voters. The Condorcet-Kemeny-Young approach is based on the assumption that every voter has the same probability of correctly comparing two alternatives and that this probability is the same for every pair of alternatives. We relax the second part of this assumption by letting the probability of correctly comparing two alternatives increase with the distance between them in the allegedly true ranking. This leads to a rule in which the majority in favor of one alternative over another is given a larger weight the larger the distance between the two alternatives in the true ranking, i.e., the larger the probability that the voters compare them correctly. This rule is not Condorcet consistent, so it may differ from the Kemeny rule. It is nonetheless anonymous, neutral, and Paretian. However, contrary to the Kemeny rule, it does not satisfy Young and Levenglick's (1978) local independence of irrelevant alternatives. Condorcet also hinted that the Condorcet winner, or the top alternative in the Condorcet ranking, is not necessarily the alternative most likely to be the best. Young confirms that with a constant probability close to 1/2 this alternative is the Borda winner, while it is the alternative whose smallest majority is the largest when the probability is close to 1. We extend his analysis to the case of variable probabilities. Young's result implies that the Kemeny rule does not necessarily select the alternative most likely to be the best. A natural question is whether the rule obtained with variable probabilities does better than the Kemeny rule in this respect. It appears that this performance improves with the rate at which the probability increases.
    Keywords: Vote Aggregation, Kemeny Rule, Maximum Likelihood, Variable Probabilities
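    For small numbers of candidates, a Kemeny order as defined above can be found by brute-force search over all rankings for the one with the minimal total number of pairwise disagreements with the voters' ballots. A minimal sketch of the classical (unweighted) Kemeny rule, not the variable-probability variant the paper proposes; the ballots are illustrative:

```python
from itertools import permutations

def kemeny_ranking(ballots, candidates):
    """Return a ranking minimizing the total number of pairwise
    disagreements with the voters' ballots (a Kemeny order)."""
    def disagreements(ranking):
        pos = {c: i for i, c in enumerate(ranking)}
        total = 0
        for ballot in ballots:
            bpos = {c: i for i, c in enumerate(ballot)}
            for i, a in enumerate(candidates):
                for b in candidates[i + 1:]:
                    # one disagreement whenever the ballot orders a, b
                    # opposite to the candidate ranking under test
                    if (pos[a] < pos[b]) != (bpos[a] < bpos[b]):
                        total += 1
        return total
    # exhaustive search: feasible only for a handful of candidates,
    # since the Kemeny problem is NP-hard in general
    return min(permutations(candidates), key=disagreements)

ballots = [("a", "b", "c"), ("a", "b", "c"), ("b", "c", "a")]
print(kemeny_ranking(ballots, ["a", "b", "c"]))  # ('a', 'b', 'c')
```

    The variable-probability rule of the paper would instead weight each pairwise majority by the distance between the two alternatives in the candidate ranking; only the objective inside `disagreements` would change.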

    A Local-Dominance Theory of Voting Equilibria

    It is well known that no reasonable voting rule is strategyproof. Moreover, the common Plurality rule is particularly prone to strategic behavior by voters, and empirical studies show that people often vote strategically in practice. Multiple game-theoretic models have been proposed to better understand and predict such behavior and the outcomes it induces. However, these models often make unrealistic assumptions about voters' behavior and the information on which they base their vote. We suggest a new model for strategic voting that takes into account voters' bounded rationality as well as their limited access to reliable information. We introduce a simple behavioral heuristic based on "local dominance", where each voter considers a set of possible world states without assigning probabilities to them. This set is constructed from prospective candidates' scores (e.g., available from an inaccurate poll). In a "voting equilibrium", all voters vote for candidates not dominated within the set of possible states. We prove that these voting equilibria exist under the Plurality rule for a broad class of local dominance relations (that is, different ways to decide which states are possible). Furthermore, we show that in an iterative setting where voters may repeatedly change their vote, local-dominance-based dynamics quickly converge to an equilibrium if voters start from the truthful state. Weaker convergence guarantees in more general settings are also provided. Using extensive simulations of strategic voting on generated and real preference profiles, we show that convergence is fast and robust, that emerging equilibria are consistent across various starting conditions, and that they replicate widely known patterns of human voting behavior such as Duverger's law. Further, strategic voting generally improves the quality of the winner compared to truthful voting.
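    The iterative setting can be illustrated with a simplified sketch in which voters start truthful and take turns moving to their best response under Plurality. This uses plain best response with full information rather than the paper's local-dominance heuristic, and the preference profile is invented for illustration:

```python
from collections import Counter

def plurality_winner(votes):
    """Plurality winner with lexicographic tie-breaking."""
    tally = Counter(votes)
    return min(tally, key=lambda c: (-tally[c], c))

def iterative_plurality(prefs):
    """Best-response dynamics: starting from truthful votes, each voter
    in turn switches to the vote yielding their most preferred winner,
    until no voter wants to move (a fixed point)."""
    votes = [p[0] for p in prefs]  # truthful start
    changed = True
    while changed:
        changed = False
        for i, pref in enumerate(prefs):
            # rank of the winner (lower is better for this voter)
            # if voter i cast vote v instead of their current vote
            def outcome(v):
                return pref.index(plurality_winner(votes[:i] + [v] + votes[i + 1:]))
            best = min(pref, key=outcome)
            if outcome(best) < outcome(votes[i]):
                votes[i] = best
                changed = True
    return votes, plurality_winner(votes)

# Duverger-style example: supporters of a trailing candidate desert it
prefs = [("a", "b", "c")] * 2 + [("b", "a", "c")] * 2 + [("c", "a", "b")] * 3
print(iterative_plurality(prefs))
```

    In this profile the truthful winner is c, but one a-supporter defects to b, and the dynamics settle on b; this mirrors the strategic desertion of weak candidates that Duverger's law describes.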