Reassessing the Link between Voter Heterogeneity and Political Accountability: A Latent Class Regression Model of Economic Voting
While recent research has underscored the conditioning effect of individual characteristics on economic voting behavior, most empirical studies have failed to explicitly incorporate observed heterogeneity into statistical analyses linking citizens' economic evaluations to electoral choices. To overcome these limitations, we propose a latent
class regression model to jointly analyze the determinants and influence of economic
voting in Presidential and Congressional elections. Our modeling approach allows us to
better describe the effects of individual covariates on economic voting and to test hypotheses on the existence of heterogeneous types of voters, providing an empirical basis
for assessing the relative validity of alternative explanations proposed in the literature.
Using survey data from the 2004 U.S. Presidential, Senate and House elections, we
find that voters with college education and those more interested in political campaigns based their vote on factors other than their economic perceptions. In contrast, less educated and less interested respondents assigned considerable weight to economic assessments, with sociotropic judgments strongly influencing their vote in the Presidential election and personal financial considerations affecting their vote in House elections. We conclude that the main distinction in the 2004 election was not between 'sociotropic' and 'pocketbook' voters, but rather between 'economic' and 'non-economic' voters.
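The latent class setup described above can be sketched as a two-class mixture of logits: class membership ('economic' vs. 'non-economic' voter) is itself a logit in covariates such as education and campaign interest, while the vote-choice logit in the economic evaluations gets class-specific coefficients. A minimal illustrative sketch, not the authors' exact specification (the function names and the two-class restriction are assumptions):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def latent_class_loglik(y, X, Z, beta_econ, beta_non, gamma):
    """Log-likelihood of a two-class latent class logit.

    Class membership is a logit in Z (e.g. education, campaign interest);
    the binary vote-choice logit in X (economic evaluations) has
    class-specific coefficients."""
    pi = sigmoid(Z @ gamma)                  # P(economic voter | Z)
    p_econ = sigmoid(X @ beta_econ)          # P(vote | economic class)
    p_non = sigmoid(X @ beta_non)            # P(vote | non-economic class)
    lik = pi * np.where(y == 1, p_econ, 1 - p_econ) \
        + (1 - pi) * np.where(y == 1, p_non, 1 - p_non)
    return float(np.log(lik).sum())

def posterior_econ(y, X, Z, beta_econ, beta_non, gamma):
    """Posterior probability that each respondent is an 'economic' voter,
    given the observed vote (Bayes rule on the mixture)."""
    pi = sigmoid(Z @ gamma)
    p_econ = np.where(y == 1, sigmoid(X @ beta_econ), 1 - sigmoid(X @ beta_econ))
    p_non = np.where(y == 1, sigmoid(X @ beta_non), 1 - sigmoid(X @ beta_non))
    return pi * p_econ / (pi * p_econ + (1 - pi) * p_non)
```

Estimation (e.g. by EM or direct maximization) would maximize this likelihood; the posterior function is what assigns respondents to the 'economic' and 'non-economic' types.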
Theoretical Foundations and Empirical Evaluations of Partisan Fairness in District-Based Democracies
We clarify the theoretical foundations of partisan fairness standards for district-based democratic electoral systems, including essential assumptions and definitions not previously recognized, formalized, or in some cases even discussed. We also offer extensive empirical evidence for assumptions with observable implications. We cover partisan symmetry, the most commonly accepted fairness standard, and other perspectives. Throughout, we follow a fundamental principle of statistical inference too often ignored in this literature—defining the quantity of interest separately so its measures can be proven wrong, evaluated, and improved. This enables us to prove which of the many newly proposed fairness measures are statistically appropriate and which are biased, limited, or not measures of the theoretical quantity they seek to estimate at all. Because real-world redistricting and gerrymandering involve complicated politics with numerous participants and conflicting goals, measures biased for partisan fairness sometimes still provide useful descriptions of other aspects of electoral systems.
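Partisan symmetry, the standard mentioned above, asks whether each party would translate the same vote share into the same seat share. A rough numerical sketch under the common uniform-swing device (the helper names are invented for illustration, and this is only one of several ways to operationalize the standard):

```python
import numpy as np

def seat_share(district_votes, statewide_target):
    """Seat share for party A if the statewide vote were shifted uniformly
    across districts (uniform partisan swing) to hit `statewide_target`."""
    swing = statewide_target - district_votes.mean()
    return float((district_votes + swing > 0.5).mean())

def symmetry_deviation(district_votes, v=0.5):
    """Partisan symmetry requires S_A(v) == 1 - S_A(1 - v): party A at vote
    share v should win the seat share party B would win at vote share v.
    A nonzero return indicates asymmetric treatment of the parties."""
    return seat_share(district_votes, v) - (1 - seat_share(district_votes, 1 - v))
```

A district plan whose vote shares are symmetric around 0.5 returns a deviation of zero, while a plan that packs one party's voters into a single lopsided district returns a large positive deviation for the other party.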
Throwing Out the Baby With the Bath Water: A Comment on Green, Kim and Yoon
Donald P. Green, Soo Yeon Kim, and David H. Yoon contribute to the literature on
estimating pooled time-series cross-section models in international relations (IR).
They argue that such models should be estimated with fixed effects when such
effects are statistically necessary. While we obviously have no disagreement that
sometimes fixed effects are appropriate, we show here that they are pernicious for
IR time-series cross-section models with a binary dependent variable and that they
are often problematic for IR models with a continuous dependent variable. In the
binary case, this perniciousness is the result of many pairs of nations always being
scored zero and hence having no impact on the parameter estimates; for example,
many dyads never come into conflict. In the continuous case, fixed effects are
problematic in the presence of the temporally stable regressors that are common in IR
applications, such as the dyadic democracy measures used by Green, Kim, and
Yoon.
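The mechanism described for the binary case can be seen directly in the conditional (fixed-effects) logit, where conditioning on the number of ones eliminates the dyad-specific intercept: a dyad scored zero in every period contributes a constant to the conditional likelihood, so it carries no information about the slope parameters. A small sketch with a brute-force denominator (illustrative only, not how one would estimate a real model):

```python
import numpy as np
from itertools import combinations

def cond_logit_loglik_unit(y, X, beta):
    """Conditional (fixed-effects) logit log-likelihood for one dyad,
    conditioning on k = sum(y), which removes the dyad intercept.

    The denominator enumerates every way k ones could be placed among the
    T periods; feasible only for tiny T, but it makes the logic explicit."""
    T, k = len(y), int(y.sum())
    num = float(np.exp(X[y == 1] @ beta).prod())
    denom = sum(float(np.exp(X[list(idx)] @ beta).prod())
                for idx in combinations(range(T), k))
    return np.log(num / denom)
```

For an all-zero dyad (k = 0), the only admissible configuration is the empty one, so numerator and denominator are both 1 and the contribution is exactly zero at every value of beta: the dyad drops out of estimation, which is the perniciousness the comment describes.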
Random Coefficient Models for Time-Series–Cross-Section Data
This paper considers random coefficient models (RCMs) for time-series–cross-section data. These models allow for unit-to-unit variation in the model parameters. After laying out the various models, we assess several issues in specifying RCMs. We then consider the finite sample properties of some standard RCM estimators, and show that the most common one, associated with Hsiao, has very poor properties. These analyses also show that a somewhat awkward combination of estimators based on Swamy's work performs reasonably well; this awkward estimator and a Bayes estimator with an uninformative prior (due to Smith) seem to perform best. But we also see that estimators which assume full pooling perform well unless there is a large degree of unit-to-unit parameter heterogeneity. We also argue that the various data-driven methods (whether classical, empirical Bayes, or Bayes with gentle priors) tend to lead to much more heterogeneity than most political scientists would like. We speculate that fully Bayesian models, with a variety of informative priors, may be the best way to approach RCMs.
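As one concrete illustration of the Swamy-style approach mentioned above, a precision-weighted average of unit-by-unit OLS estimates gives the flavor of the estimator. This sketch drops the between-unit variance component, so it is a simplification rather than the full Swamy estimator discussed in the paper:

```python
import numpy as np

def unit_ols(y, X):
    """OLS slope vector and its estimated covariance for a single unit."""
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    s2 = resid @ resid / (len(y) - X.shape[1])
    V = s2 * np.linalg.inv(X.T @ X)
    return b, V

def swamy_mean(units):
    """Precision-weighted average of unit-by-unit OLS slopes: the core of a
    Swamy-type RCM estimator of the mean coefficient vector (the between-unit
    variance term is omitted here for simplicity)."""
    betas, Vs = zip(*(unit_ols(y, X) for y, X in units))
    W = [np.linalg.inv(V) for V in Vs]
    return np.linalg.solve(sum(W), sum(Wi @ bi for Wi, bi in zip(W, betas)))
```

With unit coefficients drawn around a common mean, this weighted average recovers that mean while letting precisely estimated units count for more.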
Indecision Theory: Quality of Information and Voting Behavior
In this paper we show how to incorporate quality of information into a model of voting behavior. We do so in the context of the turnout decision of instrumentally rational voters who differ in their quality of information, which we refer to as ambiguity. Ambiguity is reflected by the fact that the voter's beliefs are given by a set of probabilities, each of which represents in the voter's mind a different possible scenario.
We show that in most elections voters who satisfy the Bayesian model do not strictly prefer abstaining over voting for one of the candidates. In contrast, a voter who is averse to ambiguity considers abstention strictly optimal when the candidates' policy positions are both ambiguous and they are “ambiguity complements”. Abstaining is preferred since it is tantamount to mixing the prospects embodied by the two candidates, thus enabling the voter to “hedge” the candidates' ambiguity.
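The hedging logic can be illustrated with maxmin expected utility, a standard model of ambiguity aversion (the payoffs below are invented for illustration): when two candidates are “ambiguity complements”, each vote does badly in some scenario, while abstention, as an even mix of the two prospects, does moderately well in every scenario.

```python
def maxmin_eu(act, scenarios):
    """Maxmin expected utility: evaluate an act (a lottery over candidates)
    at its worst-case scenario in the voter's set of beliefs."""
    return min(sum(w * s[c] for c, w in act.items()) for s in scenarios)

# Two 'ambiguity-complement' scenarios: whichever candidate is good in one
# scenario is bad in the other (illustrative payoffs, not from the paper).
scenarios = [{"A": 1.0, "B": 0.0}, {"A": 0.0, "B": 1.0}]
vote_A = {"A": 1.0}
vote_B = {"B": 1.0}
abstain = {"A": 0.5, "B": 0.5}   # abstention as an even mix of the prospects
```

Here voting for either candidate has a worst-case payoff of 0, while abstaining guarantees 0.5 in both scenarios, so the ambiguity-averse voter strictly prefers abstention; a Bayesian with any single prior over these scenarios would never strictly prefer it.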
Random Coefficient Models for Time-Series-Cross-Section Data: Monte Carlo Experiments
This article considers random coefficient models (RCMs) for time-series–cross-section data. These models allow for unit-to-unit variation in the model parameters. The heart of the article compares the finite sample properties of the fully pooled estimator, the unit-by-unit (unpooled) estimator, and the (maximum likelihood) RCM estimator. The maximum likelihood RCM estimator performs well, even where the data were generated so that the RCM would be problematic. In an appendix, we show that the most common feasible generalized least squares estimator of the RCM models is always inferior to the maximum likelihood estimator, and in smaller samples dramatically so.
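A stripped-down version of the pooled-versus-unpooled comparison can be run in a few lines: simulate TSCS data with unit-specific slopes, then compare the root mean squared error, over the unit slopes, of the fully pooled slope against unit-by-unit OLS. This is illustrative only; the article's experiments and estimators are considerably richer.

```python
import numpy as np

def simulate_tscs(n_units=20, T=10, slope_sd=0.5, noise_sd=1.0, rng=None):
    """Generate y_it = x_it * beta_i + e_it with beta_i ~ N(1, slope_sd^2)."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = rng.normal(size=(n_units, T))
    betas = 1.0 + slope_sd * rng.normal(size=n_units)
    y = X * betas[:, None] + noise_sd * rng.normal(size=(n_units, T))
    return X, y, betas

def rmse_pooled_vs_unpooled(reps=200, **kw):
    """Monte Carlo RMSE (for the unit slopes) of the fully pooled OLS slope
    versus unit-by-unit OLS (no-intercept models, for simplicity)."""
    err_pool, err_unit = [], []
    rng = np.random.default_rng(42)
    for _ in range(reps):
        X, y, betas = simulate_tscs(rng=rng, **kw)
        b_pool = (X * y).sum() / (X ** 2).sum()            # one common slope
        b_unit = (X * y).sum(axis=1) / (X ** 2).sum(axis=1)  # one per unit
        err_pool.append(((b_pool - betas) ** 2).mean())
        err_unit.append(((b_unit - betas) ** 2).mean())
    return np.sqrt(np.mean(err_pool)), np.sqrt(np.mean(err_unit))
```

Under heavy slope heterogeneity the unpooled estimator wins; when slopes are nearly homogeneous, full pooling wins, mirroring the trade-off the RCM is meant to navigate.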
A Statistical Model for Multiparty Electoral Data
We propose a comprehensive statistical model for analyzing multiparty, district-level elections. This
model, which provides a tool for comparative politics research analogous to that which regression
analysis provides in the American two-party context, can be used to explain or predict how
geographic distributions of electoral results depend upon economic conditions, neighborhood ethnic
compositions, campaign spending, and other features of the election campaign or aggregate areas. We also
provide new graphical representations for data exploration, model evaluation, and substantive interpretation.
We illustrate the use of this model by attempting to resolve a controversy over the size of and trend in the
electoral advantage of incumbency in Britain. Contrary to previous analyses, all based on measures now
known to be biased, we demonstrate that the advantage is small but meaningful, varies substantially across
the parties, and is not growing. Finally, we show how to estimate the party from which each party's advantage
is predominantly drawn.
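Models of this kind typically begin by mapping the bounded, sum-to-one district vote shares into an unbounded space via log-ratios, after which regression-style modeling applies. A minimal sketch of that transformation and its inverse (the function names are invented for illustration):

```python
import numpy as np

def to_logratio(shares):
    """Map district-level multiparty vote shares (rows summing to 1) into an
    unbounded space by taking log-ratios against the last party: the usual
    first step in regression models for compositional electoral data."""
    shares = np.asarray(shares, float)
    return np.log(shares[:, :-1] / shares[:, -1:])

def from_logratio(z):
    """Inverse map: recover vote shares from log-ratios (softmax with the
    base party's coordinate fixed at zero)."""
    expz = np.exp(np.column_stack([z, np.zeros(len(z))]))
    return expz / expz.sum(axis=1, keepdims=True)
```

Because the transformation is invertible, predictions made in log-ratio space translate directly back into vote shares that are positive and sum to one in every district.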
Comment on 'What To Do (and Not To Do) with Times-Series-Cross-Section Data'
Much as we would like to believe that the high citation count
for this article is due to the brilliance and clarity of our argument,
it is more likely that the count is due to our being in the
right place (that is, the right part of the discipline) at the right
time. In the 1960s and 1970s, serious quantitative analysis
was used primarily in the study of American politics. But
since the 1980s it has spread to the study of both comparative
politics and international relations. In comparative politics
we see in the 20 most cited Review articles Hibbs’s (1977)
and Cameron’s (1978) quantitative analyses of the political
economy of advanced industrial societies; in international
relations we see Maoz and Russett’s (1993) analysis of the
democratic peace; and these studies have been followed by
myriad others. Our article contributed to the methodology
for analyzing what has become the principal type of data used in the study of comparative politics; a related article
(Beck, Katz, and Tucker 1998), which has also had a good
citation history, dealt with analyzing this type of data with a
binary dependent variable, data heavily used in conflict studies
similar to Maoz and Russett's. Thus the citations
to our methodological discussions reflect the huge amount
of work now being done in the quantitative analysis of both
comparative politics and international relations.