Election Forensics and the 2004 Venezuelan Presidential Recall Referendum as a Case Study
A referendum to recall President Hugo Chávez was held in Venezuela in
August of 2004. In the referendum, voters were to vote YES if they wished to
recall the President and NO if they wanted him to continue in office. The
official results were 59% NO and 41% YES. Even though the election was
monitored by various international groups including the Organization of
American States and the Carter Center (both of which declared that the
referendum had been conducted in a free and transparent manner), the outcome of
the election was questioned by other groups both inside and outside of
Venezuela. The collection of manuscripts that comprise this issue of
Statistical Science discusses the general topic of election forensics but also
focuses on different statistical approaches to explore, post-election, whether
irregularities in the voting, vote transmission or vote counting processes
could be detected in the 2004 presidential recall referendum. In this
introduction to the Venezuela issue, we discuss the more recent literature on
post-election auditing, describe the institutional context for the 2004
Venezuelan referendum, and briefly introduce each of the five contributions.
Comment: Published in Statistical Science (http://www.imstat.org/sts/) at http://dx.doi.org/10.1214/11-STS379 by the Institute of Mathematical Statistics (http://www.imstat.org)
Automatic Matching of Bullet Land Impressions
In 2009, the National Academy of Sciences published a report questioning the
scientific validity of many forensic methods including firearm examination.
Firearm examination is a forensic tool used to help the court determine whether
two bullets were fired from the same gun barrel. During the firing process,
rifling, manufacturing defects, and impurities in the barrel create striation
marks on the bullet. Identifying these striation markings in an attempt to
match two bullets is one of the primary goals of firearm examination. We
propose an automated framework for the analysis of the 3D surface measurements
of bullet land impressions which transcribes the individual characteristics
into a set of features that quantify their similarities. This makes
identification of matches easier and allows for a quantification of both
matches and matchability of barrels. The automatic matching routine we propose
manages to (a) correctly identify land impressions (the surface between two
bullet groove impressions) with too much damage to be suitable for comparison,
and (b) correctly identify all 10,384 land-to-land matches of the James Hamby
study.
Comment: 27 pages, 20 figures
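As a rough illustration of the matching idea (not the authors' actual pipeline, which operates on 3D surface scans and a richer feature set), the sketch below compares two hypothetical 1-D striation profiles by sliding one against the other and recording the best normalized cross-correlation as a similarity feature; all data here are synthetic.

```python
# Illustrative sketch only: score the similarity of two 1-D striation profiles
# by the maximum normalized cross-correlation over a range of integer lags.
import numpy as np

def profile_similarity(p1, p2, max_lag=200):
    """Return the maximum normalized cross-correlation over integer lags."""
    p1 = (p1 - np.mean(p1)) / np.std(p1)
    p2 = (p2 - np.mean(p2)) / np.std(p2)
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = p1[lag:], p2[:len(p2) - lag]
        else:
            a, b = p1[:lag], p2[-lag:]
        n = min(len(a), len(b))
        if n > 50:  # require enough overlap for a meaningful correlation
            best = max(best, np.corrcoef(a[:n], b[:n])[0, 1])
    return best

# Synthetic example: two "lands" sharing a common striation signal plus noise
rng = np.random.default_rng(0)
signal = rng.normal(size=1500)
land_a = signal + 0.3 * rng.normal(size=1500)
land_b = np.roll(signal, 40) + 0.3 * rng.normal(size=1500)
print(profile_similarity(land_a, land_b))  # close to 1 for a "match"
```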
Assessing the Adequacy of Diets: A Brief Commentary
Estimating the proportion of the population at risk of a dietary deficiency has long been a problem and different approaches have been advocated. The author presents a summary of the features of a method developed to estimate usual nutrient intake distributions. She also discusses the type of data needed to make inferences about the proportion of the population at risk of deficiencies, and argues that, under certain assumptions, it may be possible to address the problem with data already available
Bayesian prediction and its application to the genetic evaluation of livestock
Let $y$ represent an $n \times 1$ observable random vector that follows the mixed linear model $y = X\beta + Zs + e$. Here, $X$ and $Z$ are specified matrices, $\beta$ is a column vector of unknown fixed parameters, and $s$ and $e$ are statistically independent multivariate normal random column vectors with $E(e) = 0$, $E(s) = 0$, $\mathrm{var}(e) = \sigma_e^2 I$, and $\mathrm{var}(s) = \sigma_e^2 \gamma A$, where $A$ is a known, positive definite matrix. Further, $\sigma_e^2$ and $\gamma$ are unknown, scalar-valued parameters such that $\sigma_e^2 > 0$ and $0 \le \gamma \le u$, where $u$ is a specified constant.
The problem considered is that of Bayesian inference for a predictable random variable $w = \lambda'\beta + \phi's$, for the case where the prior distribution of $\beta$ is non-informative. It is shown that the problem of computing the posterior mean, variance, and density of $w$ can be reduced to that of numerically evaluating one-dimensional integrals, provided the distribution of $\sigma_e^2$ and $\gamma$ is of the general form $G_1(\gamma)\,(\sigma_e^2)^{G_2(\gamma)} \exp\{-(2\sigma_e^2)^{-1} G_3(\gamma)\}$. Here, $G_1(\gamma)$, $G_2(\gamma)$, and $G_3(\gamma)$ are functions of $\gamma$.
The mean of the posterior distribution of $w$ can be approximated by $\tilde{w} = \lambda'\tilde{\beta} + \phi'\tilde{s}$, where $\tilde{\beta}$ and $\tilde{s}$ are the mode of the joint posterior distribution of $\beta$ and $s$. If there is no upper bound on $\gamma$, the computation of $\tilde{w}$ does not require numerical integration. The problem of computing a $100(1-\alpha)\%$ highest posterior density (HPD) credible set for $w$ can be reduced to that of solving a constrained minimization problem. An algorithm is described for obtaining $100(1-\alpha)\%$ HPD credible sets.
The feasibility of using the Bayesian methodology was evaluated by applying it to predict the genetic merit of dairy bulls. A secondary objective in the analysis of these data was to compare the classical and Bayesian approaches. Classical and Bayesian predictions, along with prediction error variances and posterior variances, for the breeding values of 1,028 Holstein bulls were obtained for two traits: milk production and number of days open. For these data, the Bayesian and the classical predictions were very similar
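As a minimal numerical sketch of the approximation described above, the code below computes the joint posterior mode $(\tilde{\beta}, \tilde{s})$ from the mixed model equations, assuming $\gamma$ is known and the prior on $\beta$ is flat; the data, dimensions, and the vectors $\lambda$ and $\phi$ are invented for illustration.

```python
# Sketch, assuming gamma is known: solve the mixed model equations for the
# joint posterior mode (beta_tilde, s_tilde) and use it to approximate
# the posterior mean of w = lambda'beta + phi's.
import numpy as np

def joint_posterior_mode(y, X, Z, A, gamma):
    """Solve  [X'X  X'Z; Z'X  Z'Z + A^{-1}/gamma] [beta; s] = [X'y; Z'y]."""
    Ainv = np.linalg.inv(A)
    lhs = np.block([[X.T @ X, X.T @ Z],
                    [Z.T @ X, Z.T @ Z + Ainv / gamma]])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    p = X.shape[1]
    return sol[:p], sol[p:]

# Invented small example
rng = np.random.default_rng(1)
n, p, q = 30, 2, 5
X, Z = rng.normal(size=(n, p)), rng.normal(size=(n, q))
A, gamma = np.eye(q), 0.5
s = rng.multivariate_normal(np.zeros(q), gamma * A)
y = X @ np.array([1.0, -2.0]) + Z @ s + rng.normal(size=n)
beta_t, s_t = joint_posterior_mode(y, X, Z, A, gamma)
lam, phi = np.array([1.0, 0.0]), np.ones(q)
print("w_tilde =", lam @ beta_t + phi @ s_t)
```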
Assessing the Prevalence of Nutrient Inadequacy
The author discusses the problem of estimating the proportion of the population at risk of nutrient deficiency. While different analytical approaches to the problem have been proposed, it appears that, ideally, such an estimate would require knowledge about the joint distribution of usual intakes and nutrient requirements. The author proposes an alternative method in which it would suffice to know the mean of the requirement distribution (EAR) together with the distribution of usual intakes in the population. The paper outlines the assumptions that must hold in order for this method to be effective
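Under the stated assumptions, the alternative reduces to what is often called the EAR cut-point method: estimate the prevalence of inadequacy as the proportion of the usual-intake distribution that falls below the EAR. A minimal sketch with synthetic intakes and an invented EAR value:

```python
# Minimal sketch of the cut-point idea: prevalence of inadequacy is
# approximated by the share of usual intakes below the mean requirement (EAR).
# The intake distribution and EAR below are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
usual_intake = rng.lognormal(mean=np.log(12.0), sigma=0.35, size=5000)  # mg/day
ear = 10.0                                                              # mg/day

prevalence = np.mean(usual_intake < ear)
print(f"Estimated prevalence of inadequacy: {prevalence:.1%}")
```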
An Analysis of Grain Production Decline During the Early Transition in Ukraine: Bayesian Inference
The first years of reforms in the former Soviet Union resulted in a sharp decline in agricultural production. Several reasons for the fall have been advanced, including a drop in state deliveries of production inputs, labor and management migration from the large-scale collective system to the private sector, and the transition-related break in old production ties and networks. Little is known, however, about the relative contribution of all these factors to the decline in production efficiency. In this study, we quantify the contributions of weather variability, decline in input quantities, and changes in technical efficiency to the decline in Ukrainian grain production over 1989-1992. We model the stochastic production frontier using a three-level hierarchical model, and estimate its parameters from within a Bayesian framework. In the model, the time-varying technical efficiency depends on farm-specific factors. Non-informative or diffuse prior distributions are chosen where possible. We find that the decline in the use of production inputs accounts for over half of total output decline, while weather effects account for about 35% of the decline. The rest is attributable to a decline in the technical efficiency of collective farms during the transition years
Bayesian Estimation of Technical Efficiency of a Single Input
We propose estimation of a stochastic production frontier model within a Bayesian framework to obtain the posterior distribution of single-input-oriented technical efficiency at the firm level. The proposed method is applicable to the estimation of environmental efficiency of agricultural production when the technology interaction with the environment is modeled via public inputs such as soil quality and environmental conditions. All computations are carried out using Markov chain Monte Carlo methods. We illustrate the approach by applying it to production data from Ukrainian collective farms
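A compact sketch of the general approach is given below, using a simpler normal-exponential frontier and a random-walk Metropolis sampler rather than the paper's exact model, priors, and MCMC scheme; the data and tuning choices are invented for illustration.

```python
# Sketch (not the paper's model): Bayesian estimation of a stochastic frontier
# y_i = x_i'beta + v_i - u_i with noise v_i ~ N(0, sv^2) and inefficiency
# u_i ~ Exponential(rate lam), via random-walk Metropolis with flat priors.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic data: one input plus intercept
n = 200
x = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, sv_true, lam_true = np.array([2.0, 0.6]), 0.2, 4.0
u = rng.exponential(scale=1.0 / lam_true, size=n)          # inefficiency
y = x @ beta_true + rng.normal(scale=sv_true, size=n) - u

def loglik(theta):
    """Log-likelihood of the normal-exponential composed error v - u."""
    b, sv, lam = theta[:2], np.exp(theta[2]), np.exp(theta[3])
    eps = y - x @ b
    return np.sum(np.log(lam) + lam * eps + 0.5 * lam**2 * sv**2
                  + norm.logcdf(-eps / sv - lam * sv))

# Random-walk Metropolis over (beta, log sv, log lam), started at OLS
theta = np.concatenate([np.linalg.lstsq(x, y, rcond=None)[0],
                        [np.log(0.5), np.log(1.0)]])
cur, draws = loglik(theta), []
for it in range(20000):
    prop = theta + 0.05 * rng.normal(size=4)
    new = loglik(prop)
    if np.log(rng.uniform()) < new - cur:
        theta, cur = prop, new
    if it >= 10000:                      # keep post-burn-in draws
        draws.append(theta.copy())
draws = np.array(draws)
print("posterior means (b0, b1, log sv, log lam):", draws.mean(axis=0))
# Mean technical efficiency implied by lam: E[exp(-u)] = lam / (lam + 1)
lam_hat = np.exp(draws[:, 3].mean())
print("implied mean technical efficiency:", lam_hat / (lam_hat + 1.0))
```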
The modes of posterior distributions for mixed linear models
Mixed linear models, also known as two-level hierarchical models, are commonly used in many applications. In this paper, we consider the marginal distribution that arises within a Bayesian framework, when the components of variance are integrated out of the joint posterior distribution. We provide analytical tools for describing the surface of the distribution of interest. The main theorem and its proof show how to determine the number of local maxima, and their approximate location and relative size. This information can be used by practitioners to assess the performance of Laplace-type integral approximations, to compute possibly disconnected highest posterior density regions, and to custom-design numerical algorithms
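As a purely numerical illustration of how a practitioner might use this kind of information (it is not the paper's analytical machinery), the sketch below locates the local maxima of a possibly multimodal 1-D posterior evaluated on a grid and assembles a 95% HPD region that may consist of disconnected intervals; the bimodal density is synthetic.

```python
# Numerical illustration: find local maxima of a gridded 1-D density and
# build a 95% HPD region that may be a union of disconnected intervals.
import numpy as np
from scipy.stats import norm

grid = np.linspace(-6, 9, 4001)
dens = 0.6 * norm.pdf(grid, loc=0, scale=1.0) + 0.4 * norm.pdf(grid, loc=5, scale=0.8)

# Local maxima: interior grid points higher than both neighbours
is_max = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
print("modes near:", grid[1:-1][is_max])

# 95% HPD region: keep the highest-density cells until they cover 0.95 of the mass
dx = grid[1] - grid[0]
order = np.argsort(dens)[::-1]
mass = np.cumsum(dens[order]) * dx
keep = np.zeros_like(dens, dtype=bool)
keep[order[:np.searchsorted(mass, 0.95) + 1]] = True

# The HPD region may be disconnected; report the interval endpoints
edges = np.flatnonzero(np.diff(keep.astype(int)))
print("HPD interval endpoints near:", grid[edges])
```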
A Compositional Data Approach to the Prediction of Dry Milling Yields
The yield of products in the dry milling industry is largely determined by the physical properties of the corn kernel. The main objective of this paper is to investigate several statistical models of dry milling yield prediction based on physical characteristics of corn. Data consisting of one hundred corn samples representing a range of genetic traits and quality differences are used. For each corn sample, 16 physical and chemical properties plus six dry milling product yields were measured in a controlled laboratory environment.
For each corn sample, we consider a vector of dry milling product yields and a vector of physical corn characteristics. Several single product models are investigated, two of which implicitly take into account the simplex sample space of product yields. A multivariate model is considered that consists of mapping the sample space from a simplex to unrestricted Euclidean space. Comparisons are performed using a jack-knife-like approach
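One common way to carry out the simplex-to-Euclidean mapping mentioned above is the additive log-ratio (alr) transform; the paper's exact transform and models may differ. A minimal sketch with an invented yield composition:

```python
# Sketch of the additive log-ratio (alr) transform and its inverse, one common
# mapping between the simplex and unrestricted Euclidean space.
import numpy as np

def alr(y):
    """Map a composition (positive parts summing to 1) to R^(D-1)."""
    y = np.asarray(y, dtype=float)
    return np.log(y[:-1] / y[-1])

def alr_inv(z):
    """Map a point in R^(D-1) back to the simplex."""
    expz = np.append(np.exp(z), 1.0)
    return expz / expz.sum()

# Six product yields as proportions of the kernel (invented numbers)
yields = np.array([0.22, 0.18, 0.15, 0.20, 0.13, 0.12])
z = alr(yields)                          # model/predict in unrestricted space
print(np.allclose(alr_inv(z), yields))   # True: the mapping is invertible
```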
Index Insurance, Production Practices, and Probabilistic Climate Forecasts
The failure to develop commercially viable traditional crop insurance products, together with innovations in financial markets, has fed a renewed interest in the search for alternatives to help producers in developing countries manage their risk exposure. Salient among these is the proposal of several index insurance schemes against weather events. Among the basic tenets is that the presence of index insurance allows producers to intensify their operations and reduces the risk of default, and hence may induce creditors to offer loans at affordable rates. The two factors combined are touted as key to help producers in developing countries escape poverty traps. Improvements in seasonal climate forecasts create challenges for the design and effective functioning of insurance against climate risks. However, very little is known about potential synergies or conflicting impacts of these two institutions, and the interactions between them and input management decisions by producers. We find that insurance and forecasts may have synergistic or conflicting effects on input decisions. In the presence of (state contingent) actuarially fair insurance, producers may prefer the forecast information not to be available, especially if the management options available do not result in sufficient changes in profitability. Perhaps surprisingly, we find that forecast information may induce producers to increase the amount of insurance purchased.
Keywords: Climate forecast, Index insurance, Input Decisions, Risk Management, Weather risks, Risk and Uncertainty