Confidence Statements for Efficiency Estimates from Stochastic Frontier Models
This paper is an empirical study of the uncertainty associated with estimates from stochastic frontier models. We show how to construct confidence intervals for estimates of technical efficiency levels under different sets of assumptions ranging from the very strong to the relatively weak. We demonstrate empirically how the degree of uncertainty associated with these estimates relates to the strength of the assumptions made and to various features of the data.
Confidence intervals ; stochastic frontier models ; efficiency measurement
Ejido reform and the NAFTA
North American Free Trade Agreement ; Mexico ; Agriculture - Mexico
Sampling Errors and Confidence Intervals for Order Statistics: Implementing the Family Support Act
The Family Support Act allows states to reimburse child care costs up to the 75th percentile of local market price for child care. States must carry out surveys to estimate these 75th percentiles. This estimation problem raises two major statistical issues: (1) picking a sample design that will allow one to estimate the percentiles cheaply, efficiently and equitably; and (2) assessing the sampling variability of the estimates obtained. For Massachusetts, we developed a sampling design that equalized the standard errors of the estimated percentiles across 65 distinct local markets. This design was chosen because state administrators felt public day care providers and child advocates would find it equitable, thus limiting costly appeals. Estimation of standard errors for the sample 75th percentiles requires estimation of the density of the population at the 75th percentile. We implement and compare a number of parametric and nonparametric methods of density estimation. A kernel estimator provides the most reasonable estimates. On the basis of the mean integrated squared error criterion we selected the Epanechnikov kernel and the Sheather-Jones automatic bandwidth selection procedure. Because some of our sample sizes were too small to rely on asymptotics, we also constructed nonparametric confidence intervals using the hypergeometric distribution. For most of our samples, these confidence intervals were similar to those based on the asymptotic standard errors. Substantively we find wide variation in the price of child care, depending on the child's age, type of care and geographic location. For full-time care, the 75th percentiles ranged from 85 per week for family day care in western Massachusetts.
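The key computation this abstract describes is the asymptotic standard error of a sample percentile, which depends on the population density evaluated at that percentile. A minimal Python sketch of the idea on synthetic data, using the Epanechnikov kernel as in the paper but with Silverman's rule of thumb standing in for the Sheather-Jones bandwidth selector (all numbers here are illustrative, not the paper's data):

```python
import numpy as np

# Synthetic weekly child-care prices (illustrative only).
rng = np.random.default_rng(0)
prices = rng.lognormal(mean=4.5, sigma=0.4, size=400)

p = 0.75
n = len(prices)
q_hat = np.quantile(prices, p)  # sample 75th percentile

# Epanechnikov kernel density estimate at the sample percentile.
# Bandwidth: Silverman's rule of thumb, a stand-in for the
# Sheather-Jones selector used in the paper.
h = 1.06 * prices.std(ddof=1) * n ** (-1 / 5)
u = (q_hat - prices) / h
f_hat = np.mean(np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)) / h

# Asymptotic standard error of the sample p-quantile:
#   SE = sqrt(p * (1 - p) / n) / f(q_p)
se = np.sqrt(p * (1 - p) / n) / f_hat
ci = (q_hat - 1.96 * se, q_hat + 1.96 * se)
```

The density appears in the denominator, so a poor density estimate directly inflates or deflates the reported standard error, which is why the paper compares several density estimators before settling on the kernel approach.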
Imperfect information and the Community Reinvestment Act
Community Reinvestment Act of 1977
Multiple Comparisons with the Best, with Economic Applications
In this paper we discuss a statistical method called multiple comparisons with the best, or MCB. Suppose that we have N populations, and population i has parameter value θi. Let θ* = max_i θi, the parameter value for the ‘best’ population. Then MCB constructs joint confidence intervals for the differences θ* − θi. It is not assumed that it is known which population is best, and part of the problem is to say whether any population is so identified, at the given confidence level. This paper is meant to introduce MCB to economists. We discuss possible uses of MCB in economics. The application that we treat in most detail is the construction of confidence intervals for inefficiency measures from stochastic frontier models with panel data. We also consider an application to the analysis of labour market wage gaps.
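To convey the flavor of MCB intervals, here is a hedged Python sketch on toy data. It compares each population mean to the best of the others; a Bonferroni critical value stands in for the exact MCB constant (Hsu's method uses a multivariate-t quantile), and the setup uses raw normal draws rather than frontier-model efficiency estimates:

```python
import numpy as np
from scipy import stats

# Toy setup: N populations with normal draws (illustrative only).
rng = np.random.default_rng(1)
means_true = np.array([1.0, 1.2, 0.9, 1.1])
N, n = len(means_true), 50
data = rng.normal(means_true[:, None], 0.3, size=(N, n))

xbar = data.mean(axis=1)
pooled_var = data.var(axis=1, ddof=1).mean()
se_diff = np.sqrt(2 * pooled_var / n)  # SE of a difference of two means

# Simultaneous allowance: Bonferroni stand-in for the exact MCB constant.
alpha = 0.05
d = stats.t.ppf(1 - alpha / (2 * (N - 1)), df=N * (n - 1)) * se_diff

# Joint intervals for theta_i - max_{j != i} theta_j: a population whose
# upper bound is below zero can be ruled out as "best" at this level.
lower, upper = np.empty(N), np.empty(N)
for i in range(N):
    rival = np.delete(xbar, i).max()
    lower[i] = xbar[i] - rival - d
    upper[i] = xbar[i] - rival + d
    print(f"population {i}: [{lower[i]:.3f}, {upper[i]:.3f}]")
```

By construction the sample-best population always has a positive upper bound, so the procedure never rules out every candidate; what it can do is narrow the set of populations that remain plausibly best.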
A Symbolic Model of Cognitive Transition
Study of cognitive development on the balance scale task has inspired a wide range of human and computational work. The task requires that children predict the outcome of placing a discrete number of weights at various distances on either side of a fulcrum. The current project examined the adequacy of the symbolic learning algorithm C4.5 as a model of cognitive transition on this task. Based on a set of novel assumptions, our C4.5 simulations were able to exhibit regularities found in the human data including orderly stage progression, U-shaped development, and the torque difference effect. Unlike previous successful models of the task, the current model uses a single free parameter, is not restricted in the size of the balance scale that it can accommodate, and does not require the assumption of a highly structured output representation or a training environment biased towards weight or distance information. The model makes a number of predictions differing from those of previous computational efforts.
Robust long-distance entanglement and a loophole-free Bell test with ions and photons
Two trapped ions that are kilometers apart can be entangled by the joint detection of two photons, each coming from one of the ions, in a basis of entangled states. Such a detection is possible with linear optical elements. The use of two-photon interference allows entanglement distribution without interferometric sensitivity to the path length of the photons. The present method of creating entangled ions also opens up the possibility of a loophole-free test of Bell's inequalities.