Analysis of Covariance With Qualitative Data
In data with a group structure, incidental parameters are included to control for missing variables. Applications include longitudinal data and sibling data. In general, the joint maximum likelihood estimator of the structural parameters is not consistent as the number of groups increases, with a fixed number of observations per group. Instead a conditional likelihood function is maximized, conditional on sufficient statistics for the incidental parameters. In the logit case, a standard conditional logit program can be used. Another solution is a random effects model, in which the distribution of the incidental parameters may depend upon the exogenous variables.
Panel Data
We consider linear predictor definitions of noncausality or strict exogeneity and show that it is restrictive to assert that there exists a time-invariant latent variable c such that x is strictly exogenous conditional on c. A restriction of this sort is necessary to justify standard techniques for controlling for unobserved individual effects. There is a parallel analysis for multivariate probit models, but now the distributional assumption for the individual effects is restrictive. This restriction can be avoided by using a conditional likelihood analysis in a logit model. Some of these ideas are illustrated by estimating union wage effects for a sample of Young Men in the National Longitudinal Survey. The results indicate that the lags and leads could have been generated just by an unobserved individual effect, which gives some support for analysis of covariance-type estimates. These estimates indicate a substantial omitted variable bias. We also present estimates of a model of female labor force participation, focusing on the relationship between participation and fertility. Unlike the wage example, there is evidence against conditional strict exogeneity; if we ignore this evidence, the probit and logit approaches give conflicting results.
Arbitrage, Factor Structure, and Mean-Variance Analysis on Large Asset Markets
We examine the implications of arbitrage in a market with many assets. The absence of arbitrage opportunities implies that the linear functionals that give the mean and cost of a portfolio are continuous; hence there exist unique portfolios that represent these functionals. These portfolios span the mean-variance efficient set. We resolve the question of when a market with many assets permits so much diversification that risk-free investment opportunities are available. Ross [12, 14] showed that if there is a factor structure, then the mean returns are approximately linear functions of factor loadings. We define an approximate factor structure and show that this weaker restriction is sufficient for Ross' result. If the covariance matrix of the asset returns has only K unbounded eigenvalues, then there is an approximate factor structure and it is unique. The corresponding K eigenvectors converge and play the role of factor loadings. Hence only a principal component analysis is needed in empirical work.
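The eigenvalue characterization can be illustrated numerically: in a one-factor model the covariance matrix has a single eigenvalue that grows with the number of assets, and the corresponding principal component recovers the factor loadings up to scale and sign. A sketch under assumed parameter values (K = 1, normal factor and idiosyncratic shocks; none of the specifics are from the paper):

```python
import numpy as np

# Simulate returns with a one-factor structure: R_t = f_t * loadings + eps_t.
rng = np.random.default_rng(3)
n_assets, n_obs = 100, 5000
loadings = rng.normal(1.0, 0.3, size=n_assets)
f = rng.standard_normal(n_obs)                       # common factor
eps = 0.5 * rng.standard_normal((n_obs, n_assets))   # idiosyncratic risk
R = np.outer(f, loadings) + eps                      # asset returns

cov = np.cov(R, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)               # ascending order
# One eigenvalue is of order n_assets; the rest stay bounded.
ratio = eigvals[-1] / eigvals[-2]
pc1 = eigvecs[:, -1] * np.sqrt(eigvals[-1])          # estimated loadings
if pc1 @ loadings < 0:                               # fix the sign
    pc1 = -pc1
corr = np.corrcoef(pc1, loadings)[0, 1]              # close to 1
```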
Multivariate Regression Models for Panel Data
Under stationarity, the heterogeneous stochastic processes are the non-ergodic ones. We show that if a distributed lag is of finite order, then its coefficients are unconditional means of the underlying random coefficients. This result is applied to linear transformations of the process. The estimation framework is a multivariate wide-sense regression function. The identification analysis requires certain restrictions on the coefficients. The actual regression function is nonlinear, and so we provide a theory of inference for linear approximations. It rests on obtaining the asymptotic distribution of functions of sample moments. Restrictions are imposed by using a minimum distance estimator; it is generally more efficient than the conventional estimators.
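A minimum distance estimator of the kind mentioned above minimizes a weighted quadratic form in the gap between unrestricted estimates and the restricted parameterization; with linear restrictions it has a closed form. A small numerical sketch (the two-estimate example and all values are illustrative assumptions):

```python
import numpy as np

# Optimal minimum distance: given unrestricted estimates pi_hat with
# asymptotic covariance V, and a linear restriction pi = G @ theta, choose
# theta to minimize (pi_hat - G theta)' V^{-1} (pi_hat - G theta).
def minimum_distance(pi_hat, V, G):
    W = np.linalg.inv(V)                       # optimal weight matrix
    A = G.T @ W @ G
    theta = np.linalg.solve(A, G.T @ W @ pi_hat)
    avar = np.linalg.inv(A)                    # asymptotic covariance of theta
    return theta, avar

# Example: two noisy estimates of a common coefficient; the second is more
# precise, so it receives more weight.
pi_hat = np.array([1.2, 0.9])
V = np.diag([0.04, 0.01])
G = np.array([[1.0], [1.0]])
theta_hat, avar = minimum_distance(pi_hat, V, G)
# theta_hat = (1.2/0.04 + 0.9/0.01) / (1/0.04 + 1/0.01) = 0.96
```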
Using a Quadtree Algorithm To Assess Line of Sight
A matched pair of computer algorithms determines whether line of sight (LOS) is obstructed by terrain. These algorithms were originally designed for use in conjunction with combat-simulation software in military training exercises, but could also be used for such commercial purposes as evaluating lines of sight for antennas or determining what can be seen from a "room with a view." The quadtree preparation algorithm operates on an array of digital elevation data and only needs to be run once for a terrain region, which can be quite large. Relatively little computation time is needed, as each elevation value is considered only one and one-third times. The LOS assessment algorithm uses that quadtree to answer LOS queries. To determine whether LOS is obstructed, a piecewise-planar (or higher-order) terrain skin is computationally draped over the digital elevation data. Adjustments are made to compensate for curvature of the Earth and for refraction of the LOS by the atmosphere. Average computing time appears to be proportional to the number of queries times the logarithm of the number of elevation data points. Accuracy is as high as is possible for the available elevation data, and symmetric results are assured. In the simulation, the LOS query program runs as a separate process, thereby making more random-access memory available for other computations.
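The pruning idea behind the quadtree can be sketched in one dimension: store the maximum elevation of each interval once, then answer LOS queries by skipping any interval whose maximum lies entirely below the sight line. This is a simplified illustration only (1-D segment tree, flat Earth, no refraction, elevation profile already extracted along the sight line), not the published algorithm:

```python
# Segment tree over an elevation profile; each node stores the max
# elevation of its interval [lo, hi).
def build_max_tree(elev):
    def build(lo, hi):
        if hi - lo == 1:
            return (lo, hi, elev[lo], None, None)
        mid = (lo + hi) // 2
        left, right = build(lo, mid), build(mid, hi)
        return (lo, hi, max(left[2], right[2]), left, right)
    return build(0, len(elev))

def los_clear(tree, h0, h1, n, elev):
    """True if the line from (0, h0) to (n-1, h1) clears the terrain."""
    def sight(i):                     # height of the sight line at sample i
        return h0 + (h1 - h0) * i / (n - 1)
    def visit(node):
        lo, hi, mx, left, right = node
        # Prune: the sight line is linear, so its minimum over [lo, hi) is at
        # an endpoint; if that minimum exceeds the interval max, skip it all.
        if min(sight(lo), sight(hi - 1)) > mx:
            return True
        if left is None:              # leaf: compare directly
            return sight(lo) > elev[lo]
        return visit(left) and visit(right)
    return visit(tree)

elev = [0, 2, 5, 3, 1, 4, 2, 0]
tree = build_max_tree(elev)
clear_high = los_clear(tree, 6, 6, len(elev), elev)   # above every ridge
clear_low = los_clear(tree, 1, 1, len(elev), elev)    # blocked by the ridge
```

The query cost is logarithmic when pruning succeeds near the root, which matches the reported queries-times-log-of-data-points average behavior.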
Hierarchical Bayes Models with Many Instrumental Variables
In this paper, we explore Bayesian inference in models with many instrumental variables that are potentially weakly correlated with the endogenous regressor. The prior distribution has a hierarchical (nested) structure. We apply the methods to the Angrist-Krueger (AK, 1991) analysis of returns to schooling using instrumental variables formed by interacting quarter of birth with state/year dummy variables. Bound, Jaeger, and Baker (1995) show that randomly generated instrumental variables, designed to match the AK data set, give two-stage least squares results that look similar to the results based on the actual instrumental variables. Using a hierarchical model with the AK data, we find a posterior distribution for the parameter of interest that is tight and plausible. Using data with randomly generated instruments, the posterior distribution is diffuse. Most of the information in the AK data can in fact be extracted with quarter of birth as the single instrumental variable. Using artificial data patterned on the AK data, we find that if all the information had been in the interactions between quarter of birth and state/year dummies, then the hierarchical model would still have led to precise inferences, whereas the single instrument model would have suggested that there was no information in the data. We conclude that hierarchical modeling is a conceptually straightforward way of efficiently combining many weak instrumental variables.
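The core mechanism, shrinking many noisy first-stage coefficients toward a common prior before forming the instrument, can be sketched with a simple normal-normal posterior mean in place of the paper's full hierarchical sampler. Everything below (the simulation design, the method-of-moments prior variance, the variable names) is an illustrative assumption:

```python
import numpy as np

# Simulate many weak instruments and an endogenous regressor.
rng = np.random.default_rng(1)
n, k = 4000, 50
Z = rng.standard_normal((n, k))                  # many weak instruments
pi_true = rng.normal(0.0, 0.05, size=k)          # small first-stage effects
u = rng.standard_normal(n)                       # endogenous disturbance
x = Z @ pi_true + u                              # endogenous regressor
y = 1.0 * x + 0.5 * u + rng.standard_normal(n)   # structural effect = 1.0

# First-stage OLS, then posterior-mean shrinkage: with a N(0, tau2) prior and
# sampling variance se2, each estimate is scaled by tau2 / (tau2 + se2).
pi_ols, *_ = np.linalg.lstsq(Z, x, rcond=None)
se2 = np.var(x - Z @ pi_ols) / n                 # approx., since Z'Z ~ n * I
tau2 = max(np.var(pi_ols) - se2, 1e-8)           # method-of-moments estimate
pi_shrunk = pi_ols * tau2 / (tau2 + se2)

x_hat = Z @ pi_shrunk                            # shrunken fitted instrument
beta_iv = (x_hat @ y) / (x_hat @ x)              # IV estimate of the effect
```

With weak individual instruments the shrinkage factor falls well below one, damping the noise that drives the Bound-Jaeger-Baker two-stage least squares pathology.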
Nonparametric Applications of Bayesian Inference
The paper evaluates the usefulness of a nonparametric approach to Bayesian inference by presenting two applications. The approach is due to Ferguson (1973, 1974) and Rubin (1981). Our first application considers an educational choice problem. We focus on obtaining a predictive distribution for earnings corresponding to various levels of schooling. This predictive distribution incorporates the parameter uncertainty, so that it is relevant for decision making under uncertainty in the expected utility framework of microeconomics. The second application is to quantile regression. Our point here is to examine the potential of the nonparametric framework to provide inferences without making asymptotic approximations. Unlike in the first application, the standard asymptotic normal approximation turns out not to be a good guide. We also consider a comparison with a bootstrap approach.
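The Rubin (1981) device cited above, the Bayesian bootstrap, draws Dirichlet(1, ..., 1) weights over the observed data points and recomputes the functional of interest under each weighting, yielding a posterior without parametric assumptions. A minimal sketch for a quantile (function name, sample size, and the normal test data are assumptions for illustration):

```python
import numpy as np

# Bayesian bootstrap posterior for the q-th quantile: each draw reweights
# the data with flat Dirichlet weights and reads off the weighted quantile.
def bayesian_bootstrap_quantile(data, q, n_draws=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    order = np.argsort(data)
    x = np.asarray(data)[order]                # sorted sample
    draws = np.empty(n_draws)
    for b in range(n_draws):
        w = rng.dirichlet(np.ones(n))[order]   # one posterior weighting
        cw = np.cumsum(w)
        draws[b] = x[np.searchsorted(cw, q)]   # weighted q-th quantile
    return draws

rng = np.random.default_rng(42)
data = rng.normal(0.0, 1.0, size=500)
post = bayesian_bootstrap_quantile(data, 0.5)  # posterior for the median
```

Each draw touches only the observed points, so the posterior is supported on the data, which is what makes comparison with the frequentist bootstrap natural.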
Nitric oxide treatments as adjuncts to reperfusion in acute myocardial infarction: a systematic review of experimental and clinical studies
Unmodified reperfusion therapy for acute myocardial infarction (AMI) is associated with irreversible myocardial injury beyond that sustained during ischemia. Studies in experimental models of ischemia/reperfusion and in humans undergoing reperfusion therapy for AMI have examined potential beneficial effects of nitric oxide (NO) supplemented at the time of reperfusion. Using a rigorous systematic search approach, we have identified and critically evaluated all the relevant experimental and clinical literature to assess whether exogenous NO given at reperfusion can limit infarct size. An inclusive search strategy was undertaken to identify all in vivo experimental animal and clinical human studies published in the period 1990–2014 where NO gas, nitrite, nitrate or NO donors were given to ameliorate reperfusion injury. Articles were screened at title and subsequently at abstract level, followed by objective full text analysis using a critical appraisal tool. In twenty-one animal studies, all NO treatments except nitroglycerin afforded protection against measures of reperfusion injury, including infarct size, creatinine kinase release, neutrophil accumulation and cardiac dysfunction. In three human AMI RCTs, there was no consistent evidence of infarct limitation associated with NO treatment as an adjunct to reperfusion. Despite experimental evidence that most NO treatments can reduce infarct size when given as adjuncts to reperfusion, the value of these interventions in clinical AMI is unproven. Our study raises issues for the design of further clinical studies and emphasises the need for improved design of animal studies to reflect more accurately the comorbidities and other confounding factors seen in clinical AMI.