
    Targeted Undersmoothing

    This paper proposes a post-model-selection inference procedure, called targeted undersmoothing, designed to construct uniformly valid confidence sets for a broad class of functionals of sparse high-dimensional statistical models. These include dense functionals, which may potentially depend on all elements of an unknown high-dimensional parameter. The proposed confidence sets are based on an initially selected model and two additionally selected models, an upper model and a lower model, which enlarge the initially selected model. We illustrate the application of the procedure in two empirical examples. The first considers estimation of heterogeneous treatment effects using data from the Job Training Partnership Act of 1982, and the second estimates the profitability of a mailing strategy based on estimated heterogeneous treatment effects in a direct mail marketing campaign. We also provide evidence on the finite-sample performance of the proposed targeted undersmoothing procedure through a series of simulation experiments.
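    The core idea of an initially selected sparse model that is then enlarged can be sketched loosely. The selector below (hard thresholding of marginal correlations) and the enlargement rule (relaxing the threshold) are illustrative assumptions, not the paper's actual procedure; the lower model is here identified with the initial selection for simplicity.

```python
import numpy as np

def threshold_select(X, y, lam):
    """Select covariates whose absolute marginal correlation with y
    exceeds lam (a stand-in for the paper's initial sparse selector)."""
    score = np.abs(X.T @ y) / len(y)
    return set(np.flatnonzero(score > lam))

def enlarged_models(X, y, lam, slack=0.5):
    """Hypothetical illustration of 'enlarging' an initially selected
    model: relaxing the threshold yields an upper model that contains
    the initial model by construction."""
    initial = threshold_select(X, y, lam)
    upper = threshold_select(X, y, lam * slack)  # weaker cutoff => superset
    return initial, upper
```

The nesting `initial ⊆ upper` holds by construction, mirroring the role the upper model plays in widening the confidence set beyond the initially selected model.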

    The Impact of a Hausman Pretest on the Size of Hypothesis Tests

    This paper investigates the size properties of a two-stage test in the linear instrumental variables model when in the first stage a Hausman (1978) specification test is used as a pretest of exogeneity of a regressor. In the second stage, a simple hypothesis about a component of the structural parameter vector is tested, using a t-statistic that is based on either the ordinary least squares (OLS) or the two-stage least squares (2SLS) estimator, depending on the outcome of the Hausman pretest. The asymptotic size of the two-stage test is derived in a model where weak instruments are ruled out by imposing a lower bound on the strength of the instruments. The asymptotic size is a function of this lower bound and the pretest and second-stage nominal sizes. The asymptotic size increases as the lower bound and the pretest size decrease. It equals 1 for empirically relevant choices of the parameter space. It is also shown that, asymptotically, the conditional size of the second-stage test, conditional on the pretest not rejecting the null of regressor exogeneity, is 1 even for a large lower bound on the strength of the instruments. The size distortion is caused by a discontinuity of the asymptotic distribution of the test statistic in the correlation parameter between the structural and reduced-form error terms. The Hausman pretest does not have sufficient power against correlations that are local to zero, while the OLS t-statistic takes on large values for such nonzero correlations. Instead of using the two-stage procedure, the recommendation then is to use a t-statistic based on the 2SLS estimator or, if weak instruments are a concern, the conditional likelihood ratio test by Moreira (2003).
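    The two-stage procedure the abstract analyzes (Hausman pretest, then an OLS- or 2SLS-based t-test) can be sketched as follows. The single-regressor, single-instrument setup and the fixed chi-squared critical value are simplifying assumptions for illustration, not the paper's asymptotic framework.

```python
import numpy as np

def two_stage_test(y, x, z, beta0, crit=3.841):
    """Sketch of the two-stage test: Hausman pretest of exogeneity at
    nominal 5% (chi^2(1) critical value 3.841), then a t-statistic for
    H0: beta = beta0 based on OLS if the pretest does not reject, and
    on 2SLS if it does. y, x, z are 1-D arrays (one regressor, one
    instrument, no intercept)."""
    n = len(y)
    # OLS estimate and variance
    b_ols = (x @ y) / (x @ x)
    u = y - x * b_ols
    var_ols = (u @ u) / (n - 1) / (x @ x)
    # 2SLS estimate and variance (first stage: x on z)
    pi = (z @ x) / (z @ z)
    xhat = z * pi
    b_2sls = (xhat @ y) / (xhat @ x)
    u2 = y - x * b_2sls
    var_2sls = (u2 @ u2) / (n - 1) / (xhat @ xhat)
    # Hausman statistic, chi^2(1) under exogeneity
    H = (b_2sls - b_ols) ** 2 / max(var_2sls - var_ols, 1e-12)
    if H > crit:
        return (b_2sls - beta0) / np.sqrt(var_2sls), "2SLS"
    return (b_ols - beta0) / np.sqrt(var_ols), "OLS"
```

With strong endogeneity the pretest rejects and the 2SLS branch is used; the paper's point is that for correlations local to zero the pretest lacks power, so the distorted OLS branch is selected too often.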

    Graph matching with a dual-step EM algorithm

    This paper describes a new approach to matching geometric structure in 2D point-sets. The novel feature is to unify the tasks of estimating transformation geometry and identifying point-correspondence matches. Unification is realized by constructing a mixture model over the bipartite graph representing the correspondence match and by effecting optimization using the EM algorithm. According to our EM framework, the probabilities of structural correspondence gate contributions to the expected likelihood function used to estimate maximum likelihood transformation parameters. These gating probabilities measure the consistency of the matched neighborhoods in the graphs. The recovery of transformational geometry and hard correspondence matches are interleaved and are realized by applying coupled update operations to the expected log-likelihood function. In this way, the two processes bootstrap one another. This provides a means of rejecting structural outliers. We evaluate the technique on two real-world problems. The first involves the matching of different perspective views of 3.5-inch floppy discs. The second example is furnished by the matching of a digital map against aerial images that are subject to severe barrel distortion due to a line-scan sampling process. We complement these experiments with a sensitivity study based on synthetic data.
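    A stripped-down version of the alternating scheme might look like the sketch below: soft correspondence probabilities gate the contributions to the transformation estimate, and the two updates are interleaved. This toy restricts the transformation to a 2D translation and omits the paper's structural (neighborhood-consistency) gating entirely; it is an assumption-laden illustration, not the authors' algorithm.

```python
import numpy as np

def em_translation_match(A, B, sigma=1.0, iters=50):
    """Minimal EM sketch: estimate a 2D translation t aligning point
    set A (n, 2) to point set B (m, 2) while recovering soft
    correspondences P (n, m)."""
    t = np.zeros(2)
    for _ in range(iters):
        # E-step: correspondence probabilities under the current transform
        diff = A[:, None, :] + t - B[None, :, :]          # (n, m, 2)
        logw = -np.sum(diff ** 2, axis=2) / (2 * sigma ** 2)
        P = np.exp(logw - logw.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)                 # rows sum to 1
        # M-step: translation maximizing the expected log-likelihood,
        # i.e. the P-weighted mean displacement
        t = np.sum(P[:, :, None] * (B[None, :, :] - A[:, None, :]),
                   axis=(0, 1)) / P.sum()
    return t, P
```

The E-step here plays the role of the gating probabilities in the abstract, and the M-step the transformation update; in the paper these are additionally coupled to hard, structurally gated correspondence assignments.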

    Maximum Likelihood Estimation and Uniform Inference with Sporadic Identification Failure

    This paper analyzes the properties of a class of estimators, tests, and confidence sets (CS's) when the parameters are not identified in parts of the parameter space. Specifically, we consider estimator criterion functions that are sample averages and are smooth functions of a parameter theta. This includes log likelihood, quasi-log likelihood, and least squares criterion functions. We determine the asymptotic distributions of estimators under lack of identification and under weak, semi-strong, and strong identification. We determine the asymptotic size (in a uniform sense) of standard t and quasi-likelihood ratio (QLR) tests and CS's. We provide methods of constructing QLR tests and CS's that are robust to the strength of identification. The results are applied to two examples: a nonlinear binary choice model and the smooth transition threshold autoregressive (STAR) model.

    Keywords: Asymptotic size, binary choice, confidence set, estimator, identification, likelihood, nonlinear models, test, smooth transition threshold autoregression, weak identification.

    On attitude polarization under Bayesian learning with non-additive beliefs

    Ample psychological evidence suggests that people's learning behavior is often prone to a "myside bias" or "irrational belief persistence" in contrast to learning behavior exclusively based on objective data. In the context of Bayesian learning such a bias may result in diverging posterior beliefs and attitude polarization even if agents receive identical information. Such patterns cannot be explained by the standard model of rational Bayesian learning that implies convergent beliefs. As our key contribution, we therefore develop formal models of Bayesian learning with psychological bias as alternatives to rational Bayesian learning. We derive conditions under which beliefs may diverge in the learning process despite the fact that all agents observe the same - arbitrarily large - sample, which is drawn from an "objective" i.i.d. process. Furthermore, one of our learning scenarios results in attitude polarization even in the case of common priors. Key to our approach is the assumption of ambiguous beliefs that are formalized as non-additive probability measures arising in Choquet expected utility theory. As a specific feature of our approach, our models of Bayesian learning with psychological bias reduce to rational Bayesian learning in the absence of ambiguity.

    Keywords: Non-additive probability measures, Choquet expected utility theory, Bayesian learning, bounded rationality.
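    The Choquet expectation at the heart of non-additive beliefs can be computed directly on a finite state space. The sketch below implements the standard Choquet integral and pairs it with a neo-additive capacity, a textbook example form chosen here for illustration; the paper's specific capacities and updating rules are not reproduced.

```python
import numpy as np

def choquet_expectation(values, capacity):
    """Choquet integral of a finite act: sort outcomes in decreasing
    order and weight each by the capacity increment of the upper-level
    set. values: array of payoffs f(w_i); capacity: maps a frozenset
    of state indices to its capacity value."""
    order = np.argsort(values)[::-1]        # states by payoff, descending
    total, prev = 0.0, 0.0
    upper = set()
    for i in order:
        upper.add(i)
        nu = capacity(frozenset(upper))
        total += values[i] * (nu - prev)
        prev = nu
    return total

def neo_additive(pi, delta, alpha):
    """Neo-additive capacity: a (1 - delta) weight on an additive prior
    pi plus a delta weight split between optimism (alpha) and pessimism,
    with nu(empty) = 0 and nu(full set) = 1."""
    n = len(pi)
    def nu(A):
        if len(A) == 0:
            return 0.0
        if len(A) == n:
            return 1.0
        return (1 - delta) * sum(pi[i] for i in A) + delta * alpha
    return nu
```

For a neo-additive capacity the integral collapses to (1 - delta) * E_pi[f] + delta * (alpha * max f + (1 - alpha) * min f), which makes the role of ambiguity (delta) and attitude (alpha) explicit; when delta = 0 it reduces to the ordinary additive expectation, mirroring the abstract's remark that the biased models reduce to rational Bayesian learning absent ambiguity.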