Sharp IV bounds on average treatment effects under endogeneity and noncompliance
In the presence of an endogenous treatment and a valid instrument, causal effects are (nonparametrically) point identified only for the subpopulation of compliers, given that the treatment is monotone in the instrument. Further populations of likely policy interest have been widely ignored in econometrics. We therefore use treatment monotonicity and/or stochastic dominance assumptions to derive sharp bounds on the average treatment effects for the treated population, the entire population, the compliers, the always takers, and the never takers. We also provide an application to labor market data and briefly discuss testable implications of the instrumental exclusion restriction and stochastic dominance.
Keywords: instrument, noncompliance, principal stratification, nonparametric bounds
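The logic of such bounds is easiest to see in the worst-case (no-assumption) benchmark that monotonicity and stochastic dominance then tighten: with a bounded outcome, each unobserved counterfactual mean can be replaced by the outcome's logical extremes. The following is a minimal sketch of that benchmark on simulated data, not the paper's own estimator; the data-generating process and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: binary treatment D, outcome Y bounded in [0, 1].
n = 10_000
d = rng.integers(0, 2, size=n)
y = np.clip(0.3 + 0.2 * d + rng.normal(0, 0.1, size=n), 0.0, 1.0)

y_lo, y_hi = 0.0, 1.0          # known logical bounds on the outcome
p = d.mean()                   # P(D = 1)
m1 = y[d == 1].mean()          # E[Y | D = 1]
m0 = y[d == 0].mean()          # E[Y | D = 0]

# Worst-case bounds on E[Y(1)] and E[Y(0)]: the counterfactual mean of the
# unobserved group is replaced by the outcome's extremes.
ey1_lb = m1 * p + y_lo * (1 - p)
ey1_ub = m1 * p + y_hi * (1 - p)
ey0_lb = m0 * (1 - p) + y_lo * p
ey0_ub = m0 * (1 - p) + y_hi * p

ate_lb = ey1_lb - ey0_ub
ate_ub = ey1_ub - ey0_lb
print(f"worst-case ATE bounds: [{ate_lb:.3f}, {ate_ub:.3f}]")
```

Without further assumptions the bound width equals the outcome range (here 1), which is exactly why monotonicity and stochastic dominance restrictions are needed to obtain informative, and sharp, bounds for the subpopulations the abstract lists.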
Sharp bounds on causal effects under sample selection
In many empirical problems, the evaluation of treatment effects is complicated by sample selection such that the outcome is only observed for a non-random subpopulation. In the absence of instruments and/or tight parametric assumptions, treatment effects are not point identified, but can be bounded under mild restrictions. Previous work on partial identification has primarily focused on the "always selected" (whose outcomes are observed irrespective of the treatment). This paper complements those studies by considering further populations, namely the "compliers" (whose selection states react to the treatment) and the selected population. We derive sharp bounds under various assumptions (monotonicity and stochastic dominance) and provide an empirical application to a school voucher experiment.
Keywords: causal inference, principal stratification, nonparametric bounds, sample selection
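A standard way to bound effects for the always selected under monotone selection is trimming: the treated-and-selected group is a mixture of always-selected units and units selected only because of treatment, and trimming the top or bottom share of that mixture brackets the always-selected mean. The sketch below illustrates this Lee (2009)-style construction on a simulated data set; the data-generating process is an assumption for illustration, not the paper's application.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20_000

# Hypothetical setting: binary treatment D; selection S is monotone in D
# (treatment can only bring units into the sample). Y observed only if S = 1.
d = rng.integers(0, 2, size=n)
s0 = rng.random(n) < 0.5                    # selected regardless of treatment
s1 = s0 | (rng.random(n) < 0.2)            # extra units selected under treatment
s = np.where(d == 1, s1, s0)
y = rng.normal(0.3 * d, 1.0, size=n)        # true effect 0.3, independent of selection type

p0 = s[d == 0].mean()                       # P(S = 1 | D = 0)
p1 = s[d == 1].mean()                       # P(S = 1 | D = 1)
q = 1 - p0 / p1                             # share selected only due to treatment

# Trim the top or bottom q fraction of treated selected outcomes to bound
# E[Y(1) | always selected]; E[Y(0) | always selected] is point identified.
y1 = np.sort(y[(d == 1) & s])
k = int(np.floor(q * len(y1)))
lb1 = y1[: len(y1) - k].mean()              # trim the top q share
ub1 = y1[k:].mean()                         # trim the bottom q share
m0 = y[(d == 0) & s].mean()

print(f"bounds on the effect for the always selected: [{lb1 - m0:.3f}, {ub1 - m0:.3f}]")
```

Monotonicity of selection identifies the trimming share `q`; a stochastic dominance assumption between the selection strata would tighten these bounds further, which is the direction the paper pursues for the additional populations.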
Quantile Regression in the Presence of Sample Selection
Most sample selection models assume that the errors are independent of the regressors. Under this assumption, all quantile and mean functions are parallel, which implies that quantile estimators cannot reveal any (per definition non-existing) heterogeneity. However, quantile estimators are useful for testing the independence assumption, because they are consistent under the null hypothesis. We propose tests for this crucial restriction that are based on the entire conditional quantile regression process after correcting for sample selection bias. Monte Carlo simulations demonstrate that they are powerful, and two empirical illustrations indicate that violations of this assumption are likely to be ubiquitous in labor economics.
Keywords: sample selection, quantile regression, independence, test
Testing instrument validity for LATE identification based on inequality moment constraints
This paper proposes bootstrap tests for the validity of instrumental variables (IV) in just identified treatment effect models with endogeneity. We demonstrate that the IV assumptions required for the identification of the local average treatment effect (LATE) allow us to both point identify and bound the mean potential outcomes (i) of the always takers (those treated irrespective of the instrument) under treatment and (ii) of the never takers (never treated irrespective of the instrument) under non-treatment. The point identified means must lie within their respective bounds, which provides four testable inequality moment constraints for IV validity. Furthermore, we show that a similar logic applies to testing the assumptions needed to identify distributional features (e.g., local quantile treatment effects). Finally, we discuss how testing power can be increased by imposing dominance/equality assumptions on the potential outcome distributions of different subpopulations.
Keywords: specification test, instrument, treatment effects, LATE, inequality moment constraints
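The core testable implication can be sketched in a few lines: under monotonicity, the always-taker mean under treatment is point identified from the treated units with the instrument switched off, while the mixed treated group with the instrument switched on yields trimming bounds for that same mean. Under a valid instrument the point-identified mean must fall inside the bounds. The simulation below is an illustrative sketch of this sample check (ignoring the paper's bootstrap inference); the type proportions and outcome means are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Hypothetical LATE setting with principal strata: always takers (a),
# never takers (n), compliers (c); instrument Z is valid by construction.
types = rng.choice(["a", "n", "c"], size=n, p=[0.2, 0.3, 0.5])
z = rng.integers(0, 2, size=n)
d = np.where(types == "a", 1, np.where(types == "n", 0, z))
y = rng.normal(np.where(types == "a", 1.0, np.where(types == "c", 0.5, 0.0)), 1.0)

# Share of always takers among the treated with z = 1 (monotonicity):
p_a = d[z == 0].mean()                      # P(D = 1 | Z = 0) = P(always taker)
share_a = p_a / d[z == 1].mean()            # fraction of always takers in the mix

# Point-identified always-taker mean under treatment:
mu_a = y[(z == 0) & (d == 1)].mean()

# Trimming bounds from the mixed (z = 1, d = 1) group: the always-taker mean
# can be at most the mean of the highest share_a fraction, at least that of
# the lowest share_a fraction.
y_mix = np.sort(y[(z == 1) & (d == 1)])
k = int(np.floor(share_a * len(y_mix)))
lb = y_mix[:k].mean()
ub = y_mix[-k:].mean()

# Testable implication of IV validity: lb <= mu_a <= ub.
print(f"always-taker mean {mu_a:.3f} in [{lb:.3f}, {ub:.3f}]: {lb <= mu_a <= ub}")
```

A symmetric check applies to the never takers under non-treatment, giving the four inequality moment constraints the abstract refers to; the paper turns sample violations of these inequalities into a bootstrap test.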
Parameters for apple quality: and an outline for a new quality concept - part 1 report
In practice it is hard to distinguish between the two life processes, growth and differentiation. We cannot expect a single measured parameter to represent only one aspect of vital quality, but for most parameters we can recognise an emphasis on one or more aspects of the vital quality concept. We made this preliminary classification both by reflecting on the concept and by examining the experimental results. The conventional parameters are also interpreted as a result of these processes, in a more holistic way than usual.
We realise that various parameters concerning the same aspect of our quality concept can show different levels of that aspect. Belonging to the same aspect of the quality concept does not automatically mean that their correlation (see annex 14.2) must be high. We still need considerably more experience to validate the character of the parameters. Here we present our first research on this topic, including the unanswered questions, realising that further experimental series will bring increasing certainty.
After this first project we cannot yet say which parameters are so similar that one makes another redundant. So far we have learned something from every parameter in developing our quality concept. Most inspiring for the new quality concept were the crystallisations, the delayed luminescence and the Bovis value.
Sophisticated and small versus simple and sizeable: When does it pay off to introduce drifting coefficients in Bayesian VARs?
We assess the relationship between model size and complexity in the time-varying parameter VAR framework via thorough predictive exercises for the Euro Area, the United Kingdom and the United States. It turns out that sophisticated dynamics through drifting coefficients are important in small data sets, while simpler models tend to perform better in sizeable data sets. To combine the best of both worlds, novel shrinkage priors help to mitigate the curse of dimensionality, resulting in competitive forecasts for all scenarios considered. Furthermore, we discuss dynamic model selection to improve upon the best performing individual model at each point in time.
Numerical Investigation of the Aerodynamic Properties of a Flying Wing Configuration
The numerical investigations of a generic UCAV configuration are presented. These investigations are part of the DLR internal project UCAV-2010; conditions at compressible speeds are considered. The DLR-F17E UCAV configuration is a lambda-shaped flying delta wing with a sweep angle of 53° and a varying leading edge radius. The flow field of this configuration is dominated by vortex structures and vortex-to-vortex interactions. The paper aims to compare numerical and experimental investigations in order to gain a deeper understanding of the complex flow physics. Furthermore, it highlights the influence of changes in Mach and Reynolds numbers on the flow and on the overall aerodynamic behavior of the configuration. The DLR TAU-Code is used to simulate the flow field on an unstructured grid with the Spalart-Allmaras turbulence model. Force and moment measurements taken in the DNW-TWG, Göttingen, on the DLR-F17E configuration serve as the experimental basis for validating the numerical findings. Findings on the SACCON configuration serve as a comparison case, aiming to show the portability of results between different model scales and to find analogies between low speed (M = 0.15) and compressible speed (M = 0.5) scenarios. This paper builds upon the findings of the NATO/RTO AVT-161 Research Task Group on "Assessment and Control Predictions for NATO Air and Sea Vehicles", and its findings shall serve as a basis for further medium- to high-speed wind tunnel experiments. Furthermore, the paper addresses the importance of understanding and predicting controlled and uncontrolled flow separation and the interaction of vortex systems, in order to estimate the aerodynamic behavior within the entire flight envelope and to meet stability and control needs.