Treatment Effects on Ordinal Outcomes: Causal Estimands and Sharp Bounds
Assessing the causal effects of interventions on ordinal outcomes is an
important objective of many educational and behavioral studies. Under the
potential outcomes framework, we can define causal effects as comparisons
between the potential outcomes under treatment and control. Unfortunately,
the average causal effect, often the parameter of interest, is difficult to
interpret for ordinal outcomes. To address this challenge, we propose two
causal parameters, defined as the probabilities that the treatment is
beneficial and strictly beneficial for the experimental units. Although
well-defined for any outcomes and of particular interest for ordinal
outcomes, these two parameters depend on the association between the
potential outcomes, and are therefore not identifiable from the observed
data without additional assumptions. Echoing recent advances in the
econometrics and biostatistics literature, we present sharp bounds on these
causal parameters for ordinal outcomes under fixed marginal distributions of
the potential outcomes. Because the causal estimands and their corresponding
sharp bounds are based on the potential outcomes themselves, the proposed
framework can be flexibly combined with any chosen model of the potential
outcomes, and is directly applicable to randomized experiments, unconfounded
observational studies, and randomized experiments with noncompliance. We
illustrate our methodology via numerical examples and three real-life
applications related to educational and behavioral research.
Comment: Accepted by the Journal of Educational and Behavioral Statistics
The matching polytope does not admit fully-polynomial size relaxation schemes
The groundbreaking work of Rothvo{\ss} [arxiv:1311.2369] established that
every linear program expressing the matching polytope has an exponential number
of inequalities (formally, the matching polytope has exponential extension
complexity). We generalize this result by deriving strong bounds on the
polyhedral inapproximability of the matching polytope: for fixed 0 < ε < 1,
every polyhedral (1 + ε/n)-approximation requires an exponential number of
inequalities, where n is the number of vertices. This is sharp given the
well-known ρ-approximation of size 2^{O(n/ρ)} provided by the odd-sets of
size up to n/ρ. Thus matching is the first problem in P whose natural
linear encoding does not admit a fully polynomial-size relaxation scheme (the
polyhedral equivalent of an FPTAS), which provides a sharp separation from the
polynomial-size relaxation scheme obtained, e.g., via the constant-sized
odd-sets mentioned above.
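A small illustration (not from the paper) of the odd-set inequalities this abstract refers to: for any odd vertex set S, a matching can use at most (|S| - 1)/2 edges inside S. The fractional point x_e = 1/2 on a triangle satisfies every degree constraint yet violates the odd-set inequality for S = {0, 1, 2}, which is why odd sets are needed to describe the matching polytope at all.

```python
# triangle K_3 with a fractional "matching" putting weight 1/2 on each edge
edges = [(0, 1), (1, 2), (0, 2)]
x = {e: 0.5 for e in edges}

# degree constraints: total weight at each vertex is at most 1
degree_ok = all(sum(x[e] for e in edges if v in e) <= 1 for v in (0, 1, 2))

# odd-set inequality for S = {0, 1, 2}: weight inside S <= (|S| - 1)/2 = 1
inside = sum(x[e] for e in edges)   # all three edges lie inside S
odd_set_ok = inside <= (3 - 1) / 2

print(degree_ok, inside, odd_set_ok)  # degree constraints hold, odd-set fails
```

Keeping only odd sets up to a bounded size yields the polynomially bounded approximate relaxations the abstract contrasts with its exponential lower bound.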
Our approach reuses ideas from Rothvo{\ss} [arxiv:1311.2369]; however, the
main lower bounding technique is different. While the original proof is based
on the hyperplane separation bound (also called the rectangle corruption
bound), we employ the information-theoretic notion of common information as
introduced in Braun and Pokutta [http://eccc.hpi-web.de/report/2013/056/],
which allows us to analyze perturbations of slack matrices. It turns out that
the high extension complexity of the matching polytope stems from the same
source of hardness as for the correlation polytope: a direct sum structure.
Comment: 21 pages, 3 figures
Bounds on Parameters in Dynamic Discrete Choice Models
Identification of dynamic nonlinear panel data models is an important and delicate problem in econometrics. In this paper we provide insights into the identification of the parameters of some commonly used models. Using these insights, we show through simple calculations that point identification often fails in these models. On the other hand, the same calculations suggest that the model frequently restricts the parameter to a very small region, so the failure of point identification may be of little practical importance in those cases. Although the emphasis is on identification, our techniques are constructive in that they can easily form the basis for consistent estimators of the identified sets.
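The idea of characterizing an identified set by calculation can be sketched generically. The toy model, function names, and numbers below are all hypothetical, not from the paper: grid the parameter space and keep every value whose implied choice probability is consistent with the observed choice frequency up to a sampling tolerance. The surviving region is a numerical stand-in for the (often small) identified set.

```python
import numpy as np

def implied_choice_prob(theta, beta=0.9):
    """Toy dynamic binary choice: logistic choice probability where the
    index adds a discounted log-sum continuation value (hypothetical spec)."""
    v = theta + beta * np.log(1.0 + np.exp(theta))
    return 1.0 / (1.0 + np.exp(-v))

observed_p = 0.75   # observed frequency of choosing the action
tol = 0.02          # tolerance band reflecting sampling uncertainty

# grid search: retain every theta consistent with the observed frequency
grid = np.linspace(-3.0, 3.0, 2001)
identified = grid[np.abs(implied_choice_prob(grid) - observed_p) <= tol]
print(identified.min(), identified.max())   # endpoints of the retained region
```

In genuinely set-identified models the retained region stays wide even as the tolerance shrinks; here it is narrow, mirroring the abstract's point that the restriction on the parameter can be tight in practice.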
Sensitivity Analysis for Multiple Comparisons in Matched Observational Studies through Quadratically Constrained Linear Programming
A sensitivity analysis in an observational study assesses the robustness of
significant findings to unmeasured confounding. While sensitivity analyses in
matched observational studies have been well addressed when there is a single
outcome variable, accounting for multiple comparisons through the existing
methods yields overly conservative results when there are multiple outcome
variables of interest. This stems from the fact that unmeasured confounding
cannot affect the probability of assignment to treatment differently depending
on the outcome being analyzed. Existing methods allow this to occur by
combining the results of individual sensitivity analyses to assess whether at
least one hypothesis is significant, which in turn results in an overly
pessimistic assessment of a study's sensitivity to unobserved biases. By
solving a quadratically constrained linear program, we are able to perform a
sensitivity analysis while enforcing that unmeasured confounding must have the
same impact on the treatment assignment probabilities across outcomes for each
individual in the study. We show that this allows for uniform improvements in
the power of a sensitivity analysis not only for testing the overall null of no
effect, but also for null hypotheses on \textit{specific} outcome variables
while strongly controlling the familywise error rate. We illustrate our method
through an observational study on the effect of smoking on naphthalene
exposure.
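For context on what a single-outcome sensitivity analysis computes, here is a standard Rosenbaum-style worst-case calculation for one outcome in a paired study (this is the conventional baseline, not the paper's quadratically constrained method; the pair counts are made up). At sensitivity parameter Γ, unmeasured confounding can push the per-pair probability that the treated unit has the higher outcome up to Γ/(1 + Γ), and the worst-case one-sided p-value uses that bound in a binomial tail.

```python
from scipy.stats import binom

def worst_case_pvalue(n_pairs, n_positive, gamma):
    """Worst-case one-sided sign-test p-value at sensitivity parameter
    gamma >= 1 in a matched-pairs study: the per-pair probability of a
    positive treated-minus-control difference is at most gamma/(1+gamma)."""
    p_plus = gamma / (1.0 + gamma)
    # P(X >= n_positive) under the least favorable binomial
    return binom.sf(n_positive - 1, n_pairs, p_plus)

# hypothetical data: 50 matched pairs, 38 with higher exposure in the
# treated (smoking) unit; gamma = 1 is the randomization (no-bias) test
for gamma in (1.0, 1.5, 2.0):
    print(gamma, worst_case_pvalue(50, 38, gamma))
```

With several outcomes, running this worst case separately per outcome lets the hidden bias "point in different directions" for each one, which is exactly the conservativeness the quadratically constrained formulation above removes by forcing a single set of treatment assignment probabilities across outcomes.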