
    Statistical Treatment Choice Based on Asymmetric Minimax Regret Criteria

    This paper studies the problem of treatment choice between a status quo treatment with a known outcome distribution and an innovation whose outcomes are observed only in a representative finite sample. I evaluate statistical decision rules, which are functions that map sample outcomes into the planner's treatment choice for the population, based on regret, which is the expected welfare loss due to assigning inferior treatments. I extend previous work that applied the minimax regret criterion to treatment choice problems by considering decision criteria that asymmetrically treat Type I regret (due to mistakenly choosing an inferior new treatment) and Type II regret (due to mistakenly rejecting a superior innovation). I derive exact finite-sample solutions to these problems for experiments with normal, Bernoulli, and bounded distributions of individual outcomes. In conclusion, I discuss approaches to the problem for other classes of distributions. Along the way, the paper compares asymmetric minimax regret criteria with statistical decision rules based on classical hypothesis tests.
    Keywords: treatment effects, loss aversion, statistical decisions, hypothesis testing.
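    The asymmetric criterion can be illustrated with a small numeric sketch. The snippet below is not the paper's derivation: it grid-searches a rule of the form "adopt the innovation iff the number of sample successes exceeds a threshold" for a Bernoulli experiment, with Type I regret scaled by a loss-aversion weight `w_type1`; the sample size, weight, and grids are all illustrative choices.

```python
from math import comb

def adopt_prob(n, p, k_thresh):
    """P(successes > k_thresh) when sample outcomes are Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_thresh + 1, n + 1))

def max_weighted_regret(n, p0, k_thresh, w_type1):
    """Worst-case regret of 'adopt iff successes > k_thresh' against a
    status quo with known mean p0, with Type I regret scaled by w_type1."""
    worst = 0.0
    for i in range(101):            # grid over the innovation's true mean p
        p = i / 100
        pa = adopt_prob(n, p, k_thresh)
        if p < p0:                  # Type I: adopting an inferior innovation
            worst = max(worst, w_type1 * (p0 - p) * pa)
        elif p > p0:                # Type II: rejecting a superior innovation
            worst = max(worst, (p - p0) * (1 - pa))
    return worst

# Illustration: n = 20, status quo mean 0.5, Type I regret weighted 2x.
# k = -1 means "always adopt"; k = n means "never adopt".
n, p0, w = 20, 0.5, 2.0
best_k = min(range(-1, n + 1),
             key=lambda k: max_weighted_regret(n, p0, k, w))
```

With `w_type1 > 1`, the selected threshold tends to sit above the symmetric cutoff `n * p0`: the rule demands stronger sample evidence before adopting the innovation, which is the asymmetry the abstract describes.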

    Measuring Precision of Statistical Inference on Partially Identified Parameters

    Planners of surveys and experiments that partially identify parameters of interest face trade-offs between using limited resources to reduce sampling error and using them to reduce the extent of partial identification. I evaluate these trade-offs in a simple statistical problem with normally distributed sample data and interval partial identification, using different frequentist measures of inference precision (length of confidence intervals, minimax mean squared error and mean absolute deviation, minimax regret for treatment choice) and analogous Bayes measures with a flat prior. The relative value of collecting data with better identification properties (e.g., increasing response rates in surveys) depends crucially on the choice of the measure of precision. When the extent of partial identification is significant in comparison to sampling error, the length of confidence intervals, which has been used most often, assigns the lowest value to improving identification among the measures considered.
    Keywords: statistical treatment choice; survey planning; nonresponse; mean squared error; mean absolute deviation; minimax regret
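    A stylized version of this trade-off can be computed directly. The sketch below is my simplification, not the paper's setup: for an interval-identified parameter with identification width `w` and standard error `se`, it compares an Imbens-Manski-style confidence interval length with the worst-case root-MSE of the midpoint estimator; the critical value and functional forms are illustrative assumptions.

```python
from math import sqrt

def ci_length(w, se, z=1.645):
    # Imbens-Manski-style 95% CI for an interval-identified parameter:
    # each endpoint of the identified interval is expanded by z * se
    # (z ~ 1.645 is appropriate when w dominates sampling error).
    return w + 2 * z * se

def worst_case_rmse(w, se):
    # Midpoint estimator: worst-case bias w / 2, sampling sd se.
    return sqrt((w / 2) ** 2 + se ** 2)

# Value of halving the identification width w vs halving the standard error se:
w, se = 0.4, 0.1
gain_id_ci = ci_length(w, se) - ci_length(w / 2, se)        # ~ 0.2
gain_se_ci = ci_length(w, se) - ci_length(w, se / 2)        # ~ 0.1645
gain_id_rmse = worst_case_rmse(w, se) - worst_case_rmse(w / 2, se)
gain_se_rmse = worst_case_rmse(w, se) - worst_case_rmse(w, se / 2)
```

For these numbers, CI length values halving `w` only slightly more than halving `se`, while worst-case RMSE values it several times more strongly, consistent with the abstract's point that CI length assigns the lowest relative value to better identification.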

    Clinical trial design enabling ε-optimal treatment rules

    Medical research has evolved conventions for choosing sample size in randomized clinical trials that rest on the theory of hypothesis testing. Bayesians have argued that trials should be designed to maximize subjective expected utility in settings of clinical interest. This perspective is compelling given a credible prior distribution on treatment response, but Bayesians have struggled to provide guidance on specification of priors. We use the frequentist statistical decision theory of Wald (1950) to study design of trials under ambiguity. We show that ε-optimal rules exist when trials have large enough sample size. An ε-optimal rule has expected welfare within ε of the welfare of the best treatment in every state of nature. Equivalently, it has maximum regret no larger than ε. We consider trials that draw predetermined numbers of subjects at random within groups stratified by covariates and treatments. The principal analytical findings are simple sufficient conditions on sample sizes that ensure existence of ε-optimal treatment rules when outcomes are bounded. These conditions are obtained by application of Hoeffding (1963) large deviations inequalities to evaluate the performance of empirical success rules.
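    The flavor of such sufficient conditions can be reproduced with a textbook calculation. The constants below come from a standard two-arm Hoeffding bound, not the paper's exact conditions: with outcomes in [0, 1] and n subjects per arm, the empirical success rule's regret at effect size δ is at most δ·exp(-nδ²/2), and maximizing over δ (at δ* = 1/√n) gives the closed-form worst case 1/√(en).

```python
from math import e, exp, sqrt

def regret_bound(n, delta):
    """Hoeffding bound on the regret of the empirical success rule at
    effect size delta: two arms, n subjects each, outcomes in [0, 1]."""
    return delta * exp(-n * delta**2 / 2)

def max_regret_bound(n):
    """Worst case of the bound over delta; the maximizer is
    delta* = 1/sqrt(n), giving the closed form 1/sqrt(e * n)."""
    return 1 / sqrt(e * n)

def min_sample_size(eps):
    """Smallest per-arm n for which the bound certifies eps-optimality."""
    n = 1
    while max_regret_bound(n) > eps:
        n += 1
    return n
```

Setting the worst-case bound below ε yields a sufficient per-arm sample size of roughly 1/(e·ε²); the point, as in the abstract, is that a large-deviations inequality turns bounded outcomes into an explicit, prior-free sample-size condition.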

    Constrained Classification and Policy Learning

    Modern machine learning approaches to classification, including AdaBoost, support vector machines, and deep neural networks, utilize surrogate loss techniques to circumvent the computational complexity of minimizing empirical classification risk. These techniques are also useful for causal policy learning problems, since estimation of individualized treatment rules can be cast as a weighted (cost-sensitive) classification problem. Consistency of the surrogate loss approaches studied in Zhang (2004) and Bartlett et al. (2006) crucially relies on the assumption of correct specification, meaning that the specified set of classifiers is rich enough to contain a first-best classifier. This assumption is, however, less credible when the set of classifiers is constrained by interpretability or fairness, leaving the applicability of surrogate loss based algorithms unknown in such second-best scenarios. This paper studies consistency of surrogate loss procedures under a constrained set of classifiers without assuming correct specification. We show that in the setting where the constraint restricts the classifier's prediction set only, hinge losses (i.e., ℓ1-support vector machines) are the only surrogate losses that preserve consistency in second-best scenarios. If the constraint additionally restricts the functional form of the classifier, consistency of a surrogate loss approach is not guaranteed even with hinge loss. We therefore characterize conditions for the constrained set of classifiers that can guarantee consistency of hinge risk minimizing classifiers. Exploiting our theoretical results, we develop robust and computationally attractive hinge loss based procedures for a monotone classification problem.
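    A minimal toy example of the hinge property (illustrative only; the paper's setting is far more general): over the severely constrained class of constant scores, the empirical cost-sensitive hinge risk is piecewise linear, and the sign of its minimizer recovers the weighted-majority decision, i.e., the weighted 0-1 optimum within the same class. All numbers below are made up for illustration.

```python
def weighted_hinge_risk(c, data):
    """Empirical cost-sensitive hinge risk of the constant score c.
    data: list of (label y in {-1, +1}, nonnegative weight w)."""
    tot = sum(w for _, w in data)
    return sum(w * max(0.0, 1.0 - y * c) for y, w in data) / tot

def weighted_zero_one_risk(c, data):
    """Weighted misclassification rate of the decision sign(c)."""
    tot = sum(w for _, w in data)
    decision = 1 if c >= 0 else -1
    return sum(w for y, w in data if y * decision <= 0) / tot

# Toy cost-sensitive sample: labels with weights (illustrative numbers only).
data = [(+1, 2.0), (-1, 1.0), (+1, 0.5), (-1, 1.0)]

# Grid-search the hinge minimizer over constant scores in [-2, 2].
grid = [i / 100 for i in range(-200, 201)]
c_hinge = min(grid, key=lambda c: weighted_hinge_risk(c, data))
# sign(c_hinge) matches the weighted-majority (0-1 optimal) decision.
```

Replacing the hinge loss with another surrogate does not in general preserve this agreement once the classifier set is constrained, which is the second-best consistency question the paper studies.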

    Price as a Signal of Product Quality: Some Experimental Evidence

    We use experimental data to disentangle the signaling and budgetary effects of price on wine demand. The experimental design allows us to isolate the two effects in a simple and intuitive way. The signaling effect is present and nonlinear: it is strongly positive between 3 euros and 5 euros and undetectable between 5 euros and 8 euros. We find a similar nonlinear price–quality relationship in a large sample of wine ratings from the same price segment, supporting the hypothesis that taster behavior in the experiment is consistent with rationally using prices as signals of quality. Price signals also have greater importance for inexperienced (young) consumers. (JEL Classification: D11, D12, D82)