
    Moderate Deviations Analysis of Binary Hypothesis Testing

    This paper is focused on the moderate-deviations analysis of binary hypothesis testing. The analysis relies on a concentration inequality for discrete-parameter martingales with bounded jumps; this inequality refines the Azuma-Hoeffding inequality. Relations of the analysis to the moderate deviations principle for i.i.d. random variables and to the relative entropy are considered. Comment: Presented at the 2012 IEEE International Symposium on Information Theory (ISIT 2012) at MIT, Boston, July 2012. It appears in the Proceedings of ISIT 2012 on pages 826-83
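The standard Azuma-Hoeffding inequality that the paper refines can be checked numerically. The sketch below is a hypothetical illustration of the unrefined bound only (not the paper's refinement): it compares the bound against the empirical tail of a ±1 random walk, which is a martingale with jumps bounded by 1.

```python
import math
import random

def azuma_bound(t, n, c):
    """Azuma-Hoeffding tail bound P(S_n >= t) <= exp(-t^2 / (2 n c^2))
    for a zero-mean martingale S_n with increments bounded by c."""
    return math.exp(-t * t / (2 * n * c * c))

def simulate_tail(n, t, trials=20000, seed=0):
    """Empirical tail probability for a +/-1 random walk (c = 1)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        if s >= t:
            hits += 1
    return hits / trials

n, t = 100, 30
empirical = simulate_tail(n, t)
bound = azuma_bound(t, n, 1.0)
print(empirical, bound)  # the empirical tail stays below the bound
```

The gap between the simulated tail and the bound is exactly the kind of looseness that sharper martingale inequalities, such as the refinement studied in the paper, aim to reduce.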

    Moderate Deviation Analysis for Classical Communication over Quantum Channels

    © 2017, Springer-Verlag GmbH Germany. We analyse families of codes for classical data transmission over quantum channels that have both a vanishing probability of error and a code rate approaching capacity as the code length increases. To characterise the fundamental tradeoff between decoding error, code rate and code length for such codes, we introduce a quantum generalisation of the moderate deviation analysis proposed by Altuğ and Wagner as well as Polyanskiy and Verdú. We derive such a tradeoff for classical-quantum (as well as image-additive) channels in terms of the channel capacity and the channel dispersion, giving further evidence that the latter quantity characterises the necessary backoff from capacity when transmitting finite blocks of classical data. To derive these results we also study asymmetric binary quantum hypothesis testing in the moderate deviations regime. Due to the central importance of the latter task, we expect that our techniques will find further applications in the analysis of other quantum information processing tasks.

    Discrete Optimization for Interpretable Study Populations and Randomization Inference in an Observational Study of Severe Sepsis Mortality

    Motivated by an observational study of the effect of hospital ward versus intensive care unit admission on severe sepsis mortality, we develop methods to address two common problems in observational studies: (1) when there is a lack of covariate overlap between the treated and control groups, how to define an interpretable study population wherein inference can be conducted without extrapolating with respect to important variables; and (2) how to use randomization inference to form confidence intervals for the average treatment effect with binary outcomes. Our solution to problem (1) incorporates existing suggestions in the literature while yielding a study population that is easily understood in terms of the covariates themselves, and can be found using an efficient branch-and-bound algorithm. We address problem (2) by solving a linear integer program to utilize the worst-case variance of the average treatment effect among values for unobserved potential outcomes that are compatible with the null hypothesis. Our analysis finds no evidence of a difference between the sixty-day mortality rates that would result if all patients were admitted to the ICU and if all were admitted to the hospital ward, either among less severely ill patients or among patients with cryptic septic shock. We implement our methodology in R and provide scripts in the supplementary material.
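The randomization-inference idea behind problem (2) can be illustrated with a plain permutation test of the sharp null of no effect. The sketch below uses made-up binary outcomes and omits the paper's worst-case-variance integer program entirely; the function and data are illustrative assumptions, not the authors' method.

```python
import random

def permutation_test(treated, control, reps=5000, seed=1):
    """Two-sided randomization test of the sharp null of no treatment effect
    for binary outcomes: repeatedly permute treatment labels and compare the
    permuted difference in means to the observed one."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(treated) + list(control)
    n_t = len(treated)
    extreme = 0
    for _ in range(reps):
        rng.shuffle(pooled)
        diff = sum(pooled[:n_t]) / n_t - sum(pooled[n_t:]) / (len(pooled) - n_t)
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / reps

# Hypothetical binary outcomes (1 = death), for illustration only.
treated = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]
control = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
p_value = permutation_test(treated, control)
print(p_value)
```

Inverting such tests over a grid of hypothesized effects yields randomization-based confidence intervals; the paper's contribution is to make that inversion valid for binary outcomes by bounding the variance over all compatible potential-outcome tables.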

    Decision trees in epidemiological research

    Background: In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods. Main text: We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups that share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression Tree (CART) technique and the newer Conditional Inference Tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel graphical visualization of the subgroups defined by decision trees, which provides a more scientifically meaningful characterization of those subgroups. Conclusions: Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation.
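The core CART idea of partitioning on covariates to make outcome subgroups homogeneous reduces, for a single split on one numeric covariate, to choosing the threshold that minimises within-subgroup variance. The sketch below is a minimal single-split stump under that assumption, not the CART or CTree implementations the review evaluates.

```python
def best_binary_split(x, y):
    """CART-style greedy split for one numeric covariate: pick the threshold
    that minimises the summed within-subgroup sum of squared errors (SSE)."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best = None
    for thr in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= thr]
        right = [yi for xi, yi in zip(x, y) if xi > thr]
        if not left or not right:
            continue  # a split must create two non-empty subgroups
        score = sse(left) + sse(right)
        if best is None or score < best[0]:
            best = (score, thr)
    return best[1]

# Toy data: the outcome jumps for covariate values above 3 (two true subgroups).
x = [1, 2, 3, 4, 5, 6]
y = [0.1, 0.2, 0.1, 1.0, 1.1, 0.9]
print(best_binary_split(x, y))  # → 3
```

Full CART recurses this step on each subgroup and then prunes; CTree instead replaces the SSE criterion with a permutation-based hypothesis test that decides both the split variable and when to stop.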

    The UN in the lab

    We consider two alternatives to inaction for governments combating terrorism, which we term Defense and Prevention. Defense consists of investing in resources that reduce the impact of an attack, and generates a negative externality to other governments by making their countries a more attractive objective for terrorists. In contrast, Prevention, which consists of investing in resources that reduce the ability of the terrorist organization to mount an attack, creates a positive externality by reducing the overall threat of terrorism for all. This interaction is captured using a simple 3×3 “Nested Prisoner’s Dilemma” game, with a single Nash equilibrium where both countries choose Defense. Due to the structure of this interaction, countries can benefit from coordination of policy choices, and international institutions (such as the UN) can be utilized to facilitate coordination by implementing agreements to share the burden of Prevention. We introduce an institution that implements a burden-sharing policy for Prevention, and investigate experimentally whether subjects coordinate on a cooperative strategy more frequently under different levels of cost sharing. In all treatments, burden sharing leaves the Prisoner’s Dilemma structure and Nash equilibrium of the game unchanged. We compare three levels of burden sharing to a baseline in a between-subjects design, and find that burden sharing has a non-linear effect on the choice of the efficient Prevention strategy and on overall performance. Only an institution supporting a high level of mandatory burden sharing generates a significant improvement in the use of the Prevention strategy.
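The 3×3 game structure described above can be sketched with hypothetical payoffs chosen only for their ordering (the numbers below are assumptions, not the paper's parameters): Defense is strictly dominant, so mutual Defense is the unique pure Nash equilibrium, yet mutual Prevention pays both players more.

```python
# Hypothetical payoffs for a symmetric 3x3 "Nested Prisoner's Dilemma".
# Only the ordering matters: Defense strictly dominates, while mutual
# Prevention is more efficient than the mutual-Defense equilibrium.
STRATS = ("Inaction", "Defense", "Prevention")
PAYOFF = {  # PAYOFF[row][col] = row player's payoff
    "Inaction":   {"Inaction": 2, "Defense": 1, "Prevention": 5},
    "Defense":    {"Inaction": 4, "Defense": 3, "Prevention": 7},
    "Prevention": {"Inaction": 1, "Defense": 0, "Prevention": 6},
}

def pure_nash_equilibria():
    """Profiles where neither player gains by a unilateral deviation
    (the game is symmetric, so one payoff table serves both players)."""
    eq = []
    for r in STRATS:
        for c in STRATS:
            row_ok = all(PAYOFF[r][c] >= PAYOFF[d][c] for d in STRATS)
            col_ok = all(PAYOFF[c][r] >= PAYOFF[d][r] for d in STRATS)
            if row_ok and col_ok:
                eq.append((r, c))
    return eq

print(pure_nash_equilibria())  # → [('Defense', 'Defense')]
print(PAYOFF["Prevention"]["Prevention"] > PAYOFF["Defense"]["Defense"])  # → True
```

Burden sharing as studied in the experiment would adjust the Prevention payoffs without reversing this ordering, which is why the dilemma, and the equilibrium, survive in every treatment.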