
    Confounder selection via iterative graph expansion

    Confounder selection, namely choosing a set of covariates to control for confounding between a treatment and an outcome, is arguably the most important step in the design of observational studies. Previous methods, such as Pearl's celebrated back-door criterion, typically require pre-specifying a causal graph, which can often be difficult in practice. We propose an interactive procedure for confounder selection that does not require pre-specifying the graph or the set of observed variables. This procedure iteratively expands the causal graph by finding what we call "primary adjustment sets" for a pair of possibly confounded variables. This can be viewed as inverting a sequence of latent projections of the underlying causal graph. Structural information in the form of primary adjustment sets is elicited from the user, bit by bit, until either a set of covariates is found to control for confounding or it can be determined that no such set exists. Other information, such as the causal relations between confounders, is not required by the procedure. We show that if the user correctly specifies the primary adjustment sets in every step, our procedure is both sound and complete.
    Comment: 29 pages; added link to Shiny web app
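    The back-door criterion mentioned above can be illustrated on a toy graph. The sketch below (an assumption for illustration, not the paper's procedure) checks whether a candidate set Z blocks every back-door path from treatment X to outcome Y via a simple d-separation test; the tiny graph has no collider descendants, and Z contains only non-descendants of X, so those checks are omitted.

    ```python
    from itertools import chain

    # Toy DAG: Z confounds treatment X and outcome Y; X directly causes Y.
    edges = {("Z", "X"), ("Z", "Y"), ("X", "Y")}

    nodes = set(chain.from_iterable(edges))
    parents = {v: set() for v in nodes}
    children = {v: set() for v in nodes}
    for u, v in edges:
        children[u].add(v)
        parents[v].add(u)

    def paths(src, dst, path=None):
        """Enumerate simple paths in the undirected skeleton."""
        path = path or [src]
        if path[-1] == dst:
            yield list(path)
            return
        for nxt in children[path[-1]] | parents[path[-1]]:
            if nxt not in path:
                yield from paths(src, dst, path + [nxt])

    def blocked(path, Z):
        """One-path d-separation test: a non-collider in Z blocks the
        path; a collider outside Z blocks it (no collider descendants
        exist in this tiny graph)."""
        for a, m, b in zip(path, path[1:], path[2:]):
            collider = a in parents[m] and b in parents[m]
            if collider and m not in Z:
                return True
            if not collider and m in Z:
                return True
        return False

    def backdoor_ok(x, y, Z):
        """Back-door criterion: Z blocks every path starting with an
        arrow into x that reaches y."""
        back = [p for p in paths(x, y) if p[1] in parents[x]]
        return all(blocked(p, Z) for p in back)

    print(backdoor_ok("X", "Y", {"Z"}))  # True: {Z} is an adjustment set
    print(backdoor_ok("X", "Y", set()))  # False: X <- Z -> Y stays open
    ```

    In the paper's interactive procedure, such adjustment sets are elicited from the user step by step instead of being read off a fully pre-specified graph.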

    Relaying systems with reciprocity mismatch : impact analysis and calibration

    Cooperative beamforming can provide significant performance improvement for relaying systems with the help of channel state information (CSI). In time-division duplexing (TDD) mode, the estimated CSI deteriorates due to reciprocity mismatch. In this work, we examine the impact and the calibration of reciprocity mismatch in relaying systems. To evaluate the impact of the reciprocity mismatch of all devices, a closed-form expression for the achievable rate is first derived. We then analyze the performance loss caused by the reciprocity mismatch at sources, relays, and destinations, respectively, and show that the mismatch at the relays dominates the impact. To compensate for the performance loss, a two-stage calibration scheme is proposed for the relays. Specifically, the relays first perform circuit-based intra-calibration independently; then, inter-calibration based on a discrete Fourier transform (DFT) codebook is carried out to improve calibration performance through cooperative transmission, which has not been considered in previous work. Finally, we derive the achievable rate after the relays perform the proposed reciprocity calibration scheme and investigate the impact of estimation errors on system performance. Simulation results verify the analytical results and show the performance of the proposed calibration approach.
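    The beamforming loss caused by reciprocity mismatch can be seen in a minimal simulation. The toy model below is an assumption for illustration (per-antenna multiplicative amplitude/phase errors, maximum-ratio transmission), not the paper's exact system model: the beamformer is designed from the uplink estimate under the TDD reciprocity assumption, while the true downlink channel carries the mismatch.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M = 8  # relay antennas

    # Uplink channel estimate and an illustrative per-antenna reciprocity
    # mismatch (random amplitude and phase errors).
    h_ul = (rng.normal(size=M) + 1j * rng.normal(size=M)) / np.sqrt(2)
    mismatch = (1 + 0.1 * rng.normal(size=M)) * np.exp(1j * 0.3 * rng.normal(size=M))
    h_dl = mismatch * h_ul  # true downlink channel seen by the beamformer

    # Maximum-ratio transmission from the uplink estimate vs. perfect
    # downlink CSI.
    w = np.conj(h_ul) / np.linalg.norm(h_ul)
    gain_mismatched = np.abs(h_dl @ w) ** 2
    gain_perfect = np.linalg.norm(h_dl) ** 2

    print(gain_mismatched / gain_perfect)  # <= 1: beamforming gain lost
    ```

    By Cauchy-Schwarz the ratio never exceeds one, and it equals one only when the mismatch is a common scalar across antennas, which is why per-antenna calibration matters.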

    NOMA-enhanced computation over multi-access channels

    Massive numbers of nodes will be connected in future wireless networks, making it difficult to collect large amounts of data. Instead of collecting the data from each node individually, computation over multi-access channels (CoMAC) provides an intelligent solution by computing a desired function over the air, exploiting the signal-superposition property of wireless channels. To improve the spectrum efficiency of conventional CoMAC, we propose applying non-orthogonal multiple access (NOMA) to function computation in CoMAC. The desired functions are decomposed into several sub-functions, and multiple sub-functions are selected to be superposed over each resource block (RB). The corresponding achievable rate is derived based on sub-function superposition, which prevents the computation rate from vanishing as the number of nodes grows. We further study the limiting case as the number of nodes goes to infinity and derive an exact expression that lower-bounds the computation rate. Compared with existing CoMAC, the NOMA-based CoMAC not only achieves a higher computation rate but also provides an improved non-vanishing rate. Furthermore, the diversity order of the computation rate is derived, which shows that system performance is dominated by the node with the worst channel gain among the sub-functions in each RB.
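    The signal-superposition idea behind CoMAC can be sketched in a few lines. This is a hypothetical one-shot round under idealized assumptions (perfect CSI, channel-inversion precoding, a sum-type target function), not the paper's NOMA scheme: every node pre-scales its value, the multi-access channel adds the transmissions, and the fusion center recovers the mean directly from the superposed signal.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K = 100                       # number of nodes
    x = rng.uniform(0, 1, K)      # each node's local measurement

    # One-shot CoMAC round: each node inverts its channel (perfect CSI
    # assumed), the multi-access channel superposes all transmissions,
    # and the fusion center observes the sum plus receiver noise.
    h = rng.uniform(0.5, 1.5, K)            # channel gains
    tx = x / h                              # channel-inversion precoding
    rx = np.sum(h * tx) + rng.normal(0, 0.01)
    mean_est = rx / K                       # desired function: the mean

    print(abs(mean_est - x.mean()))         # error is only noise / K
    ```

    One channel use suffices regardless of K, which is the spectrum-efficiency gain over collecting K individual transmissions; the paper's contribution is layering NOMA-style sub-function superposition on top of this principle.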

    Chernoff-type Concentration of Empirical Probabilities in Relative Entropy

    We study the relative entropy of the empirical probability vector with respect to the true probability vector in multinomial sampling of k categories, which, when multiplied by the sample size n, is also the log-likelihood ratio statistic. We generalize a recent result and show that the moment generating function of the statistic is bounded by a polynomial of degree n on the unit interval, uniformly over all true probability vectors. We characterize the family of polynomials indexed by (k, n) and obtain explicit formulae. Consequently, we develop Chernoff-type tail bounds, including a closed-form version from a large-sample expansion of the bound minimizer. Our bound dominates the classic method-of-types bound and is competitive with the state of the art. We demonstrate the bounds with an application to estimating the proportion of unseen butterflies.
    Comment: 21 pages, 4 figures
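    The statistic and the classic method-of-types baseline it is compared against can be checked numerically. The sketch below (an illustration, not the paper's bound) estimates the tail of n·KL(p_hat || p) by Monte Carlo and evaluates the standard bound P(n·KL >= t) <= (n+1)^k · e^(-t), which is loose here; the paper's Chernoff-type bounds dominate it.

    ```python
    import numpy as np

    def kl(p_hat, p):
        """Relative entropy KL(p_hat || p), with 0 log 0 = 0."""
        m = p_hat > 0
        return float(np.sum(p_hat[m] * np.log(p_hat[m] / p[m])))

    rng = np.random.default_rng(0)
    k, n, t = 4, 200, 5.0
    p = np.array([0.4, 0.3, 0.2, 0.1])

    # Monte Carlo tail of the statistic n * KL(p_hat || p).
    stats = np.array([n * kl(rng.multinomial(n, p) / n, p)
                      for _ in range(20000)])
    tail = float(np.mean(stats >= t))

    # Classic method-of-types bound on P(n * KL >= t).
    types_bound = (n + 1) ** k * np.exp(-t)
    print(tail, types_bound)
    ```

    Since 2n·KL is asymptotically chi-squared with k-1 degrees of freedom, the true tail is on the order of a few percent here, far below the method-of-types value, leaving plenty of room for sharper bounds.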

    Confounder Selection: Objectives and Approaches

    Confounder selection is perhaps the most important step in the design of observational studies. A number of criteria, often with different objectives and approaches, have been proposed, and their validity and practical value have been debated in the literature. Here, we provide a unified review of these criteria and the assumptions behind them. We list several objectives that confounder selection methods aim to achieve and discuss the amount of structural knowledge required by different approaches. Finally, we discuss limitations of the existing approaches and implications for practitioners.
    Comment: 15 pages